
 The service console is responsible for booting the system.

 To improve performance, the ESX server employs a para-virtualization architecture in which the VMkernel interacts directly with the hardware without involving the host OS.

Advantages of para-virtualization:

 The guest OS can communicate directly with the hypervisor, so this is an efficient form of virtualization.
 It allows users to make use of new or modified device drivers.
Disadvantages:

 Para-virtualization requires the guest OS to be modified in order to interact with the para-virtualization interfaces.
 It introduces significant support and maintainability issues in production environments.
2.3 Virtualization of CPU, Memory and I/O devices.

2.3.1 Hardware support for virtualization

 To support virtualization, processors can employ a special running mode and instructions, known as hardware-assisted virtualization. In this way, the VMM and the guest OS run in different modes.
 The components to consider when selecting virtualization hardware include the CPU, memory, and network I/O devices.
 These are all critical for workload consolidation. The issues with the CPU pertain to either its clock speed or the number of cores it holds.
 Hardware virtualization allows several OSs to run on a single machine. This is made possible by specific software called the Virtual Machine Monitor/Manager (VMM).
 In hardware virtualization there are two entities: the host machine and the guest machine.
 The software that creates a VM on host hardware is called hypervisor or VMM.
 Modern OSs and processors permit multiple processes to run simultaneously.
 If there is no protection mechanism in a processor, all instructions from different processes will access the hardware directly and can crash the system.
 Therefore, all processors have at least two modes, user mode and supervisor mode, to ensure controlled access to critical hardware.
 Instructions running in supervisor mode are called privileged instructions. Other instructions
are unprivileged instructions.
 In a virtualized environment it is more difficult to make OSs and applications run correctly
because there are more layers in the machine stack.
 The following figure shows hardware support for virtualization in the Intel x86 processor.



 For processor virtualization, Intel offers the VT-x or VT-i technique. VT-x adds a privileged
mode and some instructions to processors. This enhancement traps all sensitive instructions
in the VMM automatically.
 For memory virtualization, Intel offers the Extended Page Table (EPT), which translates virtual addresses to the machine’s physical addresses to improve performance.
 For I/O virtualization, Intel implements VT-d and VT-c.

2.3.2 CPU Virtualization

 A VM is a duplicate of an existing computer system in which the majority of the VM instructions are executed on the host processor in native mode. Thus, unprivileged instructions of VMs run directly on the host machine for higher efficiency.
 The critical instructions are divided into three categories:
1. Privileged instructions
2. Control-sensitive instructions
3. Behaviour-sensitive instructions
 Privileged instructions execute in a privileged mode and will be trapped if executed outside this mode.
 Control-sensitive instructions attempt to change the configuration of resources used.
 Behaviour-sensitive instructions have different behaviours depending on the configuration
of resources, including the load and store operations over the virtual memory.
 A CPU architecture is virtualizable if it supports the ability to run the VM’s privileged and unprivileged instructions in the CPU’s user mode while the VMM runs in supervisor mode, so that sensitive instructions trap to the VMM (see the sketch below).
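
The trap-and-emulate idea behind this requirement can be illustrated with a small sketch. The following Python fragment is purely illustrative (the instruction names and classes are invented, not real hypervisor code): unprivileged instructions run directly, while privileged instructions issued by the guest in user mode trap to the VMM, which emulates them.

Sketch (Python):

# Illustrative sketch of trap-and-emulate CPU virtualization (not real hypervisor code).
# Unprivileged instructions run "directly"; privileged ones trap to the VMM for emulation.

PRIVILEGED = {"HLT", "LGDT", "OUT"}      # hypothetical set of privileged instruction names

class VMM:
    def handle_trap(self, vm, instr):
        # The VMM emulates the privileged instruction on behalf of the guest.
        print(f"VMM: trapped '{instr}' from {vm}, emulating it safely")

class GuestVM:
    def __init__(self, name, vmm):
        self.name = name
        self.vmm = vmm

    def execute(self, instr):
        if instr in PRIVILEGED:
            # The guest runs in user mode, so privileged instructions cause a trap.
            self.vmm.handle_trap(self.name, instr)
        else:
            # Unprivileged instructions run natively on the host processor.
            print(f"{self.name}: executed '{instr}' directly on the CPU")

vmm = VMM()
vm = GuestVM("VM1", vmm)
for instr in ["ADD", "MOV", "HLT", "LOAD", "OUT"]:
    vm.execute(instr)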

2.3.2.1 Hardware Assisted CPU virtualization

 This technique attempts to simplify virtualization because full or para-virtualization is complicated.
 Intel and AMD add an additional privilege level, often referred to as Ring -1, to x86 processors.
 Therefore the OS can still run at Ring 0 and the hypervisor can run at Ring -1.
 All the privileged and sensitive instructions are trapped in the hypervisor automatically.
 So this technique removes the difficulty of implementing the binary translation used in full virtualization, and it also allows OSs to run in VMs without modification.
 The following figure shows Intel Hardware-assisted CPU virtualization



 Intel’s VT-x technology is an example of hardware-assisted virtualization. VT-x is one of the two versions of Intel’s virtualization technology for x86 processors.
 Intel calls the privilege level of the x86 processor the VMX Root Mode.
 In order to control the start and stop of a VM and to allocate a memory page to maintain the CPU state for VMs, a set of additional instructions is added.
 Xen, VMware, and the Microsoft Virtual PC all implement their hypervisor by using this
VT-x technology.

2.3.3 Memory Virtualization:

 In a traditional execution environment, the OS maintains a mapping of virtual memory to machine memory using page tables, which is a one-stage mapping from virtual memory to machine memory.
 In a virtualized environment, however, each page table of the guest OS has a corresponding separate page table in the VMM; this VMM page table is called the shadow page table.
 All modern x86 CPUs include a memory management unit (MMU) and a translation lookaside buffer (TLB) to optimize virtual memory performance. The MMU handles virtual-to-physical translations as defined by the OS.
 However, in a virtual execution environment, memory virtualization involves sharing the physical system memory (RAM) and dynamically allocating it to the physical memory of the VMs.
 This means that a two-stage mapping process should be maintained by the guest OS and the VMM respectively, i.e., virtual memory to physical memory, and physical memory to machine memory.
 VMware uses a shadow page table to perform this two-stage mapping process (see the sketch after this list).
 Processors use the TLB to map virtual memory directly to machine memory, avoiding the two levels of translation on every access.
 The following figure shows the two-level mapping procedure.

 The guest OS continues to control the mapping of virtual addresses to the physical memory addresses of the VMs. However, the guest OS cannot directly access the actual machine memory.
 The VMM is responsible for mapping the guest physical memory to the actual machine
memory.
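
As a rough illustration of the two-stage mapping and the shadow page table described above, the following Python sketch (with hypothetical addresses, not a real MMU or VMM implementation) composes the guest’s page table with the VMM’s page table so that a single lookup yields the machine address.

Sketch (Python):

# Illustrative sketch of two-stage memory mapping (not a real MMU/VMM implementation).
# Guest virtual -> guest "physical" is controlled by the guest OS;
# guest "physical" -> machine memory is controlled by the VMM.
# A shadow page table composes the two mappings so translation needs only one step.

guest_page_table = {0x1000: 0x2000, 0x3000: 0x4000}   # guest virtual -> guest physical
vmm_page_table   = {0x2000: 0x9000, 0x4000: 0xA000}   # guest physical -> machine memory

# The VMM builds the shadow page table: guest virtual -> machine memory.
shadow_page_table = {gv: vmm_page_table[gp] for gv, gp in guest_page_table.items()}

def translate(guest_virtual):
    # One lookup in the shadow table replaces the two-stage walk on every access.
    return shadow_page_table[guest_virtual]

print(hex(translate(0x1000)))   # 0x9000
print(hex(translate(0x3000)))   # 0xa000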

2.3.4 I/O Virtualization:

 With I/O virtualization, a single hardware device can be shared by multiple VMs that run concurrently.
 I/O virtualization involves managing the routing of I/O requests between the virtual devices and the shared physical hardware.



 There are three ways to implement I/O virtualization: full device emulation, para-virtualization, and direct I/O.
 Full device emulation is the first approach to I/O virtualization. Generally, this approach emulates well-known, real-world devices.
 All the functions of a device or bus infrastructure, such as device enumeration,
identification, interrupts and DMA, are replicated in software. This software is located in
the VMM and acts as a virtual device.
 The para-virtualization method of I/O virtualization is used in Xen. It is also known as the split driver model, consisting of a frontend driver and a backend driver (see the sketch after this list).
 The frontend driver manages the I/O requests of the guest OSs running in Domain U, and the backend driver, running in Domain 0, is responsible for managing the real I/O devices and multiplexing the I/O data of different VMs.
 Direct I/O virtualization allows the VM to access devices directly. It can achieve close to
native performance without high CPU costs.
 The following figure shows the device emulation for I/O virtualization.
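
As a rough sketch of the split-driver (frontend/backend) model mentioned above, the following Python fragment is illustrative only (the class and method names are invented): frontend drivers in the guests ship I/O requests to a backend in Domain 0, which multiplexes them onto the real device.

Sketch (Python):

# Illustrative sketch of Xen's split-driver (frontend/backend) I/O model (simplified).
# The frontend driver in Domain U forwards guest I/O requests to the backend
# driver in Domain 0, which multiplexes them onto the real device.

class BackendDriver:              # runs in Domain 0
    def __init__(self):
        self.queue = []

    def submit(self, vm_name, request):
        self.queue.append((vm_name, request))

    def process(self):
        # Multiplex I/O from all VMs onto the single physical device.
        for vm_name, request in self.queue:
            print(f"Domain 0 backend: issuing '{request}' for {vm_name} to the real device")
        self.queue.clear()

class FrontendDriver:             # runs in each Domain U guest
    def __init__(self, vm_name, backend):
        self.vm_name = vm_name
        self.backend = backend

    def io_request(self, request):
        # The guest sees a virtual device; the request is shipped to Domain 0.
        self.backend.submit(self.vm_name, request)

backend = BackendDriver()
for name in ("VM1", "VM2"):
    FrontendDriver(name, backend).io_request("read block 42")
backend.process()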

2.3.5 Virtualization in Multi-core Processors:

 Virtualizing a multi-core processor is relatively more complicated than virtualizing a uni-core processor.
 Multi-core processors are claimed to have higher performance by integrating multiple processor
cores in a single chip.
 However, multi-core virtualization has raised some new challenges for computer architects, compiler constructors, system designers, and application programmers.
 There are mainly two difficulties: application programs must be parallelized to use all cores fully, and software must explicitly assign tasks to the cores, which is a very complex problem.

Physical Vs Virtual Processor cores:

 A physical processor core is a physical unit of the CPU, while a virtual processor core, also called a vCPU or virtual processor, is the processing unit that is assigned to a VM.
 The multi-core virtualization method allows hardware designers to obtain an abstraction of the low-level details of the processor cores. This alleviates the burden and inefficiency of managing hardware resources in software. It is illustrated in the following figure.



 This method exposes four vCPUs to the software even though only three cores are actually present (a simple mapping is sketched below).
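
A minimal sketch of this vCPU-to-core mapping, assuming a simple round-robin assignment (a real scheduler time-multiplexes vCPUs on cores far more carefully):

Sketch (Python):

# Illustrative sketch: exposing 4 vCPUs to guest software on only 3 physical cores.

physical_cores = ["core0", "core1", "core2"]
vcpus = ["vCPU0", "vCPU1", "vCPU2", "vCPU3"]

# Round-robin assignment: vCPU3 ends up sharing core0 with vCPU0.
mapping = {vcpu: physical_cores[i % len(physical_cores)] for i, vcpu in enumerate(vcpus)}
for vcpu, core in mapping.items():
    print(f"{vcpu} -> {core}")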

Virtual Hierarchy:

 Instead of supporting time-sharing jobs on one or a few cores, we can use the cores in a space-sharing manner, where single-threaded or multithreaded jobs are assigned to separate groups of cores for long time intervals.
 Virtual hierarchies can be created to overlay a coherence and caching hierarchy onto a physical
processor.
 Unlike a fixed physical hierarchy, a virtual hierarchy is a cache hierarchy that can adapt to fit the
workload or mix of workloads.
 The first level of the hierarchy locates data blocks close to the cores needing them for faster access, establishes a shared-cache domain, and establishes a point of coherence for faster communication.

2.4 Virtual clusters and resource management


Cluster: A cluster is a group of servers and other resources that act like a single system and enable high availability and, in some cases, load balancing and parallel processing.

2.4.1 Physical Vs Virtual clusters:

 A cluster is a group of computers put together. A physical cluster is a collection of servers (physical machines) interconnected by a physical network such as a LAN.
 Virtual clusters are built with VMs installed at distributed servers from one or more physical
clusters. So VMs in a virtual cluster are interconnected logically by a virtual network across
several physical clusters.
 Each virtual cluster is formed with physical machines or VMs hosted by multiple physical
clusters.
 In a virtual cluster, virtual machines are grouped and configured for high-performance computing or parallel computing.
 When a virtual cluster is created, different cluster features can be used, such as failover, load balancing, and live migration of virtual machines across physical hosts.

Virtual cluster properties:

The provisioning of VMs to a virtual cluster is done dynamically to have the following interesting
properties:

 The virtual cluster nodes can be either physical or virtual machines. Multiple VMs running with different OSs can be deployed on the same physical node.
 A VM runs with a guest OS, which is often different from the host OS that manages the resources in the physical machine where the VM is implemented.



 The purpose of using VMs is to consolidate multiple functionalities on the same server. This
will greatly enhance server utilization and application flexibility.
 VMs can be replicated on multiple servers for the purpose of promoting distributed parallelism, fault tolerance, and disaster recovery.
 The size of a virtual cluster can grow or shrink dynamically, similar to the way an overlay network varies in size in a P2P network.
 The failure of any physical node may disable some VMs installed on the failing nodes. But the
failure of VMs will not pull down the host system.

2.4.1.1 Fast Deployment and Effective Scheduling:

 The system should have the capability of fast deployment.


 Deployment here means two things:
1) to construct and distribute software stacks to physical nodes inside the clusters as fast as possible, and
2) to quickly switch the runtime environment from one user’s virtual cluster to another user’s virtual cluster.
 If one user finishes their work, the virtual cluster should shut down or suspend quickly to free resources to run other users’ VMs. The advantage of this is load balancing of applications within a virtual cluster.
 There are four steps to deploy a group of VMs onto a target cluster (a toy sketch follows the list):
1. Preparing the disk image
2. Configuring the VMs
3. Choosing the destination nodes and
4. Executing the deployment command on every host
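
A toy sketch of these four steps, with invented function names (not a real cluster-management API):

Sketch (Python):

# Illustrative sketch of the four VM deployment steps listed above
# (hypothetical function names; not a real cluster-management API).

def prepare_disk_image(template):
    return f"{template}.img"

def configure_vm(name, image, cpus, mem_mb):
    return {"name": name, "image": image, "cpus": cpus, "mem_mb": mem_mb}

def choose_destination(vm, hosts):
    # Simplest policy: pick the host with the fewest VMs already placed on it.
    return min(hosts, key=lambda h: len(hosts[h]))

def deploy(vm, host, hosts):
    hosts[host].append(vm["name"])
    print(f"deploying {vm['name']} on {host}")

hosts = {"host1": [], "host2": []}
for i in range(3):
    image = prepare_disk_image("ubuntu-template")             # step 1: prepare the disk image
    vm = configure_vm(f"vm{i}", image, cpus=2, mem_mb=2048)   # step 2: configure the VM
    host = choose_destination(vm, hosts)                      # step 3: choose the destination node
    deploy(vm, host, hosts)                                   # step 4: execute the deployment command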

2.4.1.2 High performance Virtual Storage:

 It is also important to manage the disk space occupied by software packages.


 Some storage architecture designs can be applied to reduce duplicated blocks in a distributed file system of virtual clusters.
 Hash values are used to compare the contents of the data blocks (a sketch follows below).
 Every VM is configured with a name, a disk image, network settings, and allocated CPU and memory. However, one needs to record each VM’s configuration in a file.
 This method is inefficient when managing a large group of VMs. VMs with the same configurations could use pre-edited profiles to simplify the process, i.e., the system configures the VMs according to a chosen profile.
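
The hash-based comparison of data blocks can be sketched as follows; the block contents and the use of SHA-256 are illustrative assumptions, not a description of any particular storage system. Identical blocks from different VM images hash to the same value and are stored only once.

Sketch (Python):

# Illustrative sketch of using hash values to detect duplicated blocks across
# VM disk images, so shared blocks are stored only once (simplified deduplication).
import hashlib

store = {}   # hash -> block contents (the single stored copy)

def add_block(block: bytes):
    digest = hashlib.sha256(block).hexdigest()
    if digest not in store:
        store[digest] = block      # new content: store it
    return digest                  # identical content reuses the same entry

image_a = [b"boot loader", b"kernel 5.x", b"app data A"]
image_b = [b"boot loader", b"kernel 5.x", b"app data B"]
refs_a = [add_block(b) for b in image_a]
refs_b = [add_block(b) for b in image_b]
print(f"blocks written: {len(image_a) + len(image_b)}, blocks stored: {len(store)}")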



There are three critical design issues of virtual clusters: live migration of VMs; migration of memory, files, and network resources; and dynamic deployment of VMs.

2.4.2 Live VM Migration Steps and Performance Effects:

 Live migration refers to the process of moving a running virtual machine or application between different physical machines without disconnecting the client or the application.
 The memory, storage, and network connectivity of the virtual machine are transferred from the original machine to the destination.
 The live migration of VMs allows the workload of one node to be transferred to another node. However, it does not guarantee that VMs can randomly migrate among themselves.

Live migration allows us to:

 Automatically optimize virtual machines within resource pools.


 Perform hardware maintenance without scheduling downtime or disrupting business operations.
 When a VM fails, its role could be replaced by another VM on a different node, as long as they
both run with the same guest OS.
 VMs can be live-migrated from one physical machine to another; in case of failure, one VM can be replaced by another VM.
 The potential drawback is that a VM must stop playing its role if its residing host node fails. However, this problem can be mitigated with VM live migration (a simplified pre-copy sketch is given below).
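
The migration steps themselves are commonly organized as a pre-copy procedure: memory pages are copied iteratively while the VM keeps running, and the VM is paused only briefly to transfer the last few dirty pages. The following Python sketch is a simplified simulation of that idea (the page-dirtying behaviour is random and the stop condition is made up); it is not the algorithm of any specific hypervisor.

Sketch (Python):

# Illustrative sketch of pre-copy live migration (simplified; real hypervisors
# track dirty pages in hardware and apply more elaborate stop conditions).
import random

def migrate(pages, max_rounds=5, stop_threshold=2):
    dirty = set(pages)                      # initially, every page must be copied
    for round_no in range(1, max_rounds + 1):
        print(f"round {round_no}: copying {len(dirty)} pages while the VM keeps running")
        # While we copy, the running guest dirties some pages again (simulated here).
        dirty = set(random.sample(pages, k=max(1, len(dirty) // 3)))
        if len(dirty) <= stop_threshold:
            break
    # Brief stop-and-copy phase: the VM pauses only for the last few dirty pages.
    print(f"stop-and-copy: VM paused, transferring the final {len(dirty)} pages")
    print("resume VM on the destination host")

migrate(list(range(100)))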

2.4.3 Migration of Memory, Files and Network Resources:

 This is also one of the important aspects of VM migration.


 Moving the memory instance of a VM from one physical host to another can be approached in any number of ways.
 Memory migration can be in a range of hundreds of megabytes to a few gigabytes in a typical
system today, and it needs to be done in an efficient manner.
 The Internet Suspend-Resume (ISR) technique exploits temporal locality, as memory states are likely to have considerable overlap in the suspended and resumed instances of a VM.
 To exploit temporal locality, each file in the file system is represented as a tree of small sub-files. A copy of this tree exists in both the suspended and the resumed VM instances (see the sketch below).
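
A small sketch of the ISR idea under the assumptions above (invented sub-file names and contents): only the sub-files whose hashes differ between the suspended and resumed copies need to be transferred, so the overlap is exploited.

Sketch (Python):

# Illustrative sketch of the ISR idea: a file is represented as many small
# sub-files; on resume, only the sub-files that changed since suspend are copied.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

suspended = {"part0": b"config", "part1": b"library", "part2": b"user data v1"}
resumed   = {"part0": b"config", "part1": b"library", "part2": b"user data v2"}

to_copy = [name for name in resumed
           if digest(resumed[name]) != digest(suspended[name])]
print(f"sub-files to transfer: {to_copy}")   # only part2, thanks to the overlap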

File System Migration:

 To support VM migration, a system must provide each VM with a consistent, location-independent view of the file system that is available on all hosts.
 A simple way to achieve this is to provide each VM with its own virtual disk. However, given the current trend of high-capacity disks, migrating the contents of an entire disk over a network is not a viable solution.
 So another way is to have a global file system across all machines where a VM could be
located. This can remove the need to copy files from one machine to another because all files
are network accessible.

Network Migration:

 It involves moving data and programs from one network to another as an upgrade or add-on to
a network system.



 The process of migration makes it possible to set up migrated files on a new network or to
blend two independent networks together.
 The need for network migration may result from security issues, corporate restructuring,
increased storage needs, and many others.
 A migrating VM should maintain all open network connections without relying on forwarding
mechanisms on the original host.
 Each VM must be assigned a virtual IP address that is known to other entities. This address should be distinct from the IP address of the host machine where the VM is currently located.
 Each VM can also have its own distinct virtual MAC address. The VMM maintains a mapping of virtual IP and MAC addresses to their corresponding VMs (see the sketch below).
 So the migrating VM carries all its protocol states and its IP address with it.
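
A minimal sketch of this bookkeeping (the addresses and host names are made up): the VMM’s table maps each VM to its virtual IP and MAC, and migration only changes which host currently runs the VM, so the addresses stay valid and open connections need no forwarding.

Sketch (Python):

# Illustrative sketch of the VMM's virtual IP/MAC bookkeeping during migration:
# the VM keeps its virtual addresses; only the hosting record changes.

vm_network = {
    "vm1": {"virtual_ip": "10.0.0.11", "virtual_mac": "52:54:00:aa:bb:01", "host": "hostA"},
    "vm2": {"virtual_ip": "10.0.0.12", "virtual_mac": "52:54:00:aa:bb:02", "host": "hostA"},
}

def migrate_vm(name, new_host):
    # Protocol state and the virtual IP/MAC travel with the VM; peers keep
    # using the same addresses, so no forwarding on the original host is needed.
    vm_network[name]["host"] = new_host
    print(f"{name} now on {new_host}, still reachable at {vm_network[name]['virtual_ip']}")

migrate_vm("vm1", "hostB")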

2.4.4 Dynamic deployment of Virtual Clusters:

 Lightweight Directory Access Protocol (LDAP) is a set of open protocols used to access and
modify centrally stored information over a network.
 Dynamic Host Configuration Protocol (DHCP) is a protocol that provides quick, automatic,
and central management for the distribution of IP addresses within a network.
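
In the context of dynamically deployed virtual clusters, DHCP is what gives newly created VMs their IP addresses automatically, while LDAP provides centrally stored account and configuration information. The following toy allocator is only a sketch of the address-distribution idea (it is not the real DHCP protocol, and the address range is an assumption):

Sketch (Python):

# Illustrative sketch of DHCP-style central IP management for newly deployed
# cluster VMs (a toy allocator, not the real DHCP protocol).

class SimpleDhcpPool:
    def __init__(self, prefix="192.168.10.", first=100, last=110):
        self.free = [f"{prefix}{i}" for i in range(first, last + 1)]
        self.leases = {}

    def request(self, vm_name):
        ip = self.free.pop(0)          # hand out the next free address
        self.leases[vm_name] = ip
        return ip

    def release(self, vm_name):
        self.free.append(self.leases.pop(vm_name))

pool = SimpleDhcpPool()
print(pool.request("vm1"))   # 192.168.10.100
print(pool.request("vm2"))   # 192.168.10.101
pool.release("vm1")          # the address returns to the pool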

2.5 Virtualization for Data Centre Automation:


 Data centre automation means that huge volumes of hardware, software, and database resources can be allocated dynamically to millions of Internet users simultaneously, with guaranteed QoS and cost effectiveness.
 Companies such as Google, Yahoo, Amazon, Microsoft, HP, Apple, and IBM have invested billions of dollars in data centre construction and automation.
 Virtualization of a data centre emphasizes high availability (HA), backup services, workload balancing, and further increases in the client base.

2.5.1 Server Consolidation in Data Centres:

 In data centres, a large number of heterogeneous workloads can run on servers at various times.
These workloads can be roughly divided into two categories:
1. Chatty workloads and
2. Non-interactive workloads



1. Chatty workload: It may burst at some point and return to a silent state at some other point. A web video service is an example of this, whereby a lot of people use it at night and few people use it during the day. Another example is the workload on a university results server.

2. Non-interactive workload: These workloads do not require people's effort to make progress after they are submitted. High-performance computing is a typical example. At various stages, the resource requirements of these workloads are dramatically different. To ensure a workload is always satisfied at all demand levels, it is statically allocated enough resources so that peak demand is satisfied (a small worked example follows).
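
A small worked example with made-up numbers illustrates why static allocation for peak demand leaves most of the capacity idle when the workload is bursty:

Sketch (Python):

# Illustrative arithmetic (made-up numbers): statically provisioning for peak
# demand wastes capacity whenever the actual load is below the peak.

peak_demand = 80        # resource units needed at the busiest moment
hourly_demand = [10, 15, 20, 25, 80, 30, 20, 15]   # hypothetical load over a day

average = sum(hourly_demand) / len(hourly_demand)
utilization = average / peak_demand
print(f"average demand: {average:.1f} units, utilization: {utilization:.0%}")
# => utilization around 34%; the rest of the statically allocated capacity sits idle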

Need & Advantages of Server Consolidation:

 It is common that most servers in data centres are underutilized. A large amount of hardware,
space, power and management cost of these servers is wasted.
 Server consolidation is an approach to improving the low utility ratio of hardware resources by reducing the number of physical servers.
 Among several server consolidation techniques, such as centralized and physical consolidation, virtualization-based server consolidation is the most powerful. Data centres need to optimize their resource management.
 Consolidation enhances hardware utilization. Many underutilized servers are consolidated into fewer servers to enhance resource utilization, which can be viewed as a packing problem (see the sketch after this list). Consolidation also facilitates backup services and disaster recovery.
 This approach enables more agile provisioning and deployment of resources. In a virtual
environment, the images of guest OSs and their applications are readily cloned and reused.
 The total cost of ownership is reduced. In this sense, server virtualization leads to deferred purchases of new servers, a smaller data centre footprint, lower maintenance costs, and lower power, cooling, and cabling requirements.
 This approach improves availability and business continuity. The crash of a guest OS has no effect on the host OS or any other guest OS. It also becomes easier to transfer a VM from one server to another, because virtual servers are unaware of the underlying hardware.
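
Consolidation can be viewed as a bin-packing problem: place VM loads onto as few physical servers as possible. The following sketch uses a first-fit-decreasing heuristic with made-up load figures; real consolidation tools also consider memory, I/O, and affinity constraints.

Sketch (Python):

# Illustrative sketch of consolidation as first-fit-decreasing bin packing:
# place VM loads onto as few physical servers as possible (simplified model).

server_capacity = 100                       # % CPU available per physical server
vm_loads = [35, 20, 50, 10, 25, 40, 15]     # average CPU demand of each VM (made up)

servers = []                                # each entry holds the remaining capacity
for load in sorted(vm_loads, reverse=True): # first-fit decreasing heuristic
    for i, free in enumerate(servers):
        if load <= free:
            servers[i] -= load              # fits on an existing server
            break
    else:
        servers.append(server_capacity - load)   # need one more physical server

print(f"{len(vm_loads)} VMs consolidated onto {len(servers)} servers")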

2.5.2 Virtual Storage Management

 In system virtualization, virtual storage includes the storage managed by VMMs and guest OSs. Generally, the data stored in this environment can be classified into two categories:
1. VM Images and
2. Application Data
 The VM images are specific to the virtual environment.
 The application data includes all the other data, which is the same as the data in a traditional OS environment.
 The most important aspects of system virtualization are encapsulation and isolation.
 Traditional OSs and the applications running on them can be encapsulated in VMs. Only one OS runs in a VM, while many applications run in that OS. System virtualization allows multiple VMs to run on a physical machine, and the VMs are completely isolated from one another.
 To achieve encapsulation and isolation, both the system software and the hardware platform, such as the CPU and chipset, are rapidly updated. However, storage is lagging behind. Storage systems have become the main bottleneck of VM deployment.

