Unit 2 - Final - Removed
Shortcomings of para-virtualization:
Para-virtualization requires the guest OS to be modified in order to interact with the
para-virtualization interfaces.
This modification introduces significant support and maintainability issues in production environments.
2.3 Virtualization of CPU, Memory and I/O devices.
To support virtualization, processors can employ a special running mode and instructions,
known as hardware-assisted virtualization, so that the VMM and the guest OS run in different modes.
The components to consider when selecting virtualization hardware include the CPU, memory
and network I/O devices.
These are all critical for workload consolidation. The issues with the CPU pertain to
either its clock speed or the number of cores it holds.
Hardware virtualization allows several OSs to run on a single machine. This is achieved
through specific software called a "Virtual Machine Monitor/Manager (VMM)".
In hardware virtualization there are two machines involved: the host machine and the guest machine.
The software that creates a VM on the host hardware is called the hypervisor or VMM.
Modern OSs and processors permit multiple processes to run simultaneously.
If a processor had no protection mechanism, all instructions from different processes
would access the hardware directly and could crash the system.
Therefore, all processors have at least two modes, user mode and supervisor mode, to ensure
controlled access to critical hardware.
Instructions running in supervisor mode are called privileged instructions. Other instructions
are unprivileged instructions.
In a virtualized environment it is more difficult to make OSs and applications run correctly
because there are more layers in the machine stack.
The following figure shows hardware support for virtualization in the Intel x86 processor.
The guest OS continues to control the mapping of virtual addresses to the physical memory
addresses of its VM. However, the guest OS cannot directly access the actual machine memory.
The VMM is responsible for mapping the guest physical memory to the actual machine
memory.
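This two-level mapping (guest virtual page → guest physical page → machine page) can be sketched in Python. The page numbers below are made up for illustration:

```python
# Two-stage address translation in memory virtualization:
# the guest OS maps virtual pages to *guest physical* pages,
# and the VMM maps guest physical pages to actual machine pages.

guest_page_table = {0: 5, 1: 9}     # guest virtual page -> guest physical page
vmm_page_table = {5: 42, 9: 17}     # guest physical page -> machine page

def translate(guest_virtual_page):
    guest_physical = guest_page_table[guest_virtual_page]  # guest OS's mapping
    machine_page = vmm_page_table[guest_physical]          # VMM's mapping
    return machine_page

machine = translate(0)   # virtual page 0 ends up on machine page 42
```

Hardware support such as nested/extended page tables performs exactly this composition in hardware, so the VMM does not have to intercept every guest page-table update.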
Through I/O virtualization, a single hardware device can be shared by multiple VMs that run
concurrently.
I/O virtualization involves managing the routing of I/O requests between the virtual devices
and the shared physical hardware.
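The routing idea can be sketched in Python: each VM writes to its own virtual device, and the requests are multiplexed onto one shared physical device, tagged so responses return to the right VM. The device and VM names are illustrative.

```python
# Sketch of I/O virtualization: many virtual devices, one physical device.
# The VMM-side routing tags each request with the owning VM's identity.

physical_device_log = []   # stands in for the single shared physical device

def physical_device_write(vm_id, data):
    physical_device_log.append((vm_id, data))  # one device serves all VMs
    return f"ack for {vm_id}"

class VirtualDevice:
    """The device a guest OS sees; requests are routed to shared hardware."""
    def __init__(self, vm_id):
        self.vm_id = vm_id
    def write(self, data):
        # routing step: virtual request -> shared physical device
        return physical_device_write(self.vm_id, data)

ack1 = VirtualDevice("vm1").write("packet-a")
ack2 = VirtualDevice("vm2").write("packet-b")
```

Both VMs believe they own a device, while a single physical device actually serviced both requests in order.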
A physical processor core is a physical unit of the CPU, while a virtual processor core, also
called a VCPU or virtual processor, is the unit of processing capacity that is assigned to a VM.
The multi-core virtualization method allows hardware designers to provide an abstraction of the low-level
details of the processor cores. This virtualization of multi-core processors alleviates the
burden and inefficiency of managing hardware resources in software. It is illustrated in the
following figure.
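The VCPU-to-core abstraction can be sketched in Python: each VM is given some number of VCPUs, and a scheduler maps them onto physical cores (here, simple round-robin) without the guests seeing the core layout. Core names and VCPU counts are made up for illustration.

```python
# Sketch of VCPU assignment: VMs see only virtual processors; a
# round-robin policy in the VMM maps them to physical cores.
from itertools import cycle

physical_cores = ["core0", "core1", "core2", "core3"]

def assign_vcpus(vm_vcpu_counts):
    """Map every VM's VCPUs onto physical cores round-robin."""
    cores = cycle(physical_cores)
    assignment = {}
    for vm, count in vm_vcpu_counts.items():
        assignment[vm] = [next(cores) for _ in range(count)]
    return assignment

mapping = assign_vcpus({"vm1": 2, "vm2": 3})
```

Real schedulers use far more sophisticated policies (cache affinity, load balancing), but the abstraction boundary is the same: guests schedule onto VCPUs, the VMM schedules VCPUs onto cores.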
Virtual Hierarchy:
Instead of supporting time-sharing jobs on one or a few cores, we can use the cores in a space-sharing
manner, where single- or multi-threaded jobs are assigned to separate groups of cores for long time
intervals.
Virtual hierarchies can be created to overlay a coherence and caching hierarchy onto a physical
processor.
Unlike a fixed physical hierarchy, a virtual hierarchy is a cache hierarchy that can adapt to fit the
workload or mix of workloads.
The first level of the hierarchy locates data blocks close to the cores that need them for faster access,
establishes a shared-cache domain, and establishes a point of coherence for faster
communication.
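Space-sharing, as opposed to time-sharing, can be sketched in Python: each job receives its own disjoint group of cores for a long interval rather than all jobs time-slicing on the same cores. The job names and group sizes below are illustrative.

```python
# Sketch of space-sharing: partition the cores into disjoint groups,
# one group per job, instead of time-slicing all jobs on shared cores.

def space_share(num_cores, jobs):
    """jobs: list of (job_name, cores_needed); returns job -> core list."""
    groups, next_core = {}, 0
    for job, cores_needed in jobs:
        if next_core + cores_needed > num_cores:
            raise RuntimeError("not enough free cores for " + job)
        groups[job] = list(range(next_core, next_core + cores_needed))
        next_core += cores_needed   # groups never overlap
    return groups

groups = space_share(8, [("render", 4), ("db", 2), ("web", 2)])
```

Because the groups are disjoint, each job keeps its caches warm and avoids interference, which is the property the virtual hierarchy above is designed to exploit.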
The provisioning of VMs to a virtual cluster is done dynamically, which gives the following interesting
properties:
The virtual cluster nodes can be either physical or virtual machines. Multiple VMs running
different OSs can be deployed on the same physical node.
A VM runs a guest OS, which is often different from the host OS that manages the resources of the
physical machine where the VM is implemented.
Live migration refers to the process of moving a running virtual machine or application between
different physical machines without disconnecting the client or application.
The memory, storage, and network connectivity of the virtual machine are transferred from the
source machine to the destination.
Live migration of VMs allows the workloads of one node to be transferred to another node. However,
it does not guarantee that VMs can migrate among themselves at random.
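One common scheme for transferring the memory of a running VM is pre-copy migration, which can be sketched in Python. The page contents and write pattern below are made up for illustration, and the `memory` dict stands in for the source VM's RAM (it is mutated to model the guest writing during migration).

```python
# Simplified pre-copy live migration: copy all pages while the VM keeps
# running, then re-copy pages the guest dirtied in the meantime; the VM
# only pauses briefly at the end for the final dirty set.

source = {0: "a", 1: "b", 2: "c"}   # source VM memory: page -> contents

def live_migrate(memory, rounds_of_writes):
    """rounds_of_writes: per round, the pages the guest writes (assumed known)."""
    dest = dict(memory)                      # round 1: full copy, VM running
    for writes in rounds_of_writes:
        for page, value in writes.items():   # guest dirties pages this round
            memory[page] = value
        for page in writes:                  # re-copy only the dirty pages
            dest[page] = memory[page]
    # a real system would now pause the VM, copy the last dirty pages,
    # and switch execution to the destination
    return dest

dest = live_migrate(source, [{1: "b2"}, {2: "c2"}])
```

The re-copy rounds shrink the amount left to transfer during the pause, which is what keeps the client-visible downtime small.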
Network Migration:
It involves moving data and programs from one network to another as an upgrade or add-on to
a network system.
Lightweight Directory Access Protocol (LDAP) is a set of open protocols used to access and
modify centrally stored information over a network.
Dynamic Host Configuration Protocol (DHCP) is a protocol that provides quick, automatic,
and central management for the distribution of IP addresses within a network.
In data centres, a large number of heterogeneous workloads can run on servers at various times.
These workloads can be roughly divided into two categories:
1. Chatty workloads and
2. Non-interactive workloads
1. Chatty workloads: These may burst at some point and return to a silent state at another; a web
video service, heavily used at some hours and idle at others, is a typical example.
2. Non-interactive workloads: These workloads do not require human effort to make progress after
they are submitted. High-performance computing is a typical example. At various stages, the
resource requirements of these workloads differ dramatically. To keep a workload satisfied at all
demand levels, it is statically allocated enough resources that even its peak demand is satisfied.
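The waste caused by static, peak-sized allocation can be shown with a back-of-envelope calculation in Python. The demand figures are made up for illustration:

```python
# Cost of provisioning for peak demand: everything between the peak
# and the actual demand at each moment is wasted capacity.

demand = [10, 20, 80, 30, 15]   # resource units needed over five periods

peak = max(demand)              # static allocation is sized for the peak
used = sum(demand)              # what the workload actually consumed
allocated = peak * len(demand)  # what was reserved over the whole run
wasted_fraction = 1 - used / allocated
```

Here over 60% of the reserved capacity sits idle, which is exactly the underutilization that the next paragraph, and server consolidation generally, sets out to recover.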
It is common that most servers in data centres are underutilized. A large amount of hardware,
space, power and management cost of these servers is wasted.
Server consolidation is an approach to improving the low utilization of hardware resources by
reducing the number of physical servers.
Among server consolidation techniques, such as centralized and physical consolidation,
virtualization-based server consolidation is the most powerful. Data centres need to optimize their
resource management.
Consolidation enhances hardware utilization. Many underutilized servers are consolidated into
fewer servers to enhance resource utilization. Consolidation also facilitates backup services
and disaster recovery.
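Virtualization-based consolidation is essentially a bin-packing problem: place VMs (with known utilization) onto as few physical servers as possible. A sketch using the first-fit-decreasing heuristic follows; the VM loads (in percent of one server) are made up for illustration.

```python
# Sketch of server consolidation as bin packing, using the
# first-fit-decreasing heuristic: place the largest VMs first,
# each on the first server with enough spare capacity.

def consolidate(vm_loads, server_capacity):
    """vm_loads: vm name -> load; returns (vm -> server index, server count)."""
    servers = []        # remaining capacity of each powered-on server
    placement = {}
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(servers):
            if load <= free:                 # first server it fits on
                servers[i] -= load
                placement[vm] = i
                break
        else:                                # no fit: power on a new server
            servers.append(server_capacity - load)
            placement[vm] = len(servers) - 1
    return placement, len(servers)

placement, num_servers = consolidate(
    {"vm1": 60, "vm2": 30, "vm3": 50, "vm4": 40}, server_capacity=100)
```

Four VMs that would otherwise occupy four underutilized machines fit on two well-utilized ones; real consolidation planners add headroom for demand spikes rather than packing to 100%.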
This approach enables more agile provisioning and deployment of resources. In a virtual
environment, the images of guest OSs and their applications are readily cloned and reused.
The total cost of ownership is reduced. In this sense, server virtualization defers
purchases of new servers, leading to a smaller data centre footprint, lower maintenance costs, and lower
power, cooling and cabling requirements.
This approach improves availability and business continuity. The crash of a guest OS has no
effect on the host OS or any other guest OS. It also becomes easier to transfer a VM from one
server to another, because virtual servers are unaware of the underlying hardware.
In system virtualization, virtual storage includes the storage managed by VMMs and guest
OSs. Generally the data stored in this environment can be classified into two categories,
1. VM Images and
2. Application Data
The VM images are special to the virtual environment, while the application data includes all other
data, which is the same as the data in a traditional OS environment.
The most important aspects of system virtualization are encapsulation and isolation.
Traditional OSs and the applications running on them can be encapsulated in VMs. Only one OS
runs in each VM, while many applications run in that OS. System virtualization allows
multiple VMs to run on one physical machine, and the VMs are completely isolated from one another.
To achieve encapsulation and isolation, both the system software and the hardware platform, such as the
CPU and chipset, are rapidly updated. However, storage is lagging behind, and storage systems have
become the main bottleneck of VM deployment.