
Hyper-V

Microsoft's Latest Server Virtualization Solution

What is Hyper-V?
Virtualization is one of today's hottest IT technologies, and Windows Server 2008's new native virtualization feature, Hyper-V, is a significant new competitor with the potential to change the market; VMware ESX Server is the current market favorite. To understand what Hyper-V is, you need to understand how the architectures of the two products compare. In addition, Hyper-V introduces some important new features, and you'll want to see how Hyper-V and the older Virtual Server 2005 R2 relate to each other. Finally, to enrich your understanding of Hyper-V, I'll show you how to set it up and use it.

Hyper-V Terminology
This section summarizes key terminology specific to VM technology.
Child partition: Any partition (VM) that is created by the root partition.
Device virtualization: A mechanism that lets a hardware resource be abstracted and shared among multiple consumers.
Emulated device: A virtualized device that mimics an actual physical hardware device so that guests can use the typical drivers for that hardware device.
Enlightenment: An optimization to a guest operating system to
make it aware of VM environments and tune its behavior for VMs.
Guest: Software that is running in a partition. It can be a full-featured operating system or a small, special-purpose kernel.
The hypervisor is guest-agnostic.
Hypervisor: A layer of software that sits just above the
hardware and below one or more operating systems. Its primary
job is to provide isolated execution environments called
partitions. Each partition has its own set of hardware resources
(CPU, memory, and devices). The hypervisor controls and arbitrates access to the underlying hardware.

Logical processor: A CPU that handles one thread of execution (instruction stream). There can be one or more logical processors per core and one or more cores per processor socket. In effect, it is a physical processor.
Passthrough disk access: A representation of an
entire physical disk as a virtual disk within the guest.
The data and commands are passed through to the
physical disk (through the root partition's native
storage stack) with no intervening processing by the
virtual stack.
Root partition: A partition that is created first and
owns all the resources that the hypervisor does not
own, including most devices and system memory. It
hosts the virtualization stack and creates and manages
the child partitions.
Synthetic device: A virtualized device with no physical hardware analog; guests need a driver (virtualization service client) to use the synthetic device. The driver can use VMBus to communicate with the virtualized device software in the root partition.

Virtual machine (VM): A virtual computer that was created by software emulation and has the same characteristics as a real computer.
Virtual processor: A virtual abstraction of a
processor that is scheduled to run on a logical
processor. A VM can have one or more virtual
processors.
Virtualization service client (VSC): A software
module that a guest loads to consume a resource or
service. For I/O devices, the virtualization service
client can be a device driver that the operating
system kernel loads.
Virtualization service provider (VSP): A provider,
exposed by the virtualization stack, that provides
resources or services such as I/O to a child partition.
Virtualization stack: A collection of software
components in the root partition that work together to
support VMs. The virtualization stack works with and
sits above the hypervisor. It also provides
management capabilities.
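To make the relationships among these terms concrete, here is a purely illustrative Python sketch (all class and method names are invented for this example, not Hyper-V APIs): a root partition owns the device-backed VSP and creates child partitions, and each child's VSC forwards I/O over a VMBus-like in-memory queue instead of touching hardware.

```python
from collections import deque

class VMBus:
    """Toy stand-in for the in-memory pipeline between a guest's VSC and the root's VSP."""
    def __init__(self):
        self.queue = deque()

class VSP:
    """Virtualization service provider in the root partition (here: a fake block device)."""
    def __init__(self):
        self.blocks = {}
    def handle(self, request):
        op, lba, data = request
        if op == "write":
            self.blocks[lba] = data
            return "ok"
        return self.blocks.get(lba, b"\x00" * 512)  # unwritten blocks read as zeros

class ChildPartition:
    """A guest: its VSC sends requests over VMBus rather than owning the device."""
    def __init__(self, name, bus, vsp):
        self.name, self.bus, self.vsp = name, bus, vsp
    def vsc_write(self, lba, data):
        self.bus.queue.append(("write", lba, data))
        return self.vsp.handle(self.bus.queue.popleft())
    def vsc_read(self, lba):
        self.bus.queue.append(("read", lba, None))
        return self.vsp.handle(self.bus.queue.popleft())

class RootPartition:
    """Created first; owns the devices and creates the child partitions."""
    def __init__(self):
        self.disk_vsp = VSP()
        self.children = []
    def create_child(self, name):
        child = ChildPartition(name, VMBus(), self.disk_vsp)
        self.children.append(child)
        return child

root = RootPartition()
vm = root.create_child("guest1")
vm.vsc_write(0, b"hello")
print(vm.vsc_read(0))  # b'hello'
```

The point of the sketch is only the division of labor: the child never touches the "hardware" dictionary directly; every I/O travels through the bus to the provider that the root partition owns.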

Prerequisites for Hyper-V


Unlike Microsoft's Virtual Server 2005 R2, which runs on both 32-bit and 64-bit systems, Hyper-V requires an x64-based system that has either Intel VT or AMD-V support. In addition, the host system's CPU must have data execution protection enabled (the Intel XD bit or the AMD NX bit). Microsoft will provide Hyper-V virtualization technology with the following versions of Windows Server 2008:
Server 2008, Standard 64-bit Edition
Server 2008, Enterprise 64-bit Edition
Server 2008, Datacenter 64-bit Edition
As with Windows Server 2003 R2 Enterprise and Datacenter Editions, Server 2008 Enterprise Edition allows up to four virtual Windows instances with no additional licensing costs, and Server 2008 Datacenter Edition allows an unlimited number of virtual Windows instances with no additional licensing costs. You can use Hyper-V with either the full Server 2008 installation or with Server Core, for any of the Server 2008 editions.
In addition, Microsoft will offer a standalone version called Hyper-V Server.
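The two hardware prerequisites above correspond to well-known CPU feature flags: Intel VT is reported as vmx, AMD-V as svm, and the NX/XD data-execution-protection bit as nx. As an illustration only (this is not part of Hyper-V, and the function name is invented), here is a Python sketch that detects those flags in Linux-style /proc/cpuinfo text:

```python
def virtualization_support(cpuinfo_text: str) -> dict:
    """Scan /proc/cpuinfo-style text for the CPU flags Hyper-V's prerequisites map to."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return {
        "intel_vt": "vmx" in flags,  # Intel VT
        "amd_v": "svm" in flags,     # AMD-V
        "dep": "nx" in flags,        # AMD NX / Intel XD bit
    }

sample = "flags\t\t: fpu vme nx lm vmx"
print(virtualization_support(sample))  # {'intel_vt': True, 'amd_v': False, 'dep': True}
```

Either intel_vt or amd_v must be true, and dep must be true, for a host to satisfy Hyper-V's CPU requirements; remember that the BIOS can still have the feature disabled even when the CPU supports it.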

Windows Server Hyper-V Architecture

Designed to compete with VMware's ESX Server, Hyper-V has been built from scratch on a new microkernel architecture. The figure above shows an overview of the new Server 2008 Hyper-V architecture.
Unlike Virtual Server's hosted virtualization model, which requires installing the virtualization software on top of a host OS, Hyper-V is a virtualization layer that runs directly on the system hardware with no intervening host OS. The Hyper-V architecture consists of the bare-metal microkernel hypervisor and parent and child partitions.
All Hyper-V implementations have one parent partition.
This partition manages the Hyper-V installation. The
Windows Server Virtualization console runs from the
parent partition. In addition, the parent partition is used
to run thread-supported legacy hardware emulation
virtual machines (VMs). These older emulation-based
VMs are essentially the same as the VMs that run under
a hosted virtualization product such as Virtual Server.

Guest VMs run on Hyper-V child partitions.
Hyper-V's child partitions support two types of VM: high-performance VMBus-based VMs or hosted emulation VMs.
VMBus VMs include Windows Server 2003, Windows Vista, Server 2008, and Xen-enabled Linux. The new VMBus architecture is essentially a high-performance in-memory pipeline that connects Virtualization Service Clients (VSCs) in the guests with the host's Virtualization Service Provider (VSP).
Hosted emulation VMs support guest OSs that don't support the new VMBus architecture. These OSs include Windows NT and non-Xen-enabled Linux distributions such as SUSE Linux Enterprise Server 10.

Storage
Hyper-V supports synthetic and emulated storage devices in VMs, but the synthetic devices generally offer significantly better throughput and response times and reduced CPU overhead.
The exception is when a filter driver can be loaded that reroutes I/Os to the synthetic storage device.
Virtual hard disks (VHDs) can be backed
by three types of VHD files or raw disks.
The next slide describes the different options.

Synthetic SCSI Controller: The synthetic storage controller provides significantly better performance on storage I/Os, with reduced CPU overhead, than the emulated IDE device.
The VM integration services include the enlightened driver
for this storage device and are required for the guest
operating system to detect it.
The operating system disk must be mounted on the IDE
device for the operating system to boot correctly, but the
VM integration services load a filter driver that reroutes
IDE device I/Os to the synthetic storage device.
It's recommended to mount the data drives directly to the
synthetic SCSI controller because that configuration has
reduced CPU overhead.
Also mount log files and the operating system paging file
directly to the synthetic SCSI controller if their expected
I/O rate is high.
For highly intensive storage I/O workloads that span
multiple data drives, it's recommended to attach each VHD
to a separate synthetic SCSI controller for better overall
performance.

Virtual Hard Disk Types

There are three types of VHD files. It's recommended that production servers use fixed-size VHD files for better performance and also to make sure that the virtualization server has sufficient disk space for expanding the VHD file at run time. The following are the three VHD types:
Dynamically expanding VHD: Space for the VHD is allocated on demand. The blocks in the disk start as zeroed blocks but are not backed by any actual space in the file. Reads from such blocks return a block of zeros. When a block is first written to, the virtualization stack must allocate space within the VHD file for the block and then update the metadata. This increases the number of disk I/Os needed for the write and increases CPU usage. Reads and writes to existing blocks incur both disk access and CPU overhead when looking up the block's mapping in the metadata.
Fixed-size VHD: Space for the VHD is allocated in full when the VHD file is created. This type of VHD is less apt to fragment, which avoids the reduced I/O throughput that occurs when a single I/O is split into multiple I/Os. It has the lowest CPU overhead of the three VHD types because reads and writes do not need to look up the mapping of the block.
Differencing VHD: The VHD points to a parent VHD file. Any writes to
blocks never written to before result in space being allocated in the VHD file,
as with a dynamically expanding VHD. Reads are serviced from the VHD file
if the block has been written to. Otherwise, they are serviced from the parent
VHD file. In both cases, the metadata is read to determine the mapping of
the block. Reads and writes to this VHD can consume more CPU and result in
more I/Os than a fixed-size VHD.
Snapshots of a VM create a differencing VHD to store the writes to the disks
since the snapshot was taken.

Passthrough Disks
The VHD in a VM can be mapped directly to a
physical disk or logical unit number (LUN), instead
of a VHD file. The benefit is that this configuration
bypasses the file system (NTFS) in the root
partition, which reduces the CPU usage of storage
I/O. The risk is that physical disks or LUNs can be
more difficult to move between machines than VHD
files.
Large data drives can be prime candidates for
passthrough disks, especially if they are I/O
intensive. VMs that can be migrated between
virtualization servers (such as quick migration)
must also use drives that reside on a LUN of a
shared storage device.

Hyper-V and Virtual Server 2005 R2

Hyper-V introduces capabilities that aren't available with Virtual Server 2005 R2.
Running exclusively on the x64 platform, Hyper-V supports
host systems with up to 1TB of RAM, and Hyper-V doesn't
limit the number of active VMs; the only limitation comes
from the capabilities of the host server hardware.
In addition, the Hyper-V VMs are more scalable than Virtual
Server VMs. Hyper-V supports both 32-bit and 64-bit guest
OSs. Not only can guest VMs take advantage of Hyper-V's
higher-performing VMBus architecture, but guest VMs also
can use more RAM and CPU than Virtual Server offers.
Virtual Server 2005 R2 has no support for virtual SMP and is
limited to 3.6GB of RAM per VM. Hyper-V supports up to 4
virtual processors per VM and up to 32GB of RAM per VM. To
take full advantage of this support, the host system must
have at least 4 cores and more than 32GB of physical RAM.

Hyper-V provides new storage features.
Storage Area Network (SAN) support lets you boot VMs
and implement guest-to-guest failover clustering, as well
as virtual server host failover clustering.
Hyper-V also introduces the passthrough disk access
storage feature.
With Hyper-V, you can access virtual hard disk (VHD)
images without mounting the VHD image in a running
VM.
Hyper-V can also take advantage of Volume Shadow
Copy Service (VSS) for live VM backup.
On the networking side, Hyper-V includes a new virtual
switch with support for Windows Network Load Balancing
(NLB) across VMs on separate servers.
In addition, Hyper-V allows multiple snapshots of running
VMs, with the ability to revert to any of the saved
snapshots.

Installing Hyper-V
Hyper-V is not installed in Server 2008 by default.
To install Hyper-V, you use the Server 2008 Server Manager. Click Start, Programs, Administrative Tools, and then select the Server Manager option.
In Server Manager, add the virtualization role by clicking Add Roles, which displays the Add Roles Wizard.

In the Add Roles Wizard, check the Windows Server virtualization role.
Then click Next and step through the wizard's
screens to learn about and configure Hyper-V.
The wizard first explains that you might need
to configure your BIOS for virtualization
support, and it provides links to Windows
Server Virtualization Online Help files.
Next, the wizard prompts you for the Local
Area Connections that you want to associate
with your virtual networks.
By default, the wizard creates one virtual
network for each physical network adapter
that's installed.
Next, youre asked to confirm your selections
and prompted to restart your system.

AMD-V systems have virtualization support enabled by default.
In contrast, if your system uses Intel VT
virtualization, check your system's BIOS
configuration during the boot process
and make sure that virtualization is
enabled.
For systems with Intel motherboards,
press F2 during the boot process to see
the BIOS configuration.
You can set the Enable VT option to
enable virtualization support in the
processor.

After the system reboots, the Resume Configuration Wizard screen appears.
Use it to finish installing the Windows Server
Virtualization role.
The new Windows Server Virtualization role will then be listed under Server Manager's installed roles node.
After the virtualization role is installed, you're ready to fire up some new VMs.
Unlike Virtual Server 2005 R2, which you
manage through a Web-based console, Hyper-V
is managed through a Microsoft Management
Console (MMC) 3.0-based Windows GUI.
You start Hyper-V's Virtualization Management
Console by clicking Start, Administrative Tools,
and then selecting Windows Virtualization
Management.

Use the Wizard to Create and Migrate VMs

Creating VMs is easy using Hyper-V's New Virtual Machine Wizard.
To start the wizard, click New in the Virtualization
Management Console Action pane.
The first screen prompts you for the VM name and
the location where the VM will be created.
By default, Hyper-V creates new VMs in the
C:\ProgramData\Microsoft\Windows\Virtualization
directory.
To change the default location, you can use
Virtualization Settings in the Virtualization
Management Console.
Next, the wizard prompts you for the amount of memory allocated to the VM. The default value is 256MB, but you can allocate from 8MB to 32GB of RAM per VM (limited by your system's physical RAM).

Next, the wizard asks you about networking the VM. You can choose no network or select a virtual network.
The wizard created virtual networks when you
first added the virtualization role.
To create virtual networks, you can also use
Virtual Network Switch Management in the
Virtualization Management Console.
You can configure the virtual network switch to
allow internal networking so that VMs can
connect with other VMs or to the Windows Server
host.
You also can create a virtual network that
connects to one or more of the host's physical
network adapters for external network
connectivity.

The New Virtual Machine Wizard gives you the option of creating a VHD, connecting to an existing VHD, or attaching a VHD later.
By default, VHDs are created in the C:\Users\Public\Documents\Virtual Hard Disks directory. To change this default directory, you can use Virtualization Settings in the Virtualization Management Console.
Hyper-V uses the same on-disk VHD format as Virtual
Server 2005 R2. This common format makes it easy to
migrate existing Virtual Server 2005 R2 and Virtual PC VMs
to Server 2008 Hyper-V: Select the option to use an existing
VHD and then provide the wizard with the path to the VHD
file. This attaches the existing VHD to the new Hyper-V VM.
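Because Hyper-V keeps the Virtual Server 2005 R2 on-disk VHD format, you can sanity-check a VHD before migrating it by reading the 512-byte footer at the end of the file. The Python sketch below follows the published VHD image format (big-endian fields, a "conectix" cookie, disk type 2 = fixed / 3 = dynamic / 4 = differencing, and a one's-complement byte-sum checksum); the helper names are invented for this example, and the field handling is an assumption to verify against the specification rather than a definitive implementation.

```python
import struct
import uuid

DISK_TYPES = {2: "fixed", 3: "dynamic", 4: "differencing"}

def vhd_checksum(footer: bytes) -> int:
    # One's complement of the byte sum, with the checksum field (offsets 64..67) excluded.
    total = sum(footer[:64]) + sum(footer[68:])
    return (~total) & 0xFFFFFFFF

def parse_vhd_footer(footer: bytes) -> dict:
    """Extract size, disk type, and checksum validity from a 512-byte VHD footer."""
    if len(footer) != 512 or footer[:8] != b"conectix":
        raise ValueError("not a VHD footer")
    current_size, = struct.unpack(">Q", footer[48:56])
    disk_type, = struct.unpack(">I", footer[60:64])
    stored_checksum, = struct.unpack(">I", footer[64:68])
    return {
        "size": current_size,
        "type": DISK_TYPES.get(disk_type, "unknown"),
        "checksum_ok": stored_checksum == vhd_checksum(footer),
    }

def make_fixed_footer(size: int) -> bytes:
    """Build a minimal fixed-disk footer for demonstration (timestamp/geometry left zero)."""
    footer = bytearray(512)
    footer[0:8] = b"conectix"
    struct.pack_into(">I", footer, 12, 0x00010000)          # file format version 1.0
    struct.pack_into(">Q", footer, 16, 0xFFFFFFFFFFFFFFFF)  # fixed disk: no dynamic header
    struct.pack_into(">Q", footer, 40, size)                # original size
    struct.pack_into(">Q", footer, 48, size)                # current size
    struct.pack_into(">I", footer, 60, 2)                   # disk type 2 = fixed
    footer[68:84] = uuid.uuid4().bytes                      # unique ID
    struct.pack_into(">I", footer, 64, vhd_checksum(bytes(footer)))
    return bytes(footer)

info = parse_vhd_footer(make_fixed_footer(10 * 2**20))
print(info)  # {'size': 10485760, 'type': 'fixed', 'checksum_ok': True}
```

On a real file you would read the trailing 512 bytes (for a fixed VHD, the footer is simply the last sector) and feed them to parse_vhd_footer before attaching the disk to a new VM.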
If you choose to use a new VHD, the next screen offers
OS installation options.
You can install the OS later, or install the OS from either the
host's CD/DVD drive or from an ISO image file.
The last screen presented by the wizard prompts you to
confirm your VM configuration settings.
Finishing the wizard creates the new VM automatically. You
have the option to start it right away or you can manually
start it later.

After a VM is created, you have the option to install the new Integration Services on the guest.
(Before you install Integration Services, you need to uninstall the older Virtual Machine Additions.)
Integration Services replaces the older Virtual
Machine Additions.
Integration Services provides improved mouse
support and host time synchronization.
You can install Integration Services on the
guest OS by starting a Virtual Machine
Connection from the Virtualization
Management Console.
From the Virtual Machine Connection Action
menu, choose Insert Integration Services Disk.

Microsoft shipped a beta version of Hyper-V in December 2007.
A prerelease version of Hyper-V was shipped with the RTM of Server 2008.
Microsoft also released RC0 and RC1 of Hyper-V.
Microsoft had stated that the final Hyper-V code would ship within 180 days of the Windows Server 2008 release to manufacturing (RTM).
Microsoft released the RTM of Hyper-V on 26th June 2008 for download on the Microsoft Download Center:
http://www.microsoft.com/downloads/details.aspx?FamilyID=f3ab3d4b-63c8-4424-a738-baded34d24ed&DisplayLang=en
The Hyper-V RTM was released via Windows Update on 8th July 2008.
An update to remotely manage the Hyper-V role from a Windows Vista machine on the network is KB952627; the update is available in both 32-bit and 64-bit versions.
