
Managing the information that drives the enterprise

STORAGE
FOCUS ON

Storage in a VMware environment

While VMware server virtualization is pretty mature, the way the platform
interacts with storage is not. But that’s changing. Find out how the job of
supporting storage for VMware is getting easier.

INSIDE
VMware storage management evolves

vSphere 5 forges stronger storage bonds

SSD satiates vSphere memory appetite

Managing storage for virtual server environments

Virtual servers and storage systems don’t have to exist in separate worlds;
new tools and plug-ins provide single-console management of both virtual
servers and storage. BY ERIC SIEBERT

VIRTUALIZED SERVERS HAVE created plenty of problems for data storage
managers, not the least of which is keeping track of the relationships
between data storage assets and virtual servers. Some storage management
products have adapted to this new environment, allowing users to keep track
of virtual servers, the apps they host and the storage they’re using.

Copyright 2011, TechTarget. No part of this publication may be transmitted or
reproduced in any form, or by any means, without permission in writing from
the publisher. For permissions or reprint information, please contact Mike
Kelly, VP and Group Publisher (mkelly@storagemagazine.com).

FUNCTIONAL SILOS
Inside every data center there are typically silos related to specific functional
areas, each with a dedicated group responsible for management. There are
teams responsible for managing specific data center resources, such as the
network, servers, storage systems and virtualization. Each group focuses on
managing its own area and works with other groups when needed to handle
integration points between groups. If a new server requires shared storage,
the server team works with the storage team to get storage provisioned and
presented to the server.
In a traditional physical server environment, the storage group can easily
manage the relationships between storage and physical servers: a logical unit
number (LUN) created on a storage area network (SAN) is assigned to a physical
server and only that server uses the LUN. Server virtualization changes all that.
But storage is perhaps the most critical component of a virtual infrastructure,
so it must be implemented and managed properly for maximum performance
and reliability. The relationship between server virtualization and storage
is a tight one, so the management must be as well.

VMs CAN COMPLICATE STORAGE

Virtualization is about the sharing of a common set of physical resources
among many virtual machines (VMs). Virtualization file systems like VMware
Inc.’s VMFS allow many physical servers to read and write concurrently to
the same LUNs. This is possible because of a special locking mechanism that
ensures each host has exclusive access to its own VMs on a shared LUN.
Among server virtualization’s strengths are its features that provide high
availability and workload load balancing across a virtualization cluster.
Features such as VMware’s vMotion and Storage vMotion can move VMs while
they’re running from host to host as well as from one storage device to
another.

To further complicate things, the movement of virtual machines on storage
devices doesn’t just occur at the virtualization layer. Many storage arrays
now have an automated storage tiering feature built around tiers of devices
with different performance characteristics, such as solid-state drives (SSDs)
and SATA drives that are pooled and presented to a host. The array
dynamically moves data across tiers based on performance demands. All of
that occurs at the storage layer and the virtual host is unaware of the
movement.
While the features that move VMs around are beneficial, they can cause
headaches for storage and virtualization administrators because the
relationships between virtual machines, the physical hosts they run on and
the storage devices where their virtual disks reside are dynamic. That has
the most impact when troubleshooting problems and monitoring performance.
Because the virtualization admin is unaware of what’s occurring at the
storage layer and the storage admin doesn’t know what’s happening at the
virtualization layer, neither gets to see the big picture.
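One way to make those dynamic relationships visible is simply to enumerate
them. The following is a minimal sketch, for illustration only, using the
open-source pyVmomi Python SDK (my choice here, not something any plug-in
requires); the vCenter address and credentials are placeholders. It prints
each VM along with the host and data stores it currently occupies, and
running it again after a vMotion or Storage vMotion shows different answers.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details; unverified SSL is for lab use only.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="admin",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            # Both of these answers can change at any moment.
            host = vm.runtime.host.name if vm.runtime.host else "unknown"
            stores = [ds.name for ds in vm.datastore]
            print(f"{vm.name}: host={host}, datastores={stores}")
        view.DestroyView()
    finally:
        Disconnect(si)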

PLUG-INS FILL THE MANAGEMENT GAP

Storage vendors recognized the importance of tight integration between
storage and server virtualization and have worked to develop software
integration with existing virtualization management tools like VMware’s
vCenter Server. VMware offers a solid set of vSphere APIs that allows
third-party vendors to integrate their products with vSphere. Also, vCenter
Server has a plug-in architecture that makes it easy for third-party
plug-ins to seamlessly integrate with the vCenter Server admin interface.
Plug-ins appear as a tab inside the vSphere Client, and their behavior and
appearance can be customized. This allows options or information to be
displayed that’s specific to a particular object, such as a VM, host or
cluster.

Not all storage vendors were quick to develop vCenter Server plug-ins, but
most of the major storage vendors today offer plug-ins that allow their
storage arrays to be monitored and managed from within vCenter Server. Each
vendor’s storage plug-in typically only supports specific storage array
models and families, and the plug-in’s functionality and features vary from
vendor to vendor. Generally, storage plug-ins may offer these capabilities:
• Simplified expansion of virtual data stores. LUNs are created and
presented to hosts, which then create data stores such as VMFS volumes
from them. To increase the size of a data store, the underlying LUN on the
storage array needs to be increased first. The plug-in allows both the LUN
and the VMFS volume to be grown from the same console (a scripted sketch
of this flow appears after this list).
• Storage provisioning. Storage admins can assign chunks of storage to
virtual environments; this allows virtualization administrators to create
and size their own LUNs and manage the configuration of the storage.
• Storage management. A plug-in can give virtualization administrators
the ability to manage storage array capabilities like LUN masking and thin
provisioning, and to set multi-pathing policies, set tiering policies,
optimize I/O settings and define access lists.
• Automated VM storage mapping. This type of plug-in lets you monitor
and manage the physical/virtual relationships between the virtual machines,
hosts and storage arrays. This can help the virtualization admin by mapping
between the virtualization identifier and the storage array identifier for
the same disk.
• Detailed storage information views. These bring information from the
virtualization layer and the storage layer into a unified view, and let you
see the exact details of the physical storage layer from within the
virtualization console.
• Physical storage health monitoring. This capability provides information
on the physical health of storage arrays so virtualization admins will know
when physical hardware fails or becomes degraded.
• VM cloning. The cloning of VMs is basically just a data copy job that can
be offloaded to the array, which can do it more efficiently. This is
especially useful in virtual desktop environments that have higher VM
density.
• Backup and recovery at the storage layer. This allows you to create
point-in-time snapshots of VM data stores on the storage array. You can
then mount a snapshot and restore VMs from it as needed.
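To make the first capability concrete, here is a hedged sketch of the expand
flow in pyVmomi, assuming the storage admin (or a plug-in) has already grown
the backing LUN on the array. The helper name and placeholder arguments are
mine; QueryVmfsDatastoreExpandOptions and ExpandVmfsDatastore are the vSphere
API calls that grow a VMFS volume into the new space.

    from pyVmomi import vim

    def expand_vmfs_datastore(host, ds_name):
        """Grow a VMFS data store into free space added to its LUN."""
        dss = host.configManager.datastoreSystem
        ds = next((d for d in host.datastore if d.name == ds_name), None)
        if ds is None:
            print(f"{ds_name} not found on host {host.name}")
            return
        options = dss.QueryVmfsDatastoreExpandOptions(datastore=ds)
        if not options:
            print("Backing LUN has no unclaimed space to expand into.")
            return
        # Accept the first expansion option vCenter proposes.
        dss.ExpandVmfsDatastore(datastore=ds, spec=options[0].spec)
        print(f"{ds_name} is now {ds.summary.capacity} bytes")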
ANATOMY OF A PLUG-IN

The joining of the storage and virtualization layers allows the
virtualization admin to stay within the context of the virtual management
user interface (UI) without having to grant access to a specialized storage
management UI. Most of the storage plug-ins let you define credentials for
the storage arrays that will be managed inside the virtualization management
console. This allows seamless integration between the two consoles, and it’s
also good from a security perspective as you don’t have to grant
virtualization admins direct access to the storage management console.
Hewlett-Packard (HP) Co.’s approach to the integration of storage management
into vCenter Server was to leverage its Insight Control management console
and integrate portions of it within vCenter Server as a plug-in. In addition
to a module to manage HP storage, the company included a module to manage HP
server hardware, so both server and storage hardware can be managed from a
single console. When the plug-in is installed, it creates a special HP
storage privilege within vCenter Server that allows access to be granted to
the HP storage plug-in. The plug-in brings storage management into
virtualization but not vice versa. vCenter Server has very granular
permissions, and roles can be defined so access to storage-specific
information can be granted to storage admins. This gives storage admins a
single console for all the storage arrays integrated with vCenter Server.

Storage vendors’ vCenter Server plug-ins

Storage vendor         vCenter Server plug-in
Dell/Compellent        vSphere Client Plug-in
Dell/EqualLogic        EqualLogic Host Integration Tools for VMware
                       (downloadable via Dell support site)
EMC                    PDF describing multiple plug-ins
                       (downloadable via PowerLink)
Hewlett-Packard (HP)   Insight Control Storage Module for vCenter
HP/3PAR                HP 3PAR Utility Storage and VMware vSphere
IBM                    IBM XIV Management Console for VMware vCenter
NetApp                 Virtual Storage Console
XIO                    ISE Manager: vSphere Edition

The HP Insight Control Storage Module for vCenter Server currently supports
the firm’s P4000, EVA, P9000/XP and P2000/MSA storage arrays. The plug-in
creates an HP Insight Software tab in vCenter Server that appears whenever
a VM, host or cluster is selected; it also offers a menu option for actions
such as cloning/creating virtual machines or creating data stores. The tab
provides a storage overview of the selected object, such as the storage
provisioned to a host and the arrays it’s connected to; it also provides
links to directly launch the storage management console for an array. There
are various views you can select to see different information, such as
Storage Disks and HBAs & Paths; you can also customize the columns and
choose from the many detailed storage fields that are available. In
addition, there are sections so you can see specific storage objects related
to the object you’ve selected, such as VMs, hosts and data stores.


The vStorage APIs for Array Integration (VAAI)

VMware Inc.’s vStorage APIs were introduced in vSphere to allow tighter
integration of advanced storage capabilities between vSphere and third-party
storage applications and devices. There are multiple categories of vStorage
APIs that deal with different aspects of storage integration. The vStorage
APIs for Array Integration (VAAI) were co-developed with specific storage
vendors (e.g., Dell, EMC, NetApp) to enable storage array-based capabilities
directly from within vSphere. Here are three examples of current VAAI
features:
• Copy Offload. Virtual machine (VM) cloning or template-based deployment
can be hardware-accelerated by array offloads rather than file-level copy
operations at the ESX/ESXi host. This can also be applied to other copy
operations that occur, such as a Storage vMotion.
• Write-Same Offload. When provisioning an eager-zeroed virtual disk
(VMDK), the formatting process sends gigabytes of zeros from the ESX/ESXi
host to the array. With write-same offload, the array handles formatting of
the eager-zeroed thick VMDK.
• Hardware-Assisted Locking. The traditional file-locking mechanism using
SCSI reservations is replaced by a more efficient mechanism that is atomic
(handled in a single operation). This allows for an increase in the number
of ESX/ESXi hosts deployed in a cluster with VMFS data stores.

As the vStorage APIs mature, expect to see even more offloading and features
at the storage array level, such as UNMAP support (return zeroed VMFS
blocks), thin provision stun (stuns I/O if a thin volume runs out of space),
NFS file-level copy (analogous to XCOPY on block) and NFS advanced file
attributes (so vSphere can better understand more about files on NFS
storage).
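A host will also tell you, per device, whether it believes the array supports
these offloads. Here is a short pyVmomi sketch; the host variable is assumed
to be a HostSystem object you have already looked up.

    # "host" is assumed to be a pyVmomi vim.HostSystem object.
    storage = host.configManager.storageSystem
    for lun in storage.storageDeviceInfo.scsiLun:
        # vStorageSupport reports vStorageSupported, vStorageUnsupported
        # or vStorageUnknown for each SCSI device.
        print(lun.canonicalName, getattr(lun, "vStorageSupport", "n/a"))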

SINGLE-CONSOLE CONTROL

In addition to information about storage arrays, there are storage tools
that can perform actions such as cloning a virtual machine by utilizing
array-based replication, creating batches of new virtual machines, or
provisioning storage and creating VMFS volumes. While these tasks can be
accomplished within vCenter Server, the HP plug-in provides automation and
offloads the tasks to the storage array, which can handle them more
efficiently.


The marriage of storage and virtualization in a single console allows for
tighter management integration, which benefits virtualization admins but is
less beneficial to storage admins. Virtualization admins can get more
involved with some of the storage-related functions, but those are
traditionally handled by storage admins, who may be reluctant to give up
their control of provisioning and managing storage resources. Demonstrating
the integration and features, and granting storage admins access to the
virtualization console, may help convince them to empower the virtualization
admin to perform some basic storage management. Even if virtualization
admins aren’t allowed to manage storage resources, being able to view
detailed storage array information is advantageous by itself.

While most of the management apps that integrate this way target VMware,
there are plug-ins available for other hypervisors, like EMC Corp.’s Virtual
Storage Integrator for Hyper-V, which integrates with Microsoft Corp.’s
System Center Virtual Machine Manager (SCVMM). Vendors have focused on
VMware because of its popularity and because VMware has a deeper and more
mature set of APIs and SDKs. The storage integration plug-ins that are
available for virtualization are relatively new, and vendor offerings are
still evolving with more features and better integration. No matter which
hypervisor you have, storage plug-ins are a must-have for any virtualization
environment, as they provide better visibility and integration, and enhance
your ability to monitor, manage and troubleshoot your critical storage
resources.

Eric Siebert is a virtualization expert who has written many articles for TechTarget
websites.



vSphere 5 gives a big boost to storage

vSphere 5 contains many storage enhancements, including a storage-centric
DRS tool, Storage Profiles, a new API for storage awareness and an
improvement to an existing API, an increase to the LUN size limit, and
improvements to Storage vMotion. BY ERIC SIEBERT

WITH vSPHERE 4’S vSTORAGE APIs, released in 2009, VMware Inc. made strides
toward addressing the way its platform interacted with storage resources.
But the company for the most part paid scant attention to storage management
from within vCenter Server. vSphere 5, which VMware plans to release in the
third quarter, is set to change that scenario. The latest version contains
many improvements—both big and small—that make it an exciting release for
storage. In this article we will survey all the new vSphere storage
enhancements.


One important note: Because some of these new vSphere storage features rely
on specific functionality built into storage arrays, you will need to make
sure your storage array supports them before you can use them. Storage
vendors typically do not provide immediate support for new vSphere features
and APIs across all their storage models, so be sure to check with your
vendor to find out when yours will be supported.

STORAGE DRS

In what is perhaps the most notable storage improvement in vSphere 5, VMware
has expanded its Distributed Resource Scheduler (DRS) to include storage. In
vSphere 4, when DRS tries to balance VM workloads across hosts, it takes
only CPU and memory usage into account and ignores storage resource usage.
Storage I/O Control allows you to prioritize and limit I/O on data stores,
but it doesn’t allow you to redistribute it. Storage DRS in vSphere 5 fixes
that limitation by selecting the best placement for your VM based on
available disk space and current I/O load. Besides initial placement of VMs,
it will also provide load balancing between data stores using Storage
vMotion based on storage space utilization, I/O metrics and latency.
Anti-affinity rules can also be created so certain virtual disks do not
share the same data stores and are separated from one another. Data store
clusters (also known as storage pods) are used to aggregate multiple storage
resources so Storage DRS can manage them at a cluster level, comparable to
how DRS manages compute resources in a cluster.
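Data store clusters show up in the vSphere API as StoragePod objects, so
their makeup and headroom can be inventoried like any other managed object.
A hedged pyVmomi sketch, assuming an existing ServiceInstance connection
named si:

    from pyVmomi import vim

    # "si" is assumed to be an existing ServiceInstance connection.
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.StoragePod], True)
    for pod in view.view:
        s = pod.summary
        print(f"{s.name}: {s.freeSpace / 2**40:.1f} TB free of "
              f"{s.capacity / 2**40:.1f} TB across "
              f"{len(pod.childEntity)} data stores")
    view.DestroyView()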

STORAGE PROFILES
vSphere 5’s Storage Profiles enable virtual machine storage provisioning to be
independent of specific storage resources available in an environment. You
can define virtual machine placement rules in terms of storage characteristics
and then monitor a VM’s storage placement based on these rules. The new
vSphere Storage Profiles will ensure that a particular VM remains on a class
of storage that meets its performance requirements. If a VM gets provisioned
on a class of storage that doesn’t meet the requirements, it becomes
non-compliant and its performance can suffer.


vSTORAGE APIs FOR STORAGE AWARENESS

Through a new set of APIs, the vStorage APIs for Storage Awareness (VASA),
vSphere 5 is aware of a data store’s class of storage. The APIs enable
vSphere to read the performance characteristics of a storage device so it
can determine if a VM is compliant with a Storage Profile. The vStorage APIs
for Storage Awareness also make it much easier to select the appropriate
disk for virtual machine placement. This is useful when certain storage
array capabilities or characteristics would be beneficial for a certain VM.
The vStorage APIs for Storage Awareness allow storage arrays to integrate
with vCenter Server for management functionality via server-side plug-ins,
and they give a vSphere administrator more in-depth knowledge of the
topology, capabilities and state of the physical storage devices available
to the cluster. In addition to the functionality they lend to Storage
Profiles, these APIs are a key enabler for vSphere Storage DRS, providing
array information that allows Storage DRS to work optimally with storage
arrays.
VMware storage
management
VMFS5 SIZE LIMITS

The new version of VMware’s proprietary Virtual Machine File System, Version
5 (in keeping with vSphere 5), includes performance improvements, but the
biggest change is in scalability: The 2 TB LUN limit in vSphere 4 has
finally been increased, to 16 TB. In vSphere 4, there is an option to choose
from among 1 MB, 2 MB, 4 MB and 8 MB block sizes when creating a VMFS data
store. Each block size dictates the size limit for a single virtual disk:
1 MB equals 256 GB, 2 MB equals 512 GB, 4 MB equals 1 TB, and 8 MB equals
2 TB. The default block size in vSphere 4 is 1 MB, and once set it cannot be
changed without deleting the VMFS data store and re-creating it. This caused
problems, as many people used the default block size and later learned they
couldn’t create virtual disks greater than 256 GB. In vSphere 5, the block
size choice goes away: There is only a single block size (1 MB) that can be
used on VMFS volumes, supporting virtual disks up to a limit of 2 TB. So
while the LUN size limit has been increased in vSphere 5, the 2 TB limit on
a single virtual disk has not.
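The block-size rule is a simple linear one, 256 GB of maximum virtual disk
per 1 MB of block size, as this throwaway Python snippet spells out:

    # vSphere 4: max VMDK size scales at 256 GB per 1 MB of block size.
    for block_mb in (1, 2, 4, 8):
        print(f"{block_mb} MB blocks -> {256 * block_mb} GB max virtual disk")
    # VMFS5 removes the choice: one 1 MB block size, 2 TB max virtual disk.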
For those upgrading to vSphere 5, porting existing data stores from VMFS3
to VMFS5 is seamless and non-destructive. But when you upgrade the volume
to VMFS5, it preserves the existing block size. If you want to get it to 1 MB,
you have to delete the volume and re-create it. If you’re already at 8 MB, there
probably is not much advantage to doing this. When it comes to LUN size,
however, you can grow your LUNs larger than 2 TB after upgrading to VMFS5
without any problems.


iSCSI UI SUPPORT
In vSphere 5, VMware improved the user interface that is used in the vSphere
Client to configure both hardware and software iSCSI adapters. In previous
versions, to completely configure iSCSI support you had to visit multiple
areas in the client, which made for a complicated and confusing process.
In vSphere 5 you can configure dependent hardware iSCSI and software
iSCSI adapters along with the network configurations and port binding in
a single dialog box. And vSphere 5 has full SDK access to allow iSCSI
configuration via scripting.
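That SDK access means the whole software iSCSI setup can be driven from a
script. A hedged pyVmomi sketch, assuming a HostSystem object named host and
a placeholder target address; UpdateSoftwareInternetScsiEnabled,
AddInternetScsiSendTargets and RescanAllHba are the relevant vSphere API
methods.

    from pyVmomi import vim

    # "host" is assumed to be a pyVmomi vim.HostSystem object.
    storage = host.configManager.storageSystem
    storage.UpdateSoftwareInternetScsiEnabled(True)  # enable software iSCSI

    # Point the software iSCSI adapter at a (placeholder) send target.
    for hba in storage.storageDeviceInfo.hostBusAdapter:
        if isinstance(hba, vim.host.InternetScsiHba) and hba.isSoftwareBased:
            target = vim.host.InternetScsiHba.SendTarget(
                address="192.0.2.10", port=3260)
            storage.AddInternetScsiSendTargets(
                iScsiHbaDevice=hba.device, targets=[target])
    storage.RescanAllHba()  # discover paths to the new target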

STORAGE I/O CONTROL NFS SUPPORT


Many new storage features, upon release in vSphere, initially support only
block-based storage devices. Storage I/O Control (SIOC) is an example of this
convention. SIOC enables you to prioritize and limit I/O on data stores, but
pre-Version 5, it did not support NFS data stores. In vSphere 5, SIOC has
been extended to provide cluster-wide I/O shares and limits for NFS data
stores.

vSTORAGE APIs FOR ARRAY INTEGRATION: THIN PROVISIONING

The vStorage APIs for Array Integration (VAAI), introduced in vSphere 4,
include the ability to offload several storage-intensive functions from
vSphere to a storage array. In vSphere 5, VAAI has been enhanced to allow
storage arrays that use thin provisioning to reclaim blocks when a virtual
disk is deleted. Normally, when a virtual disk is deleted, the blocks still
contain data, and the storage array is not aware that they are deleted
blocks. This new capability allows vSphere to inform the storage array about
the deleted blocks so it can reclaim the space and maximize space
efficiency.
resources

SWAP TO SSD
Using solid-state drives (SSDs) as a storage tier is increasing in popularity.
vSphere 5 provides new forms of SSD handling and optimization. For instance,
the VMkernel will automatically recognize and tag SSD devices that are local
to VMware ESXi or are available on shared storage devices. In addition, the
VMkernel scheduler has been modified to allow VM swap files to extend to
local or network SSD devices, which minimizes the performance impact of
using memory overcommitment. ESXi can auto-detect SSD drives on certain
supported storage arrays; you can use the Storage Array Type Plug-ins (SATP)
rules, which are part of the Pluggable Storage Architecture (PSA) framework,
to tag devices that cannot be auto-detected.
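Whether the VMkernel has tagged a given device as SSD is visible through the
API, which is a quick way to confirm auto-detection before relying on
swap-to-SSD. Another small pyVmomi sketch, again assuming a HostSystem
object named host; the ssd flag on ScsiDisk is a vSphere 5 addition.

    from pyVmomi import vim

    # "host" is assumed to be a pyVmomi vim.HostSystem object.
    for lun in host.configManager.storageSystem.storageDeviceInfo.scsiLun:
        if isinstance(lun, vim.host.ScsiDisk):
            tag = "SSD" if getattr(lun, "ssd", False) else "non-SSD"
            print(f"{lun.canonicalName}: {tag}")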


STORAGE vMOTION ENHANCEMENTS: SNAPSHOTS AND MIRROR MODE

In vSphere 4, if a VM had active snapshots it could not be moved to another
data store using Storage vMotion. In vSphere 5, that limitation has been
removed. This is important because while Storage vMotion operations were not
common in vSphere 4, they will be in vSphere 5, since the new Storage DRS
feature will move VMs between data stores on a regular basis as storage I/O
loads are redistributed.

Another Storage vMotion improvement relates to how changes are tracked. In
vSphere 4, VMware enhanced Storage vMotion by employing the Changed Block
Tracking (CBT) feature to track block changes while a Storage vMotion occurs
instead of relying on VM snapshots. Once the copy process completed, the
changed blocks were then copied to the destination disk. In vSphere 5 the
company improved Storage vMotion even further by abandoning CBT in favor of
a new mirror mode. Instead of keeping track of blocks that change while a
Storage vMotion is in process and then copying them once it completes,
Storage vMotion now performs a mirrored write, so any writes that occur
during the Storage vMotion process are written to both the source and
destination at the same time. To ensure that the source and destination
disks stay in sync, both must acknowledge each write that occurs. In
vSphere 4, VMs with a lot of storage I/O had the potential of outpacing CBT,
in which case the Storage vMotion process would eventually fail. The new
process is more efficient, much faster and avoids that potential for
failure.
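Whichever copy engine the platform uses underneath, a Storage vMotion is
just a relocate call at the API level. A final hedged pyVmomi sketch,
assuming the VM and destination data store objects have already been looked
up:

    from pyVim.task import WaitForTask
    from pyVmomi import vim

    def storage_vmotion(vm, dest_datastore):
        """Move a running VM's disks to another data store."""
        spec = vim.vm.RelocateSpec(datastore=dest_datastore)
        WaitForTask(vm.RelocateVM_Task(spec=spec))  # VM stays online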



SSD satiates vSphere memory appetite

Using SSD technology in a vSphere environment helps with memory
overcommitment. Find out what the limitations of server hardware are, how
memory overcommitment works and why SSD is better at supporting it than
mechanical disk. BY ERIC SIEBERT


If a resource shortage occurs in any one area on a host—such as RAM—the
number of VMs that the host can run will be restricted despite plentiful
resources in other areas.
Using an SSD as the storage device for virtual machine swap files allows you
to make use of memory overcommitment without taking a big hit in
performance.


RESOURCES FROM OUR SPONSOR

• Moving to Virtualization: A Guide to What’s Possible from Dell and VMware

• Leveraging Virtualization for Disaster Recovery in Your Growing Business

About Dell and VMware:


Dell and VMware deliver efficient cloud infrastructure solutions that are
fast to deploy and easy to manage. The Dell | VMware® alliance is built upon
a strong history of working together to test, certify and improve solutions
involving VMware® technology on Dell servers and storage.
