STORAGE
FOCUS ON: Storage in a VMware environment
While VMware server virtualization is pretty mature, the way the platform interacts with storage is not. But that's changing. Find out how your job of supporting storage for VMware is getting easier.
INSIDE
• VMware storage management evolves
• vSphere 5 storage boost
• Memory problems fix

Managing storage for virtual server environments
Virtual servers and storage systems don't have to exist in separate worlds; new tools and plug-ins provide single-console management of both virtual servers and storage. BY ERIC SIEBERT
Copyright 2011, TechTarget. No part of this publication may be transmitted or reproduced in any form, or by any means, without permission in writing from the publisher. For permissions or reprint information, please contact Mike Kelly, VP and Group Publisher (mkelly@storagemagazine.com).
FUNCTIONAL SILOS
Inside every data center there are typically silos related to specific functional
areas, each with a dedicated group responsible for management. There are
teams responsible for managing specific data center resources, such as the
network, servers, storage systems and virtualization. Each group focuses on
managing its own area and works with other groups when needed to handle
integration points between groups. If a new server requires shared storage,
the server team works with the storage team to get storage provisioned and
presented to the server.
In a traditional physical server environment, the storage group can easily
manage the relationships between storage and physical servers: a logical unit
number (LUN) created on a storage area network (SAN) is assigned to a physical
server and only that server uses the LUN. Server virtualization changes all that.
But storage is perhaps the most critical component of a virtual infrastructure,
so it must be implemented and managed properly for maximum performance
and reliability. The relationship between server virtualization and storage is a
tight one, so the management must be as well.
…moves data across tiers based on performance demands. All of that occurs at
the storage layer and the virtual host is unaware of the movement.
While the features that move VMs around are beneficial, they can cause headaches for storage and virtualization administrators because the relationships between virtual machines, the physical hosts they run on and the storage devices where their virtual disks reside are dynamic. That has the most impact when troubleshooting problems and monitoring performance. Because the virtualization admin is unaware of what's occurring at the storage layer and the storage admin doesn't know what's happening at the virtualization layer, neither gets to see the big picture.
…size their own LUNs and manage the configuration of the storage.
• Storage management. A plug-in can give virtualization administrators
the ability to manage storage array capabilities like LUN masking and thin
provisioning, and to set multi-pathing policies, set tiering policies, optimize
I/O settings and define access lists.
• Automated VM storage mapping. This type of plug-in lets you monitor
and manage the physical/virtual relationships between the virtual machines,
hosts and storage arrays. This can help the virtualization admin by mapping
between the virtualization identifier and the storage array identifier for the
same disk.
• View detailed storage information. This brings information from the
virtualization layer and the storage layer into a unified view, and lets you see
the exact details of the physical storage layer from within the virtualization
console.
• Physical storage health monitoring. This capability provides information on the physical health of storage arrays so virtualization admins will know when physical hardware fails or becomes degraded.
• VM cloning. Cloning a VM is basically just a data copy job that can be offloaded to the array, which can do it more efficiently. This is especially useful in virtual desktop environments with high VM density.
• Backup and recovery at the storage layer. This allows you to create point-in-time snapshots of VM data stores on the storage array. You can then mount a snapshot and restore VMs from it as needed.
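The storage-mapping capability described above can be made concrete with a rough sketch: it simply joins the disk identifier each layer reports. This is illustrative Python, not any vendor's plug-in API, and all names and identifiers are made up.

```python
# Illustrative sketch (not a vendor API): join the disk identifiers the
# virtualization layer reports with the volume names the array reports,
# so one view shows VM -> host -> array volume for the same disk.

def map_vm_to_array(vm_disks, array_volumes):
    """vm_disks: list of dicts with 'vm', 'host', 'device_id' (e.g. an naa. ID).
    array_volumes: dict mapping the same device_id to the array's volume name.
    Returns one row per virtual disk with both identifiers resolved."""
    rows = []
    for disk in vm_disks:
        rows.append({
            "vm": disk["vm"],
            "host": disk["host"],
            "device_id": disk["device_id"],
            "array_volume": array_volumes.get(disk["device_id"], "<unknown>"),
        })
    return rows

# Hypothetical inventory data from the two consoles:
vm_disks = [
    {"vm": "web01", "host": "esx01", "device_id": "naa.6001"},
    {"vm": "db01",  "host": "esx02", "device_id": "naa.6002"},
]
array_volumes = {"naa.6001": "LUN_WEB", "naa.6002": "LUN_DB"}

for row in map_vm_to_array(vm_disks, array_volumes):
    print(row["vm"], "->", row["array_volume"])
```

With a mapping like this, either admin can answer "which array volume backs this VM's disk?" without switching consoles.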
ANATOMY OF A PLUG-IN
The joining of the storage and virtualization layers allows the virtualization admin to stay within the context of the virtual management user interface (UI) without having to grant access to a specialized storage management UI.
Most of the storage plug-ins let you define credentials for the storage arrays
that will be managed inside the virtualization management console. This allows
seamless integration between the two consoles, and it’s also good from a
security perspective as you don’t have to grant virtualization admins direct
access to the storage management console.
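The credential model works roughly like the following sketch, where the plug-in holds the array logins itself so virtualization admins never receive direct storage-console access. All class and account names are hypothetical.

```python
# Minimal sketch of the credential model described above: the plug-in keeps
# array credentials internally, so virtualization admins never see them.
# Names here are invented for illustration, not any real plug-in's API.

class ArrayCredentialStore:
    def __init__(self):
        self._creds = {}          # array address -> (user, password)

    def register(self, array, user, password):
        """Called once, by whoever installs the plug-in."""
        self._creds[array] = (user, password)

    def session_for(self, array):
        # The plug-in opens the array session on the admin's behalf; the
        # admin only ever interacts with the virtualization console.
        if array not in self._creds:
            raise KeyError(f"no credentials registered for {array}")
        user, _ = self._creds[array]
        return f"session({array}, as {user})"

store = ArrayCredentialStore()
store.register("array01.example.com", "svc_vcplugin", "s3cret")
print(store.session_for("array01.example.com"))
```

The design point is separation of duties: the storage team hands over one service account per array, not per-admin logins.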
Hewlett-Packard (HP) Co.’s approach to the integration of storage manage-
ment into vCenter Server was to leverage its Insight Control management
console and integrate portions of it within vCenter Server as a plug-in. In
addition to a module to manage HP storage, the company included a module
to manage HP server hardware. So both server and storage hardware can be
managed from a single console. When the plug-in is installed, it creates a special HP storage privilege within vCenter Server that allows access to be…
SINGLE-CONSOLE CONTROL
In addition to information about storage arrays, there are storage tools that
can perform actions such as cloning a virtual machine by utilizing array-
based replication, creating batches of new virtual machines, or provisioning
storage and creating VMFS volumes. While these tasks can be accomplished
within vCenter Server, the HP plug-in provides automation and offloads the
tasks to the storage array, which can handle them more efficiently.
Eric Siebert is a virtualization expert who has written many articles for TechTarget
websites.
vSphere 5 storage boost
…to an existing API, an increase to the LUN size limit, and improvements to Storage vMotion. BY ERIC SIEBERT

With vSphere 4's vStorage APIs, released in 2009, VMware Inc. made
strides toward addressing the way its platform interacted with
storage resources. But the company for the most part paid scant
attention to storage management from within vCenter Server.
vSphere 5, which VMware plans to release in the third quarter,
is set to change that scenario. The latest version contains many
improvements—both big and small—that make it an exciting release
for storage. In this article we will survey all the new vSphere storage
enhancements.
One important note: Because some of these new vSphere storage features
rely on specific functionality built into storage arrays, you will need to make
sure that your storage array supports them before you can use them. Typically,
most storage vendors do not provide immediate support for new vSphere
features and APIs across all their storage models, so be sure to check with your vendor to find out when they'll be supported.
STORAGE DRS
In what is perhaps the most notable storage improvement in vSphere 5, VMware has expanded its Distributed Resource Scheduler (DRS) to include storage. In vSphere 4, when DRS tries to balance VM workloads across hosts, it takes only CPU and memory usage into account and ignores storage resource usage. Storage I/O Control allows you to prioritize and limit I/O on data stores, but it doesn't allow you to redistribute it. Storage DRS in vSphere 5 fixes that limitation by selecting the best placement for your VM based on available disk space and current I/O load. Besides initial placement of VMs, it will also provide load balancing between data stores using Storage vMotion based on storage space utilization, I/O metrics and latency. Anti-affinity rules can also be created so certain virtual disks don't share the same data stores and are kept separated from one another. Data store clusters (also known as storage pods) are used to aggregate multiple storage resources so Storage DRS can manage them at a cluster level, comparable to how DRS manages compute resources in a cluster.
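To make the placement decision concrete, here's a rough Python sketch of the kind of choice Storage DRS makes: among the data stores in a cluster, prefer plenty of free space and a light I/O load, and skip any placement that would violate an anti-affinity rule. The scoring formula and all numbers are invented for illustration; VMware's actual algorithm is different.

```python
# Illustrative sketch of Storage DRS-style initial placement (not VMware's
# real algorithm): filter out data stores that are too small or that would
# violate anti-affinity, then score the rest on free space and I/O load.

def place_vm(vm_disk_gb, datastores, anti_affinity_with=()):
    """datastores: list of dicts with 'name', 'free_gb', 'io_load' (0..1),
    and 'vms' currently living on that data store."""
    candidates = [
        ds for ds in datastores
        if ds["free_gb"] >= vm_disk_gb
        and not set(ds["vms"]) & set(anti_affinity_with)
    ]
    if not candidates:
        return None
    # Invented score: lots of free space, scaled down by current I/O load.
    return max(candidates,
               key=lambda ds: ds["free_gb"] * (1 - ds["io_load"]))["name"]

cluster = [
    {"name": "ds1", "free_gb": 500, "io_load": 0.8, "vms": ["db01"]},
    {"name": "ds2", "free_gb": 400, "io_load": 0.2, "vms": []},
    {"name": "ds3", "free_gb": 900, "io_load": 0.9, "vms": []},
]
# ds1 is excluded by anti-affinity; ds2 scores 320 vs. ds3's 90.
print(place_vm(100, cluster, anti_affinity_with=["db01"]))  # -> ds2
```

The same scoring idea extends to ongoing balancing: periodically re-evaluate and move a disk with Storage vMotion when a much better-scoring data store exists.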
STORAGE PROFILES
vSphere 5’s Storage Profiles enable virtual machine storage provisioning to be
independent of specific storage resources available in an environment. You
can define virtual machine placement rules in terms of storage characteristics
and then monitor a VM’s storage placement based on these rules. The new
vSphere Storage Profiles will ensure that a particular VM remains on a class
of storage that meets its performance requirements. If a VM gets provisioned
on a class of storage that doesn’t meet the requirements, it becomes non-
compliant and its performance can suffer.
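The compliance idea behind Storage Profiles can be sketched very simply, assuming a profile is modeled as a set of required storage characteristics; the capability names below are invented for illustration.

```python
# Sketch of Storage Profile compliance (illustrative model, not VMware's):
# a profile names required storage characteristics; a VM is non-compliant
# when the data store it landed on lacks any required capability.

def is_compliant(profile, datastore_capabilities):
    """True if every capability the profile requires is present."""
    return set(profile["requires"]) <= set(datastore_capabilities)

# Hypothetical "gold" tier: must be SSD-backed and array-replicated.
gold = {"name": "gold", "requires": {"ssd", "replicated"}}

print(is_compliant(gold, {"ssd", "replicated", "dedup"}))  # -> True
print(is_compliant(gold, {"sata", "replicated"}))          # -> False
```

Monitoring then reduces to re-running this check whenever a VM's disk moves, and flagging any VM whose current data store fails it.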
iSCSI UI SUPPORT
In vSphere 5, VMware improved the user interface that is used in the vSphere
Client to configure both hardware and software iSCSI adapters. In previous
versions, to completely configure iSCSI support you had to visit multiple
areas in the client, which made for a complicated and confusing process.
In vSphere 5 you can configure dependent hardware iSCSI and software
iSCSI adapters along with the network configurations and port binding in
a single dialog box. And vSphere 5 has full SDK access to allow iSCSI
configuration via scripting.
SWAP TO SSD
Using solid-state drives (SSDs) as a storage tier is increasing in popularity.
vSphere 5 provides new forms of SSD handling and optimization. For instance,
the VMkernel will automatically recognize and tag SSD devices that are local
to VMware ESXi or are available on shared storage devices. In addition, the
VMkernel scheduler has been modified to allow VM swap files to extend to
local or network SSD devices, which minimizes the performance impact of
using memory overcommitment. ESXi can auto-detect SSD drives on certain
supported storage arrays; you can use the Storage Array Type Plug-ins (SATP)
rules, which are part of the Pluggable Storage Architecture (PSA) framework,
to tag devices that cannot be auto-detected.
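The swap-placement preference described above can be sketched as follows; the device tagging and names are illustrative only, not the VMkernel's actual logic.

```python
# Illustrative sketch of SSD-preferring swap placement: if any device tagged
# as SSD has room, put the VM swap file there; otherwise fall back to
# spinning disk. Tagging and device names are invented for illustration.

def choose_swap_device(devices):
    """devices: list of dicts with 'name', 'is_ssd', 'free_gb'."""
    ssds = [d for d in devices if d["is_ssd"] and d["free_gb"] > 0]
    pool = ssds if ssds else [d for d in devices if d["free_gb"] > 0]
    if not pool:
        return None
    # Within the preferred pool, pick the device with the most headroom.
    return max(pool, key=lambda d: d["free_gb"])["name"]

devices = [
    {"name": "local-hdd", "is_ssd": False, "free_gb": 800},
    {"name": "local-ssd", "is_ssd": True,  "free_gb": 40},
]
print(choose_swap_device(devices))  # -> local-ssd
```

The payoff is exactly the one the article describes: if the host does overcommit memory and swap, page-ins come from flash instead of a mechanical disk.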
Memory problems fix
…overcommitment works and why SSD is better at supporting it than mechanical disk. BY ERIC SIEBERT
If a resource shortage occurs in any one area on a host, such as RAM, the number of VMs that the host can run will be restricted despite plentiful resources in other areas.
VMFS5
…to VMFS5, it preserves the existing block size. If you want to get to a 1 MB block size, you have to delete the volume and re-create it. If you're already at 8 MB, there's probably not much advantage to doing that. When it comes to LUN size, however, you can grow your LUNs larger than 2 TB after upgrading to VMFS5 without any problems.
STORAGE I/O CONTROL NFS SUPPORT
Many new storage features, upon release in vSphere, initially support only block-based storage devices. Storage I/O Control (SIOC) is an example of this convention. SIOC enables you to prioritize and limit I/O on data stores but, pre-Version 5, it didn't support NFS data stores. In vSphere 5, SIOC has been extended to provide cluster-wide I/O shares and limits for NFS data stores.
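Proportional shares, the mechanism SIOC applies when a data store is congested, can be sketched like this. The numbers are illustrative, and the sketch deliberately skips one refinement: capacity left over after a limit caps a VM isn't redistributed to the others.

```python
# Illustrative sketch of proportional I/O shares with per-VM limits (the
# mechanism SIOC applies on a congested data store). Simplification: IOPS
# freed up when a limit caps a VM are NOT redistributed to the others.

def allocate_iops(total_iops, vms):
    """vms: list of dicts with 'name', 'shares', and an optional 'limit'
    (an absolute IOPS cap). Returns each VM's allocation under congestion."""
    total_shares = sum(vm["shares"] for vm in vms)
    alloc = {}
    for vm in vms:
        fair = total_iops * vm["shares"] / total_shares
        alloc[vm["name"]] = min(fair, vm.get("limit", float("inf")))
    return alloc

vms = [
    {"name": "web01", "shares": 1000},
    {"name": "db01",  "shares": 2000},              # twice web01's priority
    {"name": "batch", "shares": 1000, "limit": 500},  # hard-capped
]
print(allocate_iops(8000, vms))
```

With 4,000 total shares, db01 gets half the 8,000 IOPS, web01 a quarter, and batch is cut off at its 500 IOPS limit, which is the shares-plus-limits behavior SIOC extends to NFS data stores in vSphere 5.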
vSTORAGE APIS FOR ARRAY INTEGRATION: THIN PROVISIONING
The vStorage APIs for Array Integration (VAAI), introduced in vSphere 4, include the ability to offload several storage-intensive functions from vSphere to a storage array. In vSphere 5, VAAI has been enhanced to allow storage arrays
that use thin provisioning to reclaim blocks when a virtual disk is deleted.
Normally, when a virtual disk is deleted, the blocks still contain data, and the
storage array is not aware that they are deleted blocks. This new capability
allows vSphere to inform the storage array about the deleted blocks so it can
reclaim the space and maximize space efficiency.
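The reclaim flow can be sketched with a toy thin-provisioned array. In reality the notification happens at the SCSI level, but the bookkeeping idea is the same: blocks stay allocated on the array until the host says they're dead.

```python
# Toy model of thin-provisioning dead-space reclamation (the idea behind the
# vSphere 5 VAAI enhancement). The real mechanism is a SCSI-level command;
# this models only the array-side bookkeeping.

class ThinArray:
    def __init__(self, total_blocks):
        self.total_blocks = total_blocks
        self.allocated = set()

    def write(self, blocks):
        """Writing data thin-allocates blocks on demand."""
        self.allocated |= set(blocks)

    def reclaim(self, dead_blocks):
        # Without the host's notification, these blocks would stay allocated
        # forever, even though the data store no longer uses them.
        self.allocated -= set(dead_blocks)

    def free_blocks(self):
        return self.total_blocks - len(self.allocated)

array = ThinArray(total_blocks=1000)
array.write(range(0, 300))     # a VM's virtual disk lands on the array
array.reclaim(range(0, 300))   # disk deleted; host reports the dead blocks
print(array.free_blocks())     # -> 1000
```

Without the `reclaim` call, the array would still report 300 blocks in use after the virtual disk was deleted, which is exactly the space-efficiency gap the new API closes.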