
White Paper

LONG-DISTANCE APPLICATION MOBILITY


ENABLED BY EMC VPLEX GEO
An Architectural Overview

EMC GLOBAL SOLUTIONS

Abstract

This white paper describes the design, deployment, and validation of a


virtualized application environment incorporating Microsoft Windows 2008 R2
with Hyper-V, SAP ERP 6.0 EHP4, Microsoft SharePoint 2010, and Oracle
Database 11gR2 on virtualized EMC® VNX™ and EMC VMAX™ storage
presented by EMC VPLEX™ Geo.

June 2011
Copyright © 2011 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its


publication date. The information is subject to change without notice.

The information in this publication is provided “as is.” EMC Corporation makes
no representations or warranties of any kind with respect to the information in
this publication, and specifically disclaims implied warranties of
merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this


publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation
Trademarks on EMC.com.

All trademarks used herein are the property of their respective owners.

Part Number H8214.1

Contents
Executive summary ............................................................................................................... 6
Business case .................................................................................................................................. 6
Solution overview ............................................................................................................................ 6
Key results ....................................................................................................................................... 7

Introduction .......................................................................................................................... 8
Purpose ........................................................................................................................................... 8
Scope .............................................................................................................................................. 8
Audience ......................................................................................................................................... 8
Terminology ..................................................................................................................................... 9

Solution Overview ............................................................................................................... 10


Overview ........................................................................................................................................ 10
Physical architecture ...................................................................................................................... 10
Hardware resources ....................................................................................................................... 11
Software resources ........................................................................................................................ 12

Key components ................................................................................................................. 13


Introduction ................................................................................................................................... 13
Common elements ......................................................................................................................... 13

EMC VPLEX Geo ................................................................................................................... 14


EMC VPLEX Geo overview ............................................................................................................... 14
EMC VPLEX Geo design considerations........................................................................................... 14
EMC VPLEX Geo configuration ........................................................................................................ 15

EMC VPLEX Geo administration ............................................................................................ 19


EMC VPLEX Geo administration overview........................................................................................ 19
EMC VPLEX Geo administration process ......................................................................................... 20

EMC VNX5700..................................................................................................................... 21
EMC VNX5700 overview ................................................................................................................. 21
EMC VNX5700 configuration .......................................................................................................... 21
Pool configuration ..................................................................................................................... 21
LUN configuration...................................................................................................................... 21

EMC Symmetrix VMAX ......................................................................................................... 22


EMC Symmetrix VMAX overview ..................................................................................................... 22
EMC Symmetrix VMAX configuration............................................................................................... 22
Symmetrix volume configuration ............................................................................................... 22
Meta device configuration ......................................................................................................... 22

Microsoft Hyper-V ............................................................................................................... 23
Microsoft Hyper-V overview ............................................................................................................ 23
Microsoft Hyper-V configuration ..................................................................................................... 23
Hyper-V networking configuration .................................................................................................. 24
Microsoft SCVMM configuration ..................................................................................................... 28

Networking infrastructure .................................................................................................... 29


Networking infrastructure overview ................................................................................................ 29
Network design considerations ...................................................................................................... 29
Network configuration .................................................................................................................... 29

Silver Peak WAN optimization.............................................................................................. 31


WAN optimization overview............................................................................................................ 31
Silver Peak NX appliance................................................................................................................ 31
Silver Peak design considerations.................................................................................................. 32
Silver Peak WAN optimization results ............................................................................................. 32

Microsoft Office SharePoint Server 2010.............................................................................. 33


SharePoint overview ...................................................................................................................... 33

Microsoft SharePoint Server 2010 configuration .................................................................. 33


SharePoint Server configuration overview ...................................................................................... 33
SharePoint Server design considerations ....................................................................................... 33
SharePoint Server farm virtual machine configurations .................................................................. 34
SharePoint virtual machine configuration and resources ................................................................ 35
SharePoint farm test methodology ................................................................................................. 35

SharePoint Server environment validation ........................................................................... 36


Test summary ................................................................................................................................ 36
SharePoint baseline test ................................................................................................................ 36
SharePoint encapsulated test ........................................................................................................ 37
SharePoint live migration test ........................................................................................................ 38
SharePoint live migration with compression test ............................................................................ 40
SharePoint and Silver Peak WAN optimization ............................................................................... 41

SAP .................................................................................................................................... 42
SAP overview ................................................................................................................................. 42

SAP configuration ............................................................................................................... 43


SAP configuration overview ............................................................................................................ 43
SAP design considerations............................................................................................................. 43
SAP virtual machine configurations ................................................................................................ 44
HP LoadRunner configuration ......................................................................................................... 44

SAP ERP workload profile ............................................................................................................... 44

SAP environment validation ................................................................................................ 45


Test summary ................................................................................................................................ 45
SAP test methodology .................................................................................................................... 45
SAP load generation....................................................................................................................... 46
SAP test procedure ........................................................................................................................ 46
SAP test results .............................................................................................................................. 47
SAP and Silver Peak WAN optimization .......................................................................................... 49

Oracle................................................................................................................................. 50
Oracle overview ............................................................................................................................. 50

Oracle configuration............................................................................................................ 50
Oracle configuration overview ........................................................................................................ 50
Oracle virtual machine configuration.............................................................................................. 50
Oracle database configuration and resources ................................................................................ 51
SwingBench utility configuration .................................................................................................... 52

Oracle environment validation............................................................................................. 53


Test summary ................................................................................................................................ 53
Oracle baseline test ....................................................................................................................... 53
Oracle encapsulated test ............................................................................................................... 54
Oracle distance simulation test ...................................................................................................... 55
Oracle distance simulation with compression test ......................................................................... 56
Oracle live migration test ............................................................................................................... 57
Oracle and Silver Peak Wan optimization ....................................................................................... 58

Conclusion ......................................................................................................................... 59
Summary ....................................................................................................................................... 59
Findings ......................................................................................................................................... 59

References .......................................................................................................................... 60
White papers ................................................................................................................................. 60
Product documentation.................................................................................................................. 60
Other documentation ..................................................................................................................... 60

Executive summary
Business case Today’s global enterprise demands always-on availability of applications and
information in order to remain competitive. The priority is mission-critical
applications: the applications whose downtime results in lost productivity, lost
customers, and, ultimately, lost revenue. EMC has continuously led in products,
services, and solutions that ensure uptime and protect business from disastrous
losses. EMC® VPLEX™ enables customers to seamlessly migrate workloads over
distance to protect information or to better support initiatives and employees around
the globe. Leveraging the architecture documented here, customers can migrate
workloads to different physical locations, up to 2,000 km apart. This provides an
unprecedented level of flexibility while ensuring application and information
availability.

The business day is no longer 9-to-5; companies are working around the clock at
offices across the globe. Information and applications are needed to keep the
business running smoothly. EMC VPLEX helps customers easily migrate workloads
around the globe to:
• Increase ROI by increasing utilization of hardware and software assets
• Ensure availability of information and applications
• Minimize interruption of revenue generating processes
• Optimize application and data access to better meet specific geographic
demands

Solution overview The EMC VPLEX family is a solution for federating EMC and non-EMC storage. The
VPLEX platform logically resides between the servers and heterogeneous storage
assets supporting a variety of arrays from various vendors. VPLEX simplifies storage
management by allowing LUNs, provisioned from various arrays, to be managed
through a centralized management interface.

The EMC VPLEX platform removes physical barriers within, across, and between data
centers. VPLEX Local provides simplified management and non-disruptive data
mobility across heterogeneous arrays. VPLEX Metro provides mobility, availability,
and collaboration between two VPLEX clusters within synchronous distances. VPLEX
Geo further dissolves those distances by extending these use cases to asynchronous
distances.

With a unique scale-up and scale-out architecture, VPLEX’s advanced data caching
and distributed cache coherency provide workload resiliency, automatic sharing,
balancing and failover of storage domains, and enable both local and remote data
access with predictable service levels.

Key results VPLEX Geo provides a more effective way of managing virtual storage environments
by enabling transparent integration with existing applications and infrastructure, and
by providing the ability to migrate data between remote data centers with no
interruption in service. Organizations do not need to perform traditionally complex,
time-consuming tasks to migrate their data between geographically dispersed data
centers, such as making physical backups or using data replication services.

With VPLEX Geo employed as described in this solution, organizations can:


• Easily migrate applications in real time from one site to another with no
downtime, using standard infrastructure tools such as Microsoft Hyper-V.
• Provide an application-transparent and non-disruptive solution for interruption
avoidance and data migration. This reduces the operational impact associated
with traditional solutions (such as tape backup and data replication) from days
or weeks, to minutes.
• Transparently share and balance resources between geographically-dispersed
data centers with standard infrastructure tools.

Introduction
Purpose The purpose of this document is to provide readers with an overall understanding of
the VPLEX Geo technology and how it can be used with tools such as Microsoft
Hyper-V to provide effective resource distribution and sharing between data centers
across distances of up to 2,000 km with no downtime.

VPLEX Geo enables application mobility between data centers at asynchronous


distances. Using VPLEX Geo in conjunction with Microsoft Hyper-V, IT administrators
can guarantee application mobility across existing WANs. With the addition of Silver
Peak compression, existing WAN bandwidth can be optimized for maximum
AccessAnywhere performance between locations.

Scope The scope of this white paper is to document the:


• Environment configuration for multiple applications using virtualized storage
presented by EMC VPLEX Geo
• Migration from traditional, SAN-attached storage to a virtualized storage
environment presented by EMC VPLEX Geo
• Application mobility within a geographically dispersed VPLEX Geo virtualized
storage environment

Audience This white paper is intended for EMC employees, partners, and customers including IT
planners, virtualization architects and administrators, and any other IT professionals
involved in evaluating, acquiring, managing, operating, or designing infrastructure
that leverages EMC technologies.

Terminology This document includes the following terminology.

Table 1. Terminology
Term Definition
Asynchronous Asynchronous consistency groups are used for distributed volumes in
group VPLEX Geo to ensure that I/O to all volumes in the group is
coordinated across both clusters, and all directors in each cluster. All
volumes in an asynchronous group share the same detach rule, are in
write-back cache mode, and behave the same way in the event of an
inter-cluster link failure. Only distributed virtual volumes can be
included in an asynchronous consistency group.

CNA Converged Network Adapter

COM Communication—identifies inter- and intra-cluster communication


links

Consistency Consistency groups allow you to group volumes together and apply a
group set of properties to the entire group. In a VPLEX Geo where clusters
are separated by asynchronous distances (up to 50 ms RTT),
consistency groups are required for asynchronous I/O between the
clusters. In the event of a director, cluster, or inter-cluster link failure
consistency groups ensure consistency in the order in which data is
written to the back-end arrays, preventing possible data corruption.

Distributed Distributed devices use storage from both clusters. A distributed


device device’s components must be other devices, and those devices must
be created from storage in both clusters in the Geo-plex.

DR Disaster Recovery

HA High Availability

OLTP On-line transaction processing

SAP ABAP SAP Advanced Business Application Programming

SAP ERP SAP Enterprise Resource Planning

Synchronous Synchronous consistency groups provide a convenient way to apply


group rule sets and other properties to a group of volumes at a time,
simplifying system configuration and administration on large
systems. Volumes in a synchronous group behave the same in a
VPLEX, and can have global or local visibility. Synchronous
consistency groups can contain local, global, or distributed volumes.

UCS Cisco Unified Computing System

VM Virtual Machine. A software implementation of a machine that


executes programs like a physical machine.

VPLEX Geo Provides distributed federation within, across, and between two
clusters (within asynchronous distances).

VHD Virtual Hard Disk. A Hyper-V virtual hard disk (VHD) is a file that
encapsulates a hard disk image.

Solution Overview
Overview The validated solution is built in a Microsoft Hyper-V environment on EMC VPLEX Geo
infrastructure that incorporates EMC Symmetrix® VMAX™ and EMC VNX™ storage
arrays. The key components of the physical architecture are:

• EMC VPLEX Geo infrastructure blocks providing access and management of


virtualized storage
• An EMC VNX5700™ storage array
• EMC Symmetrix VMAX storage arrays

• Microsoft Hyper-V clusters supporting SAP, Microsoft SharePoint, and Oracle


• Silver Peak NX WAN optimization appliances

Physical architecture Figure 1 illustrates the physical architecture of the use case solution.

Figure 1. Physical architecture diagram

Hardware resources Table 2 describes the hardware resources used in this solution.
Table 2. Hardware resources
Equipment Quantity Configuration
Rack servers                      4    Production Site (Site A): 2 six-core Xeon 5650 CPUs, 96 GB RAM; 2 10-GB Emulex CNA adapters

Unified computing blade servers   4    Disaster Recovery Site (Site B): 2 quad-core Xeon 5670 CPUs, 48 GB RAM; 2 10-GB QLogic CNA adapters

EMC Symmetrix VMAX                1    FC, 600 GB/15k FC drives, 200 GB Flash drives

EMC VNX5700                       1    FC connectivity, 600 GB/15k FC drives, 200 GB Flash drives

EMC VPLEX                         2    VPLEX Geo cluster with two engines and four directors on each cluster

WAN emulation                     1    2 1-GbE network emulators

WAN compression appliance         2    1-GbE hardware appliance

Enterprise-class switches         4    Converged network switches, 2 per site for array and server connectivity

Software resources Table 3 describes the software resources used in this solution environment.

Table 3. Software resources


Software Version
EMC PowerPath® 5.5

Microsoft Windows 2008 R2 SP1

Microsoft Windows 2008 R2 Hyper-V SP1

Microsoft Office SharePoint Server 2010 14.0.4762.1000

Microsoft SQL Server 2008 R2 10.50.1600.1

Red Hat Enterprise Linux 5.5

SAP ERP 6.0 EHP4

SAP NetWeaver 7.0 EHP 1 Unicode 64-bit

Oracle RDBMS 11gR2 11.2.0.2.0

Visual Studio Test Suite 2008 SP1

KnowledgeLake Document Loader 1.1

SwingBench 2.3.0.422

HP LoadRunner 9.5.1

Key components
Introduction The virtualized data center environment described in this white paper was designed
and deployed using a shared infrastructure. All layers of the environment are shared
to create the greatest return on infrastructure investment, while supporting multiple
application requirements for functionality and performance.

Using server virtualization, based on Microsoft Hyper-V, Intel x86-based servers are
shared across applications and clustered to achieve redundancy and failover
capability. VPLEX Geo is used to present shared data stores across the physical data
center locations, enabling migration of the application virtual machines (VMs)
between the physical sites. Physical Site A storage consists of a Symmetrix VMAX
Single Engine (SE) and a VNX5700 for the SAP, Microsoft, and Oracle environments.
VNX5700 is used for the physical Site B data center infrastructure and storage.

Common elements The following sections briefly describe the components used in this solution,
including:
• EMC VPLEX Geo
• EMC VPLEX Geo administration
• EMC VNX5700
• EMC Symmetrix VMAX SE
• Microsoft 2008 R2 with Hyper-V
• Microsoft System Center Virtual Machine Manager (SCVMM)
• Silver Peak NX-9000 WAN optimization appliance

EMC VPLEX Geo
EMC VPLEX Geo overview EMC VPLEX Geo is a storage virtualization platform for the private and hybrid
cloud. EMC VPLEX Geo is a SAN-based block solution for local and distributed federation
that allows the physical storage provided by traditional storage arrays to be
virtualized, accessed, and managed across the boundaries between data centers.

This form of access, called AccessAnywhere, removes many of the constraints of the
physical data center boundaries and its storage arrays. AccessAnywhere storage
allows data to be moved, accessed, and mirrored transparently between data
centers, effectively allowing storage and applications to work between data centers
as though those physical boundaries were not there.

EMC VPLEX Geo design considerations In this solution, we designed our VPLEX Geo plexes using a routed
topology with the following environmental characteristics:
• The routers are situated between the clusters.
• An Empirix network emulator is used between clusters.

Figure 2 shows the VPLEX Geo cluster in a routed topology.

Figure 2. VPLEX Geo cluster using routed topology

EMC VPLEX Geo configuration For this solution, the VPLEX Geo configuration consists of two clusters in two
geographical locations. On each cluster, there are two port groups, as described in Table 4 and
Table 5.

Table 4. VPLEX Geo-plex 1 port groups


Subnet attributes for Port Group 0:
prefix 192.168.11.0

subnet mask 255.255.255.0

cluster-address 192.168.11.251

gateway 192.168.11.1

mtu 1500

remote-subnet 192.168.22.0/24
Subnet attributes for Port Group 1:
prefix 10.6.11.0

subnet mask 255.255.255.0

cluster-address 10.6.11.251

gateway 10.6.11.1

mtu 1500

remote-subnet 10.6.22.0/24

Table 5. VPLEX Geo-plex 2 port groups


Subnet attributes for Port Group 0:
prefix 192.168.22.0

subnet mask 255.255.255.0

cluster-address 192.168.22.252

gateway 192.168.22.2

mtu 1500

remote-subnet 192.168.11.0/24
Subnet attributes for Port Group 1:
prefix 10.6.22.0

subnet mask 255.255.255.0

cluster-address 10.6.22.252

gateway 10.6.22.1

mtu 1500

remote-subnet 10.6.11.0/24

After both clusters join together to form the VPLEX Geo, network connectivity is
established from each director across both clusters.

Figure 3 shows the connectivity status of director-1-1-A.

Figure 3. Director-1-1-A connectivity status

Figure 4 shows the connectivity status of director-2-1-A.

Figure 4. Director 2-1-A connectivity status

When distributed devices are created, they are in synchronous mode by default.
VPLEX Geo clusters require consistency groups to be configured to place distributed
devices in asynchronous mode.

Figure 5 shows that the Consistency Group VPLEX-Async-Group was created on both
clusters. There are a total of nine virtual volumes in the group.

Figure 5. VPLEX consistency group

To verify all virtual volumes are in asynchronous mode, the VPLEX CLI can be used as
shown in Figure 6.

Figure 6. Verify asynchronous mode
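
The following VPlexcli sketch outlines how such an asynchronous consistency group can be created and verified. It is illustrative only: the group and volume names are placeholders, and exact command options vary by GeoSynchrony release, so confirm the syntax against the EMC VPLEX CLI Guide.

    VPlexcli:/> consistency-group create --name VPLEX-Async-Group --cluster cluster-1
    VPlexcli:/> consistency-group add-virtual-volumes --consistency-group VPLEX-Async-Group --virtual-volumes dd_app_vol_1,dd_app_vol_2
    VPlexcli:/> cd /clusters/cluster-1/consistency-groups/VPLEX-Async-Group
    VPlexcli:/clusters/cluster-1/consistency-groups/VPLEX-Async-Group> set cache-mode asynchronous
    VPlexcli:/clusters/cluster-1/consistency-groups/VPLEX-Async-Group> ll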

After the VPLEX Geo cluster WAN connection was established and distance latency
was introduced between the two clusters (representing two geographical locations),
the added delay affected both IP network traffic and SAN traffic.
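
As a rough rule of thumb, the 20 ms round-trip figure follows from the propagation delay of light in optical fiber, approximately 5 µs per kilometer each way: 2,000 km × 5 µs/km × 2 (round trip) = 20 ms, before any queuing or equipment delay is added.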

Figure 7 shows the VPLEX WAN port status.

Figure 7. VPLEX WAN port status

Figure 8 shows the packet round-trip time (RTT) between the VPLEX directors.

Figure 8. RTT between VPLEX directors

EMC VPLEX Geo administration
EMC VPLEX Geo administration overview When bringing an existing storage array into a virtualized storage
environment, the options are to:
• Encapsulate storage volumes from existing storage arrays that have already
been used by hosts
or

• Create a new VPLEX Geo LUN and migrate the existing data to that LUN

VPLEX Geo provides an option to encapsulate the existing data using VPlexcli. When
application consistency is set (using the --appc flag), the claimed volumes are data-
protected and no data is lost.

Note: There is no GUI equivalent for the --appc flag.

In this solution, we encapsulated existing storage volumes with real data and brought
them into the VPLEX Geo clusters, as shown in Figure 9. The data was protected when
storage volumes were claimed with the --appc flag to make the storage volumes
“application consistent.”

Figure 9. Encapsulated storage volumes
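
As a rough illustration of the encapsulation step, the claim operation in VPlexcli looks similar to the following. The storage-volume identifier and name are placeholders, and the exact option spelling should be confirmed against the EMC VPLEX CLI Guide for the GeoSynchrony release in use.

    VPlexcli:/> storage-volume claim --appc --storage-volumes VPD83T3:<volume-id> --name SQL_Data_01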

EMC VPLEX Geo administration process In this solution, administration of VPLEX Geo was done primarily
through the Management Console, although the same functionality exists with VPlexcli.
On authenticating to the secure web-based GUI, the user is presented with a set of
on-screen configuration options, listed in the order of completion. For more
information about each step in the workflow, refer to the EMC VPLEX Management
Console online help. Table 6 summarizes the steps to be taken, from the discovery of
the arrays up to the storage being visible to the host.

Table 6. VPLEX Geo administration process


Step Action
1 Discover available storage
VPLEX Geo automatically discovers storage arrays that are zoned to the back-
end ports. All arrays connected to each director in the cluster are listed in the
Storage Arrays view.

2 Claim storage volumes


Storage volumes must be claimed before they can be used in the cluster (with
the exception of the metadata volume, which is created from an unclaimed
storage volume). Only after a storage volume is claimed, can it be used to
create extents, devices, and then virtual volumes.

3 Create extents
Create extents for the selected storage volumes and specify the capacity.

4 Create devices from extents


A simple device is created from one extent and uses storage in one cluster only.

5 Create a virtual volume


Create a virtual volume using the device created in the previous step.

6 Register initiators
When initiators (hosts accessing the storage) are connected directly or through
a Fibre Channel fabric, VPLEX Geo automatically discovers them and populates
the Initiators view. Once discovered, you must register the initiators with VPLEX
Geo before they can be added to a storage view and access storage. Registering
an initiator gives a meaningful name to the port’s WWN, which is typically the
server’s DNS name, to allow you to easily identify the host.

7 Create a storage view


For storage to be visible to a host, first create a storage view and then add
VPLEX Geo front-end ports and virtual volumes to the view. Virtual volumes are
not visible to the hosts until they are in a storage view with associated ports
and initiators.
The Create Storage View wizard enables you to create a storage view and add
initiators, ports, and virtual volumes to the view. Once all components are
added to the view, it automatically becomes active. When a storage view is
active, hosts can see the storage and begin I/O to the virtual volumes.
After creating a storage view, you can only add or remove virtual volumes
through the GUI. To add or remove ports and initiators, use the CLI. For more
information, refer to the EMC VPLEX CLI Guide.
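
For reference, steps 3 through 7 can also be driven from VPlexcli. The outline below is a simplified sketch under assumed object names (extent, device, volume, view, port, and initiator names are placeholders); consult the EMC VPLEX CLI Guide for the exact command syntax in your release.

    VPlexcli:/> extent create --storage-volumes app_lun_01
    VPlexcli:/> local-device create --name dev_app_lun_01 --geometry raid-0 --extents extent_app_lun_01_1
    VPlexcli:/> virtual-volume create --device dev_app_lun_01
    VPlexcli:/> export storage-view create --name hyperv_view --ports <vplex-frontend-ports>
    VPlexcli:/> export storage-view addinitiatorport --view hyperv_view --initiator-ports hyperv_node1_hba0
    VPlexcli:/> export storage-view addvirtualvolume --view hyperv_view --virtual-volumes dev_app_lun_01_vol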

EMC VNX5700
EMC VNX5700 overview The EMC VNX family delivers industry-leading innovation and enterprise
capabilities for file, block, and object storage in a scalable, easy-to-use solution. This
next-generation storage platform combines powerful and flexible hardware with advanced
efficiency, management, and protection software to meet the demanding needs of
today’s enterprises.

The VNX series is designed to meet the high-performance, high-scalability


requirements of midsize and large enterprises, delivering leadership performance,
efficiency, and simplicity for demanding virtual application environments.

EMC VNX5700 configuration This section describes how the VNX5700 was configured in this solution.
Pool configuration
Table 7 describes the VNX5700 pool configuration used in this solution.

Table 7. VNX5700 pool configuration


Pool Protection type Drive count Drive technology Drive capacity
VNXPool0 RAID 1/0 16 SAS 300 GB

VNXPool1 RAID 5 30 SAS 300 GB

VNXPool2 RAID 1/0 16 SAS 300 GB

VNXPool3 RAID 1/0 2 SATA Flash 200 GB

VNXPool4 RAID 1/0 2 SAS 300 GB

LUN configuration
Table 8 describes the VNX5700 LUN configuration used in this solution.

Table 8. VNX5700 LUN configuration


LUN LUN ID LUN size Pool
R10CSV01 1 2 TB VNXPool0

R5CSV01 2 2 TB VNXPool1

R5CSV02 3 2 TB VNXPool1

R5CSV03 4 2 TB VNXPool1

R10CSV02 38 2 TB VNXPool2

R10CSV03 39 150 GB VNXPool3

ORALOG 43 150 GB VNXPool4

EMC Symmetrix VMAX
EMC Symmetrix VMAX overview The EMC Symmetrix VMAX series is the latest generation of the Symmetrix
product line. Built on the strategy of simple, intelligent, modular storage, it incorporates a
scalable fabric interconnect design that allows the storage array to seamlessly grow
from an entry-level configuration into a large-scale enterprise storage system.
Symmetrix VMAX arrays provide improved performance and scalability for demanding
enterprise environments such as those found in large virtualization environments,
while maintaining support for EMC's broad portfolio of platform software offerings.

Symmetrix VMAX systems deliver software capabilities that improve capacity use,
ease of use, business continuity, and security. These features provide significant
advantage to customer deployments in a virtualized environment where ease of
management and protection of virtual machine assets and data assets are required.

EMC Symmetrix VMAX configuration This section describes how the EMC Symmetrix VMAX was configured in
this solution.

Symmetrix volume configuration
Table 9 describes the Symmetrix VMAX volume configuration used in this solution.

Table 9. Symmetrix VMAX volume configuration


Volume ID    Protection type    Device size    Drive technology    Drive capacity
1FD:229      RAID 5 (7+1)       240 GB         Fibre Channel       450 GB 15k
22A:22B      RAID 5 (7+1)       150 GB         Fibre Channel       450 GB 15k

Note: Devices 22A:22B are used as stand-alone devices.

Meta device configuration


Table 10 describes the Symmetrix VMAX metadevice configuration used in this
solution.

Table 10. Symmetrix VMAX metadevice configuration


Volume ID    Protection      Meta configuration    Meta members    Volume size
1FD RAID 5 (7+1) striped 1FE:205 2.1 TB

206 RAID 5 (7+1) striped 207:20E 2.1 TB

20F RAID 5 (7+1) striped 210:217 2.1 TB

218 RAID 5 (7+1) striped 219:220 2.1 TB

221 RAID 5 (7+1) striped 222:229 2.1 TB
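
Striped metadevices of this kind are typically formed with the Solutions Enabler symconfigure command, using a command file similar to the sketch below (the Symmetrix ID is a placeholder, and the command-file syntax should be validated against the Solutions Enabler documentation for the release in use):

    form meta from dev 1FD, config=striped;
    add dev 1FE:205 to meta 1FD;

The command file is then previewed and committed, for example with symconfigure -sid <VMAX-SID> -file form_meta.cmd preview followed by symconfigure -sid <VMAX-SID> -file form_meta.cmd commit.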

Microsoft Hyper-V
Microsoft Hyper-V overview Hyper-V is a hypervisor-based virtualization technology from Microsoft that
organizations use to reduce costs by using virtualization through Windows Server
2008 R2. Microsoft Hyper-V enables customers to make the best use of their server
hardware by consolidating multiple server roles as separate virtual machines running
on a single physical machine.

Microsoft Hyper-V configuration This section describes the configuration of the Microsoft Hyper-V
environment used in this solution.

Windows Failover Cluster is used to provide high-availability features as well as live


migration and cluster shared volume capability. Using the VPLEX volume for the
cluster shared volume allows multiple virtual machines to be hosted on a single LUN,
while still allowing for live migration of a virtual machine from one site to another
independent of the other virtual machines on that volume.

Figure 10 shows that four nodes are used at each site and three Ethernet network
connections are used for Heartbeat, live migration, and client access at the respective
site. System Center Virtual Machine Manager (SCVMM) is used to manage the virtual
machines on the Hyper-V cluster.

Figure 10. Hyper-V cluster nodes and connections

Hyper-V networking configuration One Hyper-V virtual switch, named MSvSwitch, is created for the virtual
machine network. Figure 11 shows the relationship between the virtual machine, virtual
switch, and physical NIC on one of the cluster nodes.

Figure 11. Virtual switch relationship

The virtual machine network is configured to use VLAN tagging as shown in Figure 12.

Figure 12. VLAN tagging on the virtual machine network

EMC PowerPath is installed on the cluster nodes, as shown in Figure 13, to provide
load balancing and fault tolerance on the FC network. To provide support for the
VPLEX device, Invista® Devices Support must be selected during installation. It can
also be changed after installation using Add/Remove Programs.

Figure 13. EMC PowerPath installed
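
After installation, the PowerPath CLI can be used from a command prompt on each node to confirm that the VPLEX devices are visible and that multipath load balancing is in effect (output is environment-specific and omitted here):

    C:\> powermt display
    C:\> powermt display dev=all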

The distributed volume on VPLEX Geo is presented to the nodes on both sites. A basic
volume is created, formatted with NTFS on one of the nodes, and then added to the
cluster. With Windows 2008 R2, the Cluster Shared Volumes feature can be activated
by right-clicking on the cluster and selecting Enable Cluster Shared Volumes, as
shown in Figure 14. The disk resource can then be added into cluster shared volume.

Figure 14. Enabling cluster shared volumes

Note: Make sure all nodes are available when enabling the Cluster Shared Volume
feature; otherwise, you will need to use the cluster CLI later to add the node to
the owner list of the cluster resource. See Figure 15.

Figure 15. Cluster resource owner list
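
The same adjustment can be made with the FailoverClusters PowerShell module instead of cluster.exe. The following is a minimal sketch; the disk-resource and node names are placeholders for this environment, and the way the CSV resource is named in your cluster may differ.

    Import-Module FailoverClusters

    # Add the VPLEX distributed disk resource to Cluster Shared Volumes
    Add-ClusterSharedVolume -Name "Cluster Disk 1"

    # Re-apply the full possible-owner list if a node was unavailable when CSV was enabled
    Get-ClusterResource -Name "Cluster Disk 1" |
        Set-ClusterOwnerNode -Owners SiteA-Node1,SiteA-Node2,SiteB-Node1,SiteB-Node2

    # Verify the owner list
    Get-ClusterResource -Name "Cluster Disk 1" | Get-ClusterOwnerNode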

Cluster Shared Volume mounts the disk under C:\SharedStorage on every node of the
cluster, as shown in Figure 16.

Figure 16. Cluster shared volume location

The virtual machine can be configured to use that path to place the virtual disk on the
shared volume, as shown in Figure 17.

Figure 17. Virtual hard disk path

If the storage network between the host and array fails, there is an option to redirect
the traffic over the LAN to the node that owns the cluster shared disk resource.

Microsoft SCVMM configuration If you need to move the virtual machine disk to a different volume, the
Migrate Storage option can be used with Microsoft System Center Virtual Machine
Manager (SCVMM), as shown in Figure 18.

Figure 18. SCVMM Migrate Storage option

Then provide the target volume to move to, as shown in Figure 19.

Figure 19. Target volume

The virtual machine must be in a saved state or powered off to move the underlying
storage.

Networking infrastructure
Networking infrastructure overview This section describes the virtual machine network environment used in
this solution. Topics include:
• Network design considerations
• Network infrastructure configuration

Network design considerations The virtual machine network environment in this solution consists of a single
Layer-2 network extended across the WAN between Site A and Site B. The following
design considerations apply to this environment:
• This extension was done using Cisco's Overlay Transport Virtualization (OTV)
rather than by bridging the VLAN over the WAN. OTV allows for Ethernet LAN
extension over any WAN transport by dynamically encapsulating Layer 2 “Mac
in IP” and routing it across the WAN.
• Edge devices and Nexus 7000 switches exchange information about learned
devices on the extended VLAN at each site via multicast, which negates the
need for ARP and other broadcasts to be propagated across the WAN.
• Additionally, using OTV rather than bridging eliminates BPDU forwarding (part
of normal spanning tree operations in a bridged VLAN scenario) and provides
the ability to eliminate or rate-limit other broadcasts to conserve bandwidth.
Note: For recommendations about using live migration in your own Hyper-V
environment, refer to the Hyper-V: Live Migration Network Configuration Guide
at the Microsoft TechNet site.

Network configuration Table 11 lists the OTV configuration for each edge device in the virtual machine
network.

Table 11. Virtual machine network OTV configuration


Site OTV Configuration
Site A: Pcloud-7000-OTV
feature otv
otv site-vlan 1
interface Overlay1
description VPLEX-WAN
otv isis authentication key-chain VPLEX
otv join-interface Ethernet3/16
otv control-group 239.1.1.1
otv data-group 239.194.1.0/29
otv extend-vlan 580
otv site-identifier 1

Site OTV Configuration
Site B: Pcloud-7000-VPLEX-SITE-B
feature otv
interface Overlay1
description VPLEX-WAN
otv isis authentication key-chain VPLEX
otv join-interface Ethernet3/33
otv control-group 239.1.1.1
otv data-group 239.194.1.0/29
otv extend-vlan 580
otv site-identifier 1
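
Once the overlay is up, a few NX-OS show commands on each edge device can confirm the OTV adjacency and the extended VLAN; command availability may vary slightly by NX-OS release:

    show otv
    show otv adjacency
    show otv vlan
    show otv route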

Note: For more detail on Cisco OTV refer to the Cisco Quick Start Guide.

Silver Peak WAN optimization
WAN optimization overview WAN optimization is the process of improving network traffic flow by increasing
efficiency and minimizing bandwidth roadblocks through the use of data
compression, caching, and other techniques.

There are many choices for WAN optimization products and vendors that can be
deployed to meet your networking needs. In this solution we used Silver Peak WAN
optimization appliances.

Note: Bandwidth savings from WAN optimization are independent of distance, and
present over all distances. Acceleration benefits from WAN optimization
increase substantially as latency increases. Detailed test results for
acceleration beyond 20 ms are available at www.silver-peak.com.

Silver Peak NX appliance Silver Peak’s appliances are data-center-class network devices designed to meet
the rigorous WAN optimization requirements of large enterprises, delivering top
performance, scalability, and reliability.

Silver Peak’s high-capacity NX appliance scales from megabytes-per-second to


gigabytes-per-second of WAN capacity in a single device. By optimizing primarily at
the network layer, the appliance can optimize all IP traffic regardless of transport
protocol or application software version. Silver Peak’s optimization technology helps
to overcome common WAN bandwidth, latency, and quality challenges.

As shown in Figure 20, Silver Peak appliances can be deployed between VPLEX Geo
clusters at both ends of the WAN. Silver Peak, when deployed with VPLEX Geo,
mitigates many challenges associated with deploying a geographically distributed
architecture, including limited bandwidth, high latency (due to distance), and WAN
quality.

Figure 20. Silver Peak WAN optimization appliances and EMC VPLEX Geo

Silver Peak design considerations In this solution, the Silver Peak appliances were configured as follows:
• Silver Peak compression appliances were configured in Routed mode using
policy-based routing, and inserted inline on each side of the 1-Gigabit Ethernet
WAN link.
• Site-to-site traffic was routed across the WAN, entering the appliance through
the Site A LAN interface. It was then compressed and sent through a GRE tunnel
between the two appliances where it is uncompressed on the other side and
sent out the Site B LAN interface into the Site B network.
• Under an appliance failure scenario, the appliances fail open, meaning all
traffic continues to pass through, although uncompressed. In addition, in
Bridged mode, System Bypass can be used to pass all traffic uncompressed
between sites if required.

The Silver Peak appliances can also be configured as a redirect target using WCCP
rather than be deployed inline. See the Silver Peak Configuration Guide available at
www.silver-peak.com for more detail.

Silver Peak WAN optimization results Our test results showed that this solution benefitted from the Silver Peak
WAN optimization across all applications. Figure 21 shows the traffic across the LAN
and WAN, and describes the peak and average deduplication ratio for all applications
over the testing period for both sites. The average deduplication was 66 percent on
Site B and 56 percent on Site A.

Figure 21. Silver Peak WAN optimization performance

Microsoft Office SharePoint Server 2010
SharePoint overview This section covers the following topics:
• Microsoft SharePoint Server 2010 configuration
• SharePoint Server environment validation

Microsoft SharePoint Server 2010 configuration


SharePoint Server configuration overview When customers move their physical SharePoint environments to a
virtualized infrastructure, they also seek all the benefits that virtualization brings to
such a complex, federated application as SharePoint Server. Mobility of the entire farm,
rather than building a mirror farm, is a rising requirement of enterprise-level
application owners. Migrating the entire SharePoint farm to a remote site as a form of
disaster avoidance can be complex. A solution needs to meet two leading
challenges:
• Moving the entire SharePoint farm between different data centers without
interrupting operations on the farm
• Reducing storage maintenance costs

The virtualized SharePoint Server 2010 farm used in our solution overcomes these
challenges by building on Microsoft Hyper-V enabled by VPLEX Geo technology, which
allows disparate storage arrays at multiple locations to be presented as a single,
shared array to the SharePoint 2010 farm.

SharePoint Server design considerations In this SharePoint 2010 environment design, the major configuration
highlights include:
• The SharePoint farm is designed as a publishing portal. There is around 400 GB
of user content, consisting of four SharePoint site collections (document
centers) with four content databases, each populated with 100 GB of random
user data. Microsoft network load balancing (NLB) was enabled on three web
front-ends (WFE) servers for load balancing and local failover consideration.
• The SharePoint farm uses seven virtual machines hosted on four physical
Hyper-V servers at the production site. Three web front-end (WFE) servers are
also configured with query roles for load balancing.
• The query components have been scaled out to three partitions. Each query
server contains a part of the index partitions and a mirror of another index
partition for fault-tolerance considerations.
• Two index components are provisioned for fault tolerance and better crawl
performance.

SharePoint Server farm virtual machine configurations Table 12 describes the virtual machine configurations
for the SharePoint Server 2010 farm.

Table 12. Microsoft SharePoint Server 2010 farm virtual machines
Configuration Description
Three WFE virtual machines The division of resources offers the best search
performance and redundancy in a virtualized SharePoint
farm. As WFE and query roles are CPU-intensive, the WFE
VMs were allocated four virtual CPUs.
The query components have been scaled out into three
partitions. Each query server contains a part of the index
partitions and a mirror of another index partition for fault-
tolerance consideration.

Two index virtual machines Two index components were partitioned in this farm for
better crawl performance. Multiple crawl components
mapped to the same crawl database to achieve fault
tolerance. The index components were designed to crawl
themselves without impacting the production WFE
servers.
Four virtual CPUs and 6 GB memory were allocated for the
index server. The incremental crawl was scheduled to run
every two hours.

Application Excel virtual Two virtual CPUs and 2 GB of memory were allocated for
machine the application server as these roles require less
resource.

SQL Server virtual machine Four virtual CPUs and 16 GB of memory were allocated for
the SQL Server virtual machine as CPU utilization and
memory requirement for SQL in a SharePoint farm could
be high.
With more memory allocated to the SQL virtual machine,
the SQL Server becomes more effective in caching
SharePoint user data, leading to fewer required physical
IOPS for storage and better performance.
Four tempdb data files were created, equal to the number
of SQL Server CPU cores, as Microsoft recommends.

SharePoint virtual machine configuration and resources Table 13 lists the virtual machine configuration of the
SharePoint farm and the allocated resources.

Table 13. SharePoint farm virtual machine configuration
Server role                               Quantity   vCPUs   Memory (GB)   Boot disk (GB)   Search disk (GB)
WFE Servers                               3          4       4             40               60
Index Servers                             2          4       6             40               60
Application Server (Host Central Admin)   1          2       2             40               Not Applicable
SQL Server 2008 R2 Enterprise             1          4       16            40               Not Applicable

SharePoint farm test methodology The data population tool uses a set of sample documents. Altering the
document names and metadata (before insertion) makes each document unique.

One load-agent host is allocated for each WFE, allowing data to be loaded in parallel
until the targeted 400 GB data size is reached. The data is spread evenly across the
four site collections (each collection is a unique content database). The user profiles
consist of a mix of three user operations: browse, search, and modify.

KnowledgeLake DocLoaderLite was used to populate SharePoint with random user


data, while Microsoft VSTS 2008 SP1 emulated the client user load. Third-party
vendor code was used to ensure an unbiased and validated test approach.

During validation, a Microsoft heavy-user load profile was used to determine the
maximum user count that the Microsoft SharePoint 2010 server farm could sustain
while ensuring the average response times remained within acceptable limits.
Microsoft standards state that a heavy user performs 60 requests in each hour; that
is, there is a request every 60 seconds.

The user profiles in this testing consist of three user operations:


• 80 percent browse
• 10 percent search
• 10 percent modify

Note: Microsoft publishes default service-level agreement (SLA) response times for
each SharePoint user operation. Common operations (such as browse and
search) should be completed within 3 seconds or less, and uncommon
operations (such as modify) should be completed within 5 seconds or less.
These response time SLAs were comfortably met and exceeded.

SharePoint Server environment validation
Test summary Our testing validated SharePoint Server 2010 operations before and after
encapsulation into the VPLEX Geo cluster:
• A baseline test is performed first to log the SharePoint 2010 farm base
performance.
• The next test then validates the effect on performance when the SharePoint
farm virtual machines are encapsulated into the VPLEX Geo cluster.

• Live migration tests were performed on the whole SharePoint farm, using a
distance emulator, after the encapsulation of storage into the VPLEX Geo cluster.
Latency was set at 20 ms, equivalent to 2,000 km.
• Live migration tests were performed on the whole SharePoint farm with the
insertion of Silver Peak compression. Latency was set to 20 ms equivalent to
2,000 km.
SharePoint 2010, VPLEX Geo, and Hyper-V performance data was logged for analysis
during the tests. This data presents an account of results from VSTS 2008 SP1, which
generates continuous workload (Browse/Search/Modify) to the WFEs of the
SharePoint 2010 farm, while simultaneously consolidating the SQL and Oracle OLTP
workload on the same Hyper-V clusters.

SharePoint baseline test With a mixed user profile of 80/10/10, the virtualized SharePoint farm can support
a maximum of 11,520 users with 10 percent concurrency, while satisfying Microsoft’s
acceptable response time criteria, as shown in Table 14 and Table 15.

Table 14. SharePoint user activity baseline performance


User activity - Browse/Search/Modify    Acceptable response time    Baseline response time (Browse/Search/Modify)
80% / 10% / 10%                         <3 / <3 / <5 sec            2.47 / 2.00 / 1.33 sec

Table 15. SharePoint content mix baseline performance


Content mix - Browse/Search/Modify    Requests per second (RPS)    Microsoft user profile    Concurrency    Maximum user capacity
80% / 10% / 10%                       19.2                         Heavy                     10%            11,520
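
The user-capacity figure follows directly from the measured throughput: 19.2 RPS × 3,600 seconds = 69,120 requests per hour, which at the heavy-user rate of 60 requests per hour corresponds to 1,152 concurrent users, or 11,520 total users at 10 percent concurrency. The same arithmetic applies to the encapsulation and live migration results that follow.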

SharePoint encapsulated test The storage for the entire SharePoint farm was encapsulated and virtualized in
this test. The storage was active across both sites and made available to the SharePoint
and SQL servers through VPLEX Geo. After that, all SharePoint virtual machines were
started up.

After encapsulation, with a mixed user profile of 80/10/10, the virtualized SharePoint
farm can support a maximum of 12,780 users with 10 percent concurrency, while
satisfying Microsoft’s acceptable response time criteria.

Figure 22 shows the performance of passed tests per second after encapsulating into
the VPLEX LUNs on the SharePoint virtual machines.

Figure 22. Performance of passed tests per second after SharePoint encapsulation

Table 16 and Table 17 show the performance results when encapsulation is used.

Table 16. SharePoint user activity performance after encapsulation


User activity - Browse/Search/Modify    Acceptable response time    Response time after encapsulation (Browse/Search/Modify)
80% / 10% / 10%                         <3 / <3 / <5 sec            2.41 / 2.33 / 1.26 sec

Table 17. SharePoint content mix performance after encapsulation


Content mix - Browse/Search/Modify    Requests per second (RPS)    Microsoft user profile    Concurrency    Maximum user capacity
80% / 10% / 10%                       21.3                         Heavy                     10%            12,780

SharePoint live migration test In this test environment, the whole SharePoint farm was migrated from the
production site to the DR site across a 2,000 km distance by using Hyper-V live
migration, without loss of service. During the live migration process, the SharePoint
farm was running with a full load. All SharePoint virtual machines were migrated in
sequence from the production site to the DR site in the eight-node Hyper-V clusters.
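
A sequential migration of this kind can be scripted with the FailoverClusters PowerShell module; the sketch below is illustrative only, with placeholder virtual machine group and node names, and assumes the virtual machines are clustered roles eligible for live migration.

    Import-Module FailoverClusters

    # Live migrate the SharePoint farm virtual machines one at a time to a DR-site node
    $farmVMs = "SP-WFE1","SP-WFE2","SP-WFE3","SP-APP1","SP-IDX1","SP-IDX2","SP-SQL1"
    foreach ($vm in $farmVMs) {
        Move-ClusterVirtualMachineRole -Name $vm -Node "SiteB-Node1"
    }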

The 10 GbE connection was used for the live-migration network as live migration
requires high bandwidth.

Figure 23 shows the passed tests per second during the live migration. When running live migration between the sites, the transactions per second fluctuated. The drop in the number of passed tests per second occurred during the migration of the SQL Server virtual machine, as Hyper-V was transferring its memory across the sites.

Because the live migration process reduces the maximum user capacity of the entire SharePoint farm, we recommend performing a whole-farm live migration during off-peak hours. As shown in Figure 23, there was no loss of service for the whole SharePoint farm during the live migration.

Figure 23. Passed tests per second during live migration of the entire SharePoint
farm across sites with a 2,000 km distance

Table 18 and Table 19 list the SharePoint farm performance results during the live
migration across the sites with a 2,000 km distance. Running the entire SharePoint
farm on the DR site would decrease the number of passed tests per second because
of the 20 ms latency between the clients and the SharePoint farm including the web
front-end servers.

Table 18. User activity performance during the live migration between sites with a 2,000 km distance

User activity (Browse/Search/Modify): 80% / 10% / 10%
Acceptable response time: <3 / <3 / <5 sec
Response time (Browse/Search/Modify): 2.90 / 1.73 / 1.49 sec

Table 19. Content mix performance during the live migration between sites with a 2,000 km distance

Content mix (Browse/Search/Modify): 80% / 10% / 10%
Requests per second (RPS): 15.6
Concurrency: 10%
Maximum user capacity: 9,360
Successful requests rate: 100%

Virtual machines with large memory configurations take longer to migrate than virtual machines with smaller memory configurations. This is because the virtual machine's active memory is copied over the network to the receiving cluster node before the virtual machine is switched over to the destination.
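As a rough, illustrative estimate (assumed figures, not measurements from this solution): a virtual machine with 16 GB of active memory, moved over a link sustaining an effective 1 GB/s, needs on the order of

    16 GB ÷ 1 GB/s ≈ 16 seconds

for the initial memory copy alone. Pages dirtied during the copy must be re-sent, and added WAN latency slows each pass, which is consistent with the SQL Server virtual machine, typically the largest memory consumer in a SharePoint farm, showing the longest durations in Table 20.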

Table 20 lists the live migration duration, with and without latency for the entire
SharePoint farm. Note how a 2,000 km distance between the two data centers caused
a longer cross-site live migration duration.

Table 20. Live migration duration of all SharePoint virtual machines

SharePoint farm server role   Duration without distance latency (mm:ss)   Duration with 2,000 km distance and 20 ms latency (mm:ss)
Application Excel             0:28                                        1:14
Web Front End Server 1        0:56                                        1:25
Web Front End Server 2        0:44                                        1:35
Web Front End Server 3        0:38                                        1:52
SQL Server                    3:05                                        4:48
Crawler Server 1              1:06                                        1:45
Crawler Server 2              0:48                                        2:31

SharePoint live migration with compression test

In this test, the entire SharePoint farm was migrated across the sites with a 2,000 km distance with the insertion of Silver Peak WAN optimization. Latency was set to 20 ms, equivalent to 2,000 km. During the live migration process, the SharePoint farm was running with a full load. All SharePoint virtual machines were migrated in sequence across sites in the eight-node Hyper-V clusters.

In this scenario, the 1 GbE connection with Silver Peak WAN optimization enabled was used for the live-migration network. The live migration duration with Silver Peak WAN compression over 1 GbE was comparable to that achieved over the 10 GbE network connection.

Table 21 details the migration duration for all the SharePoint virtual machines.

Table 21. WAN-optimized SharePoint live migration results

SharePoint farm server role   Live migration duration with a 2,000 km distance and Silver Peak enabled (mm:ss)
Application Excel             1:49
Web Front End Server 1        1:36
Web Front End Server 2        2:47
Web Front End Server 3        2:49
SQL Server                    6:32
Crawler Server 1              3:12
Crawler Server 2              3:00

SharePoint and Silver Peak WAN optimization

Figure 24 shows the data reduction ratio during full load on the SharePoint farm. Our testing showed that data reduction can reach up to 68.5 percent, including the live migration process across the sites.
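The reduction percentage reported by the appliance is simply the share of offered traffic that never has to cross the WAN. As an illustrative (not measured) example:

    reduction % = (1 − WAN bytes sent ÷ LAN bytes offered) × 100
                = (1 − 31.5 GB ÷ 100 GB) × 100 = 68.5 %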

Figure 24. Reduction of SharePoint traffic with Silver Peak

SAP

SAP overview

Large and midsize organizations deploy SAP ERP 6.0 EHP4 to meet their core business needs such as financial analysis, human capital management, procurement and logistics, product development and manufacturing, and sales and service, supported by analysis, corporate services, and end-user service delivery.

EMC VPLEX Geo enables applications to access virtualized LUNs across data center sites and provides the ability to move virtual machines between data centers. This optimizes data center resources and enables data center relocation and server maintenance with zero downtime.

Because SAP applications and modules can be distributed among several virtual
servers (see Figure 25), and normal operations involve extensive communication
between them, it is critical that communication is not disrupted when individual
virtual machines are moved from site to site.

Figure 25. SAP environment

The rest of this section covers the following topics:


• SAP configuration overview
• Validation of the virtualized SAP environment

SAP configuration

SAP configuration overview

SAP ERP system PRD was installed as a high-availability system with the International Demonstration and Education System (IDES) database (ABAP stack) on Windows 2008 Enterprise SP2 and Microsoft SQL Server 2008 R2 Enterprise.

IDES represents a model international company with subsidiaries in several countries. IDES contains application data for various business scenarios that can be run in the system. The business processes in the IDES system are designed to reflect real-life business requirements and characteristics.

SAP design considerations

In this SAP ERP 6.0 EHP4 environment, the major configuration considerations include:
• SAP patches, parameters, basis settings, and load balancing, as well as
Windows 2008 and Hyper-V were all installed and configured according to SAP
procedures and guidelines.
• SAP update processes (UPD/UP2) were configured on the Application Server
instances.
• Some IDES functionality—for example, synchronization with the external GTS
system—was deactivated to eliminate unnecessary external interfaces that
were outside the scope of the test.
• The system was configured and customized to enable LoadRunner automated
scripts to run business processes on the functional areas including Sales and
Distribution (SD), Material Management (MM), and Finance and Controlling
(FI/CO). The Order to Cash (OTC) business scenario was used as an example in
this use case.
• The storage for the entire SAP environment was encapsulated and virtualized in this test. The storage was active across both sites and made available to the SAP servers through VPLEX Geo.

SAP virtual machine configurations

The sample SAP system PRD consists of one SAP database instance, one ABAP SAP Central Services (ASCS) instance, and two application server (AS) instances. All instances are installed on Hyper-V virtual machines with the configurations described in Table 22.

Table 22. SAP virtual machine resources

Server role   Quantity   vCPUs   Memory (GB)   OS boot disk (GB)   Additional disks (GB)
SAPERPDB      1          4       16            90                  842
SAPASCS       1          2       4             90                  32
SAPERPDI      2          2       8             90                  25

HP LoadRunner configuration

The HP LoadRunner application emulates concurrent users to apply production workloads on an application platform or environment. LoadRunner applies consistent, measurable, and repeatable loads to an application from end to end.

The LoadRunner system consists of one LoadRunner controller and the associated
virtual user generator in a virtual machine with the configuration as listed in Table 23.

Table 23. LoadRunner virtual machine


Server role vCPUs Memory (GB)
Controller 2 16

The parameters were configured according to best practices, including enabling IP spoofing, running as a process instead of a thread, and setting think time to a limited value.

SAP ERP workload profile

In our testing, LoadRunner ran an order-to-cash (OTC) business process scenario to generate the application-specific workload. This process covers a sell-from-stock scenario, which includes the creation of a customer order with six line items and the corresponding delivery with subsequent goods movement and invoicing. Special pricing conditions were also used. The process consists of the following transactions:

1. Create an order with six line items (Transaction VA01).
2. Create a delivery for this order (VL01N).
3. Display the customer order (VA03).
4. Change the delivery (VL02N) and post goods issue.
5. Create an invoice (VF01).
6. Create an accounting document.

SAP environment validation
Test summary

The test objective was to validate the non-disruptive movement of the SAP database, central services, and application server instance virtual machines across data centers, enabled by Microsoft Hyper-V live migration and EMC VPLEX Geo.

During validation, 100 sales orders were created in the PRD system. The purpose of this scenario was to maintain active connections between the user GUI, the database instance, and the application server instances during a Hyper-V live migration, so that business continuity of the federated solution landscape under live migration could be verified by a successful SAP sales order creation process.

SAP test methodology

Table 24 describes the SAP test scenarios.
Table 24. SAP validation test scenarios
Baseline: 100 sales orders were created on the SAP ERP 6.0 EHP4 system while the SAP database, central services, and application server instances resided in Data Center 1.

Live migration: 100 sales orders were initiated on the SAP ERP 6.0 EHP4 system while the SAP database, central services, and application server instances resided in Data Center 1. During the SAP sales order creation process, live migrations were conducted to move the SAP VMs from Data Center 1 to Data Center 2.

Live migration with WAN optimization (compression): 100 sales orders were initiated on the SAP ERP 6.0 EHP4 system while the SAP database, central services, and application server instances resided in Data Center 1. During the SAP sales order creation process, live migrations were conducted to move the SAP VMs from Data Center 1 to Data Center 2. The live migrations were conducted using Silver Peak WAN optimization.

We used the following SAP key performance indicators to evaluate the functionality
and throughput during the tests:
• Business volume (number of SAP business documents processed)
• SAP average response time for dialog work process

Additional statistics at the Windows OS level were collected from Microsoft SCVMM.

SAP load generation

The SAP ERP 6.0 EHP4 system used to validate this solution was a standard IDES system with a custom configuration and additional master data and transactional data. The database size was 511 GB and the SAP SID was PRD.

The LoadRunner controller ramped up one virtual user every 20 seconds until 10 concurrent virtual users were active. All virtual users generated system workload activity during the entire testing period. All virtual users connected to PRD through the predefined logon group in order to distribute the workload evenly across both SAP application server instances.
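At this ramp rate the full load is reached quickly: the tenth virtual user starts (10 − 1) × 20 s = 180 seconds (3 minutes) after the first, so most of each 12- to 16-minute LoadRunner run (Table 26) executes at the full 10-user load.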

SAP test procedure

Table 25 lists the test procedure steps for each phase of testing.

Table 25. SAP test procedure


Step Action
1 Count existing SAP sales order documents.

2 Reset the LoadRunner environment.

3 Start OS performance collection.

4 Run the LoadRunner scenario.

5 Stop OS performance collection.

6 Count existing SAP sales order documents.

7 Collect performance metrics from SAP and Microsoft SCVMM.

SAP test results

Our results showed that the SAP sales order creation process was not interrupted during the test. The virtual users experienced longer response times during the live migration, but performance soon returned to previous levels once the live migration was completed. Table 26 lists the test results.

Table 26. SAP test results

Metrics                         Baseline   Live migration (no Silver Peak)   Live migration (with Silver Peak)
Number of Sales Orders          100        100                               100
Total Dialog Steps              2,343      2,342                             2,341
DIA Avg. Resp. Time (ms)        540.1      1,360                             1,450
CPU Time %                      29.7       9.4                               9
DB Time %                       63.7       74.8                              75.1
LoadRunner Duration (mm:ss)     12:40      15:12                             15:34

Live migration times in sequence (mm:ss)
SAPERPDB                        -          4:49                              4:40
SAPERPENQ1                      -          1:08                              1:13
SAPERPDI1                       -          2:36                              3:01
SAPERPDI2                       -          2:37                              3:07
Total Migration Duration        -          11:10                             12:01

Figure 26 compares the metrics of the three test scenarios described in Table 26.


Figure 26. SAP test results compared

Figure 27 compares the sequence times for the two live migration scenarios (no
compression and with compression), as described in Table 26.


Figure 27. SAP live migration times compared

Figure 28 shows the virtual user response time from LoadRunner Controller during the
test.

Figure 28. LoadRunner virtual user response time

SAP and Silver Peak WAN optimization

The test results showed that the Silver Peak WAN optimization appliance was able to compress, on average, 33 percent of the outgoing traffic and 39 percent of the incoming traffic during the testing period. Figure 29 compares the amount of traffic before and after Silver Peak compression.

Figure 29. Reduction of SAP traffic with Silver Peak

Oracle
Oracle overview

Oracle Database 11g Enterprise Edition delivers industry-leading performance, scalability, security, and reliability on a choice of clustered or single servers running Windows, Linux, and UNIX. It provides comprehensive features to easily manage the most demanding transaction processing, business intelligence, and content management applications.

In this solution, SwingBench was used to exercise the Oracle database. SwingBench
is a load generator and benchmark tool designed to test Oracle databases. The
SwingBench Order Entry - PL/SQL (SOE) workload models a “TPC-C” like OLTP order
entry workload.

Note: If you are considering implementing Oracle Database in a Hyper-V environment, refer to the Oracle support website, My Oracle Support, and the document Certification Information for Oracle Database on Microsoft Windows (64-bit) [ID 1307195.1].

Oracle configuration

Oracle configuration overview

In this Oracle environment, the major configuration highlights include:
• A 200 GB OLTP Oracle Database 11g
• The Oracle Database 11g was running in archivelog mode with Flashback enabled

Oracle virtual machine configuration

Table 27 describes the virtual machine configurations for the Oracle environment.

Table 27. Oracle virtual machines
Component Description
Operating system Red Hat Enterprise Linux 5 (64-bit) release 5.5

Kernel 2.6.18-194.el5 #1 SMP

CPU 4 vCPUs

Memory 24 GB

Oracle database version Oracle Database 11g Enterprise Edition Release 2 11.2.0.2.0 – 64-bit

Oracle database configuration and resources

This section describes the Oracle 11g database configuration. Table 28 describes the key configuration parameters for the database.

Table 28. Key database parameters


Instance parameter Value
db_name VPGEO

db_block_size 8192

log_buffer 20963328

memory_max_target 9663676416

memory_target 9663676416

sort_area_size 65536
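
For readability, the byte-valued parameters above convert as follows:

    memory_target / memory_max_target = 9,663,676,416 bytes = 9 × 1,024³ bytes = 9 GB
    log_buffer                        = 20,963,328 bytes ≈ 20 MB
    db_block_size                     = 8,192 bytes = 8 KB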

Table 29 describes the sizing allocation and usage of the database tablespaces.

Table 29. Database tablespace allocation and usage


Tablespace Size (MB) Used (MB) Free (MB)
SOE_DATA 81920.0 74098.4 7821.6

SOE_INDEX 137216.0 130920.8 6295.2

SYSAUX 710.0 658.1 51.9

SYSTEM 800.0 709.0 91.0

TEMP 4096.0 0 4096.0

UNDOTBS1 2048.0 88.1 1959.9

USERS 5.0 1.3 3.7

SwingBench utility configuration

The SwingBench SOE database schema models a traditional OLTP database. Tables and indexes reside in separate tablespaces and are shown in Table 30.

Table 30. Schema tables and indexes


Table name               Indexes
CUSTOMERS                CUSTOMERS_PK (UNIQUE), CUST_ACCOUNT_MANAGER_IX, CUST_EMAIL_IX, CUST_LNAME_IX, CUST_UPPER_NAME_IX
INVENTORIES              INVENTORY_PK (UNIQUE), INV_PRODUCT_IX, INV_WAREHOUSE_IX
ORDERS                   ORDER_PK (UNIQUE), ORD_CUSTOMER_IX, ORD_ORDER_DATE_IX, ORD_SALES_REP_IX, ORD_STATUS_IX
ORDER_ITEMS              ORDER_ITEMS_PK (UNIQUE), ITEM_ORDER_IX, ITEM_PRODUCT_IX
PRODUCT_DESCRIPTIONS     PRD_DESC_PK (UNIQUE), PROD_NAME_IX
PRODUCT_INFORMATION      PRODUCT_INFORMATION_PK (UNIQUE), PROD_SUPPLIER_IX
WAREHOUSES               WAREHOUSES_PK (UNIQUE)
LOGON                    n/a

Table 31 shows the table and index sizes for the SOE schemas.

Table 31. Table and index sizes


Table name Table size (MB) Index size (MB)
ORDER_ITEMS 28800.0 59949.3

CUSTOMERS 22784.0 36302.4

ORDERS 20928.0 34542.9

LOGON 1536.0 0.0

INVENTORIES 48.0 123.50

PRODUCT_DESCRIPTIONS 0.7 0.6

PRODUCT_INFORMATION 0.63 0.6

WAREHOUSES 0.13 0.4

Oracle environment validation
Test summary

Our testing validated the VPLEX Geo configuration and the use of live migration for non-disruptive movement of the virtual machine across data centers. Testing was performed at each stage of the solution build:
• Baseline tests were performed on the Oracle virtual machine prior to
encapsulation of the storage into the VPLEX Geo cluster.
• Encapsulated tests were performed on the Oracle virtual machine after the
encapsulation of storage into the VPLEX Geo cluster.
• Distance simulation tests were performed on the Oracle virtual machine after
the encapsulation of storage into the VPLEX Geo cluster. Latency was set at
20 ms equivalent to 2,000 km.
• Distance simulation tests were performed on the Oracle virtual machine after
the encapsulation of storage into the VPLEX Geo cluster and the insertion of
Silver Peak compression. Latency was set at 20 ms equivalent to 2,000 km.

At each stage, an availability test using a SwingBench Order Entry - PL/SQL (SOE)
workload of 20 users was run against the Oracle 11g database with and without live
migration and the results compared.

After each test was run, the database was flashed back to a common restore point to ensure consistency across test runs.
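
For reference, a minimal sketch of how such a reset can be scripted, assuming a guaranteed restore point named before_vplex_tests was created before testing began (the restore point name and connection details are hypothetical; the SQL*Plus sequence is the standard Flashback Database procedure):

    import subprocess
    import textwrap

    # Standard Flashback Database sequence: the database must be mounted
    # (not open) to flash back, and is reopened with RESETLOGS afterwards.
    FLASHBACK_SQL = textwrap.dedent("""
        SHUTDOWN IMMEDIATE;
        STARTUP MOUNT;
        FLASHBACK DATABASE TO RESTORE POINT before_vplex_tests;
        ALTER DATABASE OPEN RESETLOGS;
        EXIT;
    """)

    def flashback_to_restore_point():
        # Runs the sequence through SQL*Plus as SYSDBA on the database host.
        subprocess.run(
            ["sqlplus", "-S", "/ as sysdba"],
            input=FLASHBACK_SQL,
            text=True,
            check=True,
        )

    if __name__ == "__main__":
        flashback_to_restore_point()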

Oracle baseline test

A baseline SwingBench Order Entry - PL/SQL (SOE) workload test for 20 virtual users was run. This produced an average of 240 transactions per minute over the hour of the baseline test, with an average response time of 15.6 ms per transaction, as shown in Table 32 and Figure 30.

Table 32. Oracle baseline test results


SwingBench transaction   Average response time   Number of transactions
Customer Registration    16 ms                   2316
Browse Products          5 ms                    4918
Order Products           30 ms                   4009
Process Orders           24 ms                   2395
Browse Orders            3 ms                    802
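
As a quick consistency check, the transaction counts in Table 32 sum to 14,440; spread over the one-hour test window this gives 14,440 ÷ 60 ≈ 241 transactions per minute, in line with the reported average of roughly 240 TPM.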

Figure 30. TPM output from SwingBench from Oracle baseline test

Oracle encapsulated test

After encapsulating the volumes into VPLEX Geo, the SwingBench Order Entry - PL/SQL (SOE) workload test for 20 virtual users was repeated. This produced an average of 236 transactions per minute over the hour of the post-encapsulation test, with an average response time of 7.4 ms per transaction, as shown in Table 33 and Figure 31.

Table 33. Oracle encapsulated test results


SwingBench transaction   Average response time   Number of transactions
Customer Registration    4 ms                    2358
Browse Products          1 ms                    4730
Order Products           1 ms                    4008
Process Orders           22 ms                   2307
Browse Orders            9 ms                    785

Figure 31. Oracle encapsulated test results

Oracle distance simulation test

After encapsulation testing was complete, a distance emulator was employed to insert a latency of 20 ms between sites. The SwingBench Order Entry - PL/SQL (SOE) workload test for 20 virtual users was repeated. This produced an average of 263 transactions per minute over the hour of the distance simulation test, with an average response time of 46 ms per transaction, as shown in Table 34 and Figure 32.

Table 34. Oracle distance simulation test results


SwingBench transaction   Average response time   Number of transactions
Customer Registration    27 ms                   2630
Browse Products          56 ms                   5251
Order Products           83 ms                   4495
Process Orders           38 ms                   2616
Browse Orders            30 ms                   842

Figure 32. Oracle distance simulation test results

Oracle distance simulation with compression test

For this test, Silver Peak compression was enabled, along with the distance emulator configured for a latency of 20 ms between sites. The SwingBench Order Entry - PL/SQL (SOE) workload test for 20 virtual users was repeated. This produced an average of 251 transactions per minute over the hour of the distance simulation with compression test, with an average response time of 136 ms per transaction, as shown in Table 35 and Figure 33.

Table 35. Oracle distance simulation with compression test results


SwingBench transaction   Average response time   Number of transactions
Customer Registration    60 ms                   2486
Browse Products          49 ms                   5101
Order Products           187 ms                  4116
Process Orders           163 ms                  2551
Browse Orders            221 ms                  821

Figure 33. Oracle distance simulation with compression test results

Oracle live migration test

After starting the SwingBench Order Entry - PL/SQL (SOE) workload test for 20 virtual users, the Oracle virtual machine was migrated from Site A to Site B. Table 36 and Figure 34 show the migration test results and the effects of distance on the migration times.

Table 36. Oracle live migration test results


Stage                                          Live migration (m:ss)   Average TPM
Baseline Test                                  5:39                    271
Encapsulated Test                              5:29                    263
Distance Simulation Test                       6:59                    263
Distance Simulation with Compression enabled   7:04                    251

The database remained available throughout the live migration. There was a
temporary dip in the transaction rate as the virtual machine completed its migration
but transactions soon returned to their previous level.

Figure 34. Oracle live migration test results

Oracle and Silver Peak WAN optimization

Figure 35 compares the bandwidth use for Oracle with and without Silver Peak WAN optimization. Our testing showed that traffic reduction reached 75 percent on Site B and 65 percent on Site A during live migration.

Figure 35. Reduction of Oracle traffic with Silver Peak

Conclusion
Summary

To meet today's demanding business challenges, an organization's data must be highly available—in the right place, at the right time, and at the right cost to the enterprise. This solution demonstrates the virtual storage capabilities of VPLEX Geo in a virtualized application environment incorporating Microsoft Hyper-V, SAP, Microsoft SharePoint Server 2010, and Oracle RDBMS.

With EMC VPLEX Geo, organizations can manage their virtual storage environments
more effectively through:

• Transparent integration with existing applications and infrastructure.
• The ability to migrate data between remote data centers with no disruption in service.
• The ability to migrate data stores across storage arrays non-disruptively for maintenance and technology refresh operations.

Findings

This solution validated the effectiveness of VPLEX Geo for presenting LUNs to Hyper-V clusters spanning multiple data center locations separated by 2,000 km to enable workload migration. As detailed in the application sections, the following results were validated:

• Virtual machine migration times were all well within acceptable ranges, and in
all cases allowed for continuous user access during migration.
• A distributed mirrored volume was used to place the same data at both
locations and maintain cache coherency. Testing validated that it worked well
within expected tolerances at 2,000 km.
• Testing proved that a live transfer of virtual machines from Site A to Site B can
be achieved quickly with no perceptible effect on end users.
• The use of Silver Peak WAN optimization provided deduplication improvements
up to 66 percent.

The capabilities of VPLEX Geo demonstrated in this testing highlight its potential to
enable true dynamic workload balancing and migration across metropolitan data
centers, to support operational and business requirements. VPLEX Geo augments the
flexibility introduced into a server infrastructure by Microsoft Hyper-V with storage
flexibility, to provide a truly scalable, dynamic, virtual data center.

References
White papers

For additional information, see the white papers listed below.
• EMC Business Continuity for Microsoft Hyper-V Enabled by EMC Symmetrix
VMAX and SRDF/CE
• Microsoft Collaboration Brief - Best Practices for SAP on Hyper-V - April 2010

Product documentation

For additional information, see the product documents listed below.
• Installation Guide: SAP ERP 6.0 – EHP4 Ready ABAP on Windows: SQL Server
Based on SAP NetWeaver 7.0 Including Enhancement Package 1
• Microsoft TechNet Library - Hyper-V: Live Migration Network Configuration Guide

Other documentation

For additional information, see the documents listed below.
• SAP Note 1246467 - Hyper-V Configuration Guideline
• SAP Note 1374671 - High Availability in Virtual Environment on Windows
• SAP Note 1383873 - Windows Server 2008 R2 Support
