Long-Distance Application Mobility Enabled by EMC VPLEX Geo: An Architectural Overview
Abstract
June 2011
Copyright © 2011 EMC Corporation. All Rights Reserved.
The information in this publication is provided “as is.” EMC Corporation makes
no representations or warranties of any kind with respect to the information in
this publication, and specifically disclaims implied warranties of
merchantability or fitness for a particular purpose.
For the most up-to-date listing of EMC product names, see EMC Corporation
Trademarks on EMC.com.
All trademarks used herein are the property of their respective owners.
Contents

Introduction
  Purpose
  Scope
  Audience
  Terminology
EMC VNX5700
  EMC VNX5700 overview
  EMC VNX5700 configuration
    Pool configuration
    LUN configuration
SAP
  SAP overview
Oracle
  Oracle overview
Oracle configuration
  Oracle configuration overview
  Oracle virtual machine configuration
  Oracle database configuration and resources
  SwingBench utility configuration
Conclusion
  Summary
  Findings
References
  White papers
  Product documentation
  Other documentation
The business day is no longer 9-to-5; companies are working around the clock at
offices across the globe. Information and applications are needed to keep the
business running smoothly. EMC VPLEX helps customers easily migrate workloads
around the globe to:
• Increase ROI by increasing utilization of hardware and software assets
• Ensure availability of information and applications
• Minimize interruption of revenue-generating processes
• Optimize application and data access to better meet specific geographic
demands
Solution overview
The EMC VPLEX family is a solution for federating EMC and non-EMC storage. The VPLEX platform logically resides between the servers and heterogeneous storage assets, supporting a variety of arrays from various vendors. VPLEX simplifies storage management by allowing LUNs, provisioned from various arrays, to be managed through a centralized management interface.
The EMC VPLEX platform removes physical barriers within, across, and between data
centers. VPLEX Local provides simplified management and non-disruptive data
mobility across heterogeneous arrays. VPLEX Metro provides mobility, availability,
and collaboration between two VPLEX clusters within synchronous distances. VPLEX
Geo further dissolves those distances by extending these use cases to asynchronous
distances.
With a unique scale-up and scale-out architecture, VPLEX's advanced data caching and distributed cache coherency provide workload resiliency; automatic sharing, balancing, and failover of storage domains; and both local and remote data access with predictable service levels.
Audience
This white paper is intended for EMC employees, partners, and customers, including IT planners, virtualization architects and administrators, and any other IT professionals involved in evaluating, acquiring, managing, operating, or designing infrastructure that leverages EMC technologies.
Table 1. Terminology

Asynchronous group: Asynchronous consistency groups are used for distributed volumes in VPLEX Geo to ensure that I/O to all volumes in the group is coordinated across both clusters and all directors in each cluster. All volumes in an asynchronous group share the same detach rule, are in write-back cache mode, and behave the same way in the event of an inter-cluster link failure. Only distributed virtual volumes can be included in an asynchronous consistency group.

Consistency group: Consistency groups allow you to group volumes together and apply a set of properties to the entire group. In a VPLEX Geo, where clusters are separated by asynchronous distances (up to 50 ms RTT), consistency groups are required for asynchronous I/O between the clusters. In the event of a director, cluster, or inter-cluster link failure, consistency groups ensure consistency in the order in which data is written to the back-end arrays, preventing possible data corruption.

DR: Disaster Recovery

HA: High Availability

VPLEX Geo: Provides distributed federation within, across, and between two clusters (within asynchronous distances).

VHD: Virtual Hard Disk. A Hyper-V virtual hard disk (VHD) is a file that encapsulates a hard disk image.
Physical architecture
Figure 1 illustrates the physical architecture of the use case solution.

Hardware:
EMC Symmetrix VMAX (quantity 1): FC, 600 GB/15k FC drives, 200 GB Flash drives
EMC VPLEX (quantity 2): VPLEX Geo cluster with two engines and four directors on each cluster

Software:
SwingBench 2.3.0.422
HP LoadRunner 9.5.1
Using server virtualization, based on Microsoft Hyper-V, Intel x86-based servers are
shared across applications and clustered to achieve redundancy and failover
capability. VPLEX Geo is used to present shared data stores across the physical data
center locations, enabling migration of the application virtual machines (VMs)
between the physical sites. Physical Site A storage consists of a Symmetrix VMAX
Single Engine (SE) and a VNX5700 for the SAP, Microsoft, and Oracle environments.
VNX5700 is used for the physical Site B data center infrastructure and storage.
Common elements
The following sections briefly describe the components used in this solution, including:
• EMC VPLEX Geo
• EMC VPLEX Geo administration
• EMC VNX5700
• EMC Symmetrix VMAX SE
• Microsoft Windows Server 2008 R2 with Hyper-V
• Microsoft System Center Virtual Machine Manager (SCVMM)
• Silver Peak NX-9000 WAN optimization appliance
This form of access, called AccessAnywhere, removes many of the constraints imposed by physical data center boundaries and their storage arrays. AccessAnywhere allows data to be moved, accessed, and mirrored transparently between data centers, effectively allowing storage and applications to work between data centers as though those physical boundaries were not there.
EMC VPLEX Geo design considerations
In this solution, we designed our VPLEX Geo plexes using a routed topology with the following environmental characteristics:
• The routers are situated between the clusters.
• An Empirix network emulator is used between clusters.
Cluster-1 subnet attributes for Port Group 0:
prefix 192.168.11.0
cluster-address 192.168.11.251
gateway 192.168.11.1
mtu 1500
remote-subnet 192.168.22.0/24
Subnet attributes for Port Group 1:
prefix 10.6.11.0
cluster-address 10.6.11.251
gateway 10.6.11.1
mtu 1500
remote-subnet 10.6.22.0/24
Cluster-2 subnet attributes for Port Group 0:
prefix 192.168.22.0
cluster-address 192.168.22.252
gateway 192.168.22.2
mtu 1500
remote-subnet 192.168.11.0/24
Subnet attributes for Port Group 1:
prefix 10.6.22.0
cluster-address 10.6.22.252
gateway 10.6.22.1
mtu 1500
remote-subnet 10.6.11.0/24
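The two clusters' port-group subnets mirror each other: each cluster's remote-subnet is the other cluster's local prefix. The short Python sketch below (our own illustration, not part of the VPLEX tooling; the /24 masks on the prefixes are assumed from the remote-subnet entries) checks that symmetry:

# Verify that each cluster's remote-subnet matches the peer cluster's
# local prefix for every WAN port group.
import ipaddress

# (prefix, remote-subnet) per port group, as configured above.
CLUSTER_1 = [("192.168.11.0/24", "192.168.22.0/24"),
             ("10.6.11.0/24", "10.6.22.0/24")]
CLUSTER_2 = [("192.168.22.0/24", "192.168.11.0/24"),
             ("10.6.22.0/24", "10.6.11.0/24")]

for pg, ((p1, r1), (p2, r2)) in enumerate(zip(CLUSTER_1, CLUSTER_2)):
    ok = (ipaddress.ip_network(r1) == ipaddress.ip_network(p2)
          and ipaddress.ip_network(r2) == ipaddress.ip_network(p1))
    print(f"Port group {pg}: {'mirrored' if ok else 'MISMATCH'}")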
When distributed devices are created, they are in synchronous mode by default. VPLEX Geo clusters require consistency groups to be configured to place distributed devices in asynchronous mode.
To verify that all virtual volumes are in asynchronous mode, the VPLEX CLI can be used as shown in Figure 6.
Figure 8 shows the packet round-trip time (RTT) between the VPLEX directors.
• Create a new VPLEX Geo LUN and migrate the existing data to that LUN
VPLEX Geo provides an option to encapsulate the existing data using VPlexcli. When application consistency is set (using the --appc flag), the claimed volumes are data-protected and no data is lost.
In this solution, we encapsulated existing storage volumes containing real data and brought them into the VPLEX Geo clusters, as shown in Figure 9. The data was protected because the storage volumes were claimed with the --appc flag, which makes them "application consistent."
3. Create extents
Create extents for the selected storage volumes and specify the capacity.
6. Register initiators
When initiators (hosts accessing the storage) are connected directly or through a Fibre Channel fabric, VPLEX Geo automatically discovers them and populates the Initiators view. Once discovered, initiators must be registered with VPLEX Geo before they can be added to a storage view and access storage. Registering an initiator assigns a meaningful name, typically the server's DNS name, to the port's WWN, allowing you to easily identify the host.
EMC VNX5700 configuration
This section describes how the VNX5700 was configured in this solution.
Pool configuration
Table 7 describes the VNX5700 pool configuration used in this solution.
LUN configuration
Table 8 describes the VNX5700 LUN configuration used in this solution.
R5CSV01 2 2 TB VNXPool1
R5CSV02 3 2 TB VNXPool1
R5CSV03 4 2 TB VNXPool1
R10CSV02 38 2 TB VNXPool2
Symmetrix VMAX systems deliver software capabilities that improve capacity use,
ease of use, business continuity, and security. These features provide significant
advantage to customer deployments in a virtualized environment where ease of
management and protection of virtual machine assets and data assets are required.
EMC Symmetrix VMAX configuration
This section describes how the EMC Symmetrix VMAX was configured in this solution.

Symmetrix volume configuration
Table 9 describes the Symmetrix VMAX volume configuration used in this solution.
Microsoft Hyper-V configuration
This section describes the configuration of the Microsoft Hyper-V environment used in this solution.
Figure 10 shows that four nodes are used at each site, with three Ethernet network connections for heartbeat, live migration, and client access. System Center Virtual Machine Manager (SCVMM) is used to manage the virtual machines on the Hyper-V cluster.
The virtual machine network is configured to use VLAN tagging as shown in Figure 12.
EMC PowerPath is installed on the cluster nodes, as shown in Figure 13, to provide load balancing and fault tolerance on the FC network. To provide support for the VPLEX device, Invista® Devices Support must be selected during installation; it can also be changed after installation using Add/Remove Programs.
Note: Make sure all nodes are available when enabling the Cluster Shared Volume feature; otherwise, you will need to use the Cluster CLI command later to add the node to the owner list of the cluster resource. See Figure 15.
Cluster Shared Volume mounts the disk under C:\SharedStorage on every node of the
cluster, as shown in Figure 16.
The virtual machine can be configured to use that path to place the virtual disk on the
shared volume, as shown in Figure 17.
If the storage network between the host and array fails, there is an option to redirect
the traffic over the LAN to the node that owns the cluster shared disk resource.
Then specify the target volume to move to, as shown in Figure 19.
The virtual machine must be in a saved state or powered off to move the underlying
storage.
Network design considerations
The virtual machine network environment in this solution consists of a single Layer-2 network extended across the WAN between Site A and Site B. The following design considerations apply to this environment:
• This extension was done using Cisco's Overlay Transport Virtualization (OTV) rather than by bridging the VLAN over the WAN. OTV allows Ethernet LAN extension over any WAN transport by dynamically encapsulating Layer 2 frames ("MAC in IP") and routing them across the WAN.
• Edge devices and Nexus 7000 switches exchange information about learned
devices on the extended VLAN at each site via multicast, which negates the
need for ARP and other broadcasts to be propagated across the WAN.
• Additionally, using OTV rather than bridging eliminates BPDU forwarding (part
of normal spanning tree operations in a bridged VLAN scenario) and provides
the ability to eliminate or rate-limit other broadcasts to conserve bandwidth.
Note: For recommendations about using live migration in your own Hyper-V
environment, refer to the Hyper-V: Live Migration Network Configuration Guide
at the Microsoft TechNet site.
Network configuration
Table 11 lists the OTV configuration for each edge device in the virtual machine network.
Note: For more detail on Cisco OTV, refer to the Cisco Quick Start Guide.
There are many choices for WAN optimization products and vendors that can be
deployed to meet your networking needs. In this solution we used Silver Peak WAN
optimization appliances.
Note: Bandwidth savings from WAN optimization are independent of distance and are present at all distances. Acceleration benefits from WAN optimization increase substantially as latency increases. Detailed test results for acceleration beyond 20 ms are available at www.silver-peak.com.
Silver Peak NX appliance
Silver Peak's appliances are data-center-class network devices designed to meet the rigorous WAN optimization requirements of large enterprises, delivering top performance, scalability, and reliability.
As shown in Figure 20, Silver Peak appliances can be deployed between VPLEX Geo
clusters at both ends of the WAN. Silver Peak, when deployed with VPLEX Geo,
mitigates many challenges associated with deploying a geographically distributed
architecture, including limited bandwidth, high latency (due to distance), and WAN
quality.
Figure 20. Silver Peak WAN optimization appliances and EMC VPLEX Geo
The Silver Peak appliances can also be configured as a redirect target using WCCP rather than being deployed inline. See the Silver Peak Configuration Guide, available at www.silver-peak.com, for more detail.
Silver Peak WAN optimization results
Our test results showed that this solution benefited from Silver Peak WAN optimization across all applications. Figure 21 shows the traffic across the LAN and WAN, and describes the peak and average deduplication ratio for all applications over the testing period for both sites. The average deduplication was 66 percent on Site B and 56 percent on Site A.
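As a rough illustration of what those ratios mean for the link, the arithmetic below (our own sketch; the 100 GB of offered traffic is a hypothetical round number, not a measured value) converts a deduplication percentage into bytes actually sent and an effective bandwidth multiplier:

# What a deduplication percentage means for traffic crossing the WAN.
def wan_savings(lan_gb: float, dedup_pct: float) -> None:
    wan_gb = lan_gb * (1 - dedup_pct / 100)   # bytes actually sent
    multiplier = lan_gb / wan_gb              # effective bandwidth gain
    print(f"{lan_gb:.0f} GB offered at {dedup_pct:.0f}% dedup -> "
          f"{wan_gb:.0f} GB sent ({multiplier:.1f}x effective bandwidth)")

wan_savings(100, 66)  # Site B average: 34 GB sent, ~2.9x
wan_savings(100, 56)  # Site A average: 44 GB sent, ~2.3x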
The virtualized SharePoint Server 2010 farm used in our solution overcomes these challenges by building on Microsoft Hyper-V enabled by VPLEX Geo technology, which allows disparate storage arrays at multiple locations to be presented as a single, shared array to the SharePoint 2010 farm.
SharePoint Server design considerations
In this SharePoint 2010 environment design, the major configuration highlights include:
• The SharePoint farm is designed as a publishing portal. There is around 400 GB of user content, consisting of four SharePoint site collections (document centers) with four content databases, each populated with 100 GB of random user data. Microsoft network load balancing (NLB) was enabled on three web front-end (WFE) servers for load balancing and local failover.
• The SharePoint farm uses seven virtual machines hosted on four physical
Hyper-V servers at the production site. Three web front-end (WFE) servers are
also configured with query roles for load balancing.
• The query components have been scaled out to three partitions. Each query
server contains a part of the index partitions and a mirror of another index
partition for fault-tolerance considerations.
• Two index components are provisioned for fault tolerance and better crawl
performance.
Two index virtual machines: Two index components were partitioned in this farm for better crawl performance. Multiple crawl components were mapped to the same crawl database to achieve fault tolerance. The index components were designed to crawl themselves, without impacting the production WFE servers. Four virtual CPUs and 6 GB of memory were allocated for each index server. The incremental crawl was scheduled to run every two hours.

Application/Excel virtual machine: Two virtual CPUs and 2 GB of memory were allocated for the application server, as these roles require fewer resources.

SQL Server virtual machine: Four virtual CPUs and 16 GB of memory were allocated for the SQL Server virtual machine, as CPU utilization and memory requirements for SQL Server in a SharePoint farm can be high. With more memory allocated to the SQL virtual machine, SQL Server becomes more effective at caching SharePoint user data, leading to fewer physical IOPS required from storage and better performance. Four tempdb data files were created, one per SQL Server CPU core, as Microsoft recommends.
SharePoint farm test methodology
The data population tool uses a set of sample documents. Altering the document names and metadata (before insertion) makes each document unique.
One load-agent host is allocated for each WFE, allowing data to be loaded in parallel until the targeted 400 GB data size is reached. The data is spread evenly across the four site collections (each collection has its own content database). The user profiles consist of a mix of three user operations: browse, search, and modify.
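A minimal sketch of that uniqueness technique, assuming a simple rename-plus-metadata scheme (the sample file names and fields here are hypothetical; the actual population tool is not shown in this paper):

# Reuse a small set of sample documents, giving each copy a unique
# name and unique metadata before insertion into SharePoint.
import itertools
import uuid

SAMPLE_DOCS = ["report.docx", "budget.xlsx", "slides.pptx"]  # hypothetical

def unique_documents(target_count: int):
    """Yield (source_doc, unique_name, metadata) tuples for upload."""
    for seq, doc in zip(range(target_count), itertools.cycle(SAMPLE_DOCS)):
        stem, ext = doc.rsplit(".", 1)
        token = uuid.uuid4().hex[:8]         # makes each copy unique
        yield doc, f"{stem}_{token}.{ext}", {"seq": seq, "tag": token}

for src, name, meta in unique_documents(5):
    print(src, "->", name, meta)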
During validation, a Microsoft heavy-user load profile was used to determine the
maximum user count that the Microsoft SharePoint 2010 server farm could sustain
while ensuring the average response times remained within acceptable limits.
Microsoft standards state that a heavy user performs 60 requests in each hour; that
is, there is a request every 60 seconds.
Note: Microsoft publishes default service-level agreement (SLA) response times for each SharePoint user operation. Common operations (such as browse and search) should complete in 3 seconds or less, and uncommon operations (such as modify) in 5 seconds or less. These response-time SLAs were comfortably met.
• Live migration tests were performed, using a distance emulator, on the whole SharePoint farm after the encapsulation of storage into the VPLEX Geo cluster. Latency was set to 20 ms, equivalent to 2,000 km (the arithmetic behind this equivalence is sketched after this list).
• Live migration tests were performed on the whole SharePoint farm with Silver Peak compression inserted. Latency was again set to 20 ms, equivalent to 2,000 km.
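The 20 ms round-trip figure follows directly from the distance, assuming propagation delay in fiber only (roughly 200,000 km/s, about two-thirds the speed of light in a vacuum). The Python sketch below shows the arithmetic:

# Round-trip latency implied by distance, assuming propagation delay
# only (light travels through fiber at roughly 200,000 km/s).
FIBER_KM_PER_S = 200_000  # approximate speed of light in fiber

def rtt_ms(distance_km: float) -> float:
    one_way_s = distance_km / FIBER_KM_PER_S
    return 2 * one_way_s * 1000  # out and back, in milliseconds

print(rtt_ms(2_000))  # 20.0 ms, the latency emulated in these tests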
SharePoint 2010, VPLEX Geo, and Hyper-V performance data was logged for analysis
during the tests. This data presents an account of results from VSTS 2008 SP1, which
generates continuous workload (Browse/Search/Modify) to the WFEs of the
SharePoint 2010 farm, while simultaneously consolidating the SQL and Oracle OLTP
workload on the same Hyper-V clusters.
SharePoint baseline test
With a mixed user profile of 80/10/10, the virtualized SharePoint farm can support a maximum of 11,520 users with 10 percent concurrency, while satisfying Microsoft's acceptable response time criteria, as shown in Table 14 and Table 15.
After encapsulation, with a mixed user profile of 80/10/10, the virtualized SharePoint farm can support a maximum of 12,780 users with 10 percent concurrency, while satisfying Microsoft's acceptable response time criteria.
Figure 22 shows the rate of passed tests per second after the SharePoint virtual machines' storage was encapsulated into VPLEX LUNs.
Figure 22. Performance of passed tests per second after SharePoint encapsulation
Table 16 and Table 17 show the performance results when encapsulation is used.
The 10 GbE connection was used for the live-migration network as live migration
requires high bandwidth.
Figure 23 shows the rate of passed tests per second during the live migration. When live migration ran between the sites, the transactions per second fluctuated. The drop in the number of passed tests per second occurred during the migration of the SQL Server virtual machine, as Hyper-V transferred its memory across the sites.
Because the live migration process affects the maximum user capacity of the entire SharePoint farm, we recommend performing a whole-farm live migration during off-peak hours. As shown in Figure 23, there was no loss of service for the SharePoint farm during the live migration.
Figure 23. Passed tests per second during live migration of the entire SharePoint
farm across sites with a 2,000 km distance
Table 18 and Table 19 list the SharePoint farm performance results during the live migration across the sites with a 2,000 km distance. Running the entire SharePoint farm on the DR site would decrease the number of passed tests per second because of the 20 ms latency between the clients and the SharePoint farm, including the web front-end servers.
Table 18. User activity performance including the live migration between sites with a 2,000 km distance

User activity (Browse/Search/Modify): 80% / 10% / 10%
Acceptable response time: <3 / <3 / <5 sec
Measured response time (Browse/Search/Modify): 2.90 / 1.73 / 1.49 sec
Virtual machines with large memory configurations take longer to migrate than virtual machines with smaller memory configurations, because active memory must be copied over the network to the receiving cluster node before the migration completes.
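A first-order estimate of this effect divides active memory by usable network bandwidth. The sketch below is our own approximation (the 16 GB memory size and 80 percent link efficiency are assumed for illustration); real live migrations also re-copy pages dirtied during the transfer, so actual times run longer:

# First-order estimate of live migration time: active memory divided
# by usable network bandwidth. Real migrations also re-copy dirtied
# pages, so actual durations are longer than this lower bound.
def migration_seconds(memory_gb: float, link_gbps: float,
                      efficiency: float = 0.8) -> float:
    bits_to_move = memory_gb * 8 * 2**30          # memory size in bits
    return bits_to_move / (link_gbps * 1e9 * efficiency)

print(f"{migration_seconds(16, 10):6.1f} s")  # 16 GB VM over 10 GbE (~17 s)
print(f"{migration_seconds(16, 1):6.1f} s")   # same VM over 1 GbE (~172 s)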
Table 20 lists the live migration duration, with and without latency for the entire
SharePoint farm. Note how a 2,000 km distance between the two data centers caused
a longer cross-site live migration duration.
In this scenario, the 1 GbE connection with Silver Peak WAN optimization enabled
was used for the live-migration network. The live migration duration with Silver Peak
WAN compression was similar to the 10 GbE network connection.
Table 21 details the migration duration for all the SharePoint virtual machines.
EMC VPLEX Geo enables virtualized storage for applications to access LUNs between
data center sites, and provides the ability to move virtual machines between data
centers. This optimizes data center resources and results in zero downtime for data
center relocation and server maintenance.
Because SAP applications and modules can be distributed among several virtual
servers (see Figure 25), and normal operations involve extensive communication
between them, it is critical that communication is not disrupted when individual
virtual machines are moved from site to site.
SAP design considerations
In this SAP ERP 6.0 EHP4 environment, the major configuration considerations include:
• SAP patches, parameters, basis settings, and load balancing, as well as
Windows 2008 and Hyper-V were all installed and configured according to SAP
procedures and guidelines.
• SAP update processes (UPD/UP2) were configured on the Application Server
instances.
• Some IDES functionality—for example, synchronization with the external GTS
system—was deactivated to eliminate unnecessary external interfaces that
were outside the scope of the test.
• The system was configured and customized to enable LoadRunner automated
scripts to run business processes on the functional areas including Sales and
Distribution (SD), Material Management (MM), and Finance and Controlling
(FI/CO). The Order to Cash (OTC) business scenario was used as an example in
this use case.
• The storage for the entire SAP environment was encapsulated and virtualized in
this test. The storage was across the two sites and made available to the SAP
servers through VPLEX Geo.
SAPASCS 1 2 4 90 32
SAPERPDI 2 2 8 90 25
The LoadRunner system consists of one LoadRunner controller and the associated virtual user generator, hosted in a virtual machine with the configuration listed in Table 23.
SAP ERP workload profile
In our testing, LoadRunner ran an order-to-cash (OTC) business process scenario to generate the application-specific workload. This process covers a sell-from-stock scenario, which includes the creation of a customer order with six line items and the corresponding delivery with subsequent goods movement and invoicing. Special pricing conditions were also used. The process consists of the following transactions:
During validation, 100 sales orders were created in the PRD system. The purpose of
this scenario was to maintain active connections between the user GUI, database
instance, and the application server instances during a Hyper-V live migration, so that
business continuity and a federated solution landscape under live migration could be
verified by a successful SAP sales order creation process.
Live migration: 100 sales orders were initiated on the SAP ERP 6.0 EHP4 system while the SAP database, central services, and application server instances resided in Data Center 1. During the SAP sales order creation process, live migrations were conducted to move the SAP VMs from Data Center 1 to Data Center 2.

Live migration with WAN optimization (compression): 100 sales orders were initiated on the SAP ERP 6.0 EHP4 system while the SAP database, central services, and application server instances resided in Data Center 1. During the SAP sales order creation process, live migrations were conducted to move the SAP VMs from Data Center 1 to Data Center 2. The live migrations were conducted using Silver Peak WAN optimization.
We used the following SAP key performance indicators to evaluate the functionality
and throughput during the tests:
• Business volume (number of SAP business documents processed)
• SAP average response time for dialog work process
Other statistics were collected at the Windows OS level from Microsoft SCVMM.
The LoadRunner controller ramped up one virtual user every 20 seconds until 10 concurrent virtual users were active. All virtual users generated system workload during the entire testing period, and all connected to PRD through the predefined logon group to distribute the workload evenly across both SAP application server instances.
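For reference, the ramp-up schedule implied by those settings works out as follows (simple arithmetic, not LoadRunner output):

# Ramp-up schedule implied by the LoadRunner settings above: one new
# virtual user every 20 seconds until 10 users are active.
RAMP_INTERVAL_S = 20
TARGET_VUSERS = 10

for vuser in range(1, TARGET_VUSERS + 1):
    start_s = (vuser - 1) * RAMP_INTERVAL_S
    print(f"virtual user {vuser:2d} starts at t = {start_s:3d} s")
# Full load (10 concurrent virtual users) is reached 180 s into the test.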
SAP test procedure
Table 25 lists the test procedure steps for each phase of testing.
Figure 26 compares the metrics of the three test scenarios described in Table 26.
Figure 26. Number of sales orders, total dialog steps, and dialog average response time (ms) for the Baseline, Live Migration (no Silver Peak), and Live Migration (with Silver Peak) scenarios
Figure 27. Live migration duration for the Live Migration (no Silver Peak) and Live Migration (with Silver Peak) scenarios
Figure 28 shows the virtual user response time from LoadRunner Controller during the
test.
In this solution, SwingBench was used to exercise the Oracle database. SwingBench is a load generator and benchmark tool designed to test Oracle databases. The SwingBench Order Entry - PL/SQL (SOE) workload models a TPC-C-like OLTP order entry workload.
Oracle configuration

Oracle configuration overview
In this Oracle environment, the major configuration highlights include:
• A 200 GB OLTP Oracle Database 11g
• The Oracle Database 11g was running in archivelog mode with Flashback enabled
Oracle virtual machine configuration
Table 27 describes the virtual machine configurations for the Oracle environment.

Table 27. Oracle virtual machines
Operating system: Red Hat Enterprise Linux 5 (64-bit), release 5.5
CPU: 4 vCPUs
Memory: 24 GB
db_block_size 8192
log_buffer 20963328
memory_max_target 9663676416
memory_target 9663676416
sort_area_size 65536
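The parameter values above are raw byte counts; the short Python sketch below (our own conversion, for readability only) renders them in more familiar units:

# Convert the byte-valued Oracle parameters above into readable units.
def human(n: float) -> str:
    for unit in ("bytes", "KiB", "MiB", "GiB"):
        if n < 1024:
            return f"{n:g} {unit}"
        n /= 1024
    return f"{n:g} TiB"

params = {
    "db_block_size": 8192,
    "log_buffer": 20963328,
    "memory_max_target": 9663676416,
    "memory_target": 9663676416,
    "sort_area_size": 65536,
}
for name, value in params.items():
    print(f"{name:18s} = {value:>11d} bytes ({human(value)})")
# memory_target, for example, works out to exactly 9 GiB.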
Table 29 describes the sizing allocation and usage of the database tablespaces.
Table 31 shows the table and index sizes for the SOE schemas.
At each stage, an availability test using a SwingBench Order Entry - PL/SQL (SOE) workload of 20 users was run against the Oracle 11g database, with and without live migration, and the results were compared.
After each test run, the database was flashed back to a common restore point to ensure consistency.
Oracle baseline test
A baseline SwingBench Order Entry - PL/SQL (SOE) workload test for 20 virtual users was run. This produced an average of 240 transactions per minute over the hour of the baseline test, with an average response time of 15.6 ms per transaction, as shown in Table 32 and Figure 30.
Oracle encapsulated test
After encapsulating the volumes into VPLEX Geo, the SwingBench Order Entry - PL/SQL (SOE) workload test for 20 virtual users was repeated. This produced an average of 236 transactions per minute over the hour of the post-encapsulation test, with an average response time of 7.4 ms per transaction, as shown in Table 33 and Figure 31.
Oracle distance simulation test
After encapsulation testing was complete, a distance emulator was used to insert a latency of 20 ms between sites. The SwingBench Order Entry - PL/SQL (SOE) workload test for 20 virtual users was repeated. This produced an average of 263 transactions per minute over the hour of the distance simulation test, with an average response time of 46 ms per transaction, as shown in Table 34 and Figure 32.
The database remained available throughout the live migration. There was a
temporary dip in the transaction rate as the virtual machine completed its migration
but transactions soon returned to their previous level.
With EMC VPLEX Geo, organizations can manage their virtual storage environments more effectively.
Findings
This solution validated the effectiveness of VPLEX Geo for presenting LUNs to Hyper-V clusters spanning multiple data center locations separated by 2,000 km to enable workload migration. As detailed in the application sections, the following results were validated:
• Virtual machine migration times were all well within acceptable ranges, and in
all cases allowed for continuous user access during migration.
• A distributed mirrored volume was used to place the same data at both
locations and maintain cache coherency. Testing validated that it worked well
within expected tolerances at 2,000 km.
• Testing proved that a live transfer of virtual machines from Site A to Site B can
be achieved quickly with no perceptible effect on end users.
• The use of Silver Peak WAN optimization provided deduplication improvements of up to 66 percent.
The capabilities of VPLEX Geo demonstrated in this testing highlight its potential to enable true dynamic workload balancing and migration across geographically dispersed data centers to support operational and business requirements. VPLEX Geo augments the flexibility that Microsoft Hyper-V introduces into a server infrastructure with storage flexibility, providing a truly scalable, dynamic, virtual data center.
Product documentation
For additional information, see the product documents listed below.
• Installation Guide: SAP ERP 6.0 – EHP4 Ready ABAP on Windows: SQL Server Based on SAP NetWeaver 7.0 Including Enhancement Package 1
• Microsoft TechNet Library: Hyper-V: Live Migration Network Configuration Guide