Abstract
This document describes the EMC VSPEX Proven Infrastructure solution for private cloud deployments with
EMC VPLEX Metro, VMware vSphere, and EMC VNX for up to 125 virtual machines.
June, 2013
Copyright 2013 EMC Corporation. All rights reserved. Published in the USA.
Published June 2013
EMC believes the information in this publication is accurate as of its publication date. The information is subject
to change without notice.
The information in this publication is provided as is. EMC Corporation makes no representations or warranties
of any kind with respect to the information in this publication, and specifically disclaims implied warranties of
merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software
described in this publication requires an applicable software license.
EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United
States and other countries. All other trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to the technical documentation and
advisories section on the EMC Online Support website.
EMC VSPEX with EMC VPLEX for VMware vSphere 5.1
Part Number H11878
Contents
1. Executive Summary ..............................................................................................8
2. Background and VPLEX Overview ..........................................................................9
2.1 Document purpose ...................................................................................................... 9
2.2 Target Audience ........................................................................................................... 9
2.3 Business Challenges.................................................................................................... 9
5. Solution Architecture..........................................................................................22
5.1 Overview .................................................................................................................... 22
5.2 Solution Architecture VPLEX Key Components ............................................................ 23
5.3 VPLEX Cluster Witness ............................................................................................... 25
11. Converting a VPLEX Local Cluster into a VPLEX Metro Cluster .............................65
11.1 Gathering Information for Cluster-2 .......................................................................... 65
11.2 Configuration Information for Cluster Witness .......................................................... 65
11.3 Consistency Group and Detach Rules ....................................................................... 66
11.4 Create Distributed Devices between VPLEX Cluster-1 and Cluster-2.......................... 67
11.5 Create Storage View for ESXi Hosts .......................................................................... 68
List of Figures
Figure 1: Private Cloud components for VSPEX with VPLEX Local......................................... 12
Figure 2: Private Cloud components for a VSPEX/VPLEX Metro solution .............................. 13
Figure 3: VPLEX delivers zero downtime .............................................................................. 15
Figure 4: Application Mobility within a datacenter .............................................................. 16
Figure 5: Application and Data Mobility Example ................................................................ 17
Figure 6: Application and Data Mobility Example ................................................................ 18
Figure 7: Highly Available Infrastructure Example ............................................................... 20
Figure 8: VPLEX Local architecture for Traditional Single Site Environments ........................ 22
Figure 9: VPLEX Metro architecture for Distributed Environments ........................................ 23
Figure 10: VSPEX deployed with VPLEX Metro using distributed volumes ........................... 24
Figure 11: Failure scenarios without VPLEX Witness ............................................................ 26
Figure 12: Failure scenarios with VPLEX Witness ................................................................. 26
Figure 13: VSPEX deployed with VPLEX Metro configured with 3rd Site VPLEX Witness ........ 27
Figure 15: Storage Layout for 125 Virtual Machine Private Cloud Proven Infrastructure ...... 33
Figure 16: Place VNX LUNs into VPLEX Storage Group ......................................................... 51
Figure 17: VPLEX Local System Status and Login Screen ..................................................... 52
Figure 18: Provisioning Storage .......................................................................................... 52
Figure 19: EZ-Provisioning Step 1: Claim Storage and Create Virtual Volumes .................... 53
Figure 20: EZ-Provisioning Step 2: Register Initiators .......................................................... 53
Figure 21: EZ-Provisioning Step 3: Create Storage View ...................................................... 53
Figure 22: EZ-Provisioning................................................................................................... 56
Figure 23: Create Virtual Volumes- Select Array................................................................... 56
Figure 24: Create Virtual Volumes- Select Storage Volumes ................................................ 57
Figure 25: Create Distributed Volumes- Select Mirrors ........................................................ 58
Figure 26: Create Distributed Volumes- Select Consistency Group ...................................... 59
Figure 27: EZ-Provisioning- Register Initiators ..................................................................... 59
Figure 28: View Unregistered Initiator-ports ........................................................................ 60
Figure 29: View Unregistered Initiator-ports ........................................................................ 61
Figure 30: EZ-Provisioning- Create Storage View ................................................................. 61
Figure 31: Create Storage View- Select Initiators ................................................................. 62
Figure 32: Create Storage View- Select Ports ....................................................................... 62
Figure 33: Create Storage View- Select Virtual Volumes ...................................................... 63
Figure 34: VPLEX Metro System Status Page ....................................................................... 66
Figure 35: VPLEX Consistency Group created for Virtual Volumes........................................ 67
Figure 36: VPLEX Distributed Devices .................................................................................. 67
Figure 37: VPLEX Storage View for ESXi Hosts ..................................................................... 68
Figure 38: Batch Migration, Create Migration Plan .............................................................. 73
Figure 39: Batch Migration, Start Migration ......................................................................... 74
Figure 40: Batch Migration, Monitor Progress ..................................................................... 74
Figure 41: Batch Migrations, Change Migration State.......................................................... 75
Figure 42: Batch Migrations, Commit the Migration ............................................................ 75
List of Tables
Table 1: VPLEX Components ............................................................................................... 24
Table 2: Hardware Resources for Storage ........................................................................... 32
Table 3: IPv4 Networking Information ................................................................................. 35
Table 4: Metadata Backup Information ............................................................................... 35
Table 5: SMTP details to configure event notifications ........................................................ 36
Table 6: SNMP information ................................................................................................. 37
Table 7: Certificate Authority (CA) and Host Certificate information .................................... 37
Table 8: Product Registration Information ........................................................................... 37
Table 9: VPLEX Metro IP WAN Configuration Information ..................................................... 38
Table 10: Cluster Witness Configuration Information .......................................................... 39
Table 13: IPv4 Networking Information ............................................................................... 77
Table 14: Metadata Backup Information ............................................................................. 77
Table 15: SMTP details to configure event notifications ...................................................... 77
Table 16: SNMP information ............................................................................................... 79
Table 17: Certificate Authority (CA) and Host Certificate information .................................. 79
Table 18: Product Registration Information ......................................................................... 79
Table 19: IP WAN Configuration Information ....................................................................... 80
Table 20: Cluster Witness Configuration Information .......................................................... 81
1. Executive Summary
Businesses face many challenges in delivering application availability while working within
constrained IT budgets. Increased deployment of storage virtualization lowers costs and
improves availability, but this alone will not give businesses the application access their
users demand. This document provides an overview of VPLEX, its use cases, and how VSPEX
with VPLEX solutions provide the continuous availability and mobility that mission-critical
applications require for 24x7 operations.
This document is divided into sections that give an overview of the VPLEX family, its use
cases, the solution architecture, and how VPLEX extends VMware capabilities, along with
solution requirements and configuration details.
The EMC VPLEX family versions, Local and Metro, provide continuous availability and
non-disruptive data mobility for EMC and non-EMC storage within and across data centers.
Additionally, this document will cover the following:
• VMware vSphere makes it simpler and less expensive to provide higher levels of
availability for critical business applications. With vSphere, organizations can easily
increase the baseline level of availability provided for all applications, as well as
provide higher levels of availability more easily and cost-effectively.
• How VPLEX Metro extends VMware vMotion, HA, DRS, and FT by stretching the VMware
cluster across distance, providing solutions that go beyond traditional disaster
recovery.
• Solution requirements for software and hardware, material lists, step-by-step sizing
guidance and worksheets, and verified deployment steps to implement a VPLEX
solution with VSPEX Private Cloud for VMware vSphere that supports up to 125
virtual machines.
applications stop processing. The ultimate goal of the IT organization is to maintain
mission-critical application availability.
Disaster Recovery. This solution provides a new type of deployment which achieves
continuous availability over distance for today's enterprise storage and cloud environments.
VPLEX Metro provides data access and mobility between two VPLEX clusters within
synchronous distances. This solution builds on the VPLEX Local approach by creating a
VPLEX Metro cluster between the two geographically dispersed datacenters. Once
deployed, this solution provides continuously available distributed storage volumes over
distance and makes VMware technologies such as vMotion, HA, DRS, and FT simpler and
more effective.
4. VPLEX Overview
The EMC VSPEX with EMC VPLEX solution represents the next-generation architecture for
continuous availability and data mobility for mission-critical applications. This architecture
is based on EMC's 20+ years of expertise in designing, implementing, and perfecting
enterprise-class intelligent cache and distributed data protection solutions. The combined
VSPEX with VPLEX solution provides a complete system architecture capable of supporting
up to 125 virtual machines with a redundant server and network topology and highly
available storage within or across geographically dispersed datacenters.
VPLEX addresses three distinct customer requirements:
• Continuous Availability: The ability to create a high-availability storage infrastructure
across synchronous distances with unmatched resiliency.
• Mobility: The ability to move applications and data across different storage
installations: within the same data center, across a campus, or within a
geographical region.
• Stretched Clusters across Distance: The ability to extend VMware vMotion, HA, DRS,
and FT outside the data center and across distance, ensuring the continuous availability
of VSPEX solutions.
With VPLEX in place, customers gain great flexibility in data mobility. This addresses
compelling use cases such as array technology refreshes with no disruption to the
applications and no planned downtime. It also enables performance load balancing for
customers who want to dynamically move data to a higher-performing or higher-capacity
array without affecting the end users.
4.2 Mobility
EMC VPLEX Local provides connectivity to heterogeneous storage arrays, delivering
seamless data mobility and the ability to manage storage provisioned from multiple
heterogeneous arrays through a single interface within a data center. This gives you the
ability to relocate, share, and balance infrastructure resources within a data center.
Traditional data migrations using array replication or manual data moves are expensive,
time-consuming, and often risky. They are expensive because companies typically pay for
services work. They are time-consuming because the customer cannot simply shut down
servers; instead, they must work with their business units to identify windows to work
within, which mostly fall on nights and weekends. Migrations can also be risky if the
dependencies between applications are not well documented; any issues in the migration
process may not be remediable until the following maintenance cycle without an outage.
VPLEX limits the risk of traditional migrations by making the process fully reversible. If
performance or other issues are discovered when the new storage is put online, the new
storage can be taken down and the old storage can continue serving I/O. Because
migrations with VPLEX are straightforward, customers can perform them themselves,
yielding significant services cost savings. Also, new infrastructure can be used immediately,
with no need to wait for scheduled downtime to begin migrations. There is a powerful TCO
benefit with VPLEX: all future refreshes and migrations are free.
During a VPLEX mobility operation, any jobs in progress can be paused or stopped without
affecting data integrity. Data mobility creates a mirror of the source and target devices,
allowing the user to commit or cancel the job without affecting the actual data. A record of
all mobility jobs is maintained until the user purges the list for organizational purposes.
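The commit-or-cancel semantics of a mobility job can be pictured with a small state model. The sketch below is purely illustrative and is not VPLEX code; the state names and transitions are simplified assumptions made for this example.

```python
# Conceptual sketch of a data-mobility job's lifecycle (not VPLEX code).
# While the job runs, the target is kept as a mirror of the source, so
# cancelling simply discards the target and committing promotes it.

from enum import Enum, auto

class JobState(Enum):
    IN_PROGRESS = auto()   # source and target legs are being synchronized
    PAUSED = auto()        # synchronization suspended; data integrity unaffected
    COMPLETE = auto()      # legs are in sync; awaiting commit or cancel
    COMMITTED = auto()     # target now serves I/O; job is finalized
    CANCELLED = auto()     # target discarded; source continues serving I/O

class MobilityJob:
    def __init__(self, source, target):
        self.source, self.target = source, target
        self.state = JobState.IN_PROGRESS

    def pause(self):
        if self.state is JobState.IN_PROGRESS:
            self.state = JobState.PAUSED

    def resume(self):
        if self.state is JobState.PAUSED:
            self.state = JobState.IN_PROGRESS

    def commit(self):
        # Only a fully synchronized job can be committed.
        if self.state is JobState.COMPLETE:
            self.state = JobState.COMMITTED

    def cancel(self):
        # Cancelling never touches the source data.
        if self.state in (JobState.IN_PROGRESS, JobState.PAUSED, JobState.COMPLETE):
            self.state = JobState.CANCELLED
```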
Due to its core design, EMC VPLEX Metro provides the perfect foundation for VMware High
Availability and Fault Tolerance clustering over distance ensuring simple and transparent
deployment of stretched clusters without any added complexity.
VPLEX Metro takes a single block storage device in one location and distributes it to
provide single disk semantics across two locations. This enables a distributed VMFS
datastore to be created on that virtual volume. Furthermore, if the layer 2 network has also
been stretched then a single instance of vSphere (including a single logical datacenter)
can now also be distributed into more than one location and VMware HA can be enabled
for any given vSphere cluster. This is possible since the storage federation layer of the
VPLEX is completely transparent to ESXi. It therefore enables the user to add ESXi hosts at
two different locations to the same HA cluster. Stretching an HA failover cluster (such as
VMware HA) with VPLEX creates a Federated HA cluster over distance. This blurs the
boundaries between local HA and disaster recovery since the configuration has the
automatic restart capabilities of HA combined with the geographical distance typically
associated with synchronous DR.
Key VPLEX capabilities include:
• Scale-out clustering hardware lets you start small and grow big with predictable
service levels.
• Advanced data caching utilizes large-scale SDRAM cache to improve performance
and reduce I/O latency and array contention.
• Distributed cache coherence provides automatic sharing, balancing, and failover of I/O
across the cluster.
• A consistent view of one or more LUNs across VPLEX clusters (within a data center or
across synchronous distances) enables new models of high availability and
workload relocation.
With a unique scale-up and scale-out architecture, VPLEX advanced data caching and
distributed cache coherency provide continuous availability, workload resiliency, and
automatic sharing, balancing, and failover of storage domains, and enable both local and
remote data access with predictable service levels.
EMC VPLEX has been architected for virtualization, enabling federation across VPLEX
clusters. VPLEX Metro supports a maximum round-trip time (RTT) of 5 ms over FC or 10 GigE connectivity.
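Before committing to a Metro design, it is worth sanity-checking the measured round-trip time against the 5 ms budget mentioned above. The sketch below is a rough approximation run from a Linux host at one site against a placeholder address at the other site; it does not replace a proper assessment with the BCSD tool or EMC's qualification.

```python
# Rough inter-site RTT check against the 5 ms VPLEX Metro budget.
# Run from a Linux host at site A; SITE_B_HOST is a placeholder address
# of a host at site B on the inter-site network.

import re
import subprocess

SITE_B_HOST = "192.0.2.10"   # placeholder address at the remote site
LIMIT_MS = 5.0

out = subprocess.run(
    ["ping", "-c", "20", "-q", SITE_B_HOST],
    capture_output=True, text=True, check=True,
).stdout

# Linux ping prints a summary line such as:
#   rtt min/avg/max/mdev = 1.123/1.456/2.001/0.210 ms
match = re.search(r"= [\d.]+/([\d.]+)/([\d.]+)/", out)
avg_ms, max_ms = (float(x) for x in match.groups())
print(f"average RTT {avg_ms:.2f} ms, max {max_ms:.2f} ms")
print("within the 5 ms Metro budget" if max_ms <= LIMIT_MS
      else "exceeds the 5 ms Metro budget")
```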
To protect against an entire site failure causing application outages, VPLEX uses a VMware
virtual machine located within a separate failure domain to provide a VPLEX Witness
between VPLEX clusters that are part of a distributed/federated solution. The VPLEX
Witness, known as Cluster Witness, resides in a third failure domain and monitors both VPLEX
clusters for availability. This third site needs only IP connectivity to the VPLEX sites.
When using a distributed virtual volume across two VPLEX Clusters, if the storage in one of
the sites is lost, all hosts continue to have access to the distributed virtual volume, with no
disruption. VPLEX services all read/write traffic through the remote mirror leg at the other
site.
5. Solution Architecture
5.1 Overview
The VSPEX with VPLEX solution using VMware vSphere has been validated for configuration
with up to 125 virtual machines.
Figure 8 shows an environment with VPLEX Local only, virtualizing the storage and providing
high availability across storage arrays. Because all ESXi servers can see VPLEX, virtual
machines can be moved seamlessly by vMotion and DRS, and restarted by HA, on any host. This
configuration is a traditional virtualized environment, in contrast to the VPLEX Metro
environment, which provides high availability both within and across datacenters.
Figure 9 characterizes both a traditional infrastructure validated with block-based storage
in a single datacenter, and a distributed infrastructure validated with block-based storage
federated across two datacenters, where 8 Gb FC carries storage traffic locally and 10 GbE
carries storage, management, and application traffic across datacenter sites.
Table 1: VPLEX Components
Cluster-1 Components
Engine configuration: Single engine
Directors: 2
Redundant Engine SPSs: Yes
Cache: 72GB
Management servers: 1
Intra-cluster FC COM switches: None
UPS: None
Cluster-2 Components
Engine configuration: Single engine
Directors: 2
Redundant Engine SPSs: Yes
Cache: 72GB
Management servers: 1
Intra-cluster FC COM switches: None
UPS: None
The figure below shows a high-level physical topology of a VPLEX Metro distributed device.
VPLEX Dual and Quad engine options can be found in the Appendix.
Figure 10: VSPEX deployed with VPLEX Metro using distributed volumes
Figure 10 is a physical representation of the logical configuration shown in Figure 9.
Effectively, with this topology deployed, the distributed volume can be treated just like any
other volume; the only difference is that it is now distributed and available in two locations
at the same time. Another benefit of this type of architecture is its simplicity: it is no more
difficult to configure a cluster across distance than it is within a single data center.
Note: When deploying VPLEX Metro, you can interconnect your VPLEX clusters using either
8 Gb Fibre Channel or 10 Gb Ethernet WAN connectivity. When using FC connectivity, this
can be configured with either a dedicated channel (that is, separate non-merged fabrics) or
an ISL-based fabric (where fabrics have been merged across sites). It is assumed that any
WAN link is fully routable between sites with physically redundant circuits.
Note: It is vital that VPLEX Metro has enough bandwidth between clusters to meet
requirements. The Business Continuity Solution Designer (BCSD) tool can be used to
validate the design. EMC can assist in the qualification if desired.
https://elabadvisor.emc.com/app/licensedtools/list
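As a rough starting point before running BCSD, the inter-cluster bandwidth consumed by synchronous write mirroring can be estimated from the write workload of the virtual machines. The per-VM IOPS, I/O size, write ratio, and overhead factor below are planning assumptions in the spirit of the VSPEX reference workload, not measured values; the BCSD output and EMC qualification remain authoritative.

```python
# Back-of-the-envelope estimate of steady-state WAN bandwidth consumed by
# synchronous write mirroring. All per-VM figures are planning assumptions;
# use the BCSD tool for the actual qualification.

VM_COUNT = 125
IOPS_PER_VM = 25          # assumed reference workload per virtual machine
IO_SIZE_KB = 8            # assumed average I/O size
WRITE_RATIO = 0.33        # assumed share of I/Os that are writes
PROTOCOL_OVERHEAD = 1.2   # assumed allowance for headers and retransmits

write_mb_per_s = VM_COUNT * IOPS_PER_VM * WRITE_RATIO * IO_SIZE_KB / 1024
wan_mbit_per_s = write_mb_per_s * 8 * PROTOCOL_OVERHEAD

print(f"aggregate write throughput ~{write_mb_per_s:.1f} MB/s")
print(f"suggested minimum inter-cluster bandwidth ~{wan_mbit_per_s:.0f} Mb/s")
```

Note that the platform also imposes its own minimum link requirements regardless of workload, so the larger of the two values governs.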
For an in-depth technology and architectural understanding of VPLEX Metro, VMware HA, and
their interactions, please refer to the VPLEX HA TechBook found here:
http://www.emc.com/collateral/hardware/technical-documentation/h7113-vplexarchitecture-deployment.pdf
Using VPLEX Witness ensures that true Federated Availability can be delivered: regardless of
a site or link/WAN failure, a copy of the data automatically remains online in at least one of
the locations. When setting up a single distributed volume or a group of them, the user
chooses a preference rule, a special property that each individual or group of distributed
volumes has. The preference rule determines the outcome after failure conditions such as
site failure or link partition. The preference rule can be set to cluster A preferred, cluster B
preferred, or no automatic winner. At a high level, this has the following effect on a single or
group of distributed volumes under different failure conditions as listed below:
As Figure 12 shows, VPLEX Witness converts a VPLEX Metro from an active/active
mobility and collaboration solution into an active/active continuously available storage
cluster. Furthermore, once VPLEX Witness is deployed, failure scenarios become
self-managing (that is, fully automatic), which keeps operations extremely simple because
there is nothing to do regardless of the failure condition.
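The interaction between the preference rule and VPLEX Witness can be summarized in a small decision model. The sketch below is a simplified illustration for reasoning about outcomes, not VPLEX logic, and it deliberately glosses over finer details of how "no automatic winner" groups behave when Witness is deployed.

```python
# Simplified model of which cluster keeps a distributed volume online after a
# failure, given the consistency group's preference rule and whether VPLEX
# Witness is deployed. Illustration only; not VPLEX code.

def surviving_cluster(event, preference, witness_deployed):
    """event: 'link_partition', 'site_A_failure' or 'site_B_failure'
    preference: 'A', 'B' or 'no_automatic_winner'"""
    if preference == "no_automatic_winner":
        # In this simplified model such groups always wait for manual intervention.
        return "suspend (manual intervention)"
    if event == "link_partition":
        # Both clusters are up but cannot communicate: the preferred cluster wins.
        return preference
    failed = "A" if event == "site_A_failure" else "B"
    survivor = "B" if failed == "A" else "A"
    if witness_deployed:
        # Witness observes which site actually failed and keeps the survivor
        # online even if it was not the preferred cluster.
        return survivor
    # Without Witness, the survivor continues only if it was the preferred cluster.
    return survivor if preference == survivor else "suspend (manual intervention)"

for event in ("link_partition", "site_A_failure", "site_B_failure"):
    for witness in (False, True):
        label = "witness" if witness else "no witness"
        print(f"{event:16} preference=A {label:10} -> "
              f"{surviving_cluster(event, 'A', witness)}")
```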
Figure 13: VSPEX deployed with VPLEX Metro configured with 3rd Site
VPLEX Witness
As depicted in Figure 13 above, the Witness VM is deployed in a separate fault domain and
connected to both VPLEX management stations via an IP network.
Note: VPLEX Witness supports a maximum round-trip latency of 1 second between the
Witness and the VPLEX clusters.
are typically used for quorum devices and/or other commonly shared volumes within a
cluster.
• Dual-fabric designs for fabric redundancy and HA should be implemented to avoid a
single point of failure. This preserves data access even in the event of a full fabric
outage.
• Each VPLEX director physically connects to both fabrics for both host (front-end)
and storage (back-end) connectivity. Hosts connect to both an A director and a B
director from both fabrics for the supported HA level of connectivity, as required by
the Non-Disruptive Upgrade (NDU) pre-checks (a quick host-side path check follows
this list).
• Fabric zoning should consist of a set of zones, each with a single initiator and up to 16 targets.
• Avoid port speed issues between the fabric and VPLEX by using dedicated port
speeds, taking special care not to use oversubscribed ports on SAN switches.
• Each director in a VPLEX cluster must have a minimum of two I/O paths to every local
back-end storage array and to every storage volume presented to that cluster (required).
• VPLEX allows a maximum of 4 active paths per director to a given LUN (optimal). This
is considered optimal because each director load balances across the four active
paths to the storage volume.
• Each director's FC WAN ports must be able to see at least one FC WAN port on every
other remote director (required).
• The director's local COM port is used for communication between directors within the
cluster.
• Independent FC WAN links are strongly recommended for redundancy.
• Each director has two FC WAN ports that should be configured on separate fabrics to
maximize redundancy and fault tolerance.
• Use VSANs and zoning to isolate VPLEX Metro FC traffic from other traffic.
• Use VLANs to isolate VPLEX Metro Ethernet traffic from other traffic.
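To verify the host-side part of this guidance (paths through both fabrics to both an A and a B director), the sketch below counts active paths per device as reported by esxcli on an ESXi host. The host name is a placeholder; it assumes SSH is enabled on the host, key-based access from an admin workstation running Python 3, and that four active paths is the expected minimum. The parsed field names ("Device:", "State:") reflect typical esxcli text output and may need adjusting for your build.

```python
# Count active paths per device on an ESXi host via esxcli over SSH.
import subprocess
from collections import defaultdict

ESXI_HOST = "root@esx51-n1.example.local"   # placeholder host/user; SSH must be enabled

out = subprocess.check_output(
    ["ssh", ESXI_HOST, "esxcli storage core path list"], text=True
)

paths = defaultdict(lambda: {"total": 0, "active": 0})
device = None
for line in out.splitlines():
    line = line.strip()
    if line.startswith("Device:"):
        device = line.split(":", 1)[1].strip()
    elif line.startswith("State:") and device is not None:
        state = line.split(":", 1)[1].strip()
        paths[device]["total"] += 1
        if state == "active":
            paths[device]["active"] += 1
        device = None

for dev, counts in sorted(paths.items()):
    flag = "" if counts["active"] >= 4 else "  <-- fewer than 4 active paths"
    print(f"{dev}: {counts['active']} active of {counts['total']} paths{flag}")
```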
Configuration: Block
Common:
• 1 x 1 GbE NIC per Control Station for management
• 1 x 1 GbE NIC per SP for management
• 2 front-end ports per SP
• System disks for VNX OE
For 125 virtual machines:
EMC VNX5300
o 60 x 600 GB 15k rpm 3.5-inch SAS drives
o 2 x 600 GB 15k rpm 3.5-inch SAS Hot Spares
o 10 x 200 GB Flash drives
o 1 x 200 GB Flash drive as a hot spare
o 4 x 200 GB Flash drives for FAST Cache
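The drive counts above can be sanity-checked against the target VM count with simple arithmetic. The sketch below uses common planning assumptions (an assumed 25 IOPS VSPEX-style reference VM and an assumed rule-of-thumb of roughly 180 IOPS per 15k rpm SAS drive); it is a rough check only and does not replace the validated VSPEX sizing worksheets.

```python
# Rough IOPS sanity check for the 125-VM building block. All per-drive and
# per-VM figures are planning assumptions, not measured values.

SAS_15K_DRIVES = 60
IOPS_PER_15K_DRIVE = 180      # assumed rule-of-thumb for 15k rpm SAS
VM_COUNT = 125
IOPS_PER_VM = 25              # assumed VSPEX reference-VM workload

backend_iops = SAS_15K_DRIVES * IOPS_PER_15K_DRIVE
required_iops = VM_COUNT * IOPS_PER_VM

print(f"pool capability ~{backend_iops} IOPS (before RAID write penalty and FAST Cache)")
print(f"reference workload ~{required_iops} IOPS for {VM_COUNT} VMs")
```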
Cluster-1: (2) Directors, Single Engine
Cluster-2: (2) Directors, Single Engine
Figure 15: Storage Layout for 125 Virtual Machine Private Cloud Proven
Infrastructure
Additional description
Value
Management server IP
address
192.168.44.171
Network mask
255.255.255.0
Hostname
DC1-VPLEX
192.168.44.254
Additional description
Value
Additional description
Value
Yes
Note: The remaining information in this table applies only if you specified yes to the previous question.
SMTP IP address of
primary connection
192.168.44.254
user@companyname.com
192.168.44.25
user2@companyname.com
192.168.44.25
user3@companyname.com
192.168.44.25
Information
Additional description
Value
default
Additional description
Value
No
private
xxx
Additional description
Value
CA certificate lifetime
CA certificate key
passphrase
dc1-vplex
dc1-vplex
Additional description
Value
12345678
Company name
Company contact
Contacts business email
address
CompanyName
First and last name of a person to contact.
First Last
user@companyname.com
xxx-xxx-xxxx
Contacts business
address
_X_ 1. ESRS
_X_ 2. Email
_X_ 1. ESRS
_X_ 2. WebEx
Additional description
Value
Local director
discovery configuration
details (default values
work in most
installations)
224.100.100.100
Discovery port
10000
11000
192.168.11.0
Subnet mask
255.255.255.0
192.168.11.251
192.168.11.1
MTU:
The size must be set to the same value for Port
Group 0 on both clusters.
Also, the same MTU must be set for Port Group
1 on both clusters
1500
192.168.11.35
192.168.11.36
10.6.11.0
Subnet mask
255.255.255.0
Information
Additional description
Value
10.6.11.251
10.6.11.1
MTU:
The size must be set to the same value for Port
Group 1 on both clusters.
Also, the same MTU must be set for Port Group
0 on both clusters
1500
10.6.11.35
10.6.11.36
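Because the MTU must match on both clusters for each WAN port group, a simple don't-fragment ping can confirm that the configured MTU is actually carried end to end. The sketch below is intended to be run from a Linux host routed onto the inter-cluster network; the far-side addresses are taken from the example worksheet values above and should be replaced with your own.

```python
# Don't-fragment ping to confirm the configured MTU (1500 here) is carried
# end-to-end on each WAN port-group subnet. A 1500-byte MTU minus 28 bytes
# of IP+ICMP headers leaves a 1472-byte payload.

import subprocess

MTU = 1500
PAYLOAD = MTU - 28
REMOTE_WAN_IPS = ["192.168.11.36", "10.6.11.36"]   # example far-side port-group addresses

for ip in REMOTE_WAN_IPS:
    result = subprocess.run(
        ["ping", "-M", "do", "-s", str(PAYLOAD), "-c", "3", ip],
        capture_output=True, text=True,
    )
    status = "OK" if result.returncode == 0 else "FAILED (fragmentation needed or unreachable)"
    print(f"{ip}: MTU {MTU} {status}")
```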
VPLEX Metro supports the VPLEX Witness feature, which is implemented through the Cluster
Witness function.
If the inter-cluster network is deployed over Fibre Channel, use separate, unique physical
links from other management traffic links.
Table 10: Cluster Witness Configuration Information
Information
Additional description
Value
username
password
Host certificate
passphrase for the
Cluster Witness certificate
Cluster Witness requires
management IP network
to be separate from intercluster network
Cluster Witness
functionality requires
these protocols to be
enabled by the firewalls
dc1-vplex
255.255.255.0
192.168.34.100
255.255.255.0
192.168.34.200
192.168.44.171
192.168.44.172
configured on the
management network
This part of EZ-Setup discovers the back-end arrays and looks for any LUNs that have
been pre-exposed to the VPLEX via the VPLEX Storage Group on the VNX storage array. Note:
You should expect to see the (4) metadata LUNs at this point. Review Chapter 2, Task 7 of
the VPLEX GeoSynchrony v5.1 Configuration Guide.
At this point of the installation you will need to restart the web server service to
incorporate any security and/or certificate changes. Review Chapter 2, Task 8 of the VPLEX
GeoSynchrony v5.1 Configuration Guide.
7.8 Meta-volume
Meta-volumes are created during system setup and must be the first storage presented to
VPLEX. The purpose of the metadata volume is to track all virtual-to-physical mappings,
data about devices, virtual volumes, and system configuration settings. Review Chapter 2,
Task 9 of the VPLEX GeoSynchrony v5.1 Configuration Guide.
The meta-volume backup creates a point-in-time copy of the current in-memory metadata
without activating it. The metadata backup is required for an overall system health check to
pass prior to a major migration or update and may also be used if the VPLEX loses access to
one or both primary meta-volume copies. Review Chapter 2, Task 10 of the VPLEX
Configuration Guide.
Before launching the EZ-Setup utility on a cluster that contains the VS2 version of VPLEX
hardware, you should verify that all components in the cluster are functioning correctly.
Review Chapter 3, Task 4 of the VPLEX GeoSynchrony v5.1 Configuration Guide.
Now the EZ-Setup utility can be launched for Cluster 1. Review Chapter 3, Task 5 of the
VPLEX GeoSynchrony v5.1 Configuration Guide.
8.8 Meta-volume
Meta-volumes are created during system setup and must be the first storage presented to
VPLEX. The purpose of the metadata volume is to track all virtual-to-physical mappings,
data about devices, virtual volumes, and system configuration settings. Review Chapter 3,
Task 8 of the VPLEX GeoSynchrony v5.1 Configuration Guide.
The meta-volume backup creates a point-in-time copy of the current in-memory metadata
without activating it. The metadata backup is required for an overall system health check to
pass prior to a major migration or update and may also be used if the VPLEX loses access to
one or both primary meta-volume copies. Review Chapter 3, Task 9 of the VPLEX
Configuration Guide.
Create your meta-volume backup for Cluster-2 and schedule its frequency. Note: by default
you must complete the first backup at time of creation. Review Chapter 3, Task 21 of the
Configuration Guide.
them together into a unified configuration. Review Chapter 3, Task 29 of the VPLEX
GeoSynchrony v5.1 Configuration Guide.
Each storage volume is visible from all directors with at least (2) paths
Figure 19: EZ-Provisioning Step 1: Claim Storage and Create Virtual Volumes
After the VPLEX virtual volumes are created, they must be exposed to the hosts that
will need to use them. This is similar to how a VNX volume must be exposed to a host, or to
VPLEX, before it can be used.
5. Register the host initiators to the VPLEX virtual volume by clicking Step 2 as shown in
Figure 20 and following the EZ-Provisioning guide. When finished, return here.
10.1 Assumptions
• The ESX host is running with LUNs presented directly from the storage array.
• At least one virtual machine is running I/O on the LUNs presented to the ESX host.
• VPLEX must be installed and in good health.
• One new switch (or a pair of switches if HA is required) is available for use as front-end
switches.
Make the appropriate masking changes on the storage array. Please see the Encapsulate
Arrays on ESXi documentation for additional details.
Apply a new initiator name and apply the default host type for the ESX server. (Repeat as
necessary for all host initiator ports.)
Select the previously zoned VPLEX Front-End ports for the ESX51-N1 Storage view.
• Log in to the vCenter/vSphere Client used for managing the ESX host. In the
Configuration tab for the host, under Storage, the LUNs exported from VPLEX should
be visible as devices.
• Now you can see the exported LUN with the required datastore on it; the datastore
name appears in the VMFS Label column.
o Select the datastore name.
o Click Next.
o When asked how to use the datastore present on the LUN, click the option
that keeps the old data and assigns a new signature (an equivalent
command-line approach is sketched after this list).
• Check the status of the paths for the Fibre Channel host adapters. They should show as
active.
• If a previously existing virtual machine is not yet present in the inventory, perform
the following steps.
• To power on the required virtual machines, in the left pane of the vCenter/vSphere
Client, right-click the virtual machine names; then run I/O from the virtual machines.
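For administrators who prefer the command line, the rescan, snapshot listing, and resignature steps above can also be driven remotely with esxcli over SSH. This is a minimal sketch: the host name and datastore label are placeholders, and it assumes SSH has been enabled on the ESXi host with key-based access from an admin workstation running Python 3.

```python
# Rescan, list unresolved VMFS copies, and resignature a datastore via esxcli over SSH.
import subprocess

ESXI_HOST = "root@esx51-n1.example.local"   # placeholder host/user
DATASTORE_LABEL = "VSPEX_DS01"              # placeholder: the original VMFS label

def esxcli(args):
    # Run an esxcli command on the host over SSH and return its output.
    return subprocess.check_output(["ssh", ESXI_HOST, "esxcli " + args], text=True)

# Rescan all adapters so the VPLEX-presented copies become visible.
esxcli("storage core adapter rescan --all")

# List unresolved VMFS copies (snapshots) detected by the host.
print(esxcli("storage vmfs snapshot list"))

# Mount the copy with a new signature, preserving the existing data.
esxcli(f'storage vmfs snapshot resignature -l "{DATASTORE_LABEL}"')
```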
Once the VPLEX Metro and Witness have been configured, you need to allocate the
datastore LUNs from Cluster-2 to the ESXi hosts at Site B. The assumption is that Site B's
VNX has been configured identically to Site A's VNX.
Note: Only volumes with visibility and storage-at-cluster properties that match those of
the consistency group can be added to the consistency group.
The first step is to create, or select, a consistency group as shown in Figure 35. From the
previous steps, a VPLEX distributed virtual volume should already be created, as depicted in
Figure 36.
• The vSwitch that hosts the client VLANs is configured with sufficient ports to
accommodate the maximum number of virtual machines it may host.
• All required virtual machine port groups are configured, and each server has access
to the required VMware datastores.
• An interface is configured correctly for vMotion using the material in the vSphere
Networking guide (a quick reachability check follows this list).
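As a quick check of the vMotion interface configuration, the sketch below pings a peer host's vMotion address from the ESXi host with vmkping, selecting an explicit vmkernel interface. The host name, vmk interface, and peer address are placeholders; it assumes SSH access to the ESXi host and Python 3 on the admin workstation.

```python
# vMotion-network reachability check via vmkping over SSH.
import subprocess

ESXI_HOST = "root@esx51-n1.example.local"   # placeholder host/user
VMOTION_VMK = "vmk1"                        # placeholder vmkernel port used for vMotion
PEER_VMOTION_IP = "192.0.2.21"              # placeholder vMotion address of another host

result = subprocess.run(
    ["ssh", ESXI_HOST, f"vmkping -I {VMOTION_VMK} -c 3 {PEER_VMOTION_IP}"],
    capture_output=True, text=True,
)
print(result.stdout)
print("vMotion network reachable" if result.returncode == 0
      else "vMotion network NOT reachable")
```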
If at some point during the deployment process a step results in an error, please visit
https://support.emc.com/products/29264_VPLEX-VS2. All of the approved
troubleshooting guides are available through keyword search.
13. Summary
Despite the many challenges IT managers face in delivering continuous availability for
mission-critical applications, the VSPEX with VPLEX solution described in this design and
implementation guide provides the infrastructure to meet the most demanding availability
requirements. The components and products included in this
demanding user availability requirements. The components and products included in this
solution have been selected based on proven performance and reliability in production
environments worldwide. Here are some data points reinforcing the selection of the solution
technologies:
• VSPEX is designed for flexibility and validated by EMC to ensure interoperability and
fast deployment. VSPEX enables you to choose the network, server, and hypervisor
that your environment requires to go along with EMC's industry-leading storage and
backup.
• VMware is the most pervasively deployed hypervisor, with the largest virtualized
environments in corporate and cloud provider networks running on vSphere. VMware
provides the stable, high-performance infrastructure required for hosting
mission-critical applications.
• The EMC VPLEX family today is deployed in over 2,000 continuous availability
clusters with over 200 PB of storage managed, and over 15 million run-time hours with
five-nines (99.999%) or better availability. Here are some stories from VPLEX users
based on their experience:
o A well-known financial services firm had an entire array fail and didn't
realize it for a week because their VPLEX-protected remote array seamlessly took
over.
o A regional government data center lost power due to a backhoe operator's
mistake; their users didn't notice because VPLEX connected a second data
center and provided continuous availability.
o A large hospital required a non-disruptive data center relocation; VPLEX
moved hundreds of VMs to the new data center with no application downtime.
• EMC VSPEX with the VNX Series is high-performing unified storage with unsurpassed
simplicity and efficiency, optimized for virtual applications. With the VNX Series,
you'll achieve new levels of performance, protection, compliance, and ease of
management. Leverage a single platform for file and block data services; centralized
management makes administration simple.
In choosing the VSPEX with VPLEX solution, you are assured a world-class infrastructure
backed by EMC, with best-in-class support behind the solution.
Appendix-A -- References
EMC documentation
The following documents, available on EMC Online Support provide additional and relevant
information. If you do not have access to a document, contact your EMC representative.
• EMC VSPEX Private Cloud: VMware vSphere 5.1 for up to 500 Virtual Machines
• EMC VPLEX Site Preparation Guide
• Implementation and Planning Best Practices for EMC VPLEX Technical Notes
• EMC VPLEX Release Notes
• EMC VPLEX Security Configuration Guide
• EMC VPLEX Configuration Worksheet
• EMC VPLEX CLI Guide
• EMC VPLEX Product Guide
• VPLEX Procedure Generator
o Encapsulate Arrays on ESXi boot from SAN
o Encapsulate Arrays on ESXi non-boot from SAN
Maximum
Virtual volumes
8000
Storage volumes
8000
3200
256
400
Number of Extents
Extents per storage volume
RAID-1 mirror legs
24000
128
2
8000
8000
32TB
32TB
8PB
4KB
25
25
Number of Clusters
1024
1000
4
3Gbps
5ms
Additional description
Management server IP
address
Network mask
Hostname
Value
Additional description
Value
Additional description
Value
Yes or no:
Note: The remaining information in this table applies only if you specified yes to the previous question.
SMTP IP address of
primary connection
Information
Additional description
Value
1, 2, 3, or 4:
1, 2, 3, or 4:
1, 2, 3, or 4:
Additional description
Value
Yes or no:
(Default = private)
xxx
Additional description
Value
CA certificate lifetime
CA certificate key
passphrase
Additional description
Value
Company name
Company contact
__ 1. ESRS
__ 2. Email
__ 3. None (notifications
are not configured)
__ 1. ESRS
__ 2. WebEx
Additional description
Value
Local director
discovery configuration
details (default values
work in most
installations)
(Default = 224.100.100.100)
Discovery port
(Default = 10000)
(Default = 11000)
(Default = 1500)
Information
Additional description
Value
MTU:
The size must be set to the same value for Port
Group 1 on both clusters.
Also, the same MTU must be set for Port Group
0 on both clusters
(Default = 1500)
Additional description
Host certificate
passphrase for the
Cluster Witness certificate
Cluster Witness requires
management IP network
to be separate from intercluster network
Value
Cluster Witness
functionality requires
these protocols to be
enabled by the firewalls
configured on the
management network