XenServer 6x Best Practices Compellent
Document revision
Date         Revision   Description
2/16/2009    1          Initial 5.0 documentation
5/21/2009    2          Documentation update for 5.5
10/1/2010    3          Document revised for 5.6 and iSCSI MPIO
12/21/2010   3.1        Updated iSCSI information
8/22/2011    4.0        Documentation updated for 6.0
11/29/2011   4.1        Update for software iSCSI information
THIS BEST PRACTICES GUIDE IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.
2011 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell. Dell, the DELL logo, the DELL badge, and Compellent are trademarks of Dell Inc. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.
Contents
Document revision
Contents
General syntax
    Conventions
Preface
    Audience
    Purpose
    Customer support
Introduction
XenServer Storage Overview
    XenServer Storage Terminology
    Shared iSCSI Storage
    Shared Fibre Channel Storage
    Shared NFS
    Volume to Virtual Machine Mapping
    NIC Bonding vs. iSCSI MPIO
Multi-Pathing
    Enable Multi-pathing in XenCenter
Software iSCSI
    Overview
    Open iSCSI Initiator Setup with Dell Compellent
    Multipath with Dual Subnets
        Configuring Dedicated Storage NIC
        To Assign NIC Functions Using the XE CLI
        XenServer Software iSCSI Setup
        Login to Compellent Control Ports
        Configure Server Objects in Enterprise Manager
        View Multipath Status
    Multi-path Requirements with Single Subnet
        Configuring Bonded Interface
        Configuring Dedicated Storage Network
        To Assign NIC Functions Using the XE CLI
        XenServer Software iSCSI Setup
        Configure Server Objects in Enterprise Manager
    Multi-path Requirements with Dual Subnets, Legacy Port Mode
        Log in to Dell Compellent iSCSI Target Ports
        View Multipath Status
    iSCSI SR Using iSCSI HBA
Fibre Channel
    Overview
    Adding a FC LUN to XenServer Pool
    Data Instant Replay to Recover Virtual Machines or Data
        Overview
        Recovery Option 1: One VM per LUN
        Recovery Option 2: Recovery Server
Dynamic Capacity
    Dynamic Capacity Overview
    Dynamic Capacity with XenServer
Data Progression
    Data Progression on XenServer
Boot from SAN
VM Metadata Backup and Recovery
    Backing Up VM Metadata
    Importing VM Metadata
Disaster Recovery
    Replication Overview
        Test XenServer Disaster Recovery
    Recovering from a Disaster
    Replication Based Disaster Recovery
        Disaster Recovery Replication Example
    Live Volume
        Overview
Appendix 1: Troubleshooting
    XenServer Pool FC Mapping Issue
    Starting Software iSCSI
        Two Ways to Start iSCSI
    Software iSCSI Fails to Start at Server Boot
    Wildcard Doesn't Return All Volumes
    View Multipath Status
    XenCenter GUI Displays Multipathing Incorrectly
    Connectivity Issues with a Fibre Channel Storage Repository
General syntax
Figure 1, Document Syntax
Menu items, dialog box titles, field names, keys: mouse click required
User input: user typing required
Website addresses
Email addresses
Conventions
Timesavers are tips specifically designed to save time or reduce the number of steps.
Caution indicates the potential for risk including system or data damage.
Warning indicates that failure to follow directions could result in bodily harm.
Preface
Audience
The audience for this document is System Administrators who are responsible for the setup and maintenance of Citrix XenServer and associated storage. Readers should have a working knowledge of the installation and management of Citrix XenServer and the Dell Compellent Storage Center.
Purpose
This document provides best practices for the setup, configuration and management of Citrix XenServer with Dell Compellent Storage Center. This document is highly technical and intended for storage and server administrators as well as information technology professionals interested in learning more about how Citrix XenServer integrates with Compellent Storage Center.
Customer support
Dell Compellent provides live support 1-866-EZSTORE (866.397.8673), 24 hours a day, 7 days a week, 365 days a year. For additional support, email Dell Compellent at support@compellent.com. Dell Compellent responds to emails during normal business hours. Additional information on XenServer 6.0 can be found in the Citrix XenServer 6.0 Administration Guide located on the Citrix download site. Information on Dell Compellent Storage Center is located on the Dell Compellent Knowledge Center.
Introduction
This document provides configuration examples, tips, recommended settings, and other storage guidelines to follow while integrating Citrix XenServer with the Dell Compellent Storage Center. It answers many frequently asked questions about how XenServer interacts with Dell Compellent Storage Center features such as Dynamic Capacity, Data Progression, Replays, and Remote Instant Replay. This document focuses on XenServer 6.0; however, most of the concepts apply to XenServer 5.x unless otherwise noted. Dell Compellent advises customers to read the XenServer documentation, which is publicly available on the Citrix XenServer knowledge base documentation pages, for additional information on installation and configuration. This document assumes the reader has had formal training or has advanced working knowledge of the following:
- Installation and configuration of Citrix XenServer
- Configuration and operation of the Dell Compellent Storage Center
- Operating systems such as Windows or Linux
- The Citrix XenServer 6.0 Administrator's Guide
NOTE: The information contained within this document is based on general circumstances and environments. Actual configurations may vary in different environments.
Shared NFS
XenServer supports NFS file servers, such as the Dell NX3000 with Dell Compellent storage, to host SRs. NFS storage repositories can be shared within a resource pool of XenServer hosts, which allows virtual machines to be migrated between XenServer hosts within the pool using XenMotion. Attaching an NFS storage repository requires the hostname or IP address of the NFS server. The NFS server must be configured to export the specified path to all XenServer hosts in the pool, or attaching the SR will fail. Using an NFS share is a relatively simple way to create an SR and does not involve the complexity of iSCSI or the expense of Fibre Channel. There are, however, some limitations to consider before implementing NFS. An NFS SR uses a network infrastructure similar to iSCSI to support redundant paths to the NFS share. The main difference is that iSCSI uses MPIO to support multipathing and load balancing between multiple paths, while NFS is limited to one network interface per SR. Redundancy in an NFS environment can be accomplished by using XenServer bonded interfaces. Bonded interfaces are active/passive and will not provide load balancing across both physical adapters as iSCSI can.
A new feature in XenServer 6.0 is the ability to place a high availability (HA) quorum disk on an NFS volume. However, the XenServer 6.0 Disaster Recovery feature can only be enabled when using LVM over HBA or software iSCSI. The underlying protocol choice for SRs is a business decision that will be unique to each environment. Given the performance benefits and the Disaster Recovery requirement, Dell Compellent recommends using an iSCSI HBA, FC HBA, or software iSCSI rather than NFS.
active/active connections for anything but VM traffic. For this reason, it is recommended that front-end iSCSI ports be configured across two subnets. This allows load balancing across all NICs and failover with MPIO.
Multi-Pathing
Multi-pathing protects against failures of HBAs, switch ports, switches, and SAN I/O ports. It is recommended to use multi-pathing to increase availability and redundancy for critical systems, such as production deployments of XenServer hosting critical servers. XenServer supports active/active multi-pathing for iSCSI and FC I/O datapaths. Dynamic multi-pathing uses a round-robin load balancing algorithm, so both routes carry active traffic during normal operation. Multi-pathing can be enabled via XenCenter or on the command line. Please see the XenServer 6.0 Administrator's Guide for information on enabling multi-pathing on XenServer hosts. Enabling multi-pathing requires a server restart and should be done before storage is added to the server. Only use multi-pathing when there are multiple paths to the Storage Center.
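As a rough illustration of what round-robin multipathing looks like from the XenServer console, the snippet below counts active paths in saved `multipath -ll` output. The WWID, device names, and output text are a made-up sample for illustration, not captured from a real Storage Center; on a live host you would pipe `multipath -ll` output directly instead of using a saved sample.

```shell
# Count "active ready" path lines in sample multipath -ll output.
# The sample text below is illustrative only (assumed WWID and devices).
sample='36000d3100000656600000000000017a1 dm-2 COMPELNT,Compellent Vol
size=50G features=0 hwhandler=0
`-+- policy=round-robin 0 prio=1 status=active
  |- 3:0:0:1 sdb 8:16 active ready running
  `- 4:0:0:1 sdc 8:32 active ready running'

active_paths=$(printf '%s\n' "$sample" | grep -c 'active ready')
echo "active paths: $active_paths"
```

With MPIO working across two subnets, each LUN should show one active path per front-end connection; a lower count suggests a failed or unconfigured path.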
Software iSCSI
Overview
XenServer supports shared Storage Repositories (SRs) on iSCSI LUNs. iSCSI is implemented using the open-iSCSI software initiator or a supported iSCSI HBA. XenServer iSCSI Storage Repositories are supported with Dell Compellent Storage Center running in either Legacy mode or Virtual Port mode. Shared iSCSI using the software iSCSI initiator is implemented based on the Linux Volume Manager (LVM) and provides the same performance benefits provided by LVM on local disks. Shared iSCSI SRs using the software-based host initiator are capable of supporting VM agility: using XenMotion, VMs can be started on any XenServer host in a resource pool and migrated between them with no noticeable interruption. iSCSI SRs utilize the entire LUN specified at creation time and may not span more than one LUN. CHAP support is provided for client authentication, during both the data path initialization and the LUN discovery phases.
NOTE: Use dedicated network adapters for iSCSI traffic. The default connection can be used; however, it is always best practice to separate iSCSI traffic from other network traffic.
All iSCSI initiators and targets must have a unique name to ensure they can be identified on the network. An initiator has an iSCSI initiator address, and a target has an iSCSI target address. Collectively these are called iSCSI Qualified Names, or IQNs. XenServer hosts support a single iSCSI initiator which is automatically created and configured with a random IQN during host installation. iSCSI targets commonly provide access control via iSCSI initiator IQN lists, so all iSCSI targets/LUNs to be accessed by a XenServer host must be configured to allow access by the host's initiator IQN. Similarly, targets/LUNs to be used as shared iSCSI SRs must be configured to allow access by all host IQNs in the resource pool. iSCSI targets that do not provide access control will typically default to restricting LUN access to a single initiator to ensure data integrity.
If an iSCSI LUN is intended for use as a shared SR across multiple XenServer hosts in a resource pool, ensure that multi-initiator access is enabled for the specified LUN. It is strongly suggested to change the default XenServer IQN to one that is consistent with a naming schema in the iSCSI environment. The XenServer host IQN value can be adjusted using XenCenter, or via the CLI with the following command when using the iSCSI software initiator:
xe host-param-set uuid=<valid_host_id> other-config:iscsi_iqn=<new_initiator_iqn>
Caution: It is imperative that every iSCSI target and initiator have a unique IQN. If a non-unique IQN identifier is used, data corruption and/or denial of LUN access can occur.
Caution: Do not change the XenServer host IQN with iSCSI SRs attached. Doing so can result in failures connecting to new targets or existing SRs.
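As a sketch of a consistent naming schema, the snippet below derives an IQN from the hostname and prints the corresponding xe command rather than executing it. The date/domain portion (`iqn.2011-11.com.example`), the hostname, and the `<host-uuid>` placeholder are all assumptions for illustration; substitute your own schema and the real host UUID.

```shell
# Build a site-consistent IQN from the hostname (naming schema is an assumption)
host="xen01"                               # placeholder hostname
iqn="iqn.2011-11.com.example:${host}"
echo "$iqn"

# Print (not run) the xe command that would apply the IQN to the host
cmd="xe host-param-set uuid=<host-uuid> other-config:iscsi_iqn=${iqn}"
echo "$cmd"
```

Deriving the IQN from the hostname keeps every initiator unique across the pool, which is exactly what the cautions above require.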
In this configuration the Storage Center is set to virtual port mode and the iSCSI Front End ports are on two separate subnets different from the management interface. The Storage Center is configured with two control ports, one for each subnet. Multipathing is controlled through MPIO.
3. Set up an IP configuration for the PIF, adding appropriate values for the mode parameter and, if using static IP addressing, the IP, netmask, gateway, and DNS parameters:
xe pif-reconfigure-ip mode=<DHCP | Static> uuid=<pif-uuid>
Example:
xe pif-reconfigure-ip mode=static ip=10.0.0.10 netmask=255.255.255.0 gateway=10.0.0.1 uuid=<PIF-UUID>
4. Set the PIF's disallow-unplug parameter to true:
xe pif-param-set disallow-unplug=true uuid=<PIF-UUID>
5. Set the management purpose of the interface:
xe pif-param-set other-config:management_purpose="Storage" uuid=<PIF-UUID>
6. Repeat this process for each eth interface in the XenServer host that will be dedicated to storage traffic. For iSCSI MPIO configurations this should be a minimum of two eth interfaces on separate subnets.
For more information on this topic see the Citrix XenServer 6.0 Administrator's Guide.
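The per-interface steps above can be sketched as a loop that generates the xe commands for each dedicated storage NIC. The device names, addresses, assumed `.1` gateways, and `<ethX-pif-uuid>` placeholders are illustrative; the commands are printed for review rather than executed.

```shell
# Generate (but do not run) the xe commands for two storage NICs on
# separate subnets; interface names and IPs are illustrative examples.
cmds=$(for spec in "eth2:10.20.0.10" "eth3:10.30.0.10"; do
  dev=${spec%%:*}
  ip=${spec##*:}
  gw=${ip%.*}.1                     # assumed gateway: .1 on each subnet
  echo "xe pif-reconfigure-ip mode=static ip=$ip netmask=255.255.255.0 gateway=$gw uuid=<$dev-pif-uuid>"
  echo "xe pif-param-set disallow-unplug=true uuid=<$dev-pif-uuid>"
  echo "xe pif-param-set other-config:management_purpose=\"Storage\" uuid=<$dev-pif-uuid>"
done)
echo "$cmds"
```

Reviewing the generated commands before running them helps confirm the two interfaces really land on different subnets, which MPIO in this configuration depends on.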
iscsiadm -m discovery --type sendtargets --portal <Control Port IP>:3260
Example:
iscsiadm -m discovery --type sendtargets --portal 10.25.0.10:3260
Figure 9, Discover Storage Center Ports
NOTE: If problems are encountered while running the iscsiadm commands, see the iSCSI troubleshooting section at the end of this document.
2. Repeat the discovery process for each Dell Compellent control port.
3. Once all target ports are discovered, run iscsiadm with the login parameter:
iscsiadm -m node --login
Figure 10, Log into Storage Center Ports
The server objects can be configured in the Storage Center now that the server has logged in.
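Before logging in, it can be useful to confirm that discovery returned a portal on each subnet. Discovery output can be checked with standard text tools; the two sample lines below mimic iscsiadm's `IP:port,TPGT target-IQN` output format, and the addresses and target names are made up for illustration.

```shell
# Sample sendtargets discovery output (illustrative addresses and IQNs)
discovery='10.25.0.10:3260,0 iqn.2002-03.com.compellent:5000d31000006601
10.26.0.10:3260,0 iqn.2002-03.com.compellent:5000d31000006602'

# Extract the portal column and count how many portals were discovered
portals=$(printf '%s\n' "$discovery" | cut -d, -f1)
echo "$portals"
portal_count=$(printf '%s\n' "$portals" | wc -l)
echo "portals discovered: $portal_count"
```

In a dual-subnet Virtual Port mode setup, one control port portal per subnet should appear before running the login step.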
NOTE: Unchecking the Use iSCSI Name box will aid in identifying the status of MPIO paths.
NOTE: Starting in Storage Center version 5.5.x, the steps listed above must be completed using Enterprise Manager. It is not possible to create server objects with the Use iSCSI Names box unchecked when connected directly to the Storage Center.
After creating the server object, the volumes can be created and mapped to the server. In a server pool, map the LUN to all servers, specifying the same LUN number. See the Dell Compellent documentation for detailed instructions on creating and mapping volumes.
NOTE: Use Server Cluster objects to map volumes to multiple servers in a resource pool.
Once the volumes are mapped to the server, they can be added to XenServer using XenCenter or the CLI. Below are the steps for adding storage using XenCenter; the steps for adding storage through the CLI can be found in the XenServer 6.0 Administrator's Guide.
1. Select the server or pool in XenCenter and click New Storage.
2. Select the Software iSCSI option under virtual disk storage, then click Next.
Figure 12, Add iSCSI Disk
3. Give the new Storage Repository a name and click Next.
4. Enter one of the Dell Compellent control ports in the Target Host field, then click Discover IQNs.
5. Click Discover LUNs.
6. Select the LUN to add under Target LUN and click Finish.
NOTE: When the Storage Center is in Virtual Port mode and storage is added with the wildcard option, an incomplete list of volumes mapped to the server may be returned. This is a known issue with the XenCenter GUI. To work around the issue, cycle through the control ports in the Target Host field using the (*) wildcard Target IQNs until the Target LUN appears. This is a GUI issue and does not affect multipathing. The SR should now be available to the server. Repeat the steps for mapping and adding storage for any additional SRs.
NOTE: The process of configuring a single-path, non-redundant connection to a Dell Compellent Storage Center is the same, except that the steps to bond the two NICs are excluded.
NOTE: Create NIC bonds as part of the initial resource pool creation, prior to joining additional hosts to the pool. This allows the bond configuration to be replicated to new hosts as they join the pool.
The steps below outline the process of creating a NIC bond in XenServer 6.0:
1. In Citrix XenCenter, select the server and go to the NICs tab.
2. At the bottom of the NICs window is the option to create a bond. Select the NICs you would like to bond and click Create.
Figure 16, Add Bonded Interface
3. Once complete, there will be a new bonded NIC displayed in the list of NICs.
Figure 17, Bonded Interface
Before dedicating a network interface as a storage interface for use with iSCSI SRs, ensure that the dedicated interface uses a separate IP subnet which is not routable from the main management interface. If this is not enforced, then storage traffic may be directed over the main management interface after a host reboot, due to the order in which network interfaces are initialized.
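A quick sanity check for this rule can be sketched in shell by comparing the /24 prefixes of the management and storage addresses. This is a simplification: real routability depends on the actual netmasks and routing table, and the addresses below are illustrative.

```shell
# Compare /24 prefixes of management and storage IPs (assumes /24 masks)
mgmt_ip="192.168.1.5"        # illustrative management address
storage_ip="10.30.0.10"      # illustrative dedicated storage address

if [ "${mgmt_ip%.*}" = "${storage_ip%.*}" ]; then
  result="WARNING: storage interface shares the management subnet"
else
  result="OK: storage subnet is separate from management"
fi
echo "$result"
```

Running a check like this before a host reboot catches the misconfiguration described above, where storage traffic silently falls back to the management interface.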
For more information on this topic see the Citrix XenServer 6.0 Administrator Guide.
In this example the IP address is 10.35.0.10/16.
2. Log in to the Compellent control ports. In this step the iscsiadm command is used from the XenServer CLI to discover and log in to all of the Dell Compellent iSCSI targets.
3. From the XenServer console, run the following command for the iSCSI control port:
iscsiadm -m discovery --type sendtargets --portal <Control Port IP>:3260
Example:
iscsiadm -m discovery --type sendtargets --portal 10.25.0.10:3260
Figure 19, Discover Storage Center Ports
NOTE: If problems are encountered while running the iscsiadm commands, see the iSCSI troubleshooting section at the end of this document.
4. Once all target ports are discovered, run iscsiadm with the login parameter:
iscsiadm -m node --login
Figure 20, log into Storage Center Ports
5. Now that the server has logged in, the server objects can be configured in the Storage Center.
After creating the server object, the volumes can be created and mapped to the server. In a server pool, be sure the LUNs are mapped to the servers with the same LUN number. See the Dell Compellent Administrator's Guide for detailed instructions on creating and mapping volumes.
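The same-LUN-number requirement can be verified with a quick check over the pool's mappings. The table below is made-up sample data (host, volume, LUN number); in practice the values would be read from the Storage Center mapping view for each host.

```shell
# Sample mapping table: host, volume, LUN number (illustrative data)
mappings='xen01 Xen6_P1_SR1 10
xen02 Xen6_P1_SR1 10
xen03 Xen6_P1_SR1 10'

# A pool-shared volume must present the same LUN number on every host
distinct=$(printf '%s\n' "$mappings" | awk '{print $3}' | sort -u | wc -l)
if [ "$distinct" -eq 1 ]; then
  echo "LUN numbers consistent across the pool"
else
  echo "LUN number mismatch: remap before creating the SR"
fi
```

A mismatch here is a common cause of an SR attaching on some pool members but not others.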
NOTE: Use Server Cluster objects to map volumes to multiple servers in a resource pool.
Once the volumes are mapped to the server, they can be added to XenServer using XenCenter or the CLI. Below are the steps for adding storage using XenCenter; steps for adding storage through the CLI can be found in the XenServer 6.0 Administrator's Guide.
1. Select the server or pool in XenCenter and click New Storage.
2. Select the Software iSCSI option under virtual disk storage, then click Next.
3. Give the new Storage Repository a name and click Next.
4. Enter the Dell Compellent control port in the Target Host field, then click Discover IQNs.
5. Click Discover LUNs to view the available LUNs.
Figure 23, Add iSCSI SR
6. Select the LUN to add under Target LUN and click Finish.
NOTE: When the Storage Center is in Virtual Port mode and storage is added with the wildcard option, an incomplete list of volumes mapped to the server may be returned. This is a known issue with the XenCenter GUI. To work around the problem, cycle through the Target Host IP addresses using the (*) wildcard IQN until the Target LUN appears. This is a GUI issue and does not affect multipathing. The SR will now be available to the server. Repeat the steps for mapping and adding storage for any additional SRs.
- XenServer 6.0 iSCSI using 2 unique dedicated storage NICs/subnets
  - Citrix best practices state that these 2 subnets should be different from the XenServer management network.
- Multi-pathing enabled on all XenServer pool hosts
- iSCSI target IP addresses for the Storage Center front-end ports
  - In this example the primary iSCSI front-end port IP addresses are 10.10.63.2, 10.10.62.1, 172.31.37.134, and 172.31.37.131.
In this configuration the Storage Center is set to Legacy Port mode and the iSCSI front-end ports are on two subnets, separate from each other and from the management interface. Multipathing is controlled through MPIO.
The first step in configuring XenServer for Dell Compellent in Legacy Port mode is to identify the primary iSCSI target IP addresses on each controller of the Storage Center. This can be done by going to the controllers listed in Storage Center, expanding IO Cards, then iSCSI, and clicking on each iSCSI port listed.
2. Repeat the discovery process for each target port.
3. Once all the ports are discovered, run the iscsiadm command with the login parameter to connect the host to the Storage Center:
iscsiadm -m node --login
Figure 27, log into Storage Center Ports
Configure Server Objects in Enterprise Manager
Follow the steps below to configure the server object for access to the Storage Center:
1. In Enterprise Manager, go to the Storage Center and select Storage Management.
2. In the object tree, right-click Servers and select Create Server.
3. Complete all options as specified in the Dell Compellent Administrator's Guide.
After creating the server object the volumes can be created and mapped to the server. See the Dell Compellent documentation for detailed instructions on creating and mapping volumes.
NOTE: Use Server Cluster objects to map volumes to multiple servers in a resource pool.
Once the volumes are mapped to the server, they can be added to XenServer using XenCenter or the CLI. Below are the steps for adding storage using XenCenter; steps for adding storage through the CLI can be found in the XenServer 6.0 Administrator's Guide.
1. Select the server or pool in XenCenter and click New Storage.
2. Select the Software iSCSI option under virtual disk storage, then click Next.
Figure 29, Add iSCSI Disk
3. Give the new Storage Repository a name and click Next.
4. Enter the Dell Compellent control ports in the Target Host field, then click Discover IQNs.
6. Select the LUN to add under Target LUN and click Finish.
NOTE: When the Storage Center is in Legacy Port mode, adding storage may return an incomplete list of volumes mapped to the server. This is a known issue with the XenCenter GUI where only the LUNs active on the first IP address in Target Host are returned. To work around this issue, cycle through the Target Host IPs using the (*) wildcard Target IQN until the Target LUN appears. This is a GUI issue and does not affect multipathing.
The SR will now be available to the server. Repeat the steps above for mapping and adding storage for any additional SRs.
2. Configure IP addresses for the iSCSI HBA.
2.1. To set the IP address for the HBA, choose option 4 (Port Level Info & Operations) and then option 2 (Port Network Settings Menu).
2.2. Enter option 4 (Select HBA Port) to select the appropriate HBA port, then select option 2 (Configure IP Settings).
Figure 34, Configure HBA IP Address
2.3. Enter the appropriate IP settings for the HBA adapter port. When finished, exit and save, or select another HBA port to configure.
2.4. From the Port Network Settings Menu, select option 4 to select an additional HBA port to configure. Enter 2 to select the second HBA port. Once the second HBA port is selected, choose option 2 (Configure IP Settings) from the Port Network Settings Menu to input the appropriate IP settings for the second HBA port.
Figure 36, Enter IP Address Info
2.5. Choose option 5 (Save changes and reset HBA (if necessary)). Then select Exit until back at the main menu.
The iSCSI name or IQN can also be changed using the iscli utility. This menu can be accessed by selecting option 4 (Port Level Info & Operations Menu) from the main menu, then selecting option 3 (Edit Configured Port Settings Menu), then option 3 (Port Firmware Settings Menu), then option 7 (Configure Advanced Settings). Select <Enter> until reaching iSCSI_Name, then enter a unique IQN name for the adapter.
3. The next step is to establish a target from XenServer so it registers with the Compellent Storage Center.
3.1. From the main interactive iscli menu, select option 4 (Port Level Info & Operations).
3.2. From the Port Level Info & Operations menu, select option 7 (Target Level Info & Operations).
3.3. On the HBA target menu screen, select option 6 (Add a Target).
3.3.1. Select Enter until reaching the TGT_TargetIPAddress option. Enter the target IP address of the Compellent controller. (Repeat for each target.)
3.3.1.1. In this example 10.10.64.1 and 10.10.65.2 are used. These are the primary iSCSI connections on both Dell Compellent Storage Center controllers.
Figure 37, Enter Target IP Address
3.3.2. Once all targets are entered for HBA 0, select option 9 to save the port information.
3.3.3. Select option 10 to select the second HBA port.
3.3.4. Repeat the steps in section 3.3 for the iSCSI targets.
3.4. Enter option 12 to exit. Enter YES to save the changes.
3.5. Exit out of the iscli utility.
4. Add the server's iSCSI HBA connections to the Dell Compellent Storage Center.
4.1. Log on to the Storage Center console.
4.2. Expand Servers and select the location or folder to store the server in.
4.2.1. For ease of use, the servers in this view are separated into folders based on function.
4.3. Right-click the location to create the server in and select Create Server.
Note: You may have to uncheck Show only active/up connections in the Create Server wizard.
4.4. Select the appropriate iSCSI HBA/IQNs for the new server object, then click Continue.
4.5. Depending on the Storage Center version, select the XenServer operating system, or select Other Multipath OS if XenServer is not listed.
5. Repeat the preceding four steps for each XenServer in the pool.
6. Once all the XenServer hosts are added to the Compellent Storage Center, create a new volume on the Compellent Storage Center and map it to all the XenServers in the pool with the same LUN number, or create a Compellent Clustered server object, add all the XenServers to the cluster, and map the volume to the XenServer Clustered server object.
7. The final step of the process is adding the new volume to XenServer.
7.1. Log on to XenCenter, right-click the appropriate XenServer to add the connection to, and select New Storage Repository. If the storage is being added to a resource pool, select the pool instead of the server.
7.2. Select the Hardware HBA option, as the iSCSI connection is using iSCSI HBAs, then click Next.
Figure 38, Storage Type
There is a short delay while XenServer probes for available LUNs.
7.3. Select the appropriate LUN. Give the SR an appropriate name and click Finish.
7.4. A warning is displayed that the LUN will be formatted and any data present will be destroyed. Click Yes to format the disk.
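Throughout the HBA setup above, each port must end up with a unique IQN. As a quick sanity check before committing a hand-typed name, a pattern match like the following can be run from any shell. The helper name and the example IQN are illustrative, not values from this document; the pattern reflects the usual iqn.yyyy-mm.reverse-domain[:identifier] layout.

```shell
# Illustrative IQN format check (not a XenServer or Compellent tool).
# Layout checked: iqn.<year>-<month>.<reversed domain>[:<identifier>]
is_valid_iqn() {
  echo "$1" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:[A-Za-z0-9._:-]+)?$'
}

is_valid_iqn "iqn.2000-04.com.qlogic:host1-port0" && echo valid || echo invalid
# prints "valid"; a second port should use a different identifier suffix
```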
Fibre Channel
Overview
XenServer provides support for shared Storage Repositories (SRs) on Fibre Channel (FC) LUNs. FC is supported on the Dell Compellent SAN by utilizing QLogic or Emulex HBAs. Fibre Channel support is implemented based on the Linux Volume Manager (LVM) and provides the same performance benefits provided by LVM VDIs in the local disk case. Fibre Channel SRs are capable of supporting VM agility using XenMotion: VMs can be started on any XenServer host in a resource pool and migrated between them with no noticeable downtime. The following sections detail the steps involved in adding a new Fibre Channel connected volume to a XenServer pool.
3. On the Choose the type of new storage screen select Hardware HBA then click Next.
4. On the Select the LUN to reattach or create a new SR screen select the appropriate volume, then enter a descriptive name. Click Finish to continue.
Figure 41, Select LUN
5. A dialog box will appear asking: Do you wish to format the disk? Click Yes to Format the SR. 6. The SR should now be created and mapped to all the servers in the pool.
1. As shown below, a volume is created on the Dell Compellent system and named Xen6_P1_SR2. Also note the Replay of this volume created at 08:30:00 PM. Replays can be generated automatically on the Compellent system by utilizing the Replay Scheduler or manually through the Storage Center Console.
In this example a catastrophe strikes w2k8-xen6, rendering it unbootable. By using Dell Compellent Replays, the server can be quickly recovered to the time of the last snapshot.
3. Verify the VM is shut down in the XenServer console.
4. Highlight the Xen6_P1_SR2 volume hosting w2k8-xen6 and select Forget Storage Repository to remove this volume from the XenServer Pool.
Figure 44, Forget SR
5. Go to the Dell Compellent Storage Center Console and highlight the volume containing the VM. In this example this is the Xen6_P1_SR2 volume. 6. Select the Mapping button.
Figure 45, Volume Mapping
7. Note the LUN number for the mapping.
8. Highlight each of the mappings listed individually and select the Remove Mapping button.
9. Select Yes on the Are you sure screen.
10. Select Yes (Remove Now) on the Warnings screen.
11. Repeat until all mappings are removed from the volume.
12. With the volume in question selected from the Dell Compellent Storage Center console, click the Replays button. Right click on the replay to recover to and select Create Volume from Replay. In this example it is the replay dated 09/10/2011 08:30:00 pm.
Figure 47, Local Recovery
13. On the Create Volume from Replay screen, enter an appropriate name for the Replay volume and select the Create Now button.
14. On the Map Volume to Server screen, select one of the appropriate servers in the pool to map the view volume to, and then select Continue.
15. On the Advanced options screen, enter the appropriate LUN number, then select Continue. In this example LUN 2 is used because that was the original LUN number.
16. When completed, select Create Now.
17. This procedure mapped the volume to only one server. If more mappings are required, select the Mappings button and add the appropriate mappings to the volume to represent all the servers in the XenServer Pool. In the example below the servers XenServer6P1S1 and XenServer6P1S2 are both added to the new View Volume.
Figure 48, Volume Mappings
18. Return to the XenCenter console, right click on the pool and select New Storage Repository.
Figure 49, New SR
19. Select the appropriate type of storage for the volume, then select Next. In this example it is an FC connection, so Hardware HBA should be selected.
20. On the Select the LUN to reattach or create a new SR on screen select the appropriate volume, name it accordingly, then select Finish.
Figure 51, Select LUN
21. A message should appear asking if the SR should be Reattached, Formatted or canceled. Select Reattach.
22. With the Replay of the SR now attached to the Pool, the virtual disk can be mapped to the virtual machine. From XenCenter, highlight the server to be recovered, then select the Storage tab. Notice that the server doesn't have any disks associated with it.
23. Click the Attach button to associate a disk to the VM.
Figure 53, Attach Disk
24. Expand the recovered SR, select the appropriate disk and click Attach.
25. The Virtual machine can now be started in the same state it was in at the time of the last Replay. In this example the last Replay was taken at 8:30 pm.
26. If satisfied with the result, the original volume can be coalesced into the new view volume by following the remaining steps. CAUTION: Coalescing the original volume with the view volume will destroy the original volume.
27. Highlight the original volume, right click on it, and choose Delete.
28. Confirm the action by clicking Yes to move the volume to the Recycle Bin.
29. To completely remove the volume from the system, delete it from the recycle bin by expanding the recycle bin, right clicking on the volume, and choosing Delete.
Figure 57, Delete Volume from Recycle Bin
30. Confirm the delete by clicking Yes.
31. The original volume is now removed, leaving the recovery volume as the primary volume. Once the associated Replays of the view volume are expired, they will be coalesced into the volume as shown below.
Recovery Scenario
- XenServer Pool containing two servers, XenServer6P1S1 and XenServer6P1S2.
- Standalone (recovery) XenServer named XenRecovery.
- All servers are connected to the Dell Compellent Storage Center using Fibre Channel and are already zoned accordingly.
- A Replay is created on the volume Xen6_P1_SR2.
1. From the Dell Compellent Storage Center console, select the volume to recover and click the Replays Button.
2. Right click on the replay to recover to and select Create Volume from Replay. In the example below the Replay used is dated 09/11/2011 08:09:54 am.
Figure 62, Local Recovery
3. On the Create Volume from Replay screen, enter an appropriate name for the Replay volume and click the Create Now button.
4. On the Select a Server to Map screen, select one of the recovery servers to map the view volume to, then click Continue.
5. In the Map Volume to Server Advanced options, enter the appropriate LUN number for the server port. If mapping to multiple servers, set each mapping to the same LUN number. In this example LUN 12 is used. Click Create Now. When mapping to multiple servers in a Pool, use the Storage Center Cluster Server Object; this will create the mapping to all servers with the same LUN number.
6. The next step after mapping the storage to the recovery XenServer is to add the Storage Repository to the recovery server. A separate copy of XenCenter must be used, or the original Pool must first be removed from the console. XenCenter will not allow the addition of this Storage Repository to the recovery server if it sees the volume mapped elsewhere.
7. From XenCenter right click on recovery XenServer and select New Storage Repository.
Figure 65, New SR
9. On the Choose the type of new storage screen, select Hardware HBA, then click Next.
10. Select the recovered LUN, name it, and click Finish.
11. A warning message should appear stating that an existing SR was found on the selected LUN. Click Reattach.
Figure 69, Reattach SR
12. Now that the SR has been added to the recovery server the process of recovering the VMs can be started. The next step is to create a new virtual machine as a placeholder. 13. Right click on the recovery XenServer and choose New VM.
14. Select the appropriate template for the server then click Next.
Figure 71, OS Template
15. Enter a name for the server, then click Next. Typically the actual name of the VM being recovered is used.
16. Click Next on the Locate the operating system installation media screen.
18. Enter the appropriate number of vCPUs and amount of memory, then click Next.
Figure 75, Size CPU and Memory
19. On the Enter the information about the virtual disks for the new virtual machine screen, select a location to store a temporary virtual disk, then click Next. Typically it is best to store the temporary disk on an SR that isn't being used for recovery.
20. On the Add or remove virtual network interfaces screen click Add, select the appropriate network, then click Next.
Figure 77, Select Network
21. On the Virtual machine configuration is complete screen uncheck Start VM automatically and click Finish.
22. From the XenCenter console, select the newly created VM, then select the Storage tab.
23. Highlight the virtual disk temporarily attached to the VM and select Delete or Detach. Since this disk contains no information, it is OK to delete it.
Figure 79, Detach Disk
25. Once the temporary disk is deleted click the Attach button to select the original disk from the recovered Volume. Expand the recovered LUN and select the appropriate disk to attach.
Figure 81, Attach Disk
NOTE: If there are multiple disks in the Storage Repository with no name, it may take some trial and error to connect to the correct disk. Use the Storage tab to detach and reattach disks until the correct one is selected. Restoring the metadata will prevent this issue. If a virtual machine metadata backup has been taken on the volume, use the procedure outlined in the VM Metadata Backup and Recovery section to recover the names. From this point the VM can be started, exported, copied, etc. Typically the VM would be exported and imported back into the production Pool.
Dynamic Capacity
Dynamic Capacity Overview
Dell Compellent's Thin Provisioning, called Dynamic Capacity, delivers the highest storage utilization possible by eliminating allocated but unused capacity. Dynamic Capacity completely separates storage allocation from utilization, enabling users to allocate any size virtual volume upfront yet only consume actual physical capacity when data is written by the application.
Data Progression
Data Progression on XenServer
The foundation of Dell Compellent's Automated Tiered Storage patent is its unique Dynamic Block Architecture. Storage Center records and tracks specific information about blocks of data, including time written, time accessed, frequency of access, associated volume, RAID level, and more. Data Progression utilizes all of this metadata, or data about the data, to automatically migrate blocks of data to the optimum storage tier based on usage and performance, unlike traditional systems that move entire files.
Figure 82, Data Progression
Data Progression automatically classifies and migrates data to the optimum tier of storage, retaining frequently accessed data on high performance storage and storing infrequently accessed data on lower cost storage. XenServer, like other virtualization hypervisors, will contain virtual machines running Windows, Linux, or other operating systems that hold stagnant data, data that is read frequently, and heavy read/write data such as transaction logs and pagefiles. Take a virtual machine running a file server, for example. A user copies a new file to the file server. The Dell Compellent system writes the data instantly to Tier 1 RAID 10. The longer the file sits without any reads or writes, the further the blocks of data that make up the file transition down the tiering structure until they reach Tier 3 RAID 5. Typically less than 20% of data on a file server is accessed frequently. The Dell Compellent system is optimized to move this data between tiers automatically, without any assistance. In a typical storage solution, an administrator would have to manually move files from one tier to another. This equates to cost savings by storing static data on low-cost, high-capacity disks and by eliminating the need to manage data manually. Only data that is required to be on Tier 1 storage will remain on that tier.
Backing Up VM MetaData
In XenServer, exporting or importing metadata can be done from the text-based console menu. On the physical console the menu is loaded by default. To start the console menu through the host console screen in XenCenter, type xsconsole at the command line.
To export the VM metadata:
1. Select Backup, Restore and Update from the menu.
2. Select Backup Virtual Machine Metadata.
3. If prompted, log on with root credentials.
4. Select the Storage Repository where the desired VMs are stored.
5. After the metadata backup is done, verify the successful completion on the summary screen.
6. In XenCenter, on the Storage tab of the SR selected in step 4, a new VDI should be created named Pool Metadata Backup.
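The same backup can also be driven from the dom0 shell. XenServer 6 ships an xe-backup-metadata helper; the sketch below only prints the command rather than executing it, and the SR UUID is a placeholder to replace with the UUID of the SR chosen in step 4 (verify the flags against your host's xe-backup-metadata usage output).

```shell
# Hedged sketch: print the metadata backup command for review.
SR_UUID="0b984cec-0000-0000-0000-000000000000"  # placeholder SR UUID

# -c creates the backup VDI on the SR if one does not already exist
backup_cmd="xe-backup-metadata -c -u $SR_UUID"
echo "$backup_cmd"   # on a real host, run the command itself from dom0
```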
Another option available from the console menu is Schedule Virtual Machine Metadata. This option allows for automated exports of metadata on a daily, weekly, or monthly basis. By default this option is disabled.
Importing VM MetaData
A prerequisite for running the import command in a DR environment is that the Storage Repository(s) where the replicated virtual disk images are located need to be set up and reattached to a XenServer. Also make sure that the virtual networks are set up correctly by using the same names in the production and DR environments. After the SR is attached, the metadata backup can be restored. From the console menu:
1. Select Backup, Restore and Update from the menu.
2. Select Restore Virtual Machine Metadata.
3. If prompted, log on with root credentials.
4. Select the Storage Repository to restore from.
5. Select the metadata backup you want to restore.
6. Select restore only VMs on this SR or all VMs in the pool.
7. After the metadata restore is done, verify the summary screen and check for errors.
8. The VMs are now available in XenCenter and can be started at the new site.
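The restore side has a matching dom0 helper, xe-restore-metadata. As above, this sketch only prints the command; the SR UUID is a placeholder for the reattached replicated SR, and the flag should be verified against the helper's usage output on your XenServer version.

```shell
# Hedged sketch: print the metadata restore command for review.
SR_UUID="0b984cec-0000-0000-0000-000000000000"  # placeholder SR UUID

# -u selects the SR holding the Pool Metadata Backup VDI
restore_cmd="xe-restore-metadata -u $SR_UUID"
echo "$restore_cmd"  # on a real host, run the command itself from dom0
```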
Disaster Recovery
XenServer 6 provides the enterprise with functionality designed to recover data from a catastrophic failure of hardware which disables or destroys a whole pool or site. The XenServer 6 Disaster Recovery feature provides the mechanism to back up services and applications, while Dell Compellent replication technology provides a means to make this data available at a remote site. Together they provide a high availability solution for mission critical services and applications.
This functionality is extended with XenServer Virtual Appliance (vApp) technology. A vApp is a logical group of one or more related VMs which can be started as a single entity in the event of a disaster. When a vApp is started, the VMs contained within the vApp are started in a predefined order, relieving the administrator from manually starting servers. The vApp functionality is useful in a DR situation where all VMs in a vApp reside on the same Storage Repository.
NOTE: XenServer Disaster Recovery can only be enabled when using LVM over FC/iSCSI HBA, or software iSCSI. A small amount of space will be required on the storage for a new LUN which will contain the pool recovery information.
Replication Overview
XenServer Disaster Recovery takes advantage of Dell Compellent's replication technology to provide high availability. Dell Compellent replicates volumes in one direction. In a DR scenario, data is replicated from the primary site to the secondary site. By default, Dell Compellent replication is not bidirectional; therefore it is not possible to XenMotion between the source Storage Center (the primary site) and the destination Storage Center (the secondary site) unless using Dell Compellent Live Volumes for replication. The following best practice recommendations for replication and remote recovery should be considered:
- Compatible XenServer server hardware and OS is required at the DR site to map replicated volumes to in the event the main XenServer Pool becomes inoperable.
- Since replicated volumes can contain more than one virtual machine, it is recommended to sort virtual machines into specific replicated and non-replicated Storage Repositories. For example, if there are 30 virtual machines in the XenServer Pool and only eight of them need to be replicated to the DR site, a dedicated "Replicated" volume should be created to hold those eight virtual machines, or utilize a 1:1 mapping of VMs to volumes and only replicate the required VMs.
- Take advantage of the Storage Center QOS settings to prioritize the replication bandwidth of certain "mission critical" volumes. For example, two QOS definitions could be created so that the "mission critical" volume gets 80 Mb of the bandwidth and the lower priority volume gets 20 Mb.
The following steps should be taken in preparation for a disaster:
- Configure the VMs and vApps. Note how the VMs and vApps are mapped to the SRs and the SRs to volumes.
- Verify that the name_label and name_description are meaningful and will allow an administrator to recognize the SR after a disaster.
- Configure replication of the SR volume.
After the VMs and vApps have been configured, the volumes can be replicated to the secondary DR site. This process is simplified with the Dell Compellent Enterprise Manager (EM) GUI. In the example below, an SR volume that resides on a Storage Center named SC13 at the primary location is replicated to a Storage Center named SC12 at the secondary location. The Dell Compellent Enterprise Manager User Guide outlines the steps necessary to configure replication.
Figure 87 Enterprise Manager Replication
Disaster recovery can be configured once replication is set up and all data has been replicated to the secondary site. Follow the steps below to configure Disaster Recovery. NOTE: The examples below serve as a reference for the requirements of configuring XenServer DR with Dell Compellent Storage Center. For complete information on configuring and testing XenServer DR, consult the Citrix XenServer 6.0 Administrator's Guide.
1. Select the pool at the primary site that will be protected, go to the Pool menu, Disaster Recovery, and select Configure. This will open the DR configuration window.
Figure 88, Select DR Pool
2. Select the Storage Repositories that will be protected with XenServer DR and click OK to finish.
2. Next, map the new View Volume to the servers in the recovery pool.
3. After the View Volume has been created and mapped, run the Disaster Recovery wizard by selecting the recovery pool in XenCenter, going to Pool, Disaster Recovery, and selecting Disaster Recovery Wizard.
4. On the Disaster Recovery Wizard window, select Test Failover and click Next.
5. Read the message on the Before You Start screen and click Next to reach the Locate Mirrored SRs screen. From the Find Storage Repositories dropdown box, select the type of mappings used to connect the servers in the Pool to the View Volume, either HBA or software iSCSI. NOTE: Only iSCSI and FC HBAs and Software iSCSI are available for the XenServer DR feature.
Figure 91, Locate Mirrored SR
6. Select the SR to test and click Next to continue. XenServer will mount the SR and discover the VMs and vApps on the volume.
7. On the next screen, select the VMs and vApps to be tested. Also select the desired option for the power state after the recovery. Click Next to continue.
8. The Disaster Recovery Wizard will check prerequisites on the next screen. Once the failover pre-checks are finished, click the Fail Over button to continue the test. The test may take some time depending on the number of VMs involved. During this time, the VMs and vApps that were selected in the previous step will be created in the secondary Pool and started if that option was selected.
Figure 93 Failover Test Progress
9. The progress screen will show the status of the DR process.
10. Clicking Next will display the summary of the test. The VMs and vApps, as well as the replicated volume, will also be removed from the Pool.
11. Clicking Finish at the Summary of Test Failover screen will conclude the test.
There are two options to prepare the volume at the secondary Storage Center for a failover. The first option is to create and map a View Volume to the servers in the recovery pool. This is the same process as outlined in the failover test above and is the preferred method for recovering from a disaster. The second option is to remove replication and mount the replicated volume. This can be done by removing the replication in Enterprise Manager and adding mappings to the servers in the recovery pool at the secondary site. 1. To remove replication, go to Replications in EM and select the source Storage Center. 2. Right Click on the volume and select Delete. This will bring up the Delete Replication screen.
Figure 94 Delete Replication in EM
3. Be sure that Put Destination Volume in the Recycle Bin is NOT selected and click OK.
4. Alternatively, if the Storage Center at the source site is not available, the source Storage Center mappings can be removed from the Mapping tab under the destination volume's properties. This will prevent replication to the volume if the source comes back online.
Once the replication has stopped the volume at the secondary site can be mapped to the servers in the recovery pool. 1. To begin the failover process, select the recovery pool and go to Pool, Disaster Recovery Wizard and select Failover on the Welcome screen.
Figure 96, Disaster Recovery Failover
2. Click Next on the Before you start screen and use the Find Storage Repositories dropdown to locate the recovery SR that was mapped to the servers in a previous step. Repeat this process for each SR to be recovered. Click Next when finished.
3. Select the VMs and vApps that are to be recovered. Select the appropriate Power State after Recovery option and click Next.
Figure 98, Select vApps and VMs to Fail Over
4. Resolve any pre-check errors and click Fail Over to begin the failover process. This may take some time depending on the number of VMs and vApps to be recovered.
Figure 99, DR Failover Progress
5. Once the DR process has completed, a summary page displays the status of each vApp and VM. Click Finish to exit the wizard.
After the VMs and vApps have been configured, the volumes can be replicated to the secondary DR site. This process is simplified with Dell Compellent Enterprise Manager (EM). In the example below, the SR volume that resides on a Storage Center named SC13 at the primary location is replicated to a Storage Center named SC12 at the secondary location. The Dell Compellent Enterprise Manager User Guide outlines the steps necessary to configure replication between the Storage Centers.
Figure 101 Enterprise Manager Replication
Next, a disaster is simulated by removing the replication jobs between the primary and secondary Storage Center in Enterprise Manager.
1. Replication can be removed in Enterprise Manager by going to Replications and selecting the source Storage Center. This will list the replications from that Storage Center.
2. Right click on the volume and select Delete. This will bring up the Delete Replication screen. Be sure that Put Destination Volume in the Recycle Bin is NOT selected and click OK.
NOTE: A disaster test could have been done by simply creating a View Volume from one of the Replays on the DR Storage Center system. This process would allow the testing of a DR plan to validate data at any time without disrupting replication.
3. Next, the servers at the secondary site are mapped to the volume. In this example, servers XenServer6P2S1 and XenServer6P2S2 are mapped to the volume Repl of Xen6_P1_SR1.
Figure 103, Server Mapping to the Recovery Volume
4. After the volume is mapped to the servers in pool2 at the secondary site it can be attached using the New Storage Wizard in XenCenter. The figure below shows the storage attached to the secondary pool. The VM files are on the storage but not yet available in the pool.
5. To add the VMs to the recovery pool, the metadata will need to be restored using the XenServer console's Backup, Restore and Update menu. It is important that the VM networks are named exactly the same in order for this to succeed.
Figure 105, VM Metadata Restored
6. After the metadata is restored, the VMs will be available at the secondary site and can be started on the remote DR XenServer.
After the recovery to the secondary site it may be necessary to fail back to the primary site. The failback process is the same as outlined above, except for modifying the primary and secondary site to reflect the VMs source and destination location.
Live Volume is a software-based solution integrated into the Dell Compellent Storage Center controllers. Live Volume is designed to operate in a production environment, allowing both Storage Centers to remain operational during volume migrations. Live Volume increases operational efficiency, reduces planned outages, and enables a site to avoid disruption during anticipated disasters. Live Volume provides these powerful new options:
- Storage follows the application in virtualized server environments. Live Volume automatically migrates data as virtual applications are moved.
- Zero downtime maintenance for planned outages. Live Volume enables all data to be moved non-disruptively between Storage Centers, enabling a full planned site shutdown without downtime.
- On-demand load balancing. Live Volume enables data to be relocated as desired to distribute workload between Storage Centers.
- Stretch Microsoft, VMware, and XenServer volumes between geographically dispersed locations. Live Volume allows servers to see the same disk signature on the volume between datacenters, thereby allowing the volume to be clustered.
Live Volume is designed to fit into existing physical and virtual environments without disruption and without requiring extra hardware or changes to configurations or workflow. Physical and virtual servers see a consistent, unchanging virtual volume. All volume mapping is consistent and transparent before, during, and after migration. Live Volume can be run automatically or manually and is fully integrated into the Storage Center software environment. Live Volume operates asynchronously and is designed for planned migrations where both Storage Centers are simultaneously available. A Live Volume can be created between two Dell Compellent Storage Centers residing in the same datacenter or between two well-connected datacenters. Using Dell Compellent Enterprise Manager, a Live Volume can be created from a new volume, an existing volume, or an existing replication. For more information on creating Live Volume, see the Compellent Enterprise Manager User Guide. For more information on the Best Practices for Live Volume please see the Dell Compellent Storage Center Best Practices Document for Live Volume on the Dell Compellent Knowledge Center Portal at http://kc.compelent.com .
Appendix 1 Troubleshooting
XenServer Pool FC Mapping Issue
Occasionally, when connecting an FC volume to a XenServer Pool, the mapping is only made on the Master node in the pool and is not connected on the additional nodes. This typically takes place when attempting to attach the volume right after its creation. In most instances, waiting approximately one hour before mapping the volume will prevent this issue from occurring. The following section details the steps necessary to fix the missing connection issue when mapping a new SR to a XenServer pool without rebooting the hosts or moving the Master. Notice in the figure below that the SR mapped to Pool1 is mapped correctly to the host server XenServer6P1S1 but not to the server XenServer6P1S2.
Figure 108, SR Mapping Broken
To resolve this issue, log on to the console of one of the XenServers in the pool and go to the local command shell. This can be done either from the console or from an SSH client such as PuTTY. At the command prompt, type xe host-list to obtain the list of all the servers in the pool and their associated UUIDs.
[root@XenServer6P1S1 ~]# xe host-list
uuid ( RO)           : 5cd5d2ed-b462-4eba-9761-d874b8e3e564
  name-label ( RW): XenServer6P1S1.techsol.local
  name-description ( RO): Default install of XenServer

uuid ( RO)           : be925e21-a95e-438d-8155-b98d09c26351
  name-label ( RW): XenServer6P1S2.techsol.local
  name-description ( RO): Default install of XenServer
[root@XenServer6P1S1 ~]#
1. Run the sr-probe command for each of the XenServer hosts not mapping the volume correctly. Type the following to probe the host: xe sr-probe host-uuid=<uuid of server> type=lvmohba
2. Once the sr-probe command has completed for all the hosts, the SR can be repaired by right clicking the SR in the XenCenter console and selecting Repair Storage Repository.
3. Click the Repair button.
4. When the repair is complete, all nodes should report back as Connected.
Figure 110, Repaired SR
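When several member hosts miss the mapping, the probe from step 1 can be wrapped in a loop. The sketch below just prints one xe sr-probe command per host so the list can be reviewed before pasting it into dom0; the two UUIDs are the ones from the xe host-list output above, and build_probe_cmd is an illustrative helper, not a XenServer tool.

```shell
# Print an sr-probe command for each pool member that missed the LUN.
build_probe_cmd() {
  # $1 = host UUID; lvmohba matches the FC/HBA SR type used in this section
  echo "xe sr-probe host-uuid=$1 type=lvmohba"
}

for uuid in 5cd5d2ed-b462-4eba-9761-d874b8e3e564 \
            be925e21-a95e-438d-8155-b98d09c26351; do
  build_probe_cmd "$uuid"
done
```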
The following errors may be reported by the software iSCSI initiator:
- Cannot perform discovery. Initiatorname required.
- iscsid is not running. Could not start up automatically using the startup command.
Run the iscsiadm -m node --login command to force the iSCSI software initiator to connect both paths.
Figure 113, Multipath Active
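The login and the follow-up checks can be collected into one reviewable sequence. The sketch below prints the commands in order rather than running them (they require the open-iscsi and multipath tools present in XenServer dom0); run them one by one on the host and confirm that both paths appear.

```shell
# Print the iSCSI login and path-verification commands in order.
multipath_login_steps() {
  echo "iscsiadm -m node --login"  # log in to every discovered target
  echo "iscsiadm -m session"       # expect one session per path
  echo "multipath -ll"             # both paths should appear in the map
}
multipath_login_steps
```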
Following the steps outlined in Citrix Document CTX122852 may resolve this issue.