
Citrix Provisioning Server

Design Considerations

Citrix Consulting

Provisioning Server High Availability Considerations

Overview
The purpose of this document is to give the target audience an overview about the critical components of a Citrix
Provisioning Server infrastructure with regards to a high availability implementation. These considerations focus on the
following areas:
• Virtual Disk (vDisk) Storage
• Write Cache Placement
• SQL Database
• TFTP Service
• DHCP Service

Target Audience
This document was written for information technology (IT) infrastructure specialists who are responsible for planning and
designing Provisioning Server infrastructure. These specialists include consultants, internal IT architects, and others who
are concerned with design decisions related to dynamic server provisioning.

Acknowledgement
Citrix Consulting would like to thank the customer DSV-Gruppe Deutscher Sparkassenverlag (www.dsv-gruppe.de) for
providing the test environment and test cases in order to validate typical customer challenges, which were essential for
the creation of this white paper.

vDisk Storage Considerations


The following chapter outlines different options for configuring the Provisioning Server vDisk store.

Local vDisks
Using the local hard disk subsystem of the Provisioning Servers to store the vDisks provides the easiest way of
implementing vDisk high availability without additional cost.
Note: When configuring a vDisk store pointing to a local directory on multiple servers (for example, C:\vDisks), only one item is
shown per vDisk, which allows some degree of central vDisk management.
Pros for this solution are:
• No additional cost
• Easy to implement and maintain
Cons for this solution are:
• vDisks must be manually synchronized between the Provisioning Servers
• I/O performance depends on the capabilities of the local hard drive subsystem (usually comparable to Network Attached
Storage (NAS))
Recommendations:
• Network Interface Card (NIC) teaming should be used to increase the reliability and the I/O throughput between the
Provisioning Servers and the Target Devices

Windows Network Share


Using a single Windows network share (Common Internet File System (CIFS)) to store the vDisks is a very easy way of
implementing a central vDisk store for a Provisioning Server implementation at minimal cost.
Pros for this solution are:
• Minimal cost to purchase, implement, and maintain
• Easy to implement and maintain
Cons for this solution are:
• No redundancy for file server outages (all Provisioning Server Target Devices stop responding)
• I/O performance not as good as NAS or an iSCSI or Fibre Channel (FC) Storage Area Network (SAN)
• Limited scalability to support an increasing number of Target Devices
• I/O for vDisk input (loading from the share) and vDisk output (delivering to the Target Devices) is handled by the
same network link. This affects the overall performance of the solution.
Recommendations:
• Network Interface Card (NIC) teaming should be used to increase the reliability and the I/O throughput between the
Provisioning Servers, the file server, and the Target Devices
• Where feasible, dedicated NICs should be used for loading the vDisks and for delivering the vDisks to the Target
Devices
The following diagram outlines a basic Provisioning Server + Windows File Share (CIFS) architecture:

Network Attached Storage (NAS)


Using a single NAS to store the vDisks is an easy way to implement a central vDisk store for a Provisioning Server
implementation at moderate cost.
Pros for this solution are:
• Moderate cost to purchase, implement, and maintain
• Easy to implement and maintain
• Enhanced reliability
• Greater degree of scalability to support an increasing number of Target Devices
o The combination of a NAS head/aggregator and additional storage arrays can be used to effectively load
balance the increase in I/O demands
Cons for this solution are:
• More expensive than a Windows network share
• No redundancy for NAS outages (all Provisioning Server Target Devices stop responding), although NAS devices
themselves usually provide high reliability
• Requires software to manage the storage array
• I/O performance not as good as an iSCSI or Fibre Channel SAN
• RAID arrays must be configured on the storage array and assigned to each Provisioning Server
• I/O for vDisk input (loading from the share) and vDisk output (delivering to the Target Devices) is handled by the
same network link. This affects the overall performance of the solution.
Recommendations:
• Network Interface Card (NIC) teaming should be used to increase the reliability and the I/O throughput between the
Provisioning Servers, the file server, and the Target Devices
• Where feasible, dedicated NICs should be used for loading the vDisks and for delivering the vDisks to the Target
Devices
The following diagram outlines a basic Provisioning Server + NAS architecture:

Storage Area Network (SAN) – iSCSI


Using a Storage Area Network (SAN) to store the vDisks and accessing it by means of the iSCSI protocol provides a
reliable way (with good performance) to implement a central vDisk store for a Provisioning Server implementation at
higher cost.
Pros for this solution are:
• Moderate cost to purchase, implement and maintain
• Highest levels of reliability through built-in redundant components and features
• High degree of scalability to support an increasing number of Target Devices
o Multiple partitions/LUNs can be created on multiple RAID arrays and combined into a single stripe-set,
utilizing the additional disk heads to support the increase in I/O demands
Cons for this solution are:
• More expensive than a Windows network share or NAS
• Requires software to manage the storage array.
• More software required to implement an iSCSI SAN:
o iSCSI Initiator and MultiPath software must be installed and configured on each Provisioning Server.
o iSCSI Target software must be installed and configured on the file server(s).
• A cluster or parallel file system (for example, Sanbolic MelioFS) is required to ensure the integrity of the
partition/Logical Unit Number (LUN) containing the vDisks.
• I/O performance not as good as a Fibre Channel SAN
• I/O for vDisk input (loading from the share) and vDisk output (delivering to the Target Devices) is handled by the
same network link. This affects the overall performance of the solution.
Recommendations:
• Network Interface Card (NIC) teaming should be used to increase the reliability and the I/O throughput between the
Provisioning Servers, the file server, and the Target Devices
• Dedicated NIC teams should be used for loading the vDisks and for delivering the vDisks to the Target Devices

Storage Area Network (SAN) – Fibre Channel


Using a Storage Area Network (SAN) to store the vDisks and access them by means of Fibre Channel provides a very
reliable way, with the highest level of performance, to implement a central vDisk store for a Provisioning Server
implementation at highest cost.
Pros for this solution are:
• Highest levels of performance
• Highest levels of reliability through built-in redundant components and features
• High degree of scalability to support an increasing number of Target Devices
o Multiple partitions/LUNs can be created on multiple RAID arrays and combined into a single stripe-set,
utilizing the additional disk heads to support the increase in I/O demands
Cons for this solution are:
• Most expensive to purchase, implement and maintain
• Additional Hardware (HBAs) required for every Provisioning Server
• Requires software to manage the storage array.
• A cluster or parallel file system (for example, Sanbolic MelioFS) is required to ensure the integrity of the partition/LUN
containing the vDisks.
Recommendations:
• Network Interface Card (NIC) teaming should be used to increase the reliability and the I/O throughput between the
Provisioning Servers and the Target Devices

Decision Matrix
The following table summarizes all of the discussed solutions and their benefit areas for Provisioning Server
implementations (1 = Least Benefit, 5 = Most Benefit).

                     Cost   Ease of Use   Performance   Reliability   Scalability

Local vDisks          5          4             3             3             2
Windows File Share    4          5             2             1             1
NAS                   3          3             3             3             3
iSCSI SAN             2          2             4             5             5
Fibre Channel SAN     1          1             5             5             5

Write Cache Placement


The size of the cache file for each provisioned workstation depends on several factors, including types of applications
used, user workflow, and restart frequency. A general estimate of the file cache size for a provisioned workstation running
only text-based applications (such as Microsoft Word and Outlook) that is restarted daily is about 300-500 megabytes
(MB). If workstations are restarted less often than this, or graphics-intensive applications (such as Microsoft PowerPoint,
Visual Studio, or CAD/CAM type applications) are used, cache file sizes can grow much larger. This estimate is based on
past experience and may not accurately reflect each environment. Citrix recommends each organization perform an
analysis to determine the expected cache file size in their environment.
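The estimate above can be turned into a rough back-of-the-envelope storage calculation. This is only a sketch: the workstation count, per-cache figure, and safety factor below are illustrative assumptions, not values prescribed by this paper.

```python
def cache_storage_gb(workstations, mb_per_cache=500, safety_factor=1.5):
    """Suggested total write-cache storage in GB for a fleet, using the
    upper end of the 300-500 MB per-workstation estimate plus headroom."""
    total_mb = workstations * mb_per_cache * safety_factor
    return total_mb / 1024

# For example, 1000 provisioned workstations restarted daily:
print(round(cache_storage_gb(1000), 1))  # about 732.4 GB including headroom
```

The safety factor absorbs workstations that miss a restart or run heavier workloads; environments with graphics-intensive users should size from their own pilot measurements instead.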

Cache File Location


There are several options for storing the cache file for provisioned systems. Each of these options has benefits and
limitations. Thus, it is important to evaluate the specific requirements of the organization before making the decision for
the cache file location. The benefits and limitations of each option are described below.
• Client-side cache
o This option requires a local disk on each physical system.
o This option generally provides the fastest performance, as the client is using its own physical resources to
store the cache file on the local hard disk.

o This option provides added resiliency in the event of a failure because only a single client will be affected
if the local disk associated with the system runs out of space.
• Server-side cache
o The cache file can also be stored within a shared enterprise storage solution (SAN/NAS) accessed
through the Provisioning Server. In this case, the Provisioning Server would act as a proxy between the
clients and the storage solution.
o Proxying the cache traffic through the Provisioning Server in this manner impacts the network IO and
reduces the scalability of each Provisioning Server. This will impact the performance of the desktops
being supported by the Provisioning Server as well as reduce the number of active clients that can be
supported by a single server.
o In this scenario, if the shared storage location fills up and no disk space remains for the cache, all virtual
machines may experience performance issues.
Note: For enterprise deployments, Citrix does not recommend storing the cache file locally on the
Provisioning Server.
• Client-side RAM cache
o RAM is faster than hard disk, so better performance will be seen if RAM is used.
o If RAM is used for the cache file, the file is limited in size by the amount of physical RAM available.
o If the cache file is expected to grow larger than the amount of available physical RAM in the device, the
device’s hard drive should be used instead. If the target devices do not contain hard drives (diskless),
cache files can be stored on shared enterprise storage, proxied through the Provisioning Server as
described above.
o If the target devices have hard drives, there is generally more space available for the cache file on the
hard drive than in memory.
Note: Citrix recommends leveraging the target device RAM or hard disk to store the cache files.

Determining Expected Cache File Size


Citrix recommends using a pilot/Proof of Concept (POC) environment to determine the expected size of the cache files.
To do so, within the pilot/POC environment, configure the write cache to be located on the Provisioning Server and have
several different types of end users (graphical application users, text-based task workers, and so on) work on their
provisioned desktops. After several full days of heavy use, the administrators can look at the size of each of the cache
files on the Provisioning Server. This will give a rough estimate of how large the cache files can grow in the production
environment. In the production environment, gracefully restarting the desktops every day will help reduce the size of the
cache file.

Restart Provisioned Workstations Frequently


The cache file (write cache) for provisioned workstations can grow quite large, and will continue to grow until the
workstation is restarted. Depending on the applications used and the frequency of restarts, the write cache can grow
large enough to impact an organization’s storage solution. The cache file is cleared upon workstation restart; thus, the
more frequently the workstation is restarted, the less of an impact the cache files will have. Citrix recommends restarting
workstations daily if possible. If daily restarts are not possible, Citrix recommends restarting workstations at least once
per week to reduce the storage impact of cache files.
Note: A graceful shutdown or restart is required to clear the cache file. Turning off the machine without a graceful
shutdown does not clear out the cache file. XenDesktop allows configuration of logoff behavior so this process can be
automated as well.

SQL Database
With the release of Provisioning Server 5.0, the configuration database has been changed from a JET database to
the more robust Microsoft SQL Server. All editions of Microsoft SQL Server 2005 (including SQL Express, which is included
with the Provisioning Server distribution) are supported, as stated in Citrix Knowledge Base article CTX114501.
Unlike in XenApp implementations, the Provisioning Server configuration database is a highly critical component that must
be available at all times to serve Target Devices. In case of a database outage, existing sessions will continue,
but new sessions cannot be established. Therefore, the SQL database must be configured in a fully redundant manner.
Further information about how to configure a highly available Microsoft SQL Server environment can be found on Microsoft
TechNet: http://msdn.microsoft.com/en-us/library/ms190202.aspx

TFTP Service High Availability


Within Provisioning Server implementations, the Trivial File Transfer Protocol (TFTP) service is used to deliver the
Provisioning Server bootstrap. In addition to various configuration settings, this file contains a list of the available
Provisioning Servers within the environment.
All information about the location and name of the boot file is delivered by the DHCP server upon request by the Target
Devices (Dynamic Host Configuration Protocol (DHCP) options 66 [Boot Server Host Name] and 67 [Bootfile Name]).
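On the wire, options 66 and 67 are simple type-length-value fields as defined in RFC 2132. The sketch below encodes them to make the mechanism concrete; the FQDN is the example used later in this paper, and the ARDBP32.BIN bootfile name is an assumption that should be verified against the actual environment.

```python
def dhcp_option(code, value):
    """Encode one DHCP option as code | length | value (RFC 2132 TLV)."""
    data = value.encode("ascii")
    if len(data) > 255:
        raise ValueError("DHCP option value exceeds 255 bytes")
    return bytes([code, len(data)]) + data

boot_server = dhcp_option(66, "pvstftp.mycorp.local")  # Boot Server Host Name
boot_file = dhcp_option(67, "ARDBP32.BIN")             # Bootfile Name (assumed)
```

Because each option is a single TLV, only one value fits per option, which is exactly the "single entry" limitation discussed below.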

Note: Alternatives to using options 66 and 67 in DHCP are:


• PXE Service: This service is a default component of all Provisioning Servers and uses a
broadcasting technology similar to DHCP to deliver information about the bootstrap. For
serving Target Devices in different subnets, DHCP Helper or Proxy DHCP entries
(RFC1542, RFC3046) are required.
• Boot Device Management Utility: Can be leveraged to create an ISO file, which
contains the bootstrap and various other configuration settings.

As the DHCP options required for this task are "single entry" options, meaning that only one value per option is
allowed, only a limited number of configurations for providing high availability are possible.
• DNS Round Robin: Instead of an IP address, a fully qualified domain name (FQDN) (for example,
pvstftp.mycorp.local) can be configured within DHCP option 66. This FQDN can be configured for DNS Round
Robin, which means that it resolves to a list of multiple IP addresses instead of a single one. In this scenario, all
systems corresponding to the configured IPs are used in rotation.
The downside is that DNS does not check whether the systems are operational. So if one system in a two-system
setup experiences an outage, 50 percent of the booting Provisioning Server Target Devices will not be served. To
minimize the impact of an outage, a very short DNS time to live (TTL) can be configured for the FQDN.
• Hardware-based load balancing (NetScaler): When using hardware-based load balancers (such as NetScaler),
all load balanced services can be checked for availability and functionality at regular intervals. If one of the servers
or services experiences an outage, it is automatically removed from the load balancing list. So instead of
configuring DHCP option 66 with an FQDN or referring directly to a TFTP server, a NetScaler vServer could be
created to provide full redundancy for this service, and its IP address would be used for this DHCP option.
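The DNS Round Robin behaviour described above can be observed with a few lines of Python: resolving the FQDN returns every published address, and each booting target effectively picks from that rotating list. This is only an illustration; pvstftp.mycorp.local is the example name used above, not a real host.

```python
import socket

def tftp_addresses(fqdn, port=69):
    """Return all IPv4 addresses published for the FQDN (TFTP uses UDP/69).
    With DNS Round Robin this yields one entry per configured server."""
    infos = socket.getaddrinfo(fqdn, port, socket.AF_INET, socket.SOCK_DGRAM)
    return sorted({info[4][0] for info in infos})

# e.g. tftp_addresses("pvstftp.mycorp.local") would return one address per
# TFTP server in the rotation, say two entries in a two-server setup.
```

Note that the resolver returns dead servers just as readily as live ones, which is why the TTL tuning above, or a health-checking load balancer, is needed.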

Load Balancing Provisioning Server TFTP by means of NetScaler


Load balancing a TFTP service by means of Citrix NetScaler requires certain configuration steps outlined within the
knowledge base article CTX116337. A very important part of this configuration is setting the IP address of the default
gateway of the TFTP server to the IP address of the NetScaler MIP or SNIP. This means that all communication between
the TFTP server and the TFTP clients will be handled by the NetScaler.
Within a standard Provisioning Server environment, the TFTP service is hosted on the Provisioning Server itself, so
the vDisk deployment traffic would be handled by the NetScalers as well. To prevent this additional network hop, reduce
the network load on the NetScaler, and minimize the complexity of the Provisioning Server environment, a standalone
TFTP server (such as the free TFTP server from SolarWinds) can be used.
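The bootstrap download itself is an ordinary TFTP read request, which any standalone TFTP server can answer. As a sketch of the protocol (RFC 1350), the following builds the initial RRQ packet a target device sends; the ARDBP32.BIN bootfile name is illustrative and should be checked against the actual environment.

```python
import struct

def tftp_rrq(filename, mode="octet"):
    """Build a TFTP Read Request (RFC 1350): opcode 1, followed by the
    NUL-terminated filename and transfer mode."""
    return (struct.pack("!H", 1)
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

# A target device would send this UDP datagram to port 69 of the TFTP
# server (or of the NetScaler vServer fronting it) to fetch the bootstrap.
packet = tftp_rrq("ARDBP32.BIN")
```

Because the exchange is this simple and stateless, fronting it with a load balancer or moving it to a dedicated TFTP host is straightforward.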
The Provisioning Server and TFTP service architecture of such a configuration is outlined below:

DHCP High Availability


Within a typical Provisioning Server environment, DHCP plays a very central role as it provides the Target Devices with
the IP address configuration and information about the Provisioning Server bootstrap (location and file name). Therefore, it
is necessary to implement DHCP in a reliable way. This section focuses on Microsoft-based DHCP services,
but the main fault tolerance concepts remain the same even if the service is hosted on other operating systems.
Typical concepts for a fully redundant DHCP implementation are:
• DHCP Split Scopes
• DHCP Cluster
• DHCP Stand-by Server
Further information about this topic can be gathered from the Microsoft TechNet article “Enterprise Design for DHCP”
(http://www.microsoft.com/technet/solutionaccelerators/wssra/raguide/NetworkServices/ignsbp_3.mspx).
Recommended for further information: Microsoft DHCP Best Practices

Note: Using dynamic DHCP for the Target Devices might cause XenDesktop VDA registration and XenApp XML
communication issues due to inconsistent DNS (FQDN) resolution. This can be caused by the TTL settings of DNS records.

Notice

The information in this publication is subject to change without notice.

THIS PUBLICATION IS PROVIDED “AS IS” WITHOUT WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-
INFRINGEMENT. CITRIX SYSTEMS, INC. (“CITRIX”), SHALL NOT BE LIABLE FOR TECHNICAL OR EDITORIAL
ERRORS OR OMISSIONS CONTAINED HEREIN, NOR FOR DIRECT, INCIDENTAL, CONSEQUENTIAL OR ANY
OTHER DAMAGES RESULTING FROM THE FURNISHING, PERFORMANCE, OR USE OF THIS PUBLICATION,
EVEN IF CITRIX HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES IN ADVANCE.

This publication contains information protected by copyright. Except for internal distribution, no part of this publication
may be photocopied or reproduced in any form without prior written consent from Citrix.

The exclusive warranty for Citrix products, if any, is stated in the product documentation accompanying such products.
Citrix does not warrant products other than its own.

Product names mentioned herein may be trademarks and/or registered trademarks of their respective companies.

Copyright © 2008 Citrix Systems, Inc., 851 West Cypress Creek Road, Ft. Lauderdale, Florida 33309-2009
U.S.A. All rights reserved.

Version History
Author(s) Version Change Log Date
Thomas Berger 1.0 Initial documentation December 1, 2008
Principal Consultant
Consulting Services Central Europe

Tarkan Koçoğlu
Senior Architect
Worldwide Field Readiness & Productivity

Bob Hesseltine
Principal Consultant
Citrix Consulting Americas

851 West Cypress Creek Road Fort Lauderdale, FL 33309 954-267-3000 http://www.citrix.com

Copyright © 2008 Citrix Systems, Inc. All rights reserved. Citrix, the Citrix logo, Citrix ICA, Citrix MetaFrame, and other Citrix product names are
trademarks of Citrix Systems, Inc. All other product names, company names, marks, logos, and symbols are trademarks of their respective owners.
