
HP Converged Infrastructure Reference Architecture for VMware View

Building a cost effective, scalable, enterprise VDI solution


Technical white paper

Table of contents
Introduction
  How to accelerate your VMware View adoption
Software used for this document
Other architectures
  The changing landscape
The platform
Scaling and sizing VDI
  Overview
  Server sizing
  Storage sizing
  Other factors affecting sizing
  POD scaling by user type
Building the platform
  Creating the first POD
  Installing the first hypervisor host and CMC
  Configuring the SAN using the Centralized Management Console VM
  Configuring hypervisor hosts
  Configuring storage for hypervisor hosts
  Creating the initial volumes for management
  Volume layout
  Set up management VMs
  Creating the clusters
  Creating the View environment
  Scaling to other enclosures
  Scaling beyond the enclosure
Conclusions
Appendix A
  Notes and resources
Appendix B Virtual Connect technology
  Virtual Connect Flex-10
Appendix C Sample enclosure configuration script
Appendix D HP Thin Clients
  HP Device Manager
Appendix E AppSense Environment Manager
Appendix F Volume names
Appendix G Networking
Appendix H Bill of materials
For more information

Introduction
How to accelerate your VMware View adoption
Virtual Desktop Infrastructure (VDI), or Hosted Virtual Desktops (HVD), as part of an overall client virtualization strategy offers tremendous benefits to enterprises. Among these benefits are enhanced data control and security, lower overall power consumption, assurance of regulatory compliance, enhanced client management capabilities, lower support costs, simplified deployment and patch management, and extended device lifecycles. The growth in VDI adoption is a testament to the reality of many of these benefits.

There have been adoption challenges along the way. The need to bring separate IT suppliers, end users and IT teams together to deploy and manage VDI has led to extended evaluation cycles and sub-optimal configurations, as some groups have brought their favorite components to the table rather than the optimal components for a solution. Some organizations are structured in a way that makes justifying VDI difficult because the cost savings are not reflected in their individual metrics. Planning and sizing suggestions have not always translated into practical and realistic numbers for implementation, making it difficult to hit return-on-investment (ROI) and total cost of ownership (TCO) targets. Perhaps the biggest stigma has been a per-user cost that, at times, is dramatically higher than that of a traditional desktop.

Early on, implementing VDI meant starting with 100 users and working toward 1,000. Large scale implementations have become reality in the past couple of years, but the approach to scaling and sizing remains the same. This has made it difficult for customers to maintain cost effectiveness at both small and large scales. The platform HP presents in this document is designed from the outset to scale to the thousands of users, not to the hundreds. With the release of the HP Converged Infrastructure Enterprise Reference Architecture for VMware View, HP is addressing these issues and clearing barriers to adoption.

Document purpose
This paper introduces the reference architecture and reintroduces the POD scaling concept to a technical audience. It seeks to explain the design and merits of the platform as well as how it is implemented. The paper is intended to be used in conjunction with VMware's reference architectures for VMware View and associated documents (http://www.vmware.com/products/view/resources.html).

Abbreviations and naming conventions
Table 1 is a list of abbreviations and names used throughout this document and their intended meaning.
Table 1. Abbreviations and names used in this document

Convention        Definition
Microsoft RDP     Microsoft Remote Desktop Protocol
PCoIP             Teradici PC over IP protocol
VDI               Virtual Desktop Infrastructure
OA                Onboard Administrator
LUN               Logical Unit Number
IOPS              Input/Output Operations per Second
POD               The scaling unit of this reference architecture

Intended audience
The audience for this document is intended to be technical. Examples of targeted readers include IT architects, HP and HP Partner services personnel, implementation specialists and IT planners.

Software used for this document


This document references numerous software components. The acceptable version of each OS and the versions of software used for testing are listed in this section.

Hypervisor hosts

Component    Software description
OS           VMware ESX 4.0.0, Build 208167

Management server operating systems


Component                                       Software description
vCenter server                                  Microsoft Windows Server 2003 R2, x32
View Manager server                             Microsoft Windows Server 2003 R2, x32
HP Systems Insight Manager (SIM) server         Microsoft Windows Server 2003 R2, x32
HP P4000 Central Management Console server      Microsoft Windows Server 2003 R2, x32
AppSense Environment Manager server             Microsoft Windows Server 2003 R2, x32
SQL Server server                               Microsoft Windows Server 2003 R2, x32

Management software
Component                                                     Software description
VMware vCenter                                                VMware vCenter Server 4.0.0, Build 208111
HP Systems Insight Manager                                    HP Systems Insight Manager 6.0
HP StorageWorks P4000 SAN/iQ Centralized Management Console   HP StorageWorks P4000 SAN/iQ Centralized Management Console (CMC) 8.5
Microsoft SQL Server                                          Microsoft SQL Server 2005 Enterprise Edition, Service Pack 3

Use native NIC drivers delivered with the installation media.

View 4 components
Component                 Software description
VMware View Manager       VMware View Connection Server 4.0.1
VMware View Composer      VMware View Composer 2.0.1
VMware ThinApp            VMware ThinApp Enterprise 4.5

Firmware revisions
Component                                             Version
HP Onboard Administrator                              3.0
HP Virtual Connect                                    3.01
HP ProLiant Server System ROM                         Varies by server
Shared SAS Switch                                     2.2.4
HP Integrated Lights-Out 2 (iLO 2)                    1.82
Broadcom NetXtreme II Ethernet Network Controllers    Latest supported version
HP Smart Array P700m                                  7.08
HP StorageWorks MDS600                                2.60

Virtual machines
Component              Software description
Operating System       Windows XP, latest patches as of test date
Connection Protocol    Microsoft RDP 6

Other architectures
The changing landscape
While the platform in this paper addresses large scale enterprise deployments, HP is well equipped to handle a variety of deployment scenarios. These range from 12-user micro branches, to 250-plus-user departmental deployments, to 800-plus-user small and medium sized business (SMB) and large branch deployments, and all points in between. Figure 1 shows HP's applied architectures for a variety of use cases.

Figure 1. HP's applied architectures for VDI

Micro branch
HP is enabling the micro branch for VDI. Very small branches have traditionally had issues implementing VDI, usually as a result of network pipes that were too small, a lack of local management resources, or both. HP's Micro-Branch Architecture POD will alleviate issues with micro branch deployment.

SMB/branch
VDI is not just for the enterprise. Businesses of all sizes, as well as branch locations for large IT departments, can reap the benefits of centralized management and data control as well as the power efficiency that VDI brings. HP's Branch Architecture POD will bring cost effective virtual desktops to a broader range of use cases.

Departmental
Even the largest IT shops have a need for segmented, simple to scale resources. HP is well positioned to handle departmental deployments of thousands of users with PODs built on HP BladeSystem and the HP StorageWorks P4500 G2 SAN. Simple to scale while remaining cost effective, these PODs are a straightforward solution to VDI in the department space.

Enterprise
Outlined in this document, HP's approach to enterprise VDI is to make it easy to scale by large numbers of users while keeping management simple and deployment even simpler.

Regardless of your VDI needs, HP offers cost effective solutions across use cases that embrace a centralized management and control paradigm, helping maximize return on investment by minimizing complexity. HP offers more than just products for VDI. Engaging HP Services is a great way to ensure that you have the broadest possible range of experience and skills to help with your VDI implementation.

The platform
Crossing boundaries
The HP Converged Infrastructure Enterprise Reference Architecture for VMware View is a reflection of the business drivers that led to its creation. Customers repeatedly told HP that VDI as a technology crossed numerous boundaries inside of their IT organizations like few other technologies. To deploy and manage VDI took the cooperation of the storage, desktop, virtualization, server/infrastructure and network teams as well as internal support teams. HP as a technology provider is able to cross all of these boundaries. Very early into the VDI wave HP coordinated efforts across all of these product categories, and the result is a strong product and solution set around VDI. As requests grew for larger and larger VDI implementations it became evident that HP could use its breadth and depth of products and services to become the premier provider of VDI solutions.

The combining of storage, servers, infrastructure and virtualization
From the outset, the POD approach to VDI was meant to help solve the issues of multiple points of contact in the VDI space. While the reference platform is built to easily support traditionally segmented IT departments, it also serves as a catalyst to enable new paradigms of management in the client virtualization space. Figure 2 highlights the platform pieces and the resulting converged infrastructure.

Figure 2. The HP Reference Platform for VDI Enterprise POD

HP BladeSystem c7000 Enclosures
HP Virtual Connect Flex-10 Interconnect Modules
HP ProLiant BL490c G6 Servers
HP StorageWorks P4800 BladeSystem SAN built on HP P4000sb storage blades
HP 3Gb Shared SAS Switches
HP StorageWorks 600 Modular Disk Systems (MDS600s)

HP ProLiant BL490c G6
HP offers a broad range of servers in form factors designed to fit with all management environments. HP selected the ProLiant BL490c G6 server with Intel Xeon X5500 series processors for this reference platform. The BL490c provides large memory capacity in a small form factor with exceptionally high performance and power/space density, all desirable characteristics for VDI deployments. The half-height form factor provides the added benefit of maximizing cable reduction and minimizing infrastructure costs while maintaining high levels of bandwidth on a per system and per virtual machine (VM) basis. The HP ProLiant BL490c G6 servers recommended are outfitted as in Table 2. Sizing recommendations in this document are based on this configuration.
Table 2. HP ProLiant BL490c G6 configuration.

Item          Configuration
CPU           2 x Intel Xeon Processor X5570 (2.93 GHz, 8MB L3 Cache)
Memory        72 GB PC3-10600R (18 x 4GB memory DIMMs)
Local Disk    1 x HP 64GB 1.5G SATA NHP SFF SP ENT SSD
NICs          Embedded NC532i Dual Port Flex-10 10GbE Multifunction
Expansion     Mezzanine 1 open for use; Mezzanine 2 is reserved and should not be used

The ProLiant BL490c G6 is upgradeable to include a second SSD as well as memory expansion up to 144GB. Hypervisor hosts for this architecture are customer selectable. You may select any HP BladeSystem ProLiant server blades that appear on the VMware HCL.

HP StorageWorks P4800 BladeSystem SAN
The HP StorageWorks P4800 BladeSystem SAN takes the convergence of blades and storage to new levels and speaks to the unique capabilities HP has to develop IT platforms. As the foundation for each enterprise POD, the P4800 is not only robust and performant, but also simple to deploy and maintain. It also drives new paradigms for VDI in terms of data storage, security and ownership. Figure 3 shows the P4800 components.

Figure 3. The HP StorageWorks P4800 BladeSystem SAN

The HP StorageWorks P4800 SAN runs on HP P4000 SAN/iQ software, which offers unique SAN scalability and data high availability. The P4800 consists of 140 450GB, 15,000 rpm Large Form Factor SAS disks. With over 22TB of usable space, it is possible to build very large PODs of VDI resources on a highly scalable and replicated basis. The P4800 is robust enough to handle boot storms, login events and sustained write activity. Figure 4 shows the relationship of each controller to each storage drawer in the MDS600. On a per-controller basis, a P4000sb storage blade uses an HP Smart Array P700m controller connected redundantly through SAS switches to a drawer of 35 disks in an HP StorageWorks MDS600. The disks are broken into 7 groups of 5 disks protected at the array controller level via a RAID5 configuration. P4000 SAN/iQ software then creates a user selectable Network RAID configuration across all drawers. Each block written can be replicated two, three or four times to ensure both data integrity and data high availability.
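As a rough illustration of how the usable capacity figure above comes together, the sketch below combines the drive count, the RAID5 grouping and a 2-way Network RAID policy described in this section; the decimal-to-binary conversion and the omission of formatting overhead are simplifying assumptions.

    # Illustrative capacity estimate for the P4800 layout described above (Python).
    # Assumes 140 x 450 GB drives, RAID5 in 7 groups of 5 disks per drawer,
    # and 2-way Network RAID replication. Formatting overhead is ignored.
    DRIVES = 140
    DRIVE_GB = 450                 # decimal GB per drive
    RAID5_EFFICIENCY = 4 / 5       # a 5-disk RAID5 group keeps 4 disks of data
    NETWORK_RAID_COPIES = 2        # each block written twice across storage nodes

    raw_tb = DRIVES * DRIVE_GB / 1000
    after_raid5_tb = raw_tb * RAID5_EFFICIENCY
    usable_tb = after_raid5_tb / NETWORK_RAID_COPIES

    print(f"Raw: {raw_tb:.1f} TB, after RAID5: {after_raid5_tb:.1f} TB, "
          f"usable with 2-way replication: {usable_tb:.1f} TB")
    # Roughly 25 TB decimal, or about 22 TiB, in line with the figure above.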

Figure 4. Controller to drawer relationships of the P4800

Storage configuration on a per volume basis will be described later in this document in the section entitled Building the platform. The integrated nature of this approach, along with the simplicity of the P4000 SAN/iQ management interface, may in some IT shops encourage and enable the management of server, infrastructure, hypervisor and storage resources by one person. This can result in greater agility, lower overall management costs and quicker change management.

HP BladeSystem c7000 enclosure
The HP BladeSystem c7000 enclosure has been designed to tackle the toughest problems facing today's IT infrastructures: cost, time, energy, and change. The c7000 enclosure consolidates the essential elements of a datacenter (power, cooling, management, connectivity, redundancy, and security) into a modular, self-tuning unit with built-in intelligence. In addition, this enclosure provides flexibility, scalability and support for future technologies. Figure 5 shows an example of an HP BladeSystem scale-out infrastructure.


Figure 5. Solving infrastructure issues with the HP BladeSystem c7000 enclosure

For more information on this powerful 10U enclosure, refer to http://h18004.www1.hp.com/products/blades/components/enclosures/c-class/c7000/.

HP Onboard Administrator
The Onboard Administrator for the HP BladeSystem c7000 enclosure is the brains of the c-Class infrastructure. Together with the enclosure's HP Insight Display, the Onboard Administrator has been designed for both local and remote administration of HP BladeSystem c-Class components and provides the following capabilities:

Wizards for simple, fast setup and configuration
Highly available and secure access to the HP BladeSystem infrastructure
Security roles for server, network, and storage administrators
Automated power and cooling of the HP BladeSystem infrastructure
Agent-less device health and status
Thermal Logic power and cooling information and control

Each c7000 enclosure is shipped with one Onboard Administrator module and firmware. For redundancy, you can add a second unit. For more information on the Onboard Administrator, refer to http://h18004.www1.hp.com/products/blades/components/onboard/.


HP Virtual Connect
Before Virtual Connect, you had two basic choices for interconnects: pass-thru or switch. Pass-thrus are simple but require a large number of cumbersome cables and create complexity; blade switches reduce the number of cables but add to the workload of LAN and SAN administrators. With either option, multiple people are needed to perform even very simple server tasks. Virtual Connect provides a better way to connect your HP BladeSystem c-Class enclosure to network LANs and SANs, allowing you to simplify and converge your server edge connections, and integrate into any standards-based networking infrastructure while reducing complexity and cutting your costs.

Rather than tying profiles to specific blades, you create a profile for each of the bays in an HP BladeSystem enclosure. Virtual Connect then maps physical LAN or SAN connections to these profiles, allowing you to manage connectivity between blades and networks without involving LAN or SAN administrators. In addition, if a server blade were to fail, you could move its associated profile to a bay containing a spare blade, thus restoring availability without needing to wait for assistance. For more information, refer to Appendix B Virtual Connect technology.

HP Thin Clients
With much longer life spans than traditional desktop PCs, HP Thin Clients are the ideal client device for VDI deployments, providing increased security, simplified management, and lower cost of ownership. HP offers a full portfolio of Thin Clients with varying capabilities, enabling IT to optimize both end-user experience and the IT budget by deploying a mix of client devices. Additionally, all HP Thin Clients have been tested and approved for use within VMware View environments.

For this reference platform, HP tested and configured two thin clients: the HP t5740 and the HP t5545. A Mainstream series client, the t5545 is designed for most business productivity applications typically found with Task and Productivity workers. For Knowledge workers who require more advanced applications and media streaming, the t5740's higher processing performance and memory configuration make it the ideal choice.
Table 3. HP Thin Clients

HP t5740
  CPU       Intel Atom N280 Processor, 1.66GHz
  OS        Microsoft Windows Embedded Standard 2009
  Memory    2GB Flash, 2GB DDR3 SDRAM

HP t5545
  CPU       VIA Eden Processor, 1.0 GHz
  OS        HP ThinPro
  Memory    512MB Flash, 512MB DDR2 SDRAM


Figure 6 shows the HP t5740 Thin Client with integrated wireless and dual monitors.

Figure 6. HP t5740 Thin Client

For more information on HP Thin Clients, refer to Appendix D HP Thin Clients.

Additional services
To make it easier for you to deploy this solution, HP offers a range of Thin Client Management Services. More information is available from the following sources:
http://h10134.www1.hp.com/services/thinclientmgmt/
http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA2-7923ENW&cc=us&lc=en
http://h10010.www1.hp.com/wwpc/us/en/sm/WF05a/18964-18964-3644431-36462073763975-3646216.html?jumpid=reg_R1002_USEN

HP Client Automation Enterprise
Client virtualization offers many business benefits such as lower costs, shared infrastructure and data security. However, managing these environments can present added management complexities, especially for enterprises ending up with a mix of the traditional client environment of desktop and notebook PCs, thin clients and the virtual desktops running in the datacenter. HP's Client Automation software offers a single and integrated management framework to manage the complete client virtualization environment, including thin clients, the virtual desktop environment and even traditional desktop and notebook PCs, so you can avoid the additional complexity and cost that other solutions may require. HP Client Automation offers:

Reduced administrative overhead by integrating with leading application virtualization technologies, including Citrix XenApp, Microsoft Application Virtualization (App-V), and VMware ThinApp
Reduced complexity by automating day-to-day operations to simplify management for traditional as well as virtual client environments, including VMware View and Citrix XenDesktop
Improved security by managing security, compliance, and vulnerability for your physical end points as well as the virtual desktops running in the datacenter
Reduced costs by offering a comprehensive client management solution so your IT administrators do not have to learn and use 3 or 4 different tools to manage your environment


Overall, HP Client Automation software allows enterprises to rapidly adopt client virtualization and realize its benefits without adding to management complexity and cost. For more information on how HP Client Automation can help you, visit http://www.hp.com/go/clientautomation.

HP Insight Control suite
Insight Control suite provides time-smart management software that delivers deep insight, precise control, and ongoing optimization to unlock the potential of your HP ProLiant and BladeSystem infrastructure. Based on HP Systems Insight Manager (HP SIM), Insight Control suite offers comprehensive proactive health management, remote control, patch management as well as rapid server deployment, VM management, and power management in one easy to install package. The reference configuration provides the licenses required to take advantage of Insight Control suite. For more information on creating a well-run infrastructure, refer to http://h18013.www1.hp.com/products/servers/management/ice/index.html.

HP Insight Control for VMware vCenter
The vCenter management platform from VMware is a proven solution for managing the virtualized infrastructure, with tens of thousands of customers utilizing it to keep their virtual datacenters operating efficiently and delivering on Service Level Agreements. For years, these same customers have relied on Insight Control from HP to complete the picture by delivering an in-depth view of the underlying host systems that support the virtual infrastructure. While delivering a complete picture of both the virtual and physical datacenter assets, the powerful combination of vCenter Server and Insight Control has required customers to monitor two separate consoles, until now. The HP Insight Control extension for VMware vCenter Server delivers powerful HP hardware management capabilities to virtualization administrators, enabling comprehensive monitoring, remote control and power optimization directly from the vCenter console. In addition, Insight Control delivers robust deployment capabilities and is an integration point for the broader portfolio of infrastructure management, service automation and IT operations solutions available from HP. Key capabilities integrated into the vCenter console include:

Combined physical and virtual view: From a single pane of glass, monitor status and performance of virtual machines and the underlying host systems that support them.
Integrated troubleshooting: Receive prefailure and failure alerts on HP server components and invoke HP management tools, such as Systems Insight Manager and Onboard Administrator, in-context, directly from the vCenter console.
Powerful remote control: Remotely manage and troubleshoot HP ProLiant and BladeSystem servers using HP Integrated Lights-Out Advanced capabilities directly from the vCenter console.
Proactive power management: Get the most out of your existing power envelope by comprehending and proactively managing power for hosts and pools of virtual machines across hosts.

A core component of HP Insight Control, the VMware vCenter Server extension is included with HP Insight Control, which can be purchased as a single license, in bulk quantities, or bundled with HP ProLiant and BladeSystem hardware. Existing Insight Control customers who are under a current Software Updates contract can download this extension free of charge.


VMware View
VMware View value proposition and benefits
Purpose-built for delivering desktops as a managed service, VMware View provides the best end user experience and transforms IT by simplifying and automating desktop management. Centrally maintaining desktops, applications and data reduces costs and improves security while at the same time increasing availability and flexibility for end users. Unlike other desktop virtualization products, VMware View is a tightly integrated, end-to-end solution built on the industry leading virtualization platform, allowing customers to extend powerful business continuity and disaster recovery features to their desktops and standardize on a common platform from the desktop through the datacenter to the cloud. The VMware View solution provides a wide range of benefits:

Simplify and automate desktop management. VMware View lets you manage all desktops centrally in the datacenter and provision desktops instantly to new users, departments or offices. Create instant clones from a standard image, and dynamic pools or groups of desktops.
Optimize end-user experience. The VMware View PCoIP display protocol provides a superior end user experience over any network. Adaptive technology ensures optimized virtual desktop delivery on both the LAN and the WAN. Address the broadest list of use cases and deployment options with a single protocol. Access personalized virtual desktops complete with applications and end-user data and settings anywhere and anytime with VMware View.
Lower costs. VMware View reduces overall costs of desktop computing by up to 50% by centralizing management, administration and resources and removing IT infrastructure from remote offices.
Enhance security. Since all data is maintained within the corporate firewall, VMware View minimizes risk and data loss. Built-in SSL encryption provides secure tunneling to virtual desktops from unmanaged devices or untrusted networks.
Increase business agility and user flexibility. VMware View accommodates changing business needs, such as adding new desktop users or groups of users, while providing a consistent experience to every user from any network point.
Built-in business continuity and disaster recovery. VMware View is built on industry-leading VMware vSphere, allowing you to easily extend features such as High Availability and Fault Tolerance to your desktops without the need to purchase expensive clustering solutions. Automate desktop back-up and recovery as a business process in the datacenter.
Standardize on a common platform. VMware View includes VMware vSphere and brings all the benefits and enterprise features of the datacenter to the desktop. Extend features such as vMotion, High Availability, Distributed Resource Scheduler and Fault Tolerance to your desktops, providing a built-in disaster recovery and business continuity solution. Optimized specifically for desktop workloads, VMware vSphere is able to handle the high loads associated with desktop operations such as boot up and suspend operations. Standardize your virtualization platform and use a single solution to manage both servers and desktops from the datacenter through to the cloud.


VMware View architecture
VMware View provides Unified Access to virtual desktops and applications running in a central, secure datacenter, accessible from a wide variety of devices. VMware View Composer streamlines image management while reducing storage needs through the use of VMware Linked Clone technology. Figure 7 highlights the architecture.

Figure 7. VMware View 4.0 overview



Scaling and sizing VDI


Overview
VDI sizing is really a two-part process. Both servers and storage must be sized correctly to ensure an optimal level of investment on the part of customers. Over time HP has observed a similar set of server sizings in the field based on platform type. Storage has been far more complex. This section discusses HP's approach and methodologies for sizing.

Server sizing
History
HP is now in its sixth generation of server sizing for VDI. The goal from the beginning has been to provide reliable, conservative sizing estimates that help our customers decide whether or not VDI is the right platform for client virtualization for their intended use cases, as well as to build the business cases needed to adopt VDI as a technology.

Historically, VDI testing has largely been an outgrowth of old server based computing test paradigms. These tests can be summed up in general as follows. A script or minimal set of scripts is run using Microsoft RDP as a connection protocol against a set of OSs running a very minimal application set on a single host. A base time is established in some fashion for an action or set of actions in the script to complete. More VMs are added to the test with either the same script or a slightly different set of scripts until timings for actions have increased to a level deemed unacceptable, generally a percentage time increase. HP used similar test methods in its first VDI reference architecture release in 2006 and in its second and third test generations. Results of those tests tended to be highly optimistic when viewed in relation to real world implementations. In seeking to understand where the differences came from when comparing tests and real world deployments, HP changed test methodologies twice for the launch of its G5 and G6 servers. HP believes that its test results were much closer to production reality than other published tests, but in adjusting scripts the actions on a per user basis were not as representative of the real world. For the latest sizing HP reexamined old test methodologies and in fact changed the underlying assumptions about what performance means in a VDI environment.

Testing today
HP took lessons learned from prior test generations and created the current set of tests for server sizings. The approach produces what HP believes to be sizings that can be replicated in most customer environments and that should be used for planning from both a business justification and equipment purchase standpoint. In the HP test scenario, three types of users are defined and tested. These users are designed to match representative, real world user patterning. The users are defined as follows.

Knowledge workers
A knowledge worker uses a broad application set (in the case of the testbed, up to 17 different applications are available for a user to run). They interact with the VM throughout the entire scope of the test. Applications for this user may demand higher resource utilization. Examples of such applications include Eclipse, Windows Media Player and Microsoft Visio. This user will have multiple applications running at any given point in time.

Productivity workers
A productivity worker uses a fairly broad application set (up to 14 different applications available in our test suite). One worker per test operation is left idle; all other workers interact for the duration of the test. Microsoft Visio is the only application that may demand higher resource utilization and is used minimally throughout the test. This user has multiple applications running at any given point in time, and every user in the test leaves Microsoft Outlook open for the entire test duration.


Task workers
A task worker is defined as a user that interacts with a minimal set of applications (Microsoft Office, Zip programs, Internet Explorer and Adobe Acrobat). The user generally has only one or two applications open at a time. One idle user per 60 working users is tested. The majority of work produced is keystrokes. During testing it became clear that the task worker was the easiest to scale, as memory requirements for such a user are low and storage patterns were very simple.

Methodology
Scripts were recorded individually using AutoHotKey, with application launch times noted not just for when the process appears, but for when the user is first able to start interacting with the application. Timings were then increased by between 20% and 100% depending on the application. Initial timings were judged to be of acceptable performance and were representative of launch times on individual VMs, local desktops and laptops. Increased timings were judged to be sluggish. Even slight increases in timing caused the perception of the solution as slow. It is HP's belief that the use of acceptable timings is a better gauge than a simple percentage increase. There are more than 80 scripts written, with 8 tied to task workers, 12 to knowledge workers and the remainder tied to productivity workers. To arrive at a final sizing, a predefined mix of these scripts is run and ESXTOP is used to record server side performance data. These tests are monitored during playback and server sizing is revealed when more than one script fails to execute as designed. In other words, when a script cannot complete as designed with other scripts running within the prescribed acceptable timeframe, the server has reached capacity. These failures show up in a variety of forms but are best described as unintended application behaviors.

All tests were conducted on an HP ProLiant DL380 G6 with 2 Intel Xeon X5570 processors. The system had 72GB of memory installed during test. Results are believed valid for any 2 socket ProLiant server running an identical memory and processor configuration. The OS was VMware ESX 4.0 Update 1 (Build 208167). Virtual machines were based on Windows XP with all patches and service packs released prior to January of 2010 applied. A variety of applications were used during test. HP suggests customers test with their own images and applications to further refine results. All VMs were initially given 1GB of memory, but memory utilization did not end up playing a role in any of the final sizings. Microsoft RDP v6 was the protocol used during the test. Other protocols may have an effect on overall sizing, generally reducing the number of users for a given workload. HP is working to implement other protocols into its testing procedures and will update results as appropriate. All scripts are run after login traffic has had a chance to settle to eliminate any effect; logins, HA failures and boots are studied separately. It is important to note that these results should not be compared to other results using different test methodologies. Based on experience, it is expected that these results will be valid for HP systems running the same versions of software and configured with the same processors and memory. No lab sizing can be completely matched to a given customer implementation.

Knowledge and productivity workers
These two worker types are grouped together as they end up looking very similar from an overall server sizing standpoint. Differences are observed mostly in storage patterning. Memory utilization, largely a function of many different and concurrent applications being opened, shows little difference, and CPU patterning is close, showing more of a difference in %READY values than in overall utilization. For knowledge and productivity workers, HP recommends that a range of 64-72 users per host (8-9 users per core on the referenced processors) be used for planning purposes. This number allows for HA to function, with an impact to users, in the event another server within the same cluster fails. That impact will generally show up in the form of extended application response times and sometimes sluggish behavior.


Consistent single script failures were observed starting at 66 users per host. Multiple script failures began appearing consistently above 72 users per host.

Note: Some heavy use cases with knowledge workers, such as developers, can substantially reduce per server user counts. Such environments have been observed to be as low as 4-5 users per core, or 32-40 users per host on the referenced processors, especially when utilizing a lower than recommended memory footprint.

Figure 8 highlights server CPU utilization at 66 users. Note that absolute utilization rarely exceeds 60%.
Figure 8. CPU utilization at runtime for productivity workers

(Chart: CPU utilization in percent on the y-axis versus time in seconds on the x-axis)
Memory utilization during these runs was different for the two user types, but levels of memory overcommit were high and absolute memory consumption did not play a role. No swapping was observed. Peak memory usage per host was between 40 GB and 45 GB.

Task workers
To emulate task workers, a small set of 8 scripts with a limited application set was used. Applications used were Microsoft Office (Outlook, Word, Excel and PowerPoint), Adobe Acrobat Reader and Microsoft Internet Explorer. For task workers, HP recommends that a number between 100 and 110 users per host (12-14 users per core on the referenced processors) be used for planning purposes. This number allows HA to function, with an impact to users, in the event another server within the same cluster fails.


Under test, single script failures began occurring at 100 users, with multiple script failures appearing at 110 users. CPU utilization is far steadier, as is expected of the less complex script environment. Peak utilization is similar to the 66 user productivity worker run. Figure 9 shows CPU utilization during a 110 user run.

Figure 9. CPU utilization at runtime for task workers

(Chart: CPU utilization in percent on the y-axis versus time in seconds on the x-axis)

Memory utilization stayed relatively constant during the course of the test at around 45 GB. No swapping was observed and memory was not a factor in final sizing. It should be noted that task workers are, by definition, simplistic users. Understanding your users prior to categorizing them is critical to properly sizing servers. It has been HP's experience that most users are closer to the productivity workers defined in this document than to task workers.
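As a minimal planning sketch, the per-host figures from this section can be turned into a simple host-count estimate. The values below pick one number from each recommended range (110 task, 72 productivity, 64 knowledge workers per host), which is an assumption on our part; the published numbers already account for HA absorbing a host failure with some user impact, so the optional spare host is extra conservatism rather than an HP requirement.

    import math

    # Users per BL490c G6 host (2 x X5570, 72 GB), one value chosen from each
    # recommended range in this section.
    USERS_PER_HOST = {"task": 110, "productivity": 72, "knowledge": 64}

    def hosts_needed(user_count, worker_type, spare_hosts=0):
        """Hosts for one cluster; spare_hosts adds optional extra headroom."""
        return math.ceil(user_count / USERS_PER_HOST[worker_type]) + spare_hosts

    print(hosts_needed(800, "productivity"))            # -> 12
    print(hosts_needed(2400, "task", spare_hosts=1))    # -> 23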


Storage sizing
Overview
Storage sizing for VDI can vary based on a number of factors. In HP's experience, per VM I/O performance ranges from 2.5 to more than 70 IOPS and can exhibit patterns ranging from 20% writes to 97% writes. With such a vast difference in patterning, architecting the right storage solution can be a challenge. This section characterizes three different workloads. The section entitled Other factors affecting sizing will address causes for dramatically different sizing characteristics and offer suggestions on eliminating potential issues.

Methodology
HP has developed a storage sizing tool that allows us to use a minimal number of clients to load large storage systems for sizing purposes. While the technology has a number of uses, it was in fact developed to better understand client workloads in a variety of VDI scenarios. The tool, Spyder, allows HP to take a fibre channel trace between any client and LUN in the test harness and record it. The trace can then be played back through an iSCSI storage system by Spyder. The tool replays block by block and reports statistics about variables such as:

Variances in commit times between storage systems
Full statistics for reads, writes, latencies and queues
Per LUN statistics for any number of targets

The tool has the added advantage of being agnostic as to any factors on the recording side. Traces can be captured as a single live user, a single function script, multiple scripts and on any combination of LUNs and VMs. HP applies this methodology both to size P4000 SANs for VDI and to understand and analyze storage patterning in order to optimize its product set to perform in a VDI environment.

Note: Storage sizing will be variable. In our own test environment, HP has found variable levels of read caching. These levels have a large effect on the cumulative IOPS available on the storage system. Your particular workload will have an effect. Sizings given in this section reflect maximums.

The following sections discuss storage patterning for the three major user categories. Overall storage layout and final sizing is diagrammed and discussed in the section entitled Building the platform.


Task workers
To analyze patterning for task workers, a host was loaded with 64 task workers attached to a single LUN. All I/O activity was recorded post login using a fibre channel trace analyzer. The resulting trace was then prepared and played back through the P4800 SAN. The Centralized Management Console performance monitor captured all data. Figure 10 shows read and write traffic patterning. There is an obvious bias toward reads in the test workload, with an overall observed pattern from the fibre channel trace of approximately 60% reads and 40% writes.

Figure 10. Overall reads and writes for task workers on a single LUN

(Chart: read and write IOPS plotted separately on the y-axis versus time in seconds on the x-axis)


Cumulative I/O shows a peak I/O of just under 800 IOPS for the 64 users under test. Figure 11 reflects this number.

Figure 11. Cumulative I/O for task workers on a single LUN

(Chart: cumulative IOPS on the y-axis versus time in seconds on the x-axis)

I/O ranges from 2 to 13 IOPS per user with an overall average of 6 IOPS per user. It should again be noted that this is reflective of this test and a very limited application set. Over the course of the test the average I/O size was 16K with a standard deviation of 4K. HP recommends up to 2,400 task workers be placed on the HP StorageWorks P4800 BladeSystem SAN. Layout of the storage system is discussed later in this document.


Productivity workers The productivity worker workload is similar in scope to the task worker, but features a much broader application set. Overall I/O patterns for productivity workers based on the test workload show a heavy read bias. Overall read/write ratios for a given test run are around 80% reads.

Figure 12. Overall reads and writes for productivity workers on a single LUN

(Chart: read and write IOPS plotted separately on the y-axis versus time in seconds on the x-axis)


Cumulative I/O, shown in Figure 13, peaks at almost 1,400 IOPS for the test. IOPS on a per user basis range from 2 to 21 per user with an average of 9. Average I/O size is 14K with a standard deviation of 3K.

Figure 13. Cumulative I/O for productivity workers on a single LUN

(Chart: cumulative IOPS on the y-axis versus time in seconds on the x-axis)

HP recommends that up to 1,584 productivity workers be placed on an HP StorageWorks P4800 BladeSystem SAN. Layout of the storage system is discussed later in this document.



Knowledge workers
The knowledge worker script is substantially different in a number of ways from the other two user types. Most importantly, it differs in application set and workload to a degree that shifts the patterns of activity on the LUN considerably. The net effect is that a knowledge worker will be considerably more difficult to scale in a cost effective manner than a task or productivity worker. Figure 14 shows overall patterns for knowledge workers. It should be noted that peaks on the graph are from actual application utilization. The most apparent difference is in the overall pattern, with only around 40% of the workload showing up as reads. This heavier write pattern suggests that this workload will need spindles to scale.
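To see why a write-heavy profile "needs spindles", a generic back-end I/O estimate helps. The sketch below applies the classic RAID5 small-write penalty of 4 to the per-user averages from this section; the penalty model is a general rule of thumb used for illustration, not a published P4800 sizing formula, and it ignores controller caching.

    # Generic front-end to back-end IOPS estimate for a RAID5-protected volume.
    # Each random write costs roughly 2 reads + 2 writes on disk (penalty of 4).
    def backend_iops(front_iops, read_fraction, write_penalty=4):
        reads = front_iops * read_fraction
        writes = front_iops * (1 - read_fraction)
        return reads + writes * write_penalty

    # Knowledge worker: ~22 IOPS per user at roughly 40% reads.
    print(backend_iops(22, read_fraction=0.40))   # ~61.6 back-end IOPS per user
    # Productivity worker: ~9 IOPS per user at roughly 80% reads.
    print(backend_iops(9, read_fraction=0.80))    # ~14.4 back-end IOPS per user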

Figure 14. Overall reads and writes for knowledge workers on a single LUN

(Chart: read and write IOPS plotted separately on the y-axis versus time in seconds on the x-axis)


Cumulative I/O, shown in Figure 15, peaks at almost 4,000 IOPS for the test. IOPS on a per user basis range from 7 to 59 per user with an average of 22. Average I/O size is 16K with a standard deviation of 4K.

Figure 15. Cumulative I/O for knowledge workers on a single LUN

(Chart: cumulative IOPS on the y-axis versus time in seconds on the x-axis)

HP recommends that a maximum of 660 knowledge workers be placed on an HP StorageWorks P4800 BladeSystem SAN. It should be noted that sizing for this user type is highly variable.


Overview of user characteristics
Table 4 gives a high level overview of various characteristics by user type observed in the lab. Lab observations are useful in helping to define quantitative user types. HP recommends an engagement with HP Technical Services or a skilled HP Partner to determine how your users fit within these ranges. Averages can be used as planning numbers in environments using linked clones.
Table 4. Characteristics observed in the lab by user type

Characteristic                              Task Workers    Productivity Workers    Knowledge Workers
CPU (cumulative) at moment of decline       65-70%          60-65%                  60-65%
I/O pattern (cumulative), reads/writes      60/40           80/20                   40/60
IOPS (per user), range                      2-13            2-21                    7-59
IOPS (per user), average                    6               9                       22
Block size, average                         16K             14K                     16K
Block size, standard deviation              4K              3K                      4K
Application mix (number of apps)            < 10            10+                     15+
Number of apps open concurrently            < 2             Many                    Many
Sizing difficulty                           Low             Moderate                High
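As a hedged example of how the Table 4 averages might be combined for a mixed population, the sketch below adds up steady-state front-end IOPS and throughput. The per-user averages and block sizes come from Table 4; the example population mix and the simple additive model (no boot storms, logins or peaks) are assumptions.

    # Rough aggregate steady-state front-end load for a mixed user population.
    PROFILE = {  # (average IOPS per user, average block size in KB) from Table 4
        "task":         (6, 16),
        "productivity": (9, 14),
        "knowledge":    (22, 16),
    }

    population = {"task": 500, "productivity": 900, "knowledge": 100}  # example mix

    total_iops = sum(PROFILE[t][0] * n for t, n in population.items())
    total_mbps = sum(PROFILE[t][0] * PROFILE[t][1] * n
                     for t, n in population.items()) / 1024

    print(f"~{total_iops} IOPS and ~{total_mbps:.0f} MB/s steady-state front-end load")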

Other factors affecting sizing


As the previous section suggests, there is variability between user types and it may be very pronounced. A question that has arisen frequently from customers facing especially challenging I/O patterns over time has been: what is it that causes such dramatic, and expensive, differences?

Use case
Perhaps the most underappreciated source of unexpected I/O patterning is the given VDI use case itself. A use case can change a task worker into a knowledge worker, and a perceived knowledge worker into a user in need of a workstation. Consider a shared access model for VDI. A simple set of applications is installed on what are non-persistent VMs. In a shared access model, many more users exist than clients available to service them. As a result, a continuous queue forms where every time a user logs out, a new one logs into the client. The VM itself is generally reset each time. In a scenario where usage is short at each logon, this creates a continuous login storm of sorts that can drive I/O to extremely high levels. In addition to excessive storage utilization it also consumes CPU resources on host systems and may result in lower overall performance despite the fact that users are performing simple tasks while logged on.


Image
The base image is a common area for unexpected sizing issues. An image that is loaded with scheduled services and extraneous programs, and is generally un-optimized to be shared among a large number of users, is a problem before the first VM is deployed. Such images reduce user counts on a per server and per storage target basis and can raise VDI costs considerably. As an example, consider a virus scan scheduled in a fully provisioned VM (HP recommends that linked clones be used for VDI use cases whenever possible). For extremely dense environments with large numbers of users, scheduling virus scans inevitably means that multiple scans will need to occur at any given point in time. The net effect of having this service built into the image is an almost continuous drop in quality of service as shared resources are strained by the scan.

Application set
The choice of what applications to host in a VDI environment, as well as how to present them to the VM, can have a considerable effect on the overall performance of the VM as well as the characterization of the end users. As an example, consider an application that does continuous aggregation of information when open. Certain development environments will do this as developers write code. The net effect varies depending on where the application resides in relation to the VM. If it is a local client application there is generally no effect. If the application is streamed into the VM it is possible that I/O and CPU penalties will be observed on other systems and only a minimal overhead will be shown inside the VM. Loading the application locally inside the VM would ensure that any overhead is felt within the VM and that any other users on the same shared resources would be affected. In some circumstances, users that require a particularly difficult application may simply need to be removed from consideration as viable for a shared environment. In other cases, it may mean that the application need only be removed from the local VM.

Proper user identification
Proper identification of user types is also key to a successful deployment. Many IT shops are unaware of user patterns and may not have an in-depth understanding of exactly what applications are in use and by whom. Making assumptions about user type may in some circumstances be justified. An example is the task worker described in this section. If there is truly only a limited application set and workers do not require a great deal of local storage, then it is safe to call them task workers. Some job categories may be seen as task workers when in fact the role is far more complex than imagined. IT administrators need to be aware of work patterns to ensure proper sizing. Under all of these circumstances HP can help with your move to VDI. HP Services can be engaged to come into your environment and help you ascertain which users to move into a VDI environment.

Storage contention
In some environments, storage can become a point of contention. This may manifest itself as higher than expected user I/O or vastly different patterning than expected, or it may arise due to a requirement for very large user data space without expanding to network attached storage. Perhaps the most common form of storage contention as it pertains to sizing is choosing users that cannot or will not be moved to a linked clone based implementation. This document does not cover those implementations. The installer should be aware that the effect on overall sizing can be dramatic and the corresponding effect on per user cost quite large. The HP StorageWorks P4800 BladeSystem SAN is designed to be cost effective relative to other storage solutions in this type of environment.


POD scaling by user type


Scalability of the POD will change based on a number of factors. The following sections establish baseline configurations for the three major user types using the sizings described in this paper. Sizings shown are considered maximum recommended configurations. Figure 16 shows, at a high level, the components involved in building a POD (example shown is for productivity workers).

Figure 16. Overview of components found within a POD

HP BladeSystem c7000 enclosure
HP Virtual Connect Flex-10 modules
HP ProLiant BL490c G6 servers: Intel Xeon processors, 72GB memory, onboard SSD, VMware ESX 4, VMware View
HP StorageWorks P4800 BladeSystem SAN: four P4000sb storage blades, two SAS switch modules with cables (rear), two MDS600 enclosures with 140 450GB 15K SAS drives


Task and productivity workers
Figure 17 shows the configuration of a POD for task and productivity workers. This POD houses up to 2,400 task workers or up to 1,600 productivity workers based on the definitions provided in this document. Reducing user counts below these maximums provides additional headroom for operations and management of the entire stack.
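A minimal sketch for translating a user population into PODs follows, using the per-POD maximums given in this document (2,400 task, 1,600 productivity and 660 knowledge workers); dedicating each POD to a single user type is an assumption made to keep the arithmetic simple.

    import math

    # Maximum recommended users per POD by type, from this document.
    POD_CAPACITY = {"task": 2400, "productivity": 1600, "knowledge": 660}

    def pods_needed(users_by_type):
        """PODs required when each POD is dedicated to one user type."""
        return {t: math.ceil(n / POD_CAPACITY[t]) for t, n in users_by_type.items()}

    print(pods_needed({"task": 5000, "productivity": 3000, "knowledge": 700}))
    # -> {'task': 3, 'productivity': 2, 'knowledge': 2}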

Figure 17. Task and productivity worker POD scaling


For task workers, the suggested maximum configuration is shown in Figure 18 by cluster layout. The installer may choose to distribute VMware clusters equally across enclosures as well.

Figure 18. Task worker cluster distribution


Figure 19 shows the maximum suggested configuration for productivity workers. The installer may choose to distribute VMware clusters equally across enclosures as well.

Figure 19. Productivity worker cluster distribution


Knowledge workers
Figure 20 shows the scaling for knowledge workers in a VDI POD. Each POD will house up to 660 knowledge workers based on the definitions provided in this document.

Figure 20. Knowledge worker POD design


Figure 21 shows the suggested configuration of clusters for knowledge workers. Note that the actual POD that is the basis for scaling is a single BladeSystem enclosure and SAN.

Figure 21. Knowledge worker cluster distribution


Building the platform


Creating the first POD
The basic POD building block described in this document consists of a single c7000 enclosure containing an HP StorageWorks P4800 BladeSystem SAN (two (2) HP BladeSystem 3Gb SAS switches, two (2) HP StorageWorks MDS600s and four (4) P4000sb storage blades). When shipped, the HP StorageWorks P4800 BladeSystem SAN has all cabling and SAS switch zoning in place. This configuration forms the basis of this section. The P4800 must be ordered with Factory Integration in a c7000 enclosure, two Flex-10 modules, a PDU and a rack system.

Note: Do not alter the SAS switch zoning or the local array configuration of the P4000sb nodes in bays 7, 8, 15 and 16 for any reason.

At a minimum, a network cable to your management network must be connected to at least one Onboard Administrator on your HP BladeSystem c7000 enclosure. The enclosure should be configured to meet the standards set by your IT organization and, once configured, all interconnects and iLOs within the enclosure should be reachable on your management network. It is highly recommended that your POD be cabled to your network as shown in Figure 22. This document assumes you will keep iSCSI traffic internal to the enclosure. Appendix G of this document includes a more detailed sample diagram of how networking is cabled to the network core. At the network core switches where the Virtual Connect modules connect, VLANs should be defined on the port for the vMotion, Management and Production networks.

Figure 22. Cabling diagram for a single enclosure POD

OA: cabled to the management network (100Mb or 1Gb Ethernet)
Flex-10 modules: cabled to the network core (10GbE fibre or SFP+ cable), with VLANs for the vMotion, Management and Production (VM) networks


If iSCSI traffic will egress the rack under the control of your network and storage teams, the recommended cabling method is to run an extra, dedicated 10GbE link pair to the network core, with one port per Flex-10 module assigned to this network. Do not share this cable and port with any other traffic. Consult Chapter 4 of the HP StorageWorks P4000 SAN Solution User Guide at http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c02063204/c02063204.pdf for instructions on configuring iSCSI networks external to the enclosure. SAS configuration is assumed to be as shown in Figure 23. The enclosure should arrive cabled for SAS connectivity.

Figure 23. SAS cabling of the enclosure as delivered


Power up
Prior to beginning any configuration, please ensure that you follow the power up sequence outlined below in Table 5. Following this power on sequence ensures that all components communicate with each other as intended. If the system has been powered up before configuration starts, you may use the power buttons within the Onboard Administrator to power down the SAS switches and P4000sb nodes and reapply power in the correct order (this requires administrator access to the OA).
Table 5. Power on sequence required for initial setup of POD

Sequence 1 (MDS600): Power on each MDS600 fully. MDS600s must be online prior to SAN/iQ. These must be powered on manually. Do not power on the enclosure until these have been powered on for 3 minutes; this allows all drives to spin up. To ensure that these power up first, it may help to unseat the Virtual Connect Flex-10 modules from interconnect bays 1 and 2 during initial power on only.
Sequence 2 (SAS switches): Set the power on sequence via the OA. Ensure the SAS switches are powered up.
Sequence 3 (Flex-10 modules): Set the power on sequence via the OA. Ensure that the SAS switches have been powered up for 1-2 minutes prior to powering on.
Sequence 4 (P4000sb): Use the OA to set the power on sequence.
Sequence 5 (Hypervisor Host 1): If the intent is to isolate iSCSI traffic to within the rack, this host will need to have VMware ESX installed and a VM created on local storage initially to install the CMC. This is covered in the section titled Installing the first hypervisor host and CMC. Set the power on sequence via the OA.
Sequence 6 (Hypervisor hosts): May power on in any order after the prior sequence is completed. If you use an external CMC and do not isolate your iSCSI network, you may combine steps 5 and 6.

In the event of a catastrophic power failure, you can set the power on order via the OA (requires OA firmware 3.0 or later). The next section discusses how to perform the configuration.

Configuring the enclosure
Instructions for configuring your enclosure are included with the enclosure and are also available on the web in the BladeSystem documentation; visit http://www.hp.com/go/bladesystem to find documents. This section concentrates on specifics that relate to the configuration of a POD for VDI using the HP StorageWorks P4800 BladeSystem SAN. A sample script is included in Appendix C of this document to serve as a guideline for scripting the configuration of the enclosure.

Handle initial IP address configuration for the enclosure from the Insight Display. After initial setup from the Insight Display on the front of the c7000, the primary objective should be to update the firmware. Install firmware to meet the levels listed in the Firmware revisions section of this document. Firmware to be updated at this point includes the OA, the Virtual Connect Flex-10 modules and the blade firmware on the hypervisor hosts. Power down your hypervisor hosts once the firmware is upgraded. All P4800 components should be at the correct firmware levels from the factory.


Once the firmware is updated, you can set the power on sequence described in the previous section; this is useful for recovery from catastrophic power failures. To do this, log into the OA using the username Administrator and the password provided on the tag that came with the OA. Once logged in, expand the Enclosure Information and then Enclosure Settings tabs as shown in Figure 24.

NOTE: Setting the power on sequence in the OA is optional. Following the proper power on sequence is mandatory.

From the left menu, click the Device Power Sequence link.

Figure 24. Expand the OA settings section


A series of tabs will appear. Click on the tab labeled Interconnect Bays as in Figure 25. You should see four columns labeled Bay, Device, Enabled and Delay. Set interconnect bays 1 and 2 (your Flex-10 modules) to power on after 3 minutes and 30 seconds, and interconnect bays 5 and 6 (your SAS switches) to power on after an initial delay of 3 minutes. This ensures that the MDS600 disks have time to spin up. Click the Apply button when done.

Figure 25. Setting power on timings for the interconnect bays


Click on the tab labeled Device Bays. Set device bays 7, 8, 15 and 16 to power on after 270 seconds. Set all other device bays to No Poweron as in Figure 26. You may change this at any time after initial setup or simply leave it at the default. Click Apply.

Figure 26. Setting power on timings for device bays
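If you prefer to script these delays instead of setting them in the OA GUI, the short sketch below emits OA CLI commands in the same form as the sample script in Appendix C (SET INTERCONNECT POWERDELAY and SET SERVER POWERDELAY). The delay values shown follow the description above (210 and 180 seconds for the interconnect bays, 270 seconds for the P4000sb bays) and differ slightly from the Appendix C sample; treat them as assumptions and adjust them to your own power-on standards before use.

# Emit OA CLI power-delay commands matching the delays described in this section.
# The values below are assumptions drawn from the text above; verify them against
# your own power-on standards and OA firmware documentation before applying.
interconnect_delays = {1: 210, 2: 210, 5: 180, 6: 180}    # Flex-10 at 3:30, SAS switches at 3:00
device_bay_delays = {bay: 270 for bay in (7, 8, 15, 16)}  # P4000sb storage blades

for bay in range(1, 9):
    print(f"SET INTERCONNECT POWERDELAY {bay} {interconnect_delays.get(bay, 0)}")
for bay in range(1, 17):
    print(f"SET SERVER POWERDELAY {bay} {device_bay_delays.get(bay, 0)}")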

Finish configuring your enclosure settings. Be sure to change the Administrator password and add an administrative user to the OA.

Configuring SAS switching
All SAS configuration is completed at the factory prior to shipping. Do not make any changes to the SAS configuration in the P4800 enclosure.


Configuring Virtual Connect Flex-10
Prior to completing this section, it is recommended that you read the administrative documentation pertaining to Virtual Connect Flex-10 as well as Appendix B of this document. Documentation can be found at http://www.hp.com/go/virtualconnect. In the left menu of the Onboard Administrator, click on the link to the Virtual Connect Manager. This will launch a new window or tab. Log into the Virtual Connect domain with the username Administrator and the password on the tag of the active Virtual Connect module (generally bay 1). The logon screen is shown in Figure 27.

Figure 27. The Virtual Connect logon screen


Once connected, a wizard will start. Figure 28 shows the first wizard screen. Click on the Next button to begin using the wizard. NOTE: This document assumes that you will not be using Virtual Connect assigned MAC addresses.

Figure 28. The Virtual Connect Domain Setup Wizard

Enter a Username and Password for an administrator on the local OA on the next screen. Figure 29 shows this screen. Click on the Next button when done.

Figure 29. Enter a Username and Password for an administrative user


You will be asked to import the enclosure as in Figure 30. Make sure the radio button labeled Create a new Virtual Connect domain by importing this enclosure is selected, and then click on the Next button. A warning will appear stating that network access to servers will be disabled until networks and profiles are defined. This is expected; you may click Yes to complete the import process.

Figure 30. Importing the enclosure

Once the enclosure is imported you will get a confirmation screen. You can click Next to continue. The next step is to configure the name of the Virtual Connect domain as in Figure 31.

Figure 31. Naming the Virtual Connect Domain

Enter a name of up to 31 characters. It is important to note that any other enclosures added to this rack will use this domain name as well. As you scale to more racks, each rack will have its own domain name, so you may wish to consider a naming scheme that uniquely identifies the rack as a member of a larger group of racks. Click on the Next button once you have assigned the name.


The next screen asks you to create local user accounts. Create at least one alternate administrator account at this time; you will be able to add more users later. Once the wizard is complete, be sure to change the password for the default Administrator account. It is recommended that you click on the Advanced button and change the settings for default password length to 6 or more characters and to require strong passwords. Once accounts are created, click on Next to proceed. Click on Finish on the next screen to start the Network Setup Wizard.

At the Welcome to the Network Setup Wizard screen, click the Next button to proceed. Figure 32 highlights selecting to use native MAC addresses. Click on Next when done.

Figure 32. Selecting to use native MAC addresses

At the next screen, you as the installer will need to make decisions about how VLANs are handled based on your individual networks. This document assumes that Map VLAN Tags has been selected. If you cannot use this setting, you will need to make alterations to the uplinks as you go forward.


At the Define Network Connection screen select to create a network with connections carrying multiple networks as in Figure 33.

Figure 33. Defining network connections

After clicking on Next, you will be asked to define a shared uplink set as in Figure 34.

Figure 34. Defining a shared uplink set


Give this uplink set a name such as External_Conn and assign to it the 10Gb ports you have connected to the network core. You should have one 10GbE port per VC module assigned to the shared uplink set. You will define your management, vMotion and production networks as VLAN-tagged networks. Click on the Apply button when done. At the next page, choose to create a new network. As in Figure 35, choose to use a connection with uplinks dedicated to a single network.

Figure 35. Create a new dedicated network.

You will now define your iSCSI network. Do not assign any uplinks to this network. Click on Apply when done. If you have an extra enclosure connected and have been configuring a dual enclosure domain, iSCSI traffic will be associated with the stacking links only. Verify that you have defined all networks, click on Next and then Finish. You will now be at the Virtual Connect home screen as in Figure 36.


Figure 36. The Virtual Connect home screen

Creating server profiles
During this process you will create two profiles that will be assigned to either hypervisor/management hosts or P4000sb storage servers. We will start with the P4000sb storage server profile.

NOTE: Ensure all servers in the enclosure are powered off prior to creating profiles.

Click on the Define drop down menu and choose Server Profile from the menu. Give the profile a name. With G6 servers you can choose to hide the Flex-10 iSCSI connections. Define the following profile on two connections as in Table 6.
Table 6. Server profile for P4000sb storage blades

Connection 1: Network = iSCSI, Bandwidth = Custom (Full 10Gb)
Connection 2: Network = iSCSI, Bandwidth = Custom (Full 10Gb)
Assign this profile to bay 7. You can then copy it, assigning a new name each time, to bays 8, 15 and 16.


Create a new profile and this time assign it a name that reflects that it is part of the management/hypervisor host schema. You will make use of the ability to carve up FlexNICs with this profile. Create 6 new network connections (for a total of 8) and assign networks and bandwidths as in Table 7.
Table 7. Server profile for hypervisor hosts

Connections 1-2: Network = iSCSI, Bandwidth = 6Gb each
Connections 3-4: Network = Management, Bandwidth = 800Mb each
Connections 5-6: Network = Production, Bandwidth = 1.2Gb each
Connections 7-8: Network = vMotion, Bandwidth = 2Gb each

This profile should be copied and assigned to bays 1-6 and 9-14. This will form the basis for your hypervisor and management hosts.

NOTE: Even if you are merely undergoing a pilot implementation and do not yet have a full enclosure, copy all profiles at this time.

Once complete, you should have 16 profiles created and assigned to their respective bays. Complete any remaining customizations and configurations to your Virtual Connect domain. When done, create a backup copy of the domain configuration for future restoration. You should make a backup of the Virtual Connect domain configuration whenever you make changes to any portion of the infrastructure.
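Because all four FlexNICs on a 10Gb port share that port's bandwidth, it is worth confirming that the per-port allocations in Table 7 add up to no more than 10Gb before applying the profile. The sketch below performs that check using the values from the table.

# Per-port FlexNIC bandwidth allocation from Table 7 (one side of each redundant pair).
allocations_gb = {"iSCSI": 6.0, "Management": 0.8, "Production": 1.2, "vMotion": 2.0}

total = sum(allocations_gb.values())
print(f"Total allocated per 10Gb port: {total:.1f} Gb")
assert total <= 10.0 + 1e-9, "FlexNIC allocations must not exceed the 10Gb port"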

Installing the first hypervisor host and CMC


Install VMware ESX 4.0 Update 1 on the local SSD of the first host as your practices dictate. During the installation process, configure networking information and create a local VMFS volume on the SSD.

Note: Check the support matrices for proper NIC driver and firmware combinations. You may need to install an updated driver during the VMware installation.

Once VMware ESX is configured, you will need to connect to the host using the Virtual Infrastructure Client (VIC) and configure some key options. If you do not have a current copy of the VIC to match the VMware ESX release version, simply point a web browser at the IP address of the ESX host and you will find a link to download it.


After connecting to the host, click on the Configuration tab. Make the general configuration changes to the host shown in Table 8.
Table 8. Configuration changes for initial ESX host

Time Configuration (NTP): Configure an NTP server for the host to point to.
Security Profile (Firewall): Enable the SSH server and client, vSphere Web Access and the Software iSCSI Client.
Security Profile (Firewall): Optionally enable update ports.

Once done, you will need to configure networking for the host. Click on the Networking link under the Configuration tab. Figure 37 shows the suggested network configuration. The basic principles of this configuration are:
- Left side and right side adapters for each vSwitch
- Separate vSwitches for iSCSI, Management, Production and vMotion
- VM networks for the management and production networks
- Select adapters based on speeds; the speeds will match the bandwidth numbers you entered during the Creating server profiles section of this document (see the sketch below)
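The FlexNIC speeds set in the server profile are what let you tell the adapters apart from inside ESX; vmnic numbering itself can vary from host to host. The sketch below illustrates the speed-based grouping rule from the list above. The vmnic names and speeds are hypothetical examples, not values read from a live host.

# Group example vmnics into vSwitches by FlexNIC link speed (Mb), one left-side and
# one right-side adapter per network, as described in the list above.
vmnic_speeds = {   # hypothetical values; read the real speeds from the VIC on each host
    "vmnic0": 6000, "vmnic1": 6000,   # iSCSI
    "vmnic2": 800,  "vmnic3": 800,    # Management
    "vmnic4": 1200, "vmnic5": 1200,   # Production
    "vmnic6": 2000, "vmnic7": 2000,   # vMotion
}
network_by_speed = {6000: "iSCSI", 800: "Management", 1200: "Production", 2000: "vMotion"}

vswitch_uplinks = {}
for vmnic, speed in sorted(vmnic_speeds.items()):
    vswitch_uplinks.setdefault(network_by_speed[speed], []).append(vmnic)

for network, uplinks in vswitch_uplinks.items():
    print(f"vSwitch for {network}: uplinks {', '.join(uplinks)}")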


Figure 37. vSwitches and layout in the vCenter Networking Configuration screen

NOTE: With the 1.48 version of the network driver, you should enable beacon probing for network failover detection on your vSwitches.

This first host will become part of a cluster of hosts that will house all management VMs. For now, it must first serve as the host on which you create the Centralized Management Console (CMC) VM that will allow you to build your SAN.

NOTE: If you chose to configure an iSCSI network that egresses the rack and is available at the network core, you may skip the creation of this management server and use an external CMC server on the iSCSI network.


Creating the Centralized Management Console virtual machine
You will need to create a VM to install the initial CMC. This virtual machine will be placed initially on the local disk. Table 9 highlights the choices to make during the creation of the VM.
Table 9. VM creation options

Configuration: Custom
Datastore: Local VMFS volume
VM Version: Virtual Machine Version: 7
Guest Operating System: Microsoft Windows Server 2003 R2 x32; Linux
CPUs: 1
Memory: 2GB
Network: 2 adapters: 1 on the Management network, 1 on the iSCSI network
SCSI Controller: LSI Logic Parallel
Select a Disk: Create a new virtual disk
Create a Disk: Variable size based on OS and standard practices; store with the virtual machine
Advanced Options: Node SCSI (0:0)

Once the VM has been created and the OS installed, power it on and apply any updates and patches. It is expected that the private iSCSI network will not have DHCP services; decide on a networking schema for the iSCSI network and assign a static IP address to the CMC VM. You should consider installing a DNS server on the private iSCSI network. It is recommended that you minimize access to this VM. Since this is the access point to the internal iSCSI network, all efforts should be made to eliminate unneeded services, close off any unneeded ports and prevent non-critical users from accessing the system on the management network. If accessing the host via Microsoft RDP, port 3389 should be open.

Configure the P4000sb storage blades
Prior to installing the CMC, you can configure the P4000sb storage blades. These are found in enclosure device bays 7, 8, 15 and 16. For each node, do the following. Log in to the iLO and launch a remote console to the blade. This will give you local access to keyboard, video and mouse for the blade. At the initial login screen, type the word start and press Enter. Press Enter when the blue < Login > screen appears. A menu entitled Configuration Interface will appear.


Highlight the Network TCP/IP Settings menu item and press Enter. Highlight the < eth0 > adapter and press Enter. Enter a hostname and choose to manually set the IP address by highlighting Use the following IP address. You will need to enter an address for this adapter. Press OK when done. Return to the main menu by highlighting back and pressing Enter. Highlight < Log out > and press Enter. Repeat for the remaining three nodes.

Installing the P4000 Failover Manager
A Failover Manager (FOM) should be installed on the local disks of any hypervisor host in the management cluster. The Failover Manager protects the SAN by maintaining quorum in the event more than one node fails. The FOM must stay on the local disks and be connected to the iSCSI network. The FOM is available on the DVD media shipped with the P4800. The FOM should be the first node in the management group to be brought up and the last to be shut down.

Install the CMC in the dual-homed VM
In order to configure the storage, it will be necessary to install the Centralized Management Console in the VM you created. The executable for the CMC is included with the software provided with your P4800 SAN. Run the executable and follow the instructions to complete the install. Once the installation is complete, the Find Nodes Wizard will launch.

Configuring the SAN using the Centralized Management Console VM


Understanding the SAN hierarchy
With P4000 SANs, there is a hierarchy of relationships between nodes and between SANs that should be understood. In order to create a P4800 SAN, you will need to define the nodes, a cluster and a management group. A node, in P4000 SAN parlance, is an individual storage server. A cluster is a group of nodes that, when combined, form a SAN. A management group houses one or more clusters/SANs and serves as the management point for those devices.


Detecting nodes
You will locate the four (4) P4000sb nodes that you just configured. Figure 38 shows the initial wizard for identifying nodes. Click on the Next button to proceed.

Figure 38. CMC find nodes wizard


Click the radio button to search globally and then click on Next as in Figure 39.

Figure 39. Find nodes by subnet and mask


At the next screen, enter the subnet mask and subnet of the private iSCSI network. Click on OK and then Finish when done. The nodes should appear in the CMC under Available Nodes.

Figure 40. CMC find nodes wizard

Once you have validated that all nodes are present in the CMC you can move on to the next section to create the management group.


Creating the management group
When maintaining an internal iSCSI network, each rack must have its own CMC and management group. The management group is the broadest level at which the administrator will manage and maintain the P4800 SAN. To create the first management group, click on the Management Groups, Clusters, and Volumes Wizard at the Welcome screen of the CMC, as shown in Figure 41.

Figure 41. CMC Welcome screen


Click on the Next button when the wizard starts. This will take you to the Choose a Management Group screen as in Figure 42. Select the New Management Group radio button and then click on the Next button.

Figure 42. Choose a Management Group screen


This will take you to the Management Group Name screen. Assign a name to the group and ensure all four (4) P4000sb nodes and the FOM are selected prior to clicking on the Next button. Figure 43 shows the screen.

Figure 43. Name the management group and choose the nodes

It will take quite a bit of time for the management group creation to complete. You can use Appendix A to record the IP addresses of the P4000sb nodes as well as the planned IP address for your cluster. When the wizard finishes, click on Next to continue.


Figure 44 shows the resulting screen where you will be asked to add an administrative user.

Figure 44. Creating the administrative user

Enter the requested information to create the administrative user and then click on Next. You will have the opportunity to create more users in the CMC after the initial installation. At the next screen, enter an NTP server on the iSCSI network, if one is available, and click on Next; if unavailable, set the time manually. The next screen will begin the process of cluster creation described in the following section.


Create the cluster
At the Create a Cluster screen, select the radio button to choose a Standard Cluster as in Figure 45. Click on the Next button once you are done.

Figure 45. Create a standard cluster


At the next screen, enter a Cluster Name as in Figure 46 and ensure all 4 nodes are highlighted. Click on the Next button.

Figure 46. Name the cluster


At the next screen, you will be asked to enter a virtual IP address for the cluster as in Figure 47.

Figure 47. Select a virtual IP address

Click on Add and enter an IP address and subnet mask on the private iSCSI network. This will serve as the target address for your hypervisor-side iSCSI configuration. Click on Next when done. At the resulting screen, check the box in the lower right corner labeled Skip Volume Creation and then click Finish. You will create volumes in the next section. Once done, close all windows. At this point, the sixty (60) day evaluation period for SAN/iQ 8.5 begins. You will need to license and register each of the 4 nodes within this sixty (60) day period.


Configuring hypervisor hosts


Figure 48 highlights the naming of the individual hosts within the enclosure. The remainder of this document will reference these names. Refer to this diagram for any questions about host locations.

Figure 48. Individual host naming for this document


Installing the hypervisor
For the remaining hypervisor hosts (device bays 2-6 and 9-14), install the hypervisor as you did in the section entitled Installing the first hypervisor host and CMC. This includes assigning the same networks as were assigned to host 1. On each host, click on the Configuration tab in the VIC, then Storage Adapters, and highlight the software-based iSCSI adapter as in Figure 49. Click on the Properties link for the adapter.

Figure 49. Highlighting the iSCSI software adapter in the VIC


A new window appears as in Figure 50. Click on the Configure button to proceed.

Figure 50. Properties window for the iSCSI adapter

When the new window appears, click on Enabled and then click OK to continue. Click on the Configure button again. The window in Figure 51 should appear.

Figure 51. Configure window

There is now an iSCSI name attached to the device. You may alter this name to make it simpler to remember by eliminating the characters after the colon (:), or you may leave it as is. Record this name in Appendix A; you will use these names in the section entitled Configuring storage for hypervisor hosts.


From the Properties window, click on the Dynamic Discovery tab and then on Add as in Figure 52.

Figure 52. Adding a target

Enter the IP address of your cluster in the window as in Figure 53.

Figure 53. Defining the cluster IP

If you are using CHAP (Challenge Handshake Authentication Protocol), you should configure it now. Click on OK when you have finished and close out the windows. When you close the window, you will be asked to rescan the adapter; skip this step for now.


Repeat this process for each hypervisor host.
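If you would rather script this step from the ESX service console than repeat the VIC steps on every host, something along the lines of the sketch below may help. The esxcfg-swiscsi, vmkiscsi-tool and esxcfg-rescan invocations it prints are assumptions based on the classic ESX 4.0 toolset and should be verified against your ESX documentation before use; the cluster virtual IP and adapter name are placeholders.

# Print (do not execute) candidate service-console commands for enabling the software
# iSCSI initiator and adding the P4800 cluster virtual IP as a dynamic discovery target.
# Command names and flags are assumptions for classic ESX 4.0; verify before running.
CLUSTER_VIP = "10.0.0.50"   # placeholder: use the virtual IP you assigned to your cluster
SW_ISCSI_HBA = "vmhba33"    # placeholder: check the software iSCSI adapter name in the VIC

commands = [
    "esxcfg-swiscsi -e",                                  # enable the software iSCSI initiator
    f"vmkiscsi-tool -D -a {CLUSTER_VIP} {SW_ISCSI_HBA}",  # add the SendTargets (dynamic discovery) address
    f"esxcfg-rescan {SW_ISCSI_HBA}",                      # rescan once volumes have been assigned
]
print("\n".join(commands))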

Configuring storage for hypervisor hosts


Configuring the P4800 and the hypervisor hosts for iSCSI communication is a two-part process. Each hypervisor host must have its software-based iSCSI initiator enabled and pointed at the target address of the P4800. The P4800 must have each host that will access it defined in its host list by a logical name and an iSCSI initiator name. This section covers the configuration of servers within the CMC. At a minimum, the management hosts should be configured for SAN connection as per the Installing the hypervisor section before proceeding to the volume creation section. From the CMC virtual machine, start a session with the CMC as the administrative user. Highlight the Management Group you created earlier and log in if prompted. Right click on Servers in the CMC and select New Server as in Figure 54.

Figure 54. Adding a new server from the CMC


The resulting window as in Figure 55 appears.

Figure 55. The New Server window in the CMC

Enter a name for the server (the hostname of the server works well), a brief description of the host and then enter the initiator node name you recorded in Appendix A. If you are using CHAP you should configure it at this time. Click on OK when done. Repeat this process for every hypervisor host that will attach to this SAN.

Creating the initial volumes for management


You will need to create an initial set of volumes to house your management VMs. The overall space to be dedicated will vary; 1TB of space is suggested and is used in this document. For this example, three (3) volumes of 300GB and one (1) volume of 50GB will be created. From the CMC, ensure that all management servers have been properly defined in the Servers section before proceeding. The volumes you are about to create will be assigned to these servers after the vCenter virtual machine has been created; in this section they will be assigned to the first management server.
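As a quick check that this example fits the suggested budget, the volumes described here total 950GB of the roughly 1TB set aside:

# Management volume budget check for the example in this section.
volumes_gb = [300, 300, 300, 50]   # three 300GB management volumes plus the 50GB ThinApp volume
print(f"Total management space requested: {sum(volumes_gb)} GB of the suggested ~1TB")  # 950 GB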


From the CMC, expand the cluster as in Figure 56 and click on Volumes (0) and Snapshots (0).

Figure 56. Volumes and snapshots in the CMC

Click the drop down labeled Tasks. From the drop down, select the option for New Volume as in Figure 57.

Figure 57. Volumes Create a new volume


In the New Volume window under the Basic tab, enter a volume name and short description. Enter a volume size of 50GB. This volume will house ThinApp executables. Figure 58 shows the window.

Figure 58. New Volume window

Once you have entered the data, click on the Advanced tab. As in Figure 59, ensure you have selected your cluster and RAID-10 replication, and then click the radio button for Thin Provisioning.

Figure 59. The Advanced tab of the New Volume window

Click on the OK button when done. Repeat this process to create three (3) 300GB management volumes. One will house management servers, one will house OS images, templates and test VMs, and the other will serve as a production test volume for testing the deployment of VMs as well as testing patch application and the functionality of new master VMs.

When all volumes have been created, record their names in Appendix F of this document and then return to the Servers section of the CMC under the main Management Group. You will initially assign the volumes you just created to the first management host; in this document this host is in device bay 1. Right click on your first management server as in Figure 60 and choose Assign and Unassign Volumes.

Figure 60. Server options


The window in Figure 61 appears.

Figure 61. Server options

Click on the check boxes under the Assigned column for all of the volumes you just created to assign them to this host. You will repeat these steps when you create your other volumes after the vCenter server has been created.

NOTE: You may script the creation and assignment of volumes using the CLIQ utility shipped on your P4000 software DVD.
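As a starting point for such a script, the sketch below builds CLIQ command lines for creating and assigning a thinly provisioned volume. The cliq createVolume and assignVolumeToServer commands and their key=value parameters are assumptions based on typical P4000 CLIQ usage; confirm the exact syntax against the CLIQ documentation on your P4000 software DVD before running anything, and substitute your own management group address, credentials, cluster, volume and server names.

# Build (but do not execute) candidate CLIQ commands for scripted volume creation and
# assignment. Command and parameter names below are assumptions; verify against the
# CLIQ documentation shipped with the P4000 software DVD.
MGMT_GROUP_IP = "10.0.0.60"            # placeholder management group address
USER, PASSWORD = "admin", "password"   # placeholder credentials
CLUSTER = "VDI-Cluster01"              # placeholder cluster name

def create_volume(name: str, size_gb: int) -> str:
    return (f"cliq createVolume volumeName={name} clusterName={CLUSTER} "
            f"size={size_gb}GB thinProvision=1 "
            f"login={MGMT_GROUP_IP} userName={USER} passWord={PASSWORD}")

def assign_volume(name: str, server: str) -> str:
    return (f"cliq assignVolumeToServer volumeName={name} serverName={server} "
            f"login={MGMT_GROUP_IP} userName={USER} passWord={PASSWORD}")

print(create_volume("MGMT-Servers01", 300))
print(assign_volume("MGMT-Servers01", "mgmt-host-01"))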


Use the VIC to attach to management host 1. Under the Configuration tab click on the Storage Adapters link in the left menu and then click on Rescan as in Figure 62. Click on OK when prompted to rescan.

Figure 62. VIC storage adapters screen

This will cause the host to see the volumes you just assigned to it in the CMC. To make these volumes active you will need to format them as VMFS. Under the Configuration tab in the VIC, click on Storage as in Figure 63.

Figure 63. VIC storage screen

Click on Add Storage as in the figure. Follow the onscreen instructions to create the datastores on this host. Use all available space per volume. When you have finished, your storage should appear similar to that shown in Figure 64.


Figure 64. Storage for management server 1

The next section serves as a guideline for you to understand the overall volume layout for the POD. You should not create and assign the volumes listed in that section until after you have built your management servers.

Volume layout
This section suggests layouts for all volumes on the SAN. Use the process from the prior section or script the creation of these volumes. All volumes on the P4800 should be created as thinly provisioned; either select this option manually or script it during creation. The number, size and layout of volumes will vary based on user type. Ensure that you understand the consequences of changing volume sizes for linked clones and user data storage prior to making changes. This document assumes that user data will be housed on separate filesystems. This assists with system maintenance, availability and the management of filesystem growth. The document further assumes that maintenance plans for the recomposition of clones will be aggressive and that VMs will be patched and rebuilt on a regular basis. Use Appendix F of this document to record all volume names and VMware cluster assignments.

Personality volumes
Personality volumes should be stored independently on volumes sized to account for the size of users' local profiles. Volume size should be minimized, as monolithic volumes may result in performance degradation. As an example, if a survey of local profiles finds the average local profile among selected users to be 500MB, a personality volume of 200GB could be used to support 400 users. Ensure that the volume size adequately addresses space requirements for all volumes.

Replica / clone volumes
There are numerous factors to consider when sizing volumes for replicas and clones, including the size of the base image and the frequency with which clones will be refreshed. This section defers size considerations to VMware's documentation (http://www.vmware.com/products/view/resources.html) rather than making specific recommendations. It does, however, specify ratios that will need to be considered when laying out volumes for replicas and clones. For each VMware cluster, it is recommended that 5-8 hosts be used as the basis for the pool that will be created. Each volume associated with a cluster will house between 16 and 32 linked clones.


Note: VMware recommends up to 64 linked clones per volume. HP strongly recommends 16 or 32 linked clones per volume based on testing of the P4800.

Using the prior sizing recommendations we can derive volume counts. For a five (5) node cluster where each host runs 66 virtual desktops, the total virtual machine count is a simple multiplication:

Total VMs per cluster = hosts per cluster x virtual desktops per host

In the above example that is 5 x 66, or 330 virtual machines. The volume count is then a matter of dividing the total number of VMs by the number of linked clones per volume and rounding up:

Volumes per cluster = total VMs per cluster / linked clones per volume

In our example this equates to 330 VMs divided by 16 VMs per volume, for a total of 21 volumes per cluster. Once again, consult VMware's View documentation for determining volume sizing. A short sketch of these calculations appears at the end of this section.

Management volumes
A volume will need to be created to house the management server virtual machines. A single 1TB volume or two (2) 500GB volumes are generally adequate, with the assumption that an external SQL Server instance is available outside of the enclosure. The next section, titled Set up management VMs, covers the suggested VMs to be created.

Fileshare volumes
This paper suggests that user data be housed on separate fileshares. In the event you choose to host these from management virtual machines within the enclosure, you should weigh the costs and benefits. Such shares are highly available as virtual machines and, due to the way they are networked, are also highly performant and cost effective. However, for some system maintenance activities it may be necessary to take the shares offline.
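A minimal sketch of the two calculations referenced in this section (users per personality volume and linked-clone volumes per cluster) is shown below, using the example figures from the text; substitute your own profile sizes, desktop densities and clones-per-volume choice.

import math

# Personality volume example from this section (decimal GB, as in the text).
avg_profile_mb = 500
personality_volume_gb = 200
users_per_personality_volume = personality_volume_gb * 1000 // avg_profile_mb
print(f"Users per {personality_volume_gb}GB personality volume: {users_per_personality_volume}")  # 400

# Linked-clone volume count example from this section.
hosts_per_cluster = 5
desktops_per_host = 66
clones_per_volume = 16   # HP recommends 16 or 32 linked clones per volume on the P4800
total_vms = hosts_per_cluster * desktops_per_host                 # 330
volumes_per_cluster = math.ceil(total_vms / clones_per_volume)    # 21
print(f"VMs per cluster: {total_vms}, volumes per cluster: {volumes_per_cluster}")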

Set up management VMs


In this document, each management server, with the exception of Microsoft SQL Server, is housed in a virtual machine. You may choose to virtualize SQL Server as well; it is not virtualized in this configuration because the assumption is that a large, redundant SQL Server entity exists on the management network that can handle the stacking of the databases required for this configuration. Keeping the management components virtualized ensures that the majority of the overall architecture is made highly available via standard VMware practices and reduces the server count required to manage the overall stack. Each of the following management servers should be created based on the best practices of the software vendor. The vendor's installation instructions should be followed to produce an optimized VM for the particular application service. These VMs, along with the CMC VM that was created first, should be created on the first management host in the 300GB management servers volume.

VMware vCenter/View Composer
vCenter is the backbone for the management of all virtualized resources. The vCenter server, when complete (post deployment), will be at almost half of its recommended maximum capacity. When sizing this virtual server it is important to consider the enterprise nature of the overall configuration. It should also be noted that the individual vCenter virtual machines in enterprise deployments are expected to be federated at a higher level; this also needs to be considered when designing the virtual machine.


View Manager
VMware View Manager is the central component that determines how desktops are delivered in a VDI environment. HP's reference platform is capable of supporting very large numbers of users. You should consult VMware's documentation in order to determine proper VM sizing; an approach that seeks to maximize overall scaling is strongly recommended. Documentation can be found at http://www.vmware.com/products/view/resources.html. You should consider any and all recommended methods for maximizing not only performance, but also availability and security. Creating load balanced, highly available View Manager nodes between racks is recommended as a way to maximize availability and uptime.

Fileshare hosts
Three fileshare VMs have been created for demonstration purposes rather than utilizing external storage. One host will be used to share VMware ThinApp application executables and should be sized according to VMware's best practices documentation available at http://www.vmware.com/products/thinapp/related-resources.html. The remaining two hosts will be used to host user data fileshares and should be configured based on OS version best practices. At a minimum, it is recommended that the hosts be granted two vCPUs and 3GB of memory for 32 bit operating systems; 64 bit operating system VMs should be sized with at least 4GB of memory. If you decide to house user data on VM fileshares, the hosts will need to be part of the production domain.

Systems Insight Manager
HP Systems Insight Manager is the clear choice for managing HP servers and storage by being the easiest, simplest and least expensive way for HP system administrators to maximize system uptime and health:
- Provides hardware level management for HP ProLiant, Integrity, and HP 9000 servers, HP BladeSystem, HP StorageWorks Modular Smart Array (MSA), Enterprise Virtual Array (EVA), and XP storage arrays
- Integrated with HP Insight Remote Support Advanced, provides contracts and warranty management, and automates remote support
- Enables control of Windows, HP-UX, Linux, OpenVMS and NonStop environments
- Integrates easily with the Insight Control and Insight Dynamics suites, enabling you not only to proactively manage your server health, whether physical or virtual, but also to deploy servers quickly, optimize power consumption, and optimize infrastructure confidently with capacity planning
HP recommends that Systems Insight Manager be installed to monitor components within the reference platform and beyond.

AppSense
HP recommends customers implement AppSense Environment Manager for policy and personalization management in a VDI environment. AppSense technology enables VDI users to experience a consistent and personalized working environment regardless of where and how they access their virtual desktop. Since AppSense dynamically personalizes virtual desktops on access, virtual desktop image standardization is possible, significantly reducing back-end management and storage costs. AppSense Environment Manager combines company policy and users' personal settings to deliver an optimum virtual desktop without cumbersome scripting and the high maintenance of traditional profiling methods. Capabilities such as profile migration, personalization streaming, self-healing and registry hiving combine to provide an enterprise-scalable solution managing all aspects of the user in a VDI environment. For more information on best practice user environment management in VDI, please visit http://www.appsense.com/solutions/virtualdesktops.aspx. Appendix E of this document makes some recommendations around best practices for utilizing AppSense.

Migrate the CMC
Once all management volumes and servers have been created and assigned, right click on the CMC VM from within vCenter. Click on Migrate. Perform a storage migration by highlighting the Change datastore radio button and click Next. Choose the appropriate SAN management volume as the destination and click on Next. Choose to use Same format as source and then click Next. Click on Finish. The CMC is now housed on shared storage.

Failover Manager
You must leave the FOM on the local disk of a hypervisor host. In the event that the host goes down, the restart/repair of that host will be required in order to bring the FOM back to a functional state.

Creating the clusters


For this single enclosure configuration, you will create three clusters within vCenter. All of these clusters should be housed under one datacenter. For each cluster, HA and Distributed Resource Scheduler (DRS) should be turned on and VMware's best practices should be followed. The recommended clusters are shown in Table 10. Cluster names are completely at your discretion and should conform to the guidelines of your IT shop.
Table 10. VMware clusters

Management cluster (bays 1 and 9): houses management VMs and serves as a test bed for VMs, templates and patches prior to deployment.
VDI001 (bays 2-6): first group of VDI hosts.
VDI002 (bays 10-14): second group of VDI hosts.

The management VMs will show up in the management cluster once the management hosts are added.


Figure 65 shows the first 4 hosts configured and entered into vCenter.

Figure 65. vCenter Datacenter, Cluster and Hosts

Creating the View environment


Once all management pieces, hypervisor hosts and storage volumes are in place, the process of creating a functional VMware View environment can begin. This document does not make any further recommendations around optimizations for this environment. HP recommends you consult VMware's Reference Architecture for VMware View, found at http://www.vmware.com/products/view/resources.html.

Scaling to other enclosures


The previous sections outlined the configuration and setup for a single enclosure and single P4800. For task and productivity workers it is expected that configurations will extend into a second and in some cases a third enclosure. This section discusses how to add an enclosure into the existing configuration and cable it appropriately.


Enclosure location
Prior to configuring the second and possibly third enclosures, it is necessary to place them within the rack. Figure 66 highlights the location of the new enclosures within the rack.

Figure 66. Additional enclosure placement within a 42U rack


After the enclosure has been added you will need to cable the Virtual Connect Flex-10 modules to allow the formation of a Virtual Connect domain and the inter-enclosure communication path. Figure 67 shows the cabling for a two (2) enclosure domain.

Figure 67. Two enclosure Virtual Connect Flex-10 cabling. Left diagram uses CX-4 cabling. Right diagram uses SFP+ cabling

Figure 68 shows cabling for a three (3) enclosure domain. Note that in both configurations you may choose to use CX4 cabling to link enclosures. CX4 connectors are tied to uplink 1 on the Virtual Connect Flex-10 modules; if you use CX4, you will not be able to connect an SFP+ cable or fibre connection to uplink 1.

Figure 68. 3 enclosure Virtual Connect Flex-10 cabling. Red cables represent CX4 cabling.

Within the Virtual Connect management console you will need to make changes as well. Rather than a single enclosure definition for Virtual Connect, you will need to create a domain that includes all modules and enclosures. It is highly recommended that you read the Virtual Connect Multi-Enclosure Stacking Reference Guide located at http://www.hp.com/go/virtualconnect. It is recommended that each pair of Virtual Connect Flex-10 modules have its own set of redundant links to the network core for the redundant VLAN-tagged management, VM and vMotion networks. Consult Appendix G for a single enclosure example.


Scaling beyond the enclosure


The Enterprise POD concept is designed to scale as a set of individual units. Once the first POD has been built, deployed and validated, there is a resultant set of discoveries that shape a deployment and management practice. Chief among these is the number and types of users that the overall solution supports in a given environment from all aspects: storage, network, server and client. To scale from there, you build another POD. One advantage of this approach is that contention points should become obvious before they are encountered. As an example, if an individual POD is found to use 10% of outbound network capacity, the architects can begin to plan for either expansion of the network or a scaling limit at 7, 8 or perhaps even 9 PODs (a small sketch of this headroom calculation follows after the list below). With multiple PODs it is recommended that the overall solution become as distributed as possible. This includes doing the following:
- Placing load balancing in front of all servers running VMware View Manager to enable the distribution and load balancing of connections.
- Mixing user types across solutions where possible to maximize uptime for all user types.
- Centralizing data on other filesystems and replicating those filesystems between locations.
- Federating vCenter servers and other management pieces to define larger management units for the overall solution.
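To make the network headroom example above concrete, the sketch below turns a measured per-POD utilization into a planning limit; the 10% per-POD figure comes from the example in the text, and the 20% headroom target is an illustrative assumption.

# Derive a POD count limit from the share of a resource each POD consumes.
per_pod_utilization_pct = 10   # example from the text: each POD uses 10% of outbound network capacity
reserved_headroom_pct = 20     # assumption: keep 20% of capacity free for growth and bursts

max_pods = (100 - reserved_headroom_pct) // per_pod_utilization_pct
print(f"Plan to expand the network at or before {max_pods} PODs")  # 8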

Conclusions
This document has provided an overview of solution sizing and discussed the buildout of a single enclosure solution for knowledge workers. While it has attempted to provide a broad range of information about scaling to various workloads, HP understands that not all customer environments are identical and that adaptations need to be made. The POD approach is designed to provide the flexibility needed to make those adaptations. For large scale implementations, HP recommends engaging our trained, experienced resources to provide the optimal implementation experience. For more information about HP service resources, see the links in the For more information section of this document.


Appendix A
Notes and resources
Table A-1. Important IP addresses

Record the IP address and any notes for each component:
OA1: ___.___.___.___
OA2: ___.___.___.___
OA3: ___.___.___.___
VC Manager: ___.___.___.___
P4800 Cluster: ___.___.___.___
P4800 Node 1: ___.___.___.___
P4800 Node 2: ___.___.___.___
P4800 Node 3: ___.___.___.___
P4800 Node 4: ___.___.___.___

Table A-2. iSCSI Initiator names by host device bay

Record the iSCSI initiator name for each host device bay:
Host 1: ____________
Host 2: ____________
Host 3: ____________
Host 4: ____________
Host 5: ____________
Host 6: ____________
Host 9: ____________
Host 10: ____________
Host 11: ____________
Host 12: ____________
Host 13: ____________
Host 14: ____________


Appendix B Virtual Connect technology


Optional Virtual Connect Ethernet and Fibre Channel modules are interconnects that can be deployed in HP BladeSystem c7000 or c3000 enclosures in place of conventional pass-thru or managed-switch modules. Virtual Connect provides server-edge I/O virtualization, creating an abstraction layer between blades and external networks. Now, rather than individual server blades, the local area network (LAN) or storage area network (SAN) sees a pool of blades. On the server side of the abstraction layer, connectivity is achieved via profiles created by the server administrator rather than using default hardware identifiers, that is, Media Access Control (MAC) addresses[2] or World Wide Names (WWNs)[3]. You create profiles for the bays in each HP BladeSystem enclosure rather than tying the profiles to specific blades. Virtual Connect maps physical LAN or SAN connections to these profiles, allowing you to manage connectivity between blades and networks without involving LAN or SAN administrators. Consider the example shown in Figure B-1, where Server Blades A and B are connected to four LANs. When Server Blade A fails, you can move its profile to a bay containing a spare blade to restore availability without waiting for assistance from the LAN administrator.

Figure B-1. After Server Blade A fails, the system administrator moves the desired profile to a bay containing a spare blade

[2] Conventionally used to identify NICs for the LAN
[3] Conventionally used to identify host bus adapters (HBAs) for the SAN


Virtual Connect Flex-10


Virtual Connect Flex-10 technology extends the capabilities of Virtual Connect by allowing a server blade's 10 Gb network ports to be partitioned. A single port now represents four physical NICs (known as FlexNICs) with a combined bandwidth of 10 Gb; to the operating system, each FlexNIC appears to be a discrete NIC with its own driver. Though FlexNICs share the same physical port, traffic between a particular FlexNIC and the Virtual Connect Flex-10 interconnect module is isolated using individual MAC addresses and virtual LAN (vLAN) tags. You can control the bandwidth available to each FlexNIC[4] through the Virtual Connect Manager interface, as shown in Figure B-2.

Figure B-2. Configuring a dual-port NIC with eight FlexNICs

[4] In increments of 100 Mb


In this reference configuration, Figure B-3 shows how the 10Gb network was partitioned into 4 virtual networks and configured for varying bandwidth:
- Management Network: configured for 800Mbps; communication to ESX hosts, iLO processors and the OA as well as interconnects
- Production Network: configured for 1.2Gbps; communication with users and outside network traffic occurs here. This network supports the users' View sessions, Internet and corporate network access
- iSCSI Network: configured for 6Gbps; communication with the iSCSI network occurs here
- vMotion Network: configured for 2Gbps; vMotion traffic occurs across this network

Figure B-3. Flex-10 virtual network configuration

Figure legend: Management Network 800Mb; Production Network 1.2Gb; iSCSI Network 6Gb; vMotion Network 2Gb.
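Since FlexNIC bandwidth is assigned in 100 Mb increments, a proposed partition such as the one in Figure B-3 can be validated with a few lines of script:

# Validate a proposed FlexNIC partition: 100 Mb increments, at most 10 Gb per port.
allocations_mb = {"Management": 800, "Production": 1200, "iSCSI": 6000, "vMotion": 2000}

assert all(mb % 100 == 0 for mb in allocations_mb.values()), "FlexNIC bandwidth is set in 100 Mb steps"
assert sum(allocations_mb.values()) <= 10_000, "A port's FlexNICs cannot exceed 10 Gb combined"
print("Partition is valid:", allocations_mb)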

For more information, refer to http://h18004.www1.hp.com/products/blades/virtualconnect/index.html.


Appendix C Sample enclosure configuration script


This is an automatically generated sample script for a single enclosure with Virtual Connect Flex-10 configuration. You will need to alter settings as appropriate to your environment. Consult the BladeSystem documentation at http://www.hp.com/go/bladesystem for information on individual script options.

#Script Generated by Administrator
#Set Enclosure Time
SET TIMEZONE CST6CDT
#SET DATE MMDDhhmm{{CC}YY}
#Set Enclosure Information
SET ENCLOSURE ASSET TAG "TAG NAME"
SET ENCLOSURE NAME "ENCL NAME"
SET RACK NAME "RACK NAME"
SET POWER MODE REDUNDANT
SET POWER SAVINGS ON
#Power limit must be within the range of 2700-16400
SET POWER LIMIT OFF
#Enclosure Dynamic Power Cap must be within the range of 2013-7822
#Derated Circuit Capacity must be within the range of 2013-7822
#Rated Circuit Capacity must be within the range of 2082-7822
SET ENCLOSURE POWER_CAP OFF
SET ENCLOSURE POWER_CAP_BAYS_TO_EXCLUDE None
#Set PowerDelay Information
SET INTERCONNECT POWERDELAY 1 210
SET INTERCONNECT POWERDELAY 2 210
SET INTERCONNECT POWERDELAY 3 0
SET INTERCONNECT POWERDELAY 4 0
SET INTERCONNECT POWERDELAY 5 30
SET INTERCONNECT POWERDELAY 6 30
SET INTERCONNECT POWERDELAY 7 0
SET INTERCONNECT POWERDELAY 8 0
SET SERVER POWERDELAY 1 0
SET SERVER POWERDELAY 2 0
SET SERVER POWERDELAY 3 0
SET SERVER POWERDELAY 4 0
SET SERVER POWERDELAY 5 0
SET SERVER POWERDELAY 6 0
SET SERVER POWERDELAY 7 240
SET SERVER POWERDELAY 8 240
SET SERVER POWERDELAY 9 0
SET SERVER POWERDELAY 10 0
SET SERVER POWERDELAY 11 0
SET SERVER POWERDELAY 12 0
SET SERVER POWERDELAY 13 0
SET SERVER POWERDELAY 14 0
SET SERVER POWERDELAY 15 240
SET SERVER POWERDELAY 16 240

# Set ENCRYPTION security mode to STRONG or NORMAL.


SET ENCRYPTION NORMAL #Configure Protocols ENABLE WEB ENABLE SECURESH DISABLE TELNET ENABLE XMLREPLY ENABLE GUI_LOGIN_DETAIL #Configure Alertmail SET ALERTMAIL SMTPSERVER 0.0.0.0 DISABLE ALERTMAIL #Configure Trusted Hosts #REMOVE TRUSTED HOST ALL DISABLE TRUSTED HOST #Configure NTP SET NTP PRIMARY 10.1.0.2 SET NTP SECONDARY 10.1.0.3 SET NTP POLL 720 DISABLE NTP #Set SNMP Information SET SNMP CONTACT "Name" SET SNMP LOCATION "Locale" SET SNMP COMMUNITY READ "public" SET SNMP COMMUNITY WRITE "private" ENABLE SNMP #Set Remote Syslog Information SET REMOTE SYSLOG SERVER "" SET REMOTE SYSLOG PORT 514 DISABLE SYSLOG REMOTE #Set Enclosure Bay IP Addressing (EBIPA) Information for Device Bays #NOTE: SET EBIPA commands are only valid for OA v3.00 and later SET EBIPA SERVER 10.0.0.1 255.0.0.0 1 SET EBIPA SERVER GATEWAY NONE 1 SET EBIPA SERVER DOMAIN "vdi.net" 1 ENABLE EBIPA SERVER 1 SET EBIPA SERVER 10.0.0.2 255.0.0.0 2 SET EBIPA SERVER GATEWAY NONE 2 SET EBIPA SERVER DOMAIN "vdi.net" 2 ENABLE EBIPA SERVER 2 SET EBIPA SERVER 10.0.0.3 255.0.0.0 3 SET EBIPA SERVER GATEWAY NONE 3 SET EBIPA SERVER DOMAIN "vdi.net" 3 ENABLE EBIPA SERVER 3 SET EBIPA SERVER 10.0.0.4 255.0.0.0 4 SET EBIPA SERVER GATEWAY NONE 4 SET EBIPA SERVER DOMAIN "vdi.net" 4 ENABLE EBIPA SERVER 4 SET EBIPA SERVER 10.0.0.5 255.0.0.0 5 SET EBIPA SERVER GATEWAY NONE 5 SET EBIPA SERVER DOMAIN "vdi.net" 5 ENABLE EBIPA SERVER 5


SET EBIPA SERVER 10.0.0.6 255.0.0.0 6 SET EBIPA SERVER GATEWAY NONE 6 SET EBIPA SERVER DOMAIN "vdi.net" 6 ENABLE EBIPA SERVER 6 SET EBIPA SERVER 10.0.0.7 255.0.0.0 7 SET EBIPA SERVER GATEWAY NONE 7 SET EBIPA SERVER DOMAIN "vdi.net" 7 ENABLE EBIPA SERVER 7 SET EBIPA SERVER 10.0.0.8 255.0.0.0 8 SET EBIPA SERVER GATEWAY NONE 8 SET EBIPA SERVER DOMAIN "vdi.net" 8 ENABLE EBIPA SERVER 8 SET EBIPA SERVER 10.0.0.9 255.0.0.0 9 SET EBIPA SERVER GATEWAY NONE 9 SET EBIPA SERVER DOMAIN "vdi.net" 9 ENABLE EBIPA SERVER 9 SET EBIPA SERVER 10.0.0.10 255.0.0.0 10 SET EBIPA SERVER GATEWAY NONE 10 SET EBIPA SERVER DOMAIN "vdi.net" 10 ENABLE EBIPA SERVER 10 SET EBIPA SERVER 10.0.0.11 255.0.0.0 11 SET EBIPA SERVER GATEWAY NONE 11 SET EBIPA SERVER DOMAIN "vdi.net" 11 ENABLE EBIPA SERVER 11 SET EBIPA SERVER 10.0.0.12 255.0.0.0 12 SET EBIPA SERVER GATEWAY NONE 12 SET EBIPA SERVER DOMAIN "vdi.net" 12 ENABLE EBIPA SERVER 12 SET EBIPA SERVER 10.0.0.13 255.0.0.0 13 SET EBIPA SERVER GATEWAY NONE 13 SET EBIPA SERVER DOMAIN "vdi.net" 13 ENABLE EBIPA SERVER 13 SET EBIPA SERVER 10.0.0.14 255.0.0.0 14 SET EBIPA SERVER GATEWAY NONE 14 SET EBIPA SERVER DOMAIN "vdi.net" 14 ENABLE EBIPA SERVER 14 SET EBIPA SERVER NONE NONE 14A SET EBIPA SERVER GATEWAY 10.65.1.254 14A SET EBIPA SERVER DOMAIN "" 14A SET EBIPA SERVER 10.0.0.15 255.0.0.0 15 SET EBIPA SERVER GATEWAY NONE 15 SET EBIPA SERVER DOMAIN "vdi.net" 15 ENABLE EBIPA SERVER 15 SET EBIPA SERVER 10.0.0.16 255.0.0.0 16 SET EBIPA SERVER GATEWAY NONE 16 SET EBIPA SERVER DOMAIN "vdi.net" 16 ENABLE EBIPA SERVER 16 #Set Enclosure Bay IP Addressing (EBIPA) Information for Interconnect Bays #NOTE: SET EBIPA commands are only valid for OA v3.00 and later SET EBIPA INTERCONNECT 10.0.0.101 255.0.0.0 1 SET EBIPA INTERCONNECT GATEWAY NONE 1 SET EBIPA INTERCONNECT DOMAIN "vdi.net" 1 SET EBIPA INTERCONNECT NTP PRIMARY 10.1.0.2 1 SET EBIPA INTERCONNECT NTP SECONDARY 10.1.0.3 1 ENABLE EBIPA INTERCONNECT 1 SET EBIPA INTERCONNECT 10.0.0.102 255.0.0.0 2 SET EBIPA INTERCONNECT GATEWAY NONE 2 SET EBIPA INTERCONNECT DOMAIN "vdi.net" 2 SET EBIPA INTERCONNECT NTP PRIMARY 10.1.0.2 2


SET EBIPA INTERCONNECT NTP SECONDARY 10.1.0.3 2 ENABLE EBIPA INTERCONNECT 2 SET EBIPA INTERCONNECT 10.0.0.103 255.0.0.0 3 SET EBIPA INTERCONNECT GATEWAY NONE 3 SET EBIPA INTERCONNECT DOMAIN "" 3 SET EBIPA INTERCONNECT NTP PRIMARY NONE 3 SET EBIPA INTERCONNECT NTP SECONDARY NONE 3 ENABLE EBIPA INTERCONNECT 3 SET EBIPA INTERCONNECT 10.0.0.104 255.0.0.0 4 SET EBIPA INTERCONNECT GATEWAY NONE 4 SET EBIPA INTERCONNECT DOMAIN "" 4 SET EBIPA INTERCONNECT NTP PRIMARY NONE 4 SET EBIPA INTERCONNECT NTP SECONDARY NONE 4 ENABLE EBIPA INTERCONNECT 4 SET EBIPA INTERCONNECT 10.0.0.105 255.0.0.0 5 SET EBIPA INTERCONNECT GATEWAY NONE 5 SET EBIPA INTERCONNECT DOMAIN "vdi.net" 5 SET EBIPA INTERCONNECT NTP PRIMARY 10.1.0.2 5 SET EBIPA INTERCONNECT NTP SECONDARY 10.1.0.3 5 ENABLE EBIPA INTERCONNECT 5 SET EBIPA INTERCONNECT 10.0.0.106 255.0.0.0 6 SET EBIPA INTERCONNECT GATEWAY NONE 6 SET EBIPA INTERCONNECT DOMAIN "vdi.net" 6 SET EBIPA INTERCONNECT NTP PRIMARY 10.1.0.2 6 SET EBIPA INTERCONNECT NTP SECONDARY 10.1.0.3 6 ENABLE EBIPA INTERCONNECT 6 SET EBIPA INTERCONNECT 10.0.0.107 255.0.0.0 7 SET EBIPA INTERCONNECT GATEWAY NONE 7 SET EBIPA INTERCONNECT DOMAIN "" 7 SET EBIPA INTERCONNECT NTP PRIMARY NONE 7 SET EBIPA INTERCONNECT NTP SECONDARY NONE 7 ENABLE EBIPA INTERCONNECT 7 SET EBIPA INTERCONNECT 10.0.0.108 255.0.0.0 8 SET EBIPA INTERCONNECT GATEWAY NONE 8 SET EBIPA INTERCONNECT DOMAIN "" 8 SET EBIPA INTERCONNECT NTP PRIMARY NONE 8 SET EBIPA INTERCONNECT NTP SECONDARY NONE 8 ENABLE EBIPA INTERCONNECT 8 SAVE EBIPA #Uncomment following line to remove all user accounts currently in the system #REMOVE USERS ALL #Create Users add at least 1 administrative user ADD USER "admin" SET USER CONTACT "Administrator" SET USER FULLNAME "System Admin" SET USER ACCESS ADMINISTRATOR ASSIGN SERVER 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,1A,2A,3A,4A,5A,6A,7A,8A,9A,10A,11A ,12A,13A,14A,15A,16A,1B,2B,3B,4B,5B,6B,7B,8B,9B,10B,11B,12B,13B,14B,15B,1 6B "Administrator" ASSIGN INTERCONNECT 1,2,3,4,5,6,7,8 "Administrator" ASSIGN OA "Administrator" ENABLE USER "Administrator" #Password Settings ENABLE STRONG PASSWORDS SET MINIMUM PASSWORD LENGTH 8


#Session Timeout Settings SET SESSION TIMEOUT 1440 #Set LDAP Information SET LDAP SERVER "" SET LDAP PORT 0 SET LDAP NAME MAP OFF SET LDAP SEARCH 1 "" SET LDAP SEARCH 2 "" SET LDAP SEARCH 3 "" SET LDAP SEARCH 4 "" SET LDAP SEARCH 5 "" SET LDAP SEARCH 6 "" #Uncomment following line to remove all LDAP accounts currently in the system #REMOVE LDAP GROUP ALL DISABLE LDAP #Set SSO TRUST MODE SET SSO TRUST Disabled #Set Network Information #NOTE: Setting your network information through a script while # remotely accessing the server could drop your connection. # If your connection is dropped this script may not execute to conclusion. SET OA NAME 1 VDIOA1 SET IPCONFIG STATIC 1 10.0.0.255 255.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 SET NIC AUTO 1 DISABLE ENCLOSURE_IP_MODE SET LLF INTERVAL 60 DISABLE LLF #Set VLAN Information SET VLAN FACTORY SET VLAN DEFAULT 1 EDIT VLAN 1 "Default" ADD VLAN 21 "VDI" ADD VLAN 29 VMOTION ADD VLAN 93 PUB_ISCSI ADD VLAN 110 "MGMT_VLAN" SET VLAN SERVER 1 1 SET VLAN SERVER 1 2 SET VLAN SERVER 1 3 SET VLAN SERVER 1 4 SET VLAN SERVER 1 5 SET VLAN SERVER 1 6 SET VLAN SERVER 1 7 SET VLAN SERVER 1 8 SET VLAN SERVER 1 9 SET VLAN SERVER 1 10 SET VLAN SERVER 1 11 SET VLAN SERVER 1 12 SET VLAN SERVER 1 13 SET VLAN SERVER 1 14 SET VLAN SERVER 1 15 SET VLAN SERVER 1 16 SET VLAN INTERCONNECT 1 1


SET VLAN INTERCONNECT 1 2 SET VLAN INTERCONNECT 1 3 SET VLAN INTERCONNECT 1 4 SET VLAN INTERCONNECT 1 5 SET VLAN INTERCONNECT 1 6 SET VLAN INTERCONNECT 1 7 SET VLAN INTERCONNECT 1 8 SET VLAN OA 1 DISABLE VLAN SAVE VLAN DISABLE URB SET URB URL "" SET URB PROXY URL "" SET URB INTERVAL DAILY 0


Appendix D HP Thin Clients


Selecting the appropriate thin client can help optimize both the user experience and your IT budget; thus, HP offers product families designed to fit the needs of most environments. Table D-1 summarizes HP Thin Client models recommended for the VMware View environment.
Table D-1. Recommended models

Essential Series: ideal for basic, task-oriented applications and terminal services
  Features: Simple and affordable; Marvell Smart ARM-based architecture with HP ThinPro; basic multimedia support (lower resolution, smaller window size); essential peripheral support
  Model: t5325

Mainstream Series: ideal for most business productivity applications
  Features: Enhanced features for mainstream use; choice of Microsoft Windows Embedded Compact (CE) 6.0 or HP ThinPro; media player; terminal emulation; wide peripheral support; secure USB compartment
  Models: t5540 (CE 6.0), t5545 (HP ThinPro)

Flexible Series: rich performance for almost any server-based or remote computing environment
  Features: Powerful, flexible, innovative; choice of Microsoft Windows Embedded Standard (WES) or HP ThinPro; next-generation Intel Atom N280 processor (select models); integrated wireless 802.11a/b/g/n (select models); highly configurable operating system with support for local applications; secure USB compartment and PCIe/PCI expansion option; write filter and firewall; broad peripheral support; dual-monitor support
  Models: t5740 (WES), t5745 (HP ThinPro), t5630w (WES)

Specialty Series: mobile, more secure, with a 13.3-inch diagonal display
  Features: Mobile thin clients; 13.3-inch diagonal LED-backlit HD anti-glare display; integrated wireless; support for local applications
  Model: 4320t

Each HP Thin Client ships with HP Device Manager, which can simplify configuration and administration.


HP Device Manager
HP Device Manager allows you to track, configure, upgrade, clone, and manage up to thousands of thin clients with ease. For greater business agility, Device Manager can dramatically simplify device deployment, task automation, compliance management, and policy-based security management. Benefits include:
- Simplified client administration through an automated management tool and administrative templates
- Device cloning and grouping, policy enforcement, task automation, and system reporting
- Update scheduling and status reporting for all thin clients in the enterprise
- Encrypted traffic and enhanced security through certificates issued between thin clients, gateways, and servers

For more information on Device Manager, refer to http://www.hp.com/go/thinclient.

In addition, for organizations with a mix of thin and traditional desktop clients, HP Client Automation Standard software can provide a single management tool for the entire desktop environment. For more information, refer to http://www.hp.com/go/clientautomation.


Appendix E AppSense Environment Manager


AppSense Environment Manager provides a management framework that allows administrators to segregate user data from the OS and applications while optimizing profiles for size and performance. Data is joined dynamically in a seamless, optimized fashion, streamed on demand as needed rather than carried across the wire in full every time a user logs on or off.

This appendix offers high-level suggestions for user profile and data management in a VDI environment using AppSense Environment Manager; it is not comprehensive. HP recommends consulting the AppSense documentation at http://www.appsense.com/products/environmentmanager. For a well-planned migration and implementation, HP Services can help.

Solution approach
A solution implementing Environment Manager is composed of the following approaches.

Mandatory profiles
A mandatory profile is a read-only profile. As such, it is not prone to corruption and maintains a very manageable size over time. Mandatory profiles load quickly at logon and require near-zero management, typically needing attention only when overall profile changes are implemented. Once created and optimized, the base mandatory profile is assigned to end users from a file share. HP recommends storing the user profile shares in this reference configuration on VM-based file shares with volumes hosted on the P4800; this ensures very fast disk access and optimized network paths, keeping logon performance high.

Policy configuration
Once mandatory profiles are assigned to users, policies must be configured. This includes redirecting various folders to network shares (also hosted on the P4800) and making registry-level changes; a generic sketch of this type of registry change appears at the end of this appendix.

User personalization
With Environment Manager 8.0, AppSense allows user personalization settings to be streamed on demand. With on-demand personalization, only a subset of profile data crosses the wire when a user logs on. Application data is streamed as needed from a centralized SQL Server instance, and only the changes made by the user during a session are carried back at logoff. This finer-grained optimization makes logon times much faster and allows personalization settings to be rolled back in the event of an unplanned change by a user.

Policy enforcement
Assigning policies to groups of users is an exercise in futility if the policies are not enforced. Ensure that user access to locations and applications is appropriately restricted. AppSense Environment Manager Lockdown technology prevents access to files that you do not want touched; it functions not only at a macro level but can also be used to prevent access to application settings, context menus, and shortcut keys.
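To make the registry-level piece of policy configuration concrete, the sketch below shows the type of per-user change involved: redirecting the Documents folder to a network share by updating the User Shell Folders key. This is a generic illustration rather than AppSense's own mechanism; in practice Environment Manager applies such changes through its policy engine, and the share path \\fileserver\profiles\%USERNAME%\Documents is a placeholder.

# Generic illustration only: the kind of per-user registry change a policy
# engine such as Environment Manager applies at logon. The share path is a
# placeholder; in production the change is driven by policy, not a script.
import winreg

FOLDER_VALUE = "Personal"   # value name Windows uses for the Documents folder
REDIRECT_TARGET = r"\\fileserver\profiles\%USERNAME%\Documents"
KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    # REG_EXPAND_SZ lets Windows expand %USERNAME% at logon for each user.
    winreg.SetValueEx(key, FOLDER_VALUE, 0, winreg.REG_EXPAND_SZ, REDIRECT_TARGET)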


Appendix F Volume names


Replicate this page as needed.
Table F-1. Volume names, sizes and cluster assignments

Volume Name                       Size (GB)                Assigned to VMware Cluster


Appendix G Networking
Figure G-1 provides an example of networking to the core.

Figure G-1. Networking to the core


Appendix H Bill of materials


The following bill of materials creates a configuration for knowledge workers as defined in this document. Consult with your HP sales representative or HP reseller to determine the materials needed for larger deployments.

Management servers

QTY  Part No.     Description
2    498357-B21   HP ProLiant BL490c G6 CTO Blade
2    509322-L21   Intel Xeon Processor E5540 (2.53 GHz, 8MB L3 Cache, 80W, DDR3-1066, HT, Turbo 1/1/2/2) BL490c G6 FIO Kit
2    509322-B21   Intel Xeon Processor E5540 (2.53 GHz, 8MB L3 Cache, 80W, DDR3-1066, HT, Turbo 1/1/2/2) BL490c G6 FIO Kit
4    461203-B21   HP 64GB 1.5G SATA NHP SFF SP ENT SSD
24   500658-B21   HP 4GB 2Rx4 PC3-10600R-9 Kit

View hypervisor hosts


QTY  Part No.     Description
10   498357-B21   HP ProLiant BL490c G6 CTO Blade
10   509319-B21   Intel Xeon X5570 (2.93 GHz, 8MB L3 Cache, 95W, DDR3-1333, HT, Turbo 2/2/3/3) BL490c G6 Processor Option Kit
10   509319-L21   Intel Xeon X5570 (2.93 GHz, 8MB L3 Cache, 95W, DDR3-1333, HT, Turbo 2/2/3/3) BL490c G6 Processor Option Kit
20   461203-B21   HP 64GB 1.5G SATA NHP SFF SP ENT SSD
180  500658-B21   HP 4GB 2Rx4 PC3-10600R-9 Kit

Enclosure and interconnects


QTY  Part No.     Description
1    507019-B21   HP BLc7000 CTO 3 IN LCD ROHS Encl
1    413379-B21   HP BLc7000 Single Phase Power Module, FIO
1    517521-B21   6x HP 2400W High Efficiency Hot-Plug Power Supply Bundle, FIO
1    517520-B21   6x HP Active Cool 200 Fan Bundle, FIO
1    456204-B21   HP c7000 Onboard Administrator with KVM Option
2    455880-B21   HP Virtual Connect Flex-10 10Gb Ethernet Module for the c-Class BladeSystem

If more than one enclosure will be in use per POD, consider ordering 591973-B21, which includes licensing for Virtual Connect Enterprise Manager.

Rack and power


QTY  Part No.     Description
1    AF034A       10642 G2 (42U) Rack Cabinet 1200mm Deep - Shock Pallet
1    AF054A       10642 G2 (42U) Side Panels (set of two) (Graphite Metallic)
1    AF009A       HP 10642 G2 Front Door ALL
4    252663-D75   HP 8.3kVA Modular PDU 40 Amp Core Only
2    AF500A       HP Two C-13 PDU Extension Bars
6    AF574A       HP 16A IEC320-C19 to C20 2m/6ft PDU cord, Grey

SAN
QTY  Part No.     Description
1    BM480A       HP StorageWorks P4800 SAN

Services
QTY  Part No.               Description
1    UE602E                 HP BladeSystem c7000 Infrastructure Installation and Startup Service for Blade Hardware and Insight Control Software, Electronic
1    UE603E                 HP BladeSystem c7000 Enhanced Network Installation and Startup Service, Electronic
1    HA124A1-58H (UJ714E)   Base P4000 setup, 1-4 nodes - Same site/building
1    HA124A1-58J (UJ715E)   Base P4000 setup, 1-4 nodes - Multiple sites
1    HA124A1-58K (UJ716E)   Additional 1-4 nodes for an existing P4000 configuration

Thin Clients
QTY  Part No.      Description
660  VU902AT#ABA   HP t5740 Thin Client Intel N280 Atom 2GF/2GR wifi
     VU900AA#ABA   HP t5740 Thin Client Intel N280 Atom 2GF/2GR
     VU899AT#ABA   HP t5740 Thin Client Intel N280 Atom 2GF/1GR
660  NM360A8#ABA   HP Compaq LA1905wg 19-inch Widescreen LCD Monitor
660  EM870AA       HP Quick Release Kit

Software
QTY  Part No.     Description
6    507575-B21   VMware View Premier 100 User
6    507576-B21   VMware View Premier 10 User
660  T9950AA      HP Client Automation Enterprise App Mgr Thn Cli 1-999 SW LTU
660  T9955AA      HP Client Automation Enterprise OS Mgr Thn Cli 1-999 SW LTU
1    TB722AA      HP Client Automation Enterprise PC Mgmt&Migr Ste 1-999 SW LTU
1    417688-B23   HP Insight Control for c-Class BladeSystem, 16 licenses

HP Insight Control for VMware vCenter is included with HP Insight Control.


For more information


HP and Client Virtualization, www.hp.com/go/clientvirtualization
HP and VDI, www.hp.com/go/VDI
HP and VMware, www.hp.com/go/VMware
HP BladeSystem, www.hp.com/go/bladesystem
HP Virtual Connect, http://h18004.www1.hp.com/products/blades/virtualconnect/index.html
HP c7000 Enclosure, http://h18004.www1.hp.com/products/blades/components/enclosures/cclass/c7000/
HP Onboard Administrator, http://h18004.www1.hp.com/products/blades/components/onboard/
HP Insight Control Software, http://h18013.www1.hp.com/products/servers/management/ice/index.html
HP Insight Control Integrations, http://h18000.www1.hp.com/products/servers/management/integration.html
Thin Client Management Services from HP, http://h10134.www1.hp.com/services/thinclientmgmt
http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA2-7923ENW&cc=us&lc=en
http://h10010.www1.hp.com/wwpc/us/en/sm/WF05a/18964-18964-3644431-36462073763975-3646216.html?jumpid=reg_R1002_USEN
HP P4000 SAN page, http://www.hp.com/go/p4000
HP Client Automation, http://www.hp.com/go/clientautomation
HP Services, http://h20219.www2.hp.com/services/us/en/business-it-services.html, http://www.hp.com/large/contact/enterprise/index.html

VMware View 4 documentation and resources:
VMware View, http://www.vmware.com/products/view/resources.html
VMware View Reference Architecture, http://www.vmware.com/resources/techresources/1084
VMware vSphere 4, http://www.vmware.com/products/vsphere
VMware ThinApp, http://www.vmware.com/products/thinapp/related-resources.html

AppSense documentation and resources:
AppSense, http://www.appsense.com
AppSense and VMware View, http://appsense.com/solutions/vmware.aspx

To help us improve our documents, please provide feedback at http://h20219.www2.hp.com/ActiveAnswers/us/en/solutions/technical_tools_feedback.html.

Copyright 2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries. Adobe and Acrobat are trademarks of Adobe Systems Incorporated. 4AA1-9256ENW, Created June 2010
