4AA1-9256ENW
Table of contents

Introduction
  How to accelerate your VMware View adoption
  Software used for this document
Other architectures
  The changing landscape
The platform
Scaling and sizing VDI
  Overview
  Server sizing
  Storage sizing
  Other factors affecting sizing
  POD scaling by user type
Building the platform
  Creating the first POD
  Installing the first hypervisor host and CMC
  Configuring the SAN using the Centralized Management Console VM
  Configuring hypervisor hosts
  Configuring storage for hypervisor hosts
  Creating the initial volumes for management
  Volume layout
  Set up management VMs
  Creating the clusters
  Creating the View environment
  Scaling to other enclosures
  Scaling beyond the enclosure
Conclusions
Appendix A
  Notes and resources
Appendix B Virtual Connect technology
  Virtual Connect Flex-10
Appendix C Sample enclosure configuration script
Appendix D HP Thin Clients
  HP Device Manager
Appendix E AppSense Environment Manager
Appendix F Volume names
Appendix G Networking
Appendix H Bill of materials
For more information
Introduction
How to accelerate your VMware View adoption
Virtual Desktop Infrastructure (VDI), or Hosted Virtual Desktops (HVD), as part of an overall client virtualization strategy offers tremendous benefits to enterprises: enhanced data control and security, lower overall power consumption, assurance of regulatory compliance, enhanced client management capabilities, lower support costs, simplified deployment and patch management, and extended device lifecycles. The growth in VDI adoption is a testament to the reality of many of these benefits.

There have been adoption challenges along the way. The need to bring separate IT suppliers, end users and IT teams together to deploy and manage VDI has led to extended evaluation cycles and sub-optimal configurations, as some groups have brought their favorites to the game rather than the optimal components for a solution. Some organizations are structured in a way that makes justifying VDI difficult, as the cost savings are not reflected in their individual metrics. Planning and sizing suggestions have not always equated to practical and realistic numbers for implementation, making it difficult to hit return-on-investment (ROI) and total cost of ownership (TCO) targets. Perhaps the biggest stigma has been a cost that, at times, is dramatically higher than that of a desktop.

Early on, implementing VDI meant starting with 100 users and working toward 1,000. Large-scale implementations have become reality in the past couple of years, but the approach to scaling and sizing remains the same. This has made it difficult for customers to remain cost effective at both small and large scales. The platform HP presents in this document is designed from the outset to scale to thousands of users, not hundreds. With the release of the HP Converged Infrastructure Enterprise Reference Architecture for VMware View, HP is addressing these issues and clearing barriers to adoption.
Document purpose
This paper introduces the reference architecture and reintroduces the POD scaling concept to a technical audience. It seeks to explain the design and merits of the platform as well as how it is implemented. The paper is intended to be used in conjunction with VMware's reference architectures for VMware View and associated documents (http://www.vmware.com/products/view/resources.html).

Abbreviations and naming conventions
Table 1 lists abbreviations and names used throughout this document and their intended meanings.
Table 1. Abbreviations and names used in this document

Abbreviation   Definition
RDP            Microsoft Remote Desktop Protocol
PCoIP          Teradici PC over IP protocol
VDI            Virtual Desktop Infrastructure
OA             Onboard Administrator
LUN            Logical Unit Number
IOPS           Input/Output Operations per Second
POD            A repeatable building block of servers, storage and networking used to scale the solution
Intended audience
This document is intended for a technical audience. Targeted readers include IT architects, HP and HP Partner services personnel, implementation specialists and IT planners.
Microsoft Windows Server 2003, R2 x32
Management software

Component                                              Software description
VMware vCenter                                         VMware vCenter Server 4.0.0, Build 208111
HP Systems Insight Manager                             HP Systems Insight Manager 6.0
HP StorageWorks P4000 SAN/iQ Centralized Management    HP StorageWorks P4000 SAN/iQ Centralized Management Console (CMC) 8.5
Console
Microsoft SQL Server                                   Microsoft SQL Server 2005 Enterprise Edition, Service Pack 3
View 4 components

Component              Software description
VMware View Manager    VMware View Connection Server 4.0.1
VMware View Composer   VMware View Composer 2.0.1
VMware ThinApp         VMware ThinApp Enterprise, 4.5
Firmware revisions

Component                                            Version
HP Onboard Administrator                             3.0
HP Virtual Connect                                   3.01
HP ProLiant Server System ROM                        Varies by server
Shared SAS Switch                                    2.2.4
HP Integrated Lights-Out 2 (iLO 2)                   1.82
Broadcom NetXtreme II Ethernet Network Controllers   7.08
HP Smart Array P700m                                 2.60
HP StorageWorks MDS600
Virtual machines
Component             Software description
Operating System      Windows XP, latest patches as of test date
Connection Protocol   Microsoft RDP 6
Other architectures
The changing landscape
While the platform in this paper addresses large-scale enterprise deployments, HP is well equipped to handle a variety of deployment scenarios. These range from 12-user micro branches, to departmental deployments of 250-plus users, to small and medium-sized business (SMB) and large branch deployments of 800-plus users, and all points in between. Figure 1 shows HP's applied architectures for a variety of use cases.
Micro branch
HP is enabling the micro branch for VDI. Very small branches have traditionally had issues implementing VDI, typically the result of network pipes that were too small, a lack of local management resources, or both. HP's Micro-Branch Architecture POD alleviates the issues with micro branch deployment.

SMB/branch
VDI is not just for the enterprise. Businesses of all sizes, as well as branch locations of large IT departments, can reap the benefits of centralized management, data control and power efficiency that VDI brings. HP's Branch Architecture POD brings cost-effective virtual desktops to a broader range of use cases.

Departmental
Even the largest IT shops need segmented, simple-to-scale resources. HP is well positioned to handle departmental deployments of thousands of users with PODs built on HP BladeSystem and the HP StorageWorks P4500 G2 SAN. Simple to scale while remaining cost effective, these PODs are a straightforward solution to VDI in the departmental space.

Enterprise
As outlined in this document, HP's approach to enterprise VDI is to make it easy to scale by large numbers of users while keeping management simple and deployment even simpler.

Regardless of your VDI needs, HP offers cost-effective solutions across use cases that embrace a centralized management and control paradigm, helping maximize return on investment by minimizing complexity. HP offers more than just products for VDI. Engaging HP Services is a great way to ensure that you have the broadest possible range of experience and skills to help with your VDI implementation.
The platform
Crossing boundaries
The HP Converged Infrastructure Enterprise Reference Architecture for VMware View is a reflection of the business drivers that led to its creation. Customers repeatedly told HP that VDI as a technology crossed numerous boundaries inside their IT organizations like few other technologies. Deploying and managing VDI took the cooperation of the storage, desktop, virtualization, server/infrastructure and network teams as well as internal support teams. HP as a technology provider is able to cross all of these boundaries. Very early into the VDI wave, HP coordinated efforts across all of these product categories, resulting in a strong product and solution set around VDI. As requests grew for larger and larger VDI implementations, it became evident that HP could use its breadth and depth of products and services to become the premier provider of VDI solutions.

Combining storage, servers, infrastructure and virtualization
From the outset, the POD approach to VDI was meant to help solve the issue of multiple points of contact in the VDI space. While the reference platform is built to easily support traditionally segmented IT departments, it also serves as a catalyst to enable new paradigms of management in the client virtualization space. Figure 2 highlights the platform pieces and the resulting converged infrastructure.
HP BladeSystem c7000 Enclosures
HP Virtual Connect Flex-10 Interconnect Modules
HP ProLiant BL490c G6 Servers
HP StorageWorks P4800 BladeSystem SAN built on HP P4000sb storage blades
HP 3Gb Shared SAS Switches
HP StorageWorks 600 Modular Disk Systems (MDS600s)
HP ProLiant BL490c G6
HP offers a broad range of servers in form factors designed to fit all management environments. HP selected the ProLiant BL490c G6 server with Intel Xeon X5500 processors for this reference platform. The BL490c provides large memory capacity in a small form factor with exceptionally high performance and power/space density, all desirable characteristics for VDI deployments. The half-height form factor provides the added benefit of maximizing cable reduction and minimizing infrastructure costs while maintaining high levels of bandwidth on a per-system and per-virtual machine (VM) basis. The recommended HP ProLiant BL490c G6 servers are outfitted as shown in Table 2. Sizing recommendations in this document are based on this configuration.
Table 2. HP ProLiant BL490c G6 configuration.
Configuration
2 Intel Xeon X5570 processors (2.93 GHz, 8 MB L3 cache)
72 GB PC3-10600R memory (18 x 4 GB DIMMs)
1 HP 64 GB 1.5G SATA NHP SFF SP ENT SSD
Embedded NC532i Dual Port Flex-10 10GbE Multifunction controller
Mezzanine 1 open for use; Mezzanine 2 is reserved and should not be used
The ProLiant BL490c G6 is upgradeable to include a second SSD as well as memory expansion up to 144 GB. Hypervisor hosts for this architecture are customer selectable: you may select any HP BladeSystem ProLiant server blade that appears on the VMware HCL.

StorageWorks P4800 BladeSystem SAN
The HP StorageWorks P4800 BladeSystem SAN takes the convergence of blades and storage to new levels and speaks to the unique capabilities HP has to develop IT platforms. As the foundation for each enterprise POD, the P4800 is not only robust and performant, but also simple to deploy and maintain. It also drives new paradigms for VDI in terms of data storage, security and ownership. Figure 3 shows the P4800 components.
The HP StorageWorks P4800 SAN runs HP P4000 SAN/iQ software, which offers unique SAN scalability and data high availability. The P4800 consists of 140 15,000 rpm Large Form Factor SAS disks of 450 GB each. With over 22 TB of useable space, it is possible to build very large PODs of VDI resources on a highly scalable and replicated basis. The P4800 is robust enough to handle boot storms, login events and sustained write activity.

Figure 4 shows the relationship of each controller to each storage drawer in the MDS600. On a per-controller basis, a P4000sb storage blade uses an HP Smart Array P700m controller connected redundantly through SAS switches to a drawer of 35 disks in an HP StorageWorks MDS600. The disks are broken into 7 groups of 5 disks protected at the array controller level via a RAID 5 configuration. P4000 SAN/iQ software then creates a user-selectable Network RAID configuration across all drawers. Each block written can be replicated two, three or four times to ensure both data integrity and data high availability.
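As a rough sanity check of the capacity figures above, useable space can be estimated from the disk count, the RAID 5 group size and the Network RAID replication level. The sketch below is illustrative only; the function name is ours, and it ignores spares, metadata and formatting overhead.

```python
def usable_capacity_tib(disks=140, disk_gb=450, raid5_group=5, replicas=2):
    """Estimate useable P4800 capacity in binary TB (TiB).

    disks, disk_gb and raid5_group come from the text (140 x 450 GB
    disks in RAID 5 groups of 5); replicas is the Network RAID
    replication level (2, 3 or 4 copies of each block).
    """
    raw_gb = disks * disk_gb                                # 63,000 GB raw
    after_raid5 = raw_gb * (raid5_group - 1) / raid5_group  # one parity disk per group
    after_replication = after_raid5 / replicas              # Network RAID copies
    return after_replication * 10**9 / 2**40                # decimal GB -> binary TiB

print(round(usable_capacity_tib(), 1))            # ~22.9 with 2-way replication
print(round(usable_capacity_tib(replicas=4), 1))  # ~11.5 with 4 copies
```

With two-way replication the estimate lands just under 23 TiB, consistent with the "over 22 TB of useable space" quoted above; higher replication levels trade capacity for additional data availability.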
Storage configuration on a per-volume basis is described later in this document in the section entitled Building the platform. The integrated nature of this approach, along with the simplicity of the P4000 SAN/iQ management interface, may in some IT shops encourage and enable the management of server, infrastructure, hypervisor and storage resources by one person. This can result in greater agility, lower overall management costs and quicker change management.

HP BladeSystem c7000 enclosure
The HP BladeSystem c7000 enclosure has been designed to tackle the toughest problems facing today's IT infrastructures: cost, time, energy, and change. The c7000 enclosure consolidates the essential elements of a datacenter (power, cooling, management, connectivity, redundancy, and security) into a modular, self-tuning unit with built-in intelligence. In addition, this enclosure provides flexibility, scalability and support for future technologies. Figure 5 shows an example of an HP BladeSystem scale-out infrastructure.
For more information on this powerful 10U enclosure, refer to http://h18004.www1.hp.com/products/blades/components/enclosures/c-class/c7000/.

HP Onboard Administrator
The Onboard Administrator for the HP BladeSystem c7000 enclosure is the brains of the c-Class infrastructure. Together with the enclosure's HP Insight Display, the Onboard Administrator has been designed for both local and remote administration of HP BladeSystem c-Class components and provides the following capabilities:

- Wizards for simple, fast setup and configuration
- Highly available and secure access to the HP BladeSystem infrastructure
- Security roles for server, network, and storage administrators
- Automated power and cooling of the HP BladeSystem infrastructure
- Agent-less device health and status
- Thermal Logic power and cooling information and control

Each c7000 enclosure ships with one Onboard Administrator module and firmware. For redundancy, you can add a second unit. For more information on the Onboard Administrator, refer to http://h18004.www1.hp.com/products/blades/components/onboard/.
HP Virtual Connect
Before Virtual Connect, you had two basic choices for interconnects: pass-thru or switch. Pass-thrus are simple but require a large number of cumbersome cables and create complexity; blade switches reduce the number of cables but add to the workload of LAN and SAN administrators. With either option, multiple people are needed to perform even very simple server tasks. Virtual Connect provides a better way to connect your HP BladeSystem c-Class enclosure to network LANs and SANs, allowing you to simplify and converge your server edge connections and integrate into any standards-based networking infrastructure while reducing complexity and cutting your costs.

Rather than tying profiles to specific blades, you create a profile for each of the bays in an HP BladeSystem enclosure. Virtual Connect then maps physical LAN or SAN connections to these profiles, allowing you to manage connectivity between blades and networks without involving LAN or SAN administrators. In addition, if a server blade were to fail, you could move its associated profile to a bay containing a spare blade, thus restoring availability without needing to wait for assistance. For more information, refer to Appendix B Virtual Connect technology.

HP Thin Clients
With much longer life spans than traditional desktop PCs, HP Thin Clients are the ideal client devices for VDI deployments, providing increased security, simplified management, and lower cost of ownership. HP offers a full portfolio of Thin Clients with varying capabilities, enabling IT to optimize both end-user experience and the IT budget by deploying a mix of client devices. Additionally, all HP Thin Clients have been tested and approved for use within VMware View environments.

For this reference platform, HP tested and configured two thin clients: the HP t5740 and the HP t5545. A Mainstream series client, the t5545 is designed for most business productivity applications typically found with Task and Productivity workers.
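The bay-based profile model described above can be pictured as a simple mapping: a profile carries the server's network identity and is bound to an enclosure bay rather than to a specific blade, so recovering from a blade failure is just reassigning the profile to a bay holding a spare. The toy sketch below is ours; names and structure do not reflect the actual Virtual Connect interface.

```python
# Toy model of Virtual Connect server profiles: each profile holds the
# LAN/SAN identity (name, network mappings) and is keyed by enclosure bay.
profiles = {
    "bay1": {"name": "esx-host-01", "networks": ["VDI-Net", "Mgmt-Net"]},
    "bay2": {"name": "esx-host-02", "networks": ["VDI-Net", "Mgmt-Net"]},
}

def fail_over(profiles, failed_bay, spare_bay):
    """Move the failed bay's profile to a spare bay; the replacement
    blade inherits the same identity, so no LAN or SAN changes are
    needed to restore availability."""
    profiles[spare_bay] = profiles.pop(failed_bay)
    return profiles

fail_over(profiles, "bay1", "bay16")
print(sorted(profiles))  # ['bay16', 'bay2']
```

The point of the model is that the LAN and SAN sides never see the swap: the identity moved with the profile, not with the hardware.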
For Knowledge workers who require more advanced applications and media streaming, the t5740's higher processing performance and memory configuration make it the ideal choice.
Table 3. HP Thin Clients

HP t5740                                    HP t5545
Intel Atom N280 processor, 1.66 GHz         VIA Eden processor, 1.0 GHz
Microsoft Windows Embedded Standard 2009    HP ThinPro
2 GB Flash, 2 GB DDR3 SDRAM                 512 MB Flash, 512 MB DDR2 SDRAM
Figure 6 shows the HP t5740 Thin Client with integrated wireless and dual monitors.
For more information on HP Thin Clients, refer to Appendix D HP Thin Clients.

Additional services
To make it easier for you to deploy this solution, HP offers a range of Thin Client Management Services. More information is available from the following sources:

http://h10134.www1.hp.com/services/thinclientmgmt/
http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA2-7923ENW&cc=us&lc=en
http://h10010.www1.hp.com/wwpc/us/en/sm/WF05a/18964-18964-3644431-36462073763975-3646216.html?jumpid=reg_R1002_USEN

HP Client Automation Enterprise
Client virtualization offers many business benefits such as lower costs, shared infrastructure and data security. However, managing these environments can present added complexity, especially for enterprises that end up with a mix of the traditional client environment of desktop and notebook PCs, thin clients and the virtual desktops running in the datacenter. HP's Client Automation software offers a single, integrated management framework for the complete client virtualization environment, including thin clients, the virtual desktop environment and even traditional desktop and notebook PCs, so you can avoid the additional complexity and cost that other solutions may require. HP Client Automation offers:

- Reduced administrative overhead by integrating with leading application virtualization technologies, including Citrix XenApp, Microsoft Application Virtualization (App-V), and VMware ThinApp
- Reduced complexity by automating day-to-day operations to simplify management for traditional as well as virtual client environments, including VMware View and Citrix XenDesktop
- Improved security by managing security, compliance, and vulnerability for your physical end points as well as the virtual desktops running in the datacenter
- Reduced costs by offering a comprehensive client management solution so your IT administrators do not have to learn and use three or four different tools to manage your environment
Overall, HP Client Automation software allows enterprises to rapidly adopt client virtualization and realize its benefits without adding management complexity and cost. For more information on how HP Client Automation can help you, visit http://www.hp.com/go/clientautomation.

HP Insight Control suite
Insight Control suite provides time-smart management software that delivers deep insight, precise control, and ongoing optimization to unlock the potential of your HP ProLiant and BladeSystem infrastructure. Based on HP Systems Insight Manager (HP SIM), Insight Control suite offers comprehensive proactive health management, remote control and patch management as well as rapid server deployment, VM management, and power management in one easy-to-install package. The reference configuration provides the licenses required to take advantage of Insight Control suite. For more information on creating a well-run infrastructure, refer to http://h18013.www1.hp.com/products/servers/management/ice/index.html.

HP Insight Control for VMware vCenter
The vCenter management platform from VMware is a proven solution for managing the virtualized infrastructure, with tens of thousands of customers utilizing it to keep their virtual datacenters operating efficiently and delivering on Service Level Agreements. For years, these same customers have relied on Insight Control from HP to complete the picture by delivering an in-depth view of the underlying host systems that support the virtual infrastructure. While delivering a complete picture of both the virtual and physical datacenter assets, the powerful combination of vCenter Server and Insight Control has, until now, required customers to monitor two separate consoles. The HP Insight Control extension for VMware vCenter Server delivers powerful HP hardware management capabilities to virtualization administrators, enabling comprehensive monitoring, remote control and power optimization directly from the vCenter console.
In addition, Insight Control delivers robust deployment capabilities and is an integration point for the broader portfolio of infrastructure management, service automation and IT operations solutions available from HP. Key capabilities integrated into the vCenter console include:

- Combined physical and virtual view: from a single pane of glass, monitor status and performance of virtual machines and the underlying host systems that support them.
- Integrated troubleshooting: receive pre-failure and failure alerts on HP server components and invoke HP management tools, such as Systems Insight Manager and Onboard Administrator, in context, directly from the vCenter console.
- Powerful remote control: remotely manage and troubleshoot HP ProLiant and BladeSystem servers using HP Integrated Lights-Out Advanced capabilities directly from the vCenter console.
- Proactive power management: get the most out of your existing power envelope by comprehending and proactively managing power for hosts and pools of virtual machines across hosts.

A core component of HP Insight Control, the VMware vCenter Server extension is included with HP Insight Control, which can be purchased as a single license, in bulk quantities, or bundled with HP ProLiant and BladeSystem hardware. Existing Insight Control customers who are under a current Software Updates contract can download this extension free of charge.
VMware View
VMware View value proposition and benefits
Purpose-built for delivering desktops as a managed service, VMware View provides the best end-user experience and transforms IT by simplifying and automating desktop management. Centrally maintaining desktops, applications and data reduces costs and improves security while increasing availability and flexibility for end users. Unlike other desktop virtualization products, VMware View is a tightly integrated end-to-end solution built on the industry-leading virtualization platform, allowing customers to extend powerful business continuity and disaster recovery features to their desktops and standardize on a common platform from the desktop through the datacenter to the cloud. The VMware View solution provides a wide range of benefits:

- Simplify and automate desktop management. VMware View lets you manage all desktops centrally in the datacenter and provision desktops instantly to new users, departments or offices. Create instant clones from a standard image, and dynamic pools or groups of desktops.
- Optimize end-user experience. The VMware View PCoIP display protocol provides a superior end-user experience over any network. Adaptive technology ensures optimized virtual desktop delivery on both the LAN and the WAN. Address the broadest list of use cases and deployment options with a single protocol. Access personalized virtual desktops complete with applications and end-user data and settings anywhere and anytime with VMware View.
- Lower costs. VMware View reduces the overall costs of desktop computing by up to 50% by centralizing management, administration and resources and removing IT infrastructure from remote offices.
- Enhance security. Since all data is maintained within the corporate firewall, VMware View minimizes risk and data loss. Built-in SSL encryption provides secure tunneling to virtual desktops from unmanaged devices or untrusted networks.
- Increase business agility and user flexibility.
VMware View accommodates changing business needs, such as adding new desktop users or groups of users, while providing a consistent experience to every user from any network point.

- Built-in business continuity and disaster recovery. VMware View is built on industry-leading VMware vSphere, allowing you to easily extend features such as High Availability and Fault Tolerance to your desktops without the need to purchase expensive clustering solutions. Automate desktop back-up and recovery as a business process in the datacenter.
- Standardize on a common platform. VMware View includes VMware vSphere and brings all the benefits and enterprise features of the datacenter to the desktop. Extend features such as vMotion, High Availability, Distributed Resource Scheduler and Fault Tolerance to your desktops, providing a built-in disaster recovery and business continuity solution. Optimized specifically for desktop workloads, VMware vSphere is able to handle the high loads associated with desktop operations such as boot and suspend. Standardize your virtualization platform and use a single solution to manage both servers and desktops from the datacenter through to the cloud.
VMware View architecture
VMware View provides Unified Access to virtual desktops and applications running in a central, secure datacenter, accessible from a wide variety of devices. VMware View Composer streamlines image management while reducing storage needs through the use of VMware Linked Clone technology. Figure 7 highlights the architecture.
Server sizing
History
HP is now in its sixth generation of server sizing for VDI. The goal from the beginning has been to provide reliable, conservative sizing estimates that help customers decide whether VDI is the right client virtualization platform for their intended use cases, and to build the business cases needed to adopt VDI as a technology.

Historically, VDI testing has largely been an outgrowth of old server-based computing test paradigms. These tests can be summed up as follows: a script or minimal set of scripts is run, using Microsoft RDP as a connection protocol, against a set of OSs running a very minimal application set on a single host. A base time is established in some fashion for an action or set of actions in the script to complete. More VMs are added to the test, with either the same script or a slightly different set of scripts, until timings for actions have increased to a level deemed unacceptable, generally a percentage time increase.

HP used similar test methods in its first VDI reference architecture release in 2006 and in its second and third test generations. Results of those tests tended to be highly optimistic when viewed in relation to real-world implementations. In seeking to understand where the differences came from when comparing tests and real-world deployments, HP changed test methodologies twice, for the launches of its G5 and G6 servers. HP believes that its test results were much closer to production reality than other published tests, but in adjusting scripts the actions on a per-user basis were not as real-world. For the latest sizing, HP reexamined old test methodologies and in fact changed the underlying assumptions about what performance means in a VDI environment.

Testing today
HP took lessons learned from prior test generations and created the current set of tests for server sizing.
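The legacy percentage-increase criterion described above can be sketched as a small helper: capacity is declared at the largest VM count whose scripted action time stays within a fixed percentage of the single-VM baseline. This is an illustrative reconstruction, not HP's actual test harness; the function name, sample data and 20% tolerance are assumptions.

```python
def capacity_by_degradation(baseline_s, samples, tolerance=0.20):
    """Largest VM count whose scripted action time stays within
    `tolerance` of the single-VM baseline -- the legacy criterion
    under which capacity is reached once timings degrade by a set
    percentage."""
    limit = baseline_s * (1 + tolerance)
    capacity = 0
    for vm_count in sorted(samples):          # ramp VM counts upward
        if samples[vm_count] <= limit:
            capacity = vm_count
        else:
            break                             # degradation threshold crossed
    return capacity

# Hypothetical launch times (seconds) for one action as VMs are added
samples = {8: 2.1, 16: 2.2, 32: 2.4, 48: 2.9, 64: 3.6}
print(capacity_by_degradation(2.0, samples))  # 32 at a 20% tolerance
```

As the text notes, HP moved away from this relative criterion because even actions that pass a percentage test can already feel sluggish to a user; the current methodology uses absolute acceptable timings instead.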
The approach produces what HP believes to be sizings that can be replicated in most customer environments and that should be used for planning from both a business justification and an equipment purchase standpoint. In the HP test scenario, three types of users are defined and tested. These users are designed to match representative, real-world user patterning. The users are defined as follows.

Knowledge workers
A knowledge worker uses a broad application set (in the case of the testbed, up to 17 different applications are available for a user to run). They interact with the VM throughout the entire scope of the test. Applications for this user may demand higher resource utilization; examples include Eclipse, Windows Media Player and Microsoft Visio. This user has multiple applications running at any given point in time.

Productivity workers
A productivity worker uses a fairly broad application set (up to 14 different applications available in our test suite). One worker per test operation is left idle; all other workers interact for the duration of the test. Microsoft Visio is the only application that may demand higher resource utilization, and it is used minimally throughout the test. This user has multiple applications running at any given point in time, and every user in the test leaves Microsoft Outlook open for the entire test duration.
Task workers

A task worker is defined as a user that interacts with a minimal set of applications (Microsoft Office, Zip programs, Internet Explorer and Adobe Acrobat). The user generally has only one or two applications open at a time. One idle user per 60 working users is tested. The majority of work produced is keystrokes. During testing it became clear that the task worker was the easiest to scale, as memory requirements for such a user are low and storage patterns were very simple.

Methodology

Scripts were recorded individually using AutoHotKey, with application launch times noted not just for when the process appears, but for when the user is first able to start interacting with the application. Timings were then increased by between 20% and 100%, depending on the application. Initial timings were judged to be of acceptable performance and were representative of launch times on individual VMs, local desktops and laptops. Increased timings were judged to be sluggish; even slight increases in timing caused the perception that the solution was slow. It is HP's belief that the use of acceptable timings is a better gauge than a simple percentage increase.

There are more than 80 scripts, with 8 tied to task workers, 12 to knowledge workers and the remainder tied to productivity workers. To arrive at a final sizing, a predefined mix of these scripts is run and ESXTOP is used to record server-side performance data. The tests are monitored during playback, and the server sizing is revealed when more than one script fails to execute as designed. In other words, when a script cannot complete as designed, with other scripts running, within the prescribed acceptable timeframe, the server has reached capacity. These failures show up in a variety of forms but are best described as unintended application behaviors. All tests were conducted on an HP ProLiant DL380 G6 with 2 Intel Xeon X5570 processors. The system had 72GB of memory installed during testing.
Results are believed valid for any 2-socket ProLiant server running an identical memory and processor configuration. The OS was VMware ESX 4.0 Update 1 (Build 208167). Virtual machines were based on Windows XP with all patches and service packs released prior to January 2010 applied. A variety of applications were used during testing; HP suggests customers test with their own images and applications to further refine results. All VMs were initially given 1GB of memory, but memory utilization did not end up playing a role in any of the final sizings. Microsoft RDP v6 was the protocol used during the test. Other protocols may have an effect on overall sizing, generally reducing the number of users for a given workload. HP is working to implement other protocols into its testing procedures and will update results as appropriate. All scripts are run after login traffic has had a chance to settle, to eliminate any effect; logins, HA failures and boots are studied separately. It is important to note that these results should not be compared to other results using different test methodologies. Based on experience, it is expected that these results will be valid for HP systems running the same versions of software and configured with the same processors and memory. No lab sizing can be completely matched to a given customer implementation.

Knowledge and productivity workers

These two worker types are grouped together as they end up looking very similar from an overall server sizing standpoint. Differences are observed mostly in storage patterning. Memory utilization, largely a function of many different and concurrent applications being opened, shows little difference, and CPU patterning is close, showing more of a difference in %READY values than in overall utilization. For knowledge and productivity workers, HP recommends that a range of 64-72 users per host (8-9 users per core on the referenced processors) be used for planning purposes.
This number allows HA to function, with an impact to users, in the event another server within the same cluster fails. That impact will generally show up in the form of extended application response times and sometimes sluggish behavior. Consistent single-script failures were observed starting at 66 users per host. Multiple script failures began appearing consistently above 72 users per host.

Note: Some heavy use cases with knowledge workers, such as developers, can substantially reduce per-server user counts. Such environments have been observed to be as low as 4-5 users per core, or 32-40 users per host on the referenced processors, especially when utilizing a lower than recommended memory footprint.

Figure 8 highlights server CPU utilization at 66 users. Note that absolute utilization rarely exceeds 60%.
Figure 8. Server CPU utilization at 66 users per host (CPU %, time in seconds)
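The per-host range above is simply the per-core guidance multiplied across the tested host's core count. A minimal sketch of that arithmetic (the helper name is hypothetical; the two quad-core X5570 configuration is the one described in the testing section):

```python
# Planning-range sketch for knowledge/productivity workers.
# Assumes the tested host: 2 sockets x 4 cores (Intel Xeon X5570).
CORES_PER_HOST = 2 * 4

def users_per_host(per_core_low, per_core_high, cores=CORES_PER_HOST):
    """Scale the per-core planning range up to a whole host."""
    return cores * per_core_low, cores * per_core_high

# 8-9 users per core from the text gives the published 64-72 range.
print(users_per_host(8, 9))  # (64, 72)
```

The same arithmetic applied to other processor configurations would need re-testing; the per-core numbers are specific to the referenced hardware.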
Memory utilization during these runs was different for the two user types, but levels of memory overcommit were high and absolute memory consumption did not play a role. No swapping was observed. Peak memory usage per host was between 40 GB and 45 GB.

Task workers

To emulate task workers, a small set of 8 scripts with a limited application set was used. Applications used were Microsoft Office (Outlook, Word, Excel and PowerPoint), Adobe Acrobat Reader and Microsoft Internet Explorer. For task workers, HP recommends that a number between 100 and 110 users per host (12-14 users per core on the referenced processors) be used for planning purposes. This number allows HA to function, with an impact to users, in the event another server within the same cluster fails. Under test, single-script failures began occurring at 100 users, with multiple script failures appearing at 110 users. CPU utilization is far steadier, as is expected of the less complex script environment. Peak utilization is similar to the 66-user productivity worker run. Figure 9 shows CPU utilization during a 110-user run.
Figure 9. CPU utilization during a 110-user task worker run (CPU %, time in seconds)
Memory utilization stayed relatively constant during the course of the test at around 45 GB. No swapping was observed and memory was not a factor in final sizing. It should be noted that task workers are, by definition, simplistic users. Understanding your users prior to categorizing them is critical to properly sizing servers. It has been HP's experience that most users are closer to the productivity workers defined in this document than to task workers.
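Taken together, the per-host guidance for the three user types can be turned into a rough host-count estimate. The following sketch is illustrative only (the helper names are hypothetical); it uses the conservative low end of each published planning range and adds one spare host per cluster for HA headroom:

```python
import math

# Conservative per-host planning numbers from the text (low end of range).
PLANNING_USERS_PER_HOST = {
    "task": 100,          # 100-110 per host published
    "productivity": 64,   # 64-72 per host published
    "knowledge": 64,      # grouped with productivity for server sizing
}

def hosts_needed(user_counts, ha_spares=1):
    """Hosts to run each population, plus spare capacity for HA failover."""
    hosts = sum(math.ceil(count / PLANNING_USERS_PER_HOST[user_type])
                for user_type, count in user_counts.items())
    return hosts + ha_spares

# Example: 500 task workers and 320 productivity workers.
print(hosts_needed({"task": 500, "productivity": 320}))  # 11
```

Sizing on the low end of each range keeps the estimate conservative, which matches the document's stated goal of reliable, conservative planning numbers.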
Storage sizing
Overview

Storage sizing for VDI can vary based on a number of factors. In HP's experience, per-VM I/O performance ranges from 2.5 to more than 70 IOPS and can exhibit patterns ranging from 20% writes to 97% writes. With such a vast difference in patterning, architecting the right storage solution can be a challenge. This section seeks to characterize three different workloads. The section entitled Other factors affecting sizing will address causes for dramatically different sizing characteristics and offer suggestions on eliminating potential issues.

Methodology

HP has developed a storage sizing tool that allows a minimal number of clients to load large storage systems for sizing purposes. While the technology has a number of uses, it was developed to better understand client workloads in a variety of VDI scenarios. The tool, Spyder, allows HP to take a Fibre Channel trace between any client and LUN in the test harness and record it. The trace can then be played back through an iSCSI storage system by Spyder. The tool replays block by block and reports statistics about variables such as:

Variances in commit times between storage systems
Full statistics for reads, writes, latencies and queues
Per-LUN statistics for any number of targets

The tool has the added advantage of being agnostic as to any factors on the recording side. Traces can be captured as a single live user, a single-function script, multiple scripts, and on any combination of LUNs and VMs. HP applies this methodology both to size P4000 SANs for VDI and to understand and analyze storage patterning in order to optimize its product set to perform in a VDI environment.

Note: Storage sizing will be variable. In its own test environment, HP has found variable levels of read caching. These levels have a large effect on the cumulative IOPS available on the storage system. Your particular workload will have an effect. Sizings given in this section reflect maximums.
The following sections discuss storage patterning for the three major user categories. Overall storage layout and final sizing is diagrammed and discussed in the section entitled Building the platform.
Task workers

To analyze patterning for task workers, a host was loaded with 64 task workers attached to a single LUN. All I/O activity was recorded post-login using a Fibre Channel trace analyzer. The resulting trace was then prepared and played back through the P4800 SAN. The Centralized Management Console performance monitor captured all data. Figure 10 shows read and write traffic patterning. There is an obvious bias toward reads in the test workload, with an overall observed pattern from the Fibre Channel trace of approximately 60% reads and 40% writes.
Figure 10. Overall reads and writes for task workers on a single LUN
Cumulative I/O peaks at just under 800 IOPS for the 64 users under test; Figure 11 reflects this number.
I/O ranges from 2 to 13 IOPS per user, with an overall average of 6 IOPS per user. It should again be noted that this reflects this specific test and a very limited application set. Over the course of the test the average I/O size was 16K with a standard deviation of 4K. HP recommends that up to 2,400 task workers be placed on the HP StorageWorks P4800 BladeSystem SAN. Layout of the storage system is discussed later in this document.
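The per-user averages above translate directly into a cumulative SAN load. A quick sketch (the helper is hypothetical; the figures come from this test):

```python
# Cumulative SAN load for task workers, using the observed figures:
# 6 IOPS/user average, 2-13 IOPS/user range, 2,400 users recommended maximum.
def cumulative_iops(users, iops_per_user):
    return users * iops_per_user

average_load = cumulative_iops(2400, 6)   # typical steady-state demand
worst_case = cumulative_iops(2400, 13)    # if every user hit the observed peak
print(average_load, worst_case)  # 14400 31200
```

At the recommended 2,400-user maximum, average-case demand works out to roughly 14,400 IOPS; the worst case assumes every user simultaneously hits the observed per-user peak, which is unrealistic in practice but bounds the estimate.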
Productivity workers

The productivity worker workload is similar in scope to the task worker workload, but features a much broader application set. Overall I/O patterns for productivity workers based on the test workload show a heavy read bias; overall read/write ratios for a given test run are around 80% reads.
Figure 12. Overall reads and writes for productivity workers on a single LUN
Cumulative I/O, as shown in Figure 13, peaks at almost 1,400 IOPS for the test. IOPS on a per-user basis range from 2 to 21, with an average of 9. Average I/O size is 14K with a standard deviation of 3K.
HP recommends that up to 1,584 productivity workers be placed on an HP StorageWorks P4800 BladeSystem SAN. Layout of the storage system is discussed later in this document.
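Because the productivity workload is roughly 80% reads, most of its cumulative load is cache-friendly. A sketch splitting the demand into reads and writes (the helper is hypothetical; 9 IOPS/user and the 1,584-user maximum come from the text):

```python
# Split the productivity workload's cumulative IOPS into reads and writes,
# using the ~80/20 read/write ratio observed in the trace.
def read_write_split(users, avg_iops_per_user, read_fraction):
    total = users * avg_iops_per_user
    return total * read_fraction, total * (1 - read_fraction)

reads, writes = read_write_split(1584, 9, 0.80)
print(round(reads), round(writes))  # 11405 2851
```

The heavy read fraction is one reason this user type scales further than knowledge workers on the same SAN: read caching absorbs much of the load, as noted in the storage sizing overview.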
Knowledge workers

The knowledge worker script is substantially different in a number of ways from the other two user types. Most importantly, it differs in application set and workload to a degree that shifts the patterns of activity for the LUN considerably. The net effect is that a knowledge worker will be considerably more difficult to scale in a cost-effective manner than a task or productivity worker. Figure 14 shows overall patterns for knowledge workers. It should be noted that the peaks on the graph are from actual application utilization. The most apparent difference is in the overall pattern, with only around 40% of the workload showing up as reads. This heavier write pattern suggests that this workload will need spindles to scale.
Figure 14. Overall reads and writes for knowledge workers on a single LUN
Cumulative I/O, as shown in Figure 15, peaks at almost 4,000 IOPS for the test. IOPS on a per-user basis range from 7 to 59, with an average of 22 IOPS. Average I/O size is 16K with a standard deviation of 4K.
HP recommends that a maximum of 660 knowledge workers be placed on an HP StorageWorks P4800 BladeSystem SAN. It should be noted that sizing for this user type is highly variable.
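Multiplying each user type's recommended maximum by its average per-user IOPS shows that all three limits land near the same cumulative load, consistent with a single underlying SAN budget (this is an inference from the published figures, not a stated specification):

```python
# Recommended maximum users and average IOPS/user per type, from the text.
RECOMMENDED = {
    "task": (2400, 6),
    "productivity": (1584, 9),
    "knowledge": (660, 22),
}

for user_type, (users, avg_iops) in RECOMMENDED.items():
    print(user_type, users * avg_iops)
# task 14400, productivity 14256, knowledge 14520 -- all within ~2%
```

This suggests a simple rule of thumb: divide the SAN's cumulative budget by your measured average per-user IOPS to estimate a user ceiling, keeping in mind that read caching and write bias make the knowledge worker figure the most variable.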
Overview of user characteristics

Table 4 gives a high-level overview of various characteristics by user type observed in the lab. Lab observations are useful in helping to define quantitative user types. HP recommends an engagement with HP Technical Services or a skilled HP Partner to determine how your users fit within these ranges. Averages can be used as planning numbers in environments using linked clones.
Table 4. Characteristics observed in the lab by user type
Characteristic                             Task worker   Productivity worker   Knowledge worker
CPU (Cumulative) at moment of decline      ~60%          ~60%                  ~60%
I/O Pattern (Cumulative) R/W               60/40         80/20                 40/60
IOPS (Per User) Range                      2-13          2-21                  7-59
IOPS (Per User) Average                    6             9                     22
Block Size Average                         16K           14K                   16K
Block Size Standard Deviation              4K            3K                    4K
Application Mix (Number of Apps)           < 10          10+                   15+
Number of Apps Open Concurrently           < 2           Many                  Many
Sizing Difficulty                          Low           Moderate              High
Image

The base image is a common area for unexpected sizing issues. An image that is loaded with scheduled services and extraneous programs, and is generally unoptimized for sharing among a large number of users, is a problem before the first VM is deployed. Such images reduce user counts on a per-server and per-storage-target basis and can raise VDI costs considerably. As an example, consider a virus scan scheduled in a fully provisioned VM (HP recommends that linked clones be used for VDI use cases whenever possible). For extremely dense environments with large numbers of users, scheduling virus scans inevitably means that multiple scans will need to occur at any given point in time. The net effect of having this service built into the image is an almost continuous drop in quality of service as shared resources are strained by the scan.

Application set

The choice of which applications to host in a VDI environment, as well as how to present them to the VM, can have a considerable effect on the overall performance of the VM as well as the characterization of the end users. As an example, consider an application that does continuous aggregation of information when open. Certain development environments will do this as developers write code. The net effect varies depending on where the application resides in relation to the VM. If it is a local client application, there is generally no effect. If the application is streamed into the VM, it is possible that I/O and CPU penalties will be observed on other systems while only minimal overhead is shown inside the VM. Loading the application locally inside the VM would ensure that any overhead is felt within the VM, and any other users on the same shared resources would be affected. In some circumstances, users that require a particularly difficult application may simply need to be removed from consideration as viable for a shared environment.
In other cases, it may mean that the applications need only be removed from the local VM.

Proper user identification

Proper identification of user types is also key to a successful deployment. Many IT shops are unaware of user patterns and may not have an in-depth understanding of exactly what applications are in use and by whom. Making assumptions about user type may in some circumstances be justified. An example is the task worker described in this section: if there is truly only a limited application set and workers do not require a great deal of local storage, then it is safe to call them task workers. However, some job categories may be seen as task workers when in fact the role is far more complex than imagined. IT administrators need to be aware of work patterns to ensure proper sizing. Under all of these circumstances HP can help with your move to VDI. HP Services can be engaged to come into your environment and help you ascertain which users to move into a VDI environment.

Storage contention

In some environments, storage can become a point of contention. This may manifest itself as higher than expected user I/O or vastly different patterning than expected, or it may arise due to a requirement for very large user data space without expanding to network attached storage. Perhaps the most common form of storage contention as it pertains to sizing is choosing users that cannot or will not be moved to a linked clone based implementation. This document does not cover these implementations. The installer should be aware that the effect on overall sizing can be dramatic and the corresponding effect on per-user cost quite large. The HP StorageWorks P4800 BladeSystem SAN is designed to be cost effective relative to other storage solutions in this type of environment.
ProLiant BL490c G6
  Intel Xeon
  72GB memory
  Onboard SSD
  VMware ESX 4
  VMware View
HP StorageWorks P4800 BladeSystem SAN
  Four P4000sb Storage Blades
  Two SAS Switch Modules with cables (rear)
  Two MDS600 enclosures with 140 450GB 15K SAS drives
Task and productivity workers

Figure 17 shows the configuration of a POD for task and productivity workers. This POD houses up to 2,400 task workers or up to 1,600 productivity workers based on the definitions provided in this document. Reducing user counts provides additional headroom for operations and management of the entire stack.
For task workers, the suggested maximum configuration is shown in Figure 18 by cluster layout. The installer may choose to distribute VMware clusters equally across enclosures as well.
Figure 19 shows the maximum suggested configuration for productivity workers. The installer may choose to distribute VMware clusters equally across enclosures as well.
Knowledge workers

Figure 20 shows the scaling for knowledge workers in a VDI POD. Each POD will house 660 knowledge workers based on the definitions provided in this document.
Figure 21 shows the suggested configuration of clusters for knowledge workers. Note that the actual POD that is the basis for scaling is a single BladeSystem enclosure and SAN.
Flex-10 Modules
  Cabled to network core
  10GbE fibre or SFP+ cable
  VLANs for:
    o vMotion network
    o Management network
    o Production (VM) network
If iSCSI traffic will egress the rack under the control of network and storage teams, the recommended cabling method is to cable an extra, dedicated 10GbE link pair to the network core, with one port per Flex-10 module assigned to this network. Do not share this cable and port with any other traffic. Consult Chapter 4 of the HP StorageWorks P4000 SAN Solution User Guide at http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c02063204/c02063204.pdf for instructions on configuring iSCSI networks external to the enclosure. SAS configuration is assumed to be as in Figure 23. The enclosure should arrive cabled for SAS connectivity.
Power up

Prior to beginning any configuration, please ensure that you follow the power-up sequence outlined below in Table 5. Following this power-on sequence will ensure that all components communicate with each other as intended. In the event that the system has been powered up before configuration starts, you may use the power buttons within the Onboard Administrator to power down the SAS switches and P4000sb nodes and reapply power in the correct order (this requires administrator access to the OA).
Table 5. Power on sequence required for initial setup of POD
Sequence  Item               Notes
1         MDS600             Power on each MDS600 fully. MDS600s must be online prior to SAN/iQ. These must be powered on manually. Do not power on the enclosure until these have been powered on for 3 minutes; this allows all drives to spin up. To ensure that these power up first, it may help to unseat the Virtual Connect Flex-10 modules from Interconnect Bays 1 and 2 during initial power-on only.
2         SAS Switches       Set power-on sequence via the OA. Ensure SAS switches are powered up.
3         Flex-10 Modules    Set power-on sequence via the OA.
4         P4000sb            Set power-on sequence via the OA. Ensure that the SAS switches have been powered up for 1-2 minutes prior to powering on.
5         Hypervisor Host 1  Use the OA to set the power-on sequence. If the intent is to isolate iSCSI traffic to within the rack, this host will need to have VMware ESX installed and a VM created on local storage initially to install the CMC. This is covered in the section titled Installing the first hypervisor host and CMC.
6         Hypervisor Hosts   Set power-on sequence via the OA. If you use an external CMC and do not isolate your iSCSI network, you may combine steps 5 and 6.
In the event of catastrophic power failure, you can set the power-on order via the OA (requires OA firmware 3.0 or later). The next section discusses how to perform the configuration.

Configuring the enclosure

Instructions for configuring your enclosure are included with the enclosure and are also available on the web in the BladeSystem documentation. Visit http://www.hp.com/go/bladesystem to find documents. This section concentrates on specifics that relate to the configuration of a POD for VDI using the HP StorageWorks P4800 BladeSystem SAN. A sample script is included in Appendix C of this document to serve as a guideline for scripting the configuration of the enclosure. Handle initial IP address configuration for the enclosure from the Insight Display. After initial setup from the Insight Display on the front of the c7000, the primary objective should be to update the firmware. Install firmware to meet the levels listed in the Firmware revisions section of this document. Firmware to be updated at this point includes the OA, Virtual Connect Flex-10 modules and blade firmware on the hypervisor hosts. Power down your hypervisor hosts once the firmware is upgraded. All P4800 components should be at the correct firmware levels from the factory.
Once firmware is updated, you can set the power on sequence as per the previous section. This is useful for recovery during catastrophic power failures. To do this, log into the OA using the Username Administrator and the Password provided on the tag that came with the OA. Once logged in, expand the Enclosure Information and then Enclosure Settings tabs as shown in Figure 24. NOTE: Setting the power on sequence in the OA is optional. Following proper power on sequence is mandatory. From the left menu, click the link Device Power Sequence.
A series of tabs will appear. Click on the tab labeled Interconnect Bays as in Figure 25. You should see four columns labeled Bay, Device, Enabled and Delay. You will be setting interconnect bays 1 and 2 (your Flex-10 modules) to power on after 3 minutes and 30 seconds and interconnect bays 5 and 6 (your SAS switches) to power on after an initial delay of 3 minutes. This insures that the MDS600 disks have time to spin up. Click the Apply button when done.
Click on the tab labeled Device Bays. Set Device Bays 7, 8, 15 and 16 to power on after 270 seconds. Set all other device bays to No Poweron as in Figure 26. You may change this after initial setup any time or simply leave it at the default. Click Apply.
Finish configuring your enclosure settings. Be sure to change the Administrator password and add an administrative user to the OA.

Configuring SAS switching

All SAS configuration is completed at the factory prior to shipping. Do not make any changes to the SAS configuration in the P4800 enclosure.
Configuring Virtual Connect Flex-10

Prior to completing this section, it is recommended that you read the administrative documentation pertaining to Virtual Connect Flex-10 as well as Appendix B of this document. Documentation can be found at http://www.hp.com/go/virtualconnect. In the left menu of the Onboard Administrator, click on the link to the Virtual Connect Manager. This will launch a new window or tab. Log into the Virtual Connect domain with the username Administrator and the password on the tag of the active Virtual Connect module (generally bay 1). The logon screen is shown in Figure 27.
Once connected, a wizard will start. Figure 28 shows the first wizard screen. Click on the Next button to begin using the wizard. NOTE: This document assumes that you will not be using Virtual Connect assigned MAC addresses.
Enter a Username and Password for an administrator on the local OA on the next screen. Figure 29 shows this screen. Click on the Next button when done.
You will be asked to import the enclosure as in Figure 30. Make sure the radio button Create a new Virtual Connect domain by importing this enclosure is selected, and then click on the Next button. A warning will appear that network access to servers will be disabled until networks and profiles are defined. This is expected, and you may click Yes to complete the import process.
Once the enclosure is imported you will get a confirmation screen. You can click Next to continue. The next step is to configure the name of the Virtual Connect domain as in Figure 31.
Enter a name of up to 31 characters. It is important to note that any other enclosures added to this rack will use this domain name as well. As you scale to more racks each rack will have its own name. As such you may wish to consider a naming scheme that will uniquely identify the rack as a member among a larger group of racks. Click on the Next button once you have assigned the name.
The next screen asks you to create local user accounts. Create at least one alternate administrator account at this time. You will have the ability to add more users later. Be sure once the wizard is complete to change the password for the default Administrator account. It is recommended that you click on the Advanced button and change the settings for default password length to 6 or more characters and to require strong passwords. Once accounts are created click on Next to proceed. You will click on Finish on the next screen to start the Network Setup Wizard. At the Welcome to the Network Setup Wizard screen you can click the Next button to proceed. Figure 32 highlights selecting to use native MAC addresses. Click on Next when done.
At the next screen you as the installer will need to make decisions about how VLANs are handled based on your individual networks. This document assumes that Map VLAN Tags has been selected. If you cannot use this setting you will need to make alterations to uplinks as you go forward.
At the Define Network Connection screen select to create a network with connections carrying multiple networks as in Figure 33.
After clicking on Next, you will be asked to define a shared uplink set as in Figure 34.
Give a name such as External_Conn to this uplink set and assign the 10Gb ports you have connected to the network core to it. You should have one 10GbE port per VC module assigned to the shared uplink set. You will define your management, vMotion and production networks as VLAN tagged networks. Click on the Apply button when done. At the next page, choose to create a new network. As in Figure 35, choose to use a connection with uplinks dedicated to a single network.
You will define your iSCSI network. Do not assign any uplinks to the network. Click on Apply when done. If you have an extra enclosure connected and have been configuring a dual enclosure domain iSCSI traffic will be associated with the stacking links only. Verify that you have defined all networks and click on Next and then Finish. You will now be at the Virtual Connect home screen as in Figure 36.
Creating server profiles

During this process you will create two profiles that will be assigned to either hypervisor/management hosts or P4000sb storage servers. We will start with the P4000sb storage server profile.

NOTE: Ensure all servers in the enclosure are powered off prior to creating profiles.

Click on the Define drop-down menu and choose Server Profile from the menu. Give the profile a name. With G6 servers you can choose to hide the Flex-10 iSCSI connections. Define the following profile on two connections as in Table 6.
Table 6. Server profile for P4000sb storage blades
Assign this profile to bay 7. You can then copy it, assigning a new name each time, to bays 8, 15 and 16.
Create a new profile and this time assign it a name that reflects that it is part of the management/ hypervisor host schema. You will make use of the ability to carve up FlexNICs with this profile. Create 6 new network connections (total of 8) and assign networks as in Table 7.
Table 7. Server profile for hypervisor hosts
This profile should be copied and assigned to bays 1-6 and 9-14. This will form the basis for your hypervisor and management hosts.

NOTE: Even if you are merely undergoing a pilot implementation and do not yet have a full enclosure, copy all profiles at this time.

Once complete, you should have 16 profiles created and assigned to their respective bays. Complete any remaining customizations and configurations to your Virtual Connect domain. When done, create a backup copy of the domain configuration for future restoration. You should make a backup of the Virtual Connect domain configuration whenever you make changes to any portion of the infrastructure.
After connecting to the host, click on the configuration tab. Make the following general configurations to the host as in Table 8.
Table 8. Configuration changes for initial ESX host
Subcategory   Changes
NTP           Configure an NTP server for the host to point to
Firewall      SSH server and client enabled, vSphere Web Access, Software iSCSI Client
Firewall      Optionally enable update ports
Once done, you will need to configure networking for the host. Click on the Networking link under the Configuration tab. Figure 37 shows the suggested network configuration. The basic principles of this configuration are:

Left-side and right-side adapters for each vSwitch
Separate vSwitches for iSCSI, Management, Production and vMotion
VM networks for management and production networks

Select adapters based on speed. The speeds will match the bandwidth numbers you entered during the Creating server profiles section of this document.
Figure 37. vSwitches and layout in the vCenter Networking Configuration screen
NOTE: With the 1.48 version of the network driver, you should enable beacon probing for network failover detection on your vSwitches.

This first host will become part of a cluster of hosts that will house all management VMs. For now, it must first become the host that creates the Centralized Management Console VM, which will allow you to build your SAN.

NOTE: If you made the choice to configure an iSCSI network that egresses the rack and is available at the network core, you may skip the creation of this management server and use an external CMC server on the iSCSI network.
Creating the Centralized Management Console virtual machine

You will need to create a VM to install the initial CMC. This virtual machine will be placed initially on the local disk. Table 9 highlights the choices to make during the creation of the VM.
Table 9. VM creation options
Category                Choice
Configuration           Custom
Datastore               Local VMFS volume
VM Version              Virtual Machine Version: 7
Guest Operating System  Microsoft Windows Server 2003 R2 x32; Linux
CPUs                    1
Memory                  2GB
Network                 2 adapters: 1 Management Network, 1 iSCSI Network
SCSI Controller         LSI Logic Parallel
Select a Disk           Create a new virtual disk
Create a Disk           Variable size based on OS and standard practices; store with the virtual machine
Advanced Options        Node SCSI (0:0)
Once the VM has been created and the OS installed, you should power it on and apply any updates and patches. It is expected that the private iSCSI network will not have DHCP services; you should decide on a networking schema for the iSCSI network and assign a static IP address to the CMC VM. You should consider installing a DNS server on the private iSCSI network. It is recommended that you minimize access to this VM. Since this is the access point to the internal iSCSI network, all efforts should be made to eliminate unneeded services, close off any unneeded ports and prevent any non-critical users from accessing the system on the management network. If accessing the host via Microsoft RDP, port 3389 should be open.

Configure the P4000sb storage blades

Prior to installing the CMC, you can configure the P4000sb storage blades. These are found in enclosure device bays 7, 8, 15 and 16. For each node you will need to do the following. Log in to the iLO and launch a remote console to the blade. This will give you local access to keyboard, video and mouse for the blade. At the initial login screen, type the word start and press Enter. Press Enter when the blue < Login > screen appears. A menu entitled Configuration Interface will appear.
Highlight the Network TCP/IP Settings menu item and press Enter. Highlight the < eth0 > adapter and press Enter. Enter a hostname and choose to manually set the IP address by highlighting Use the following IP address. Enter an address for this adapter and press OK when done. Return to the main menu by highlighting Back and pressing Enter. Highlight < Log out > and press Enter. Repeat for the remaining three nodes.

Installing the P4000 Failover Manager

A Failover Manager (FOM) should be installed on the local disks of any hypervisor host in the management cluster. The Failover Manager protects the SAN by maintaining quorum in the event more than one node fails. The FOM must stay on local disks and be connected to the iSCSI network. The FOM is available on the DVD media shipped with the P4800. It should be the first node in the management group brought up and the last shut down.

Install the CMC in the dual-homed VM

To configure the storage, install the Centralized Management Console in the VM you created. The executable for the CMC is included with the software provided with your P4800 SAN. Run the executable and follow the instructions to complete the install. Once the installation is complete, the Find Nodes Wizard will launch.
Detecting nodes

You will locate the four (4) P4000sb nodes that you just configured. Figure 38 shows the initial wizard for identifying nodes. Click on the Next button to proceed.
Click the radio button to search globally and then click on Next as in Figure 39.
At the next screen, enter the subnet mask and subnet of the private iSCSI network. Click on OK and then Finish when done. The nodes should appear in the CMC under Available Nodes.
Once you have validated that all nodes are present in the CMC you can move on to the next section to create the management group.
Creating the management group

When maintaining an internal iSCSI network, each rack must have its own CMC and management group. The management group is the broadest level at which the administrator will manage and maintain the P4800 SAN. To create the first management group, click on the Management Groups, Clusters, and Volumes Wizard at the Welcome screen of the CMC as shown in Figure 41.
Click on the Next button when the wizard starts. This will take you to the Choose a Management Group screen as in Figure 42. Select the New Management Group radio button and then click on the Next button.
This will take you to the Management Group Name screen. Assign a name to the group and ensure all four (4) P4000sb nodes and the FOM are selected prior to clicking on the Next button. Figure 43 shows the screen.
Figure 43. Name the management group and choose the nodes
It will take quite a bit of time for the management group creation to complete. You can use Appendix A to record the IP addresses of the P4000sb nodes as well as the planned IP address for your cluster. When the wizard finishes, click on Next to continue.
Figure 44 shows the resulting screen where you will be asked to add an administrative user.
Enter the requested information to create the administrative user and then click on Next. You will have the opportunity to create more users in the CMC after the initial installation. At the next screen, enter an NTP server on the iSCSI network if one is available and click on Next; if none is available, set the time manually. The next screen begins the cluster creation process described in the following section.
Create the cluster

At the Create a Cluster screen, select the radio button to choose a Standard Cluster as in Figure 45. Click on the Next button once you are done.
At the next screen, enter a Cluster Name as in Figure 46 and ensure all 4 nodes are highlighted. Click on the Next button.
At the next screen, you will be asked to enter a virtual IP address for the cluster as in Figure 47.
Click on Add and enter an IP address and subnet mask on the private iSCSI network. This will serve as the target address for your hypervisor-side iSCSI configuration. Click on Next when done. At the resulting screen, check the box in the lower right corner labeled Skip Volume Creation and then click Finish. You will create volumes in the next section. Once done, close all windows. At this point, the sixty (60) day evaluation period for SAN/iQ 8.5 begins. You will need to license and register each of the 4 nodes within this period.
Installing the hypervisor

For the remaining hypervisor hosts (device bays 2-6 and 9-14), install the hypervisor as you did in the section entitled Installing the first hypervisor host and CMC. This includes assigning the same networks as were assigned to host 1. On each host, click on the Configuration tab in the VIC, then Storage Adapters, and highlight the software-based iSCSI adapter as in Figure 49. Click on the Properties link for the adapter.
A new window appears as in Figure 50. Click on the Configure button to proceed.
When the new window appears, click on Enabled and then click OK to continue. Click on the Configure button again. The window in Figure 51 should appear.
There is now an iSCSI name attached to the adapter. You may alter this name to make it simpler to remember by eliminating the characters after the colon, or you may leave it as is. Record this name in Appendix A. You will use these names in the section entitled Configuring storage for hypervisor hosts.
From the Properties window, click on the Dynamic Discovery tab and then on Add as in Figure 52.
If you are using CHAP (challenge handshake authentication protocol) you should configure it now. Click on OK when you have finished and close out the windows. When you close the window you will be asked to rescan the adapter. Skip this step at this time.
Enter a name for the server (the hostname of the server works well), a brief description of the host and then enter the initiator node name you recorded in Appendix A. If you are using CHAP you should configure it at this time. Click on OK when done. Repeat this process for every hypervisor host that will attach to this SAN.
From the CMC, expand the cluster as in Figure 56 and click on Volumes (0) and Snapshots (0).
Click the drop down labeled Tasks. From the drop down, select the option for New Volume as in Figure 57.
In the New Volume window under the Basic tab, enter a volume name and short description. Enter a volume size of 50GB. This volume will house ThinApp executables. Figure 58 shows the window.
Once you have entered the data, click on the Advanced tab. As in Figure 59, ensure you have selected your cluster and RAID-10 replication, then click the radio button for Thin Provisioning.
Click on the OK button when done. Repeat this process to create three (3) 300GB management volumes. One will house management servers, one will house OS images, templates and test VMs, and the third will serve as a production test volume for testing the deployment of VMs as well as patch application and the functionality of new master VMs. When all volumes have been created, record their names in Appendix F of this document and then return to the Servers section of the CMC under the main management group. You will initially assign the volumes you just created to the first management host; in this document that host is in device bay 1. Right-click on your first management server as in Figure 60 and choose Assign and Unassign Volumes.
Click on the check boxes under the Assigned column for all volumes you just created to assign them to this host. You will repeat these steps when you create your other volumes after the vCenter server has been created. NOTE: You may script the creation and assignment of volumes using the CLIQ utility shipped on your P4000 software DVD.
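Building on the NOTE above, volume creation and assignment can be scripted with CLIQ. The sketch below only generates candidate command lines for review; the cliq parameter names, cluster name, server name and credentials are illustrative assumptions - verify the exact syntax against the CLIQ documentation on your P4000 software DVD before running anything against a live SAN.

```python
# Hypothetical sketch: build CLIQ commands for the three 300GB
# management volumes described above. Nothing is executed; the
# commands are printed for review first.
volumes = ["MGMT-SERVERS", "MGMT-IMAGES", "MGMT-TEST"]   # assumed names
auth = "login=10.0.1.10 userName=admin passWord=changeme"  # assumed credentials

commands = []
for vol in volumes:
    # create a thinly provisioned 300GB volume on the cluster
    commands.append(
        f"cliq createVolume volumeName={vol} clusterName=CLUSTER1 "
        f"size=300GB thinProvision=1 {auth}"
    )
    # assign it to the first management host
    commands.append(
        f"cliq assignVolumeToServer volumeName={vol} "
        f"serverName=mgmt-host1 {auth}"
    )

for cmd in commands:
    print(cmd)  # review, then run via a batch file or subprocess
```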
Use the VIC to attach to management host 1. Under the Configuration tab click on the Storage Adapters link in the left menu and then click on Rescan as in Figure 62. Click on OK when prompted to rescan.
This will cause the host to see the volumes you just assigned to it in the CMC. To make these volumes active you will need to format them as VMFS. Under the Configuration tab in the VIC, click on Storage as in Figure 63.
Click on Add Storage as in the figure. Follow the onscreen instructions to create the datastores on this host. Use all available space per volume. When you have finished, your storage should appear similar to that shown in Figure 64.
The next section serves as a guideline for you to understand the overall volume layout for the POD. You should not create and assign the volumes listed in that section until after you have built your management servers.
Volume layout
This section suggests layouts for all volumes on the SAN. Use the process from the prior section or script the creation of these volumes. All volumes on the P4800 will be created as thinly provisioned; either select this option manually or script it during creation. The number, size and layout of volumes will vary based on user type. Ensure that you understand the consequences of changing volume sizes for linked clones and user data storage prior to making changes.

This document assumes that user data will be housed on separate filesystems. This assists with system maintenance, availability and the management of filesystem growth. The document further assumes that maintenance plans for the recomposition of clones will be aggressive and that VMs will be patched and rebuilt on a regular basis. Use Appendix F of this document to record all volume names and VMware cluster assignments.

Personality volumes

Personality volumes should be stored independently on volumes sized to accommodate the local profiles of users. Volume size should be kept modest, as monolithic volumes may result in performance degradation. As an example, if a survey of local profiles finds the average local profile among selected users to be 500MB, a 200GB personality volume could support 400 users. Ensure that the volume size adequately addresses space requirements for all volumes.

Replica / clone volumes

There are numerous factors to consider when sizing volumes for replicas and clones, including the size of the base image and the frequency with which clones will be refreshed. This section defers size considerations to VMware's documentation (http://www.vmware.com/products/view/resources.html) rather than making specific recommendations, but it does specify ratios to consider when laying out volumes for replicas and clones. For each VMware cluster, it is recommended that 5-8 hosts be used as the basis for the pool that will be created.
Each volume that will be associated with a cluster will have between 16 and 32 linked clones per volume.
Note: VMware recommends up to 64 linked clones per volume. HP strongly recommends 16 or 32 linked clones per volume based on testing of the P4800.

Using the prior sizing recommendations we can derive volume counts:

total virtual machines = hosts per cluster x virtual desktops per host

For a five (5) node cluster where each host runs 66 virtual desktops, that is 5 x 66, or 330 virtual machines. Volume count is then a matter of dividing the total number of VMs by the clones per volume, rounding up:

volumes per cluster = total virtual machines / clones per volume (rounded up)

In our example this equates to 330 VMs divided by 16 VMs per volume, for a total of 21 volumes per cluster. Once again, consult VMware's View documentation for volume sizing.

Management volumes

A volume will need to be created to house management server virtual machines. A single 1TB volume or two (2) 500GB volumes are generally adequate, assuming an external SQL Server instance is available outside of the enclosure. The next section, Set up management VMs, covers the suggested VMs to be created.

Fileshare volumes

This paper suggests that user data be housed on separate fileshares. If you choose to host these from management virtual machines within the enclosure, consider the costs and benefits: such shares are highly available as virtual machines and, due to the way they are networked, are also highly performant and cost effective; however, some system maintenance activities may require taking the shares offline.
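The volume-count arithmetic above can be sketched as a small helper (illustrative only, not part of any HP tooling):

```python
import math

def volumes_per_cluster(hosts, vms_per_host, clones_per_volume):
    """Derive the linked-clone volume count for a cluster using the
    ratios above; a partial volume rounds up to a whole volume."""
    total_vms = hosts * vms_per_host
    return total_vms, math.ceil(total_vms / clones_per_volume)

# Worked example from the text: 5 hosts x 66 desktops, 16 clones per volume
total, volumes = volumes_per_cluster(5, 66, 16)
print(total, volumes)  # 330 21
```

At 32 clones per volume the same cluster would need only 11 volumes, which illustrates why the clones-per-volume choice dominates the layout.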
View Manager

VMware View Manager is the central component that determines how desktops are delivered in a VDI environment. HP's reference platform is capable of supporting very large numbers of users. Consult VMware's documentation to determine proper VM sizing; an approach that seeks to maximize overall scaling is strongly recommended. Documentation can be found at http://www.vmware.com/products/view/resources.html. Consider all recommended methods for maximizing not only performance, but also availability and security. Creating load-balanced, highly available View Manager nodes between racks is recommended as a way to maximize availability and uptime.

Fileshare hosts

Three fileshare VMs have been created for demonstration purposes rather than utilizing external storage. One host will be used to share VMware ThinApp application executables and should be sized according to VMware's best practices documentation available at http://www.vmware.com/products/thinapp/related-resources.html. The remaining two hosts will host user data fileshares and should be configured based on OS version best practices. At a minimum, it is recommended that these hosts be granted two vCPUs and 3GB of memory for 32-bit operating systems; 64-bit operating system VMs should be sized with at least 4GB of memory. If you decide to house user data on VM fileshares, the hosts will need to be part of the production domain.
Systems Insight Manager

HP Systems Insight Manager is the clear choice for managing HP servers and storage, offering the easiest and least expensive way for HP system administrators to maximize system uptime and health:
- Provides hardware-level management for HP ProLiant, Integrity, and HP 9000 servers, HP BladeSystem, HP StorageWorks Modular Smart Array (MSA), Enterprise Virtual Array (EVA), and XP storage arrays
- Integrated with HP Insight Remote Support Advanced, provides contract and warranty management and automates remote support
- Enables control of Windows, HP-UX, Linux, OpenVMS and NonStop environments
- Integrates easily with the Insight Control and Insight Dynamics suites, which enable you not only to proactively manage server health, whether physical or virtual, but also to deploy servers quickly, optimize power consumption, and optimize infrastructure confidently with capacity planning

HP recommends that Systems Insight Manager be installed to monitor components within the reference platform and beyond.

AppSense

HP recommends customers implement AppSense Environment Manager for policy and personalization management in a VDI environment. AppSense technology enables VDI users to experience a consistent and personalized working environment regardless of where and how they access their virtual desktop. Since AppSense dynamically personalizes virtual desktops on access, virtual desktop image standardization is possible, significantly reducing back-end management and storage costs. AppSense Environment Manager combines company policy and users' personal settings to deliver an optimal virtual desktop without the cumbersome scripting and high maintenance of traditional profiling methods. Capabilities such as profile migration, personalization streaming, self-healing and registry
hiving combine to provide an enterprise-scalable solution managing all aspects of the user in a VDI environment. For more information on best-practice user environment management in VDI, please visit http://www.appsense.com/solutions/virtualdesktops.aspx. Appendix E of this document makes recommendations around best practices for utilizing AppSense.

Migrate the CMC

Once all management volumes and servers have been created and assigned, right-click on the CMC VM from within vCenter. Click on Migrate. Perform a storage migration by selecting the Change datastore radio button and clicking Next. Choose the appropriate SAN management volume as the destination and click on Next. Choose Same format as source and then click Next. Click on Finish. The CMC is now housed on shared storage.

Failover Manager

You must leave the FOM on the local disk of a hypervisor host. If that host goes down, it must be restarted or repaired in order to bring the FOM back online.
Cluster      Hosts          Function
Management   Bays 1 and 9   Houses management VMs and serves as a test bed for VMs, templates and patches prior to deployment
VDI001       Bays 2-6       First group of VDI hosts
VDI002       Bays 10-14     Second group of VDI hosts
The management VMs will show up in the management cluster once the management hosts are added.
Figure 65 shows the first 4 hosts configured and entered into vCenter.
Enclosure location

Prior to configuring the second and possibly third enclosures, it is necessary to place them within the rack. Figure 66 highlights the location of the new enclosures within the rack.
After the enclosure has been added you will need to cable the Virtual Connect Flex-10 modules to allow the formation of a Virtual Connect domain and the inter-enclosure communication path. Figure 67 shows the cabling for a two (2) enclosure domain.
Figure 67. Two enclosure Virtual Connect Flex-10 cabling. Left diagram uses CX-4 cabling. Right diagram uses SFP+ cabling
Figure 68 shows cabling for a three (3) enclosure domain. Note that in both configurations you may choose to use CX4 cabling to link enclosures. CX4 connectors are tied to uplink 1 on the Virtual Connect Flex-10 modules; if you use CX4, you will not be able to connect an SFP+ cable or fibre connection to uplink 1.
Figure 68. 3 enclosure Virtual Connect Flex-10 cabling. Red cables represent CX4 cabling.
Within the Virtual Connect management console you will also need to make changes. Rather than a single-enclosure Virtual Connect definition, you will need to create a domain that includes all modules and enclosures. It is highly recommended that you read the Virtual Connect Multi-Enclosure Stacking Reference Guide located at http://www.hp.com/go/virtualconnect. It is recommended that each pair of Virtual Connect Flex-10 modules have its own set of redundant links to the network core for the redundant VLAN-tagged management, VM and vMotion networks. Consult Appendix G for a single-enclosure example.
Conclusions
This document has provided an overview of solution sizing and discussed the buildout of a single-enclosure solution for knowledge workers. While it has attempted to provide a broad range of information about scaling to various workloads, HP understands that not all customer environments are identical and that adaptations need to be made. The POD approach is designed to provide the flexibility needed to make those adaptations. For large-scale implementations, HP recommends engaging our trained, experienced resources to provide the optimal implementation experience. For more information about HP service resources, see the links in the For more information section of this document.
Appendix A
Notes and resources
Table A-1. Important IP addresses
Component       IP Address
OA1             ___.___.___.___
OA2             ___.___.___.___
OA3             ___.___.___.___
VC Manager      ___.___.___.___
P4800 Cluster   ___.___.___.___
P4800 Node 1    ___.___.___.___
P4800 Node 2    ___.___.___.___
P4800 Node 3    ___.___.___.___
P4800 Node 4    ___.___.___.___
Notes
Host    iSCSI Name
1
2
3
4
5
6
9
10
11
12
13
Figure B-1. After Server Blade A fails, the system administrator moves the desired profile to a bay containing a spare blade
2. Conventionally used to identify NICs for the LAN
3. Conventionally used to identify host bus adapters (HBAs) for the SAN
In increments of 100 Mb
In this reference configuration, Figure B-3 shows how the 10Gb network was partitioned into 4 virtual networks and configured for varying bandwidth:
- Management Network: configured for 800Mbps; communication to ESX hosts, iLO processors, the OA and interconnects
- Production Network: configured for 1.2Gbps; communication with users and outside network traffic. This network supports the users' View sessions, Internet and corporate network access
- iSCSI Network: configured for 6Gbps; communication with the iSCSI network
- vMotion Network: configured for 2Gbps; vMotion traffic
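As a quick sanity check on the partitioning above, the four FlexNIC allocations carved from each 10Gb port should total exactly 10Gb/s and fall on 100Mb boundaries. A small sketch (names are illustrative, not HP tooling):

```python
# Flex-10 bandwidth split from the reference configuration above.
allocations_mbps = {
    "Management": 800,    # ESX hosts, iLO processors, OA, interconnects
    "Production": 1200,   # user View sessions, Internet, corporate LAN
    "iSCSI":      6000,   # storage traffic
    "vMotion":    2000,   # live migration traffic
}

# Flex-10 carves each 10Gb port into at most four FlexNICs,
# allocated in 100Mb increments.
assert sum(allocations_mbps.values()) == 10_000
assert all(bw % 100 == 0 for bw in allocations_mbps.values())
print("allocation valid")
```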
[Figure: HP ProCurve switch with 2-port 10-GbE CX4 module (J9149A), slots A and B]
SET ENCRYPTION NORMAL

#Configure Protocols
ENABLE WEB
ENABLE SECURESH
DISABLE TELNET
ENABLE XMLREPLY
ENABLE GUI_LOGIN_DETAIL

#Configure Alertmail
SET ALERTMAIL SMTPSERVER 0.0.0.0
DISABLE ALERTMAIL

#Configure Trusted Hosts
#REMOVE TRUSTED HOST ALL
DISABLE TRUSTED HOST

#Configure NTP
SET NTP PRIMARY 10.1.0.2
SET NTP SECONDARY 10.1.0.3
SET NTP POLL 720
DISABLE NTP

#Set SNMP Information
SET SNMP CONTACT "Name"
SET SNMP LOCATION "Locale"
SET SNMP COMMUNITY READ "public"
SET SNMP COMMUNITY WRITE "private"
ENABLE SNMP

#Set Remote Syslog Information
SET REMOTE SYSLOG SERVER ""
SET REMOTE SYSLOG PORT 514
DISABLE SYSLOG REMOTE

#Set Enclosure Bay IP Addressing (EBIPA) Information for Device Bays
#NOTE: SET EBIPA commands are only valid for OA v3.00 and later
SET EBIPA SERVER 10.0.0.1 255.0.0.0 1
SET EBIPA SERVER GATEWAY NONE 1
SET EBIPA SERVER DOMAIN "vdi.net" 1
ENABLE EBIPA SERVER 1
SET EBIPA SERVER 10.0.0.2 255.0.0.0 2
SET EBIPA SERVER GATEWAY NONE 2
SET EBIPA SERVER DOMAIN "vdi.net" 2
ENABLE EBIPA SERVER 2
SET EBIPA SERVER 10.0.0.3 255.0.0.0 3
SET EBIPA SERVER GATEWAY NONE 3
SET EBIPA SERVER DOMAIN "vdi.net" 3
ENABLE EBIPA SERVER 3
SET EBIPA SERVER 10.0.0.4 255.0.0.0 4
SET EBIPA SERVER GATEWAY NONE 4
SET EBIPA SERVER DOMAIN "vdi.net" 4
ENABLE EBIPA SERVER 4
SET EBIPA SERVER 10.0.0.5 255.0.0.0 5
SET EBIPA SERVER GATEWAY NONE 5
SET EBIPA SERVER DOMAIN "vdi.net" 5
ENABLE EBIPA SERVER 5
SET EBIPA SERVER 10.0.0.6 255.0.0.0 6
SET EBIPA SERVER GATEWAY NONE 6
SET EBIPA SERVER DOMAIN "vdi.net" 6
ENABLE EBIPA SERVER 6
SET EBIPA SERVER 10.0.0.7 255.0.0.0 7
SET EBIPA SERVER GATEWAY NONE 7
SET EBIPA SERVER DOMAIN "vdi.net" 7
ENABLE EBIPA SERVER 7
SET EBIPA SERVER 10.0.0.8 255.0.0.0 8
SET EBIPA SERVER GATEWAY NONE 8
SET EBIPA SERVER DOMAIN "vdi.net" 8
ENABLE EBIPA SERVER 8
SET EBIPA SERVER 10.0.0.9 255.0.0.0 9
SET EBIPA SERVER GATEWAY NONE 9
SET EBIPA SERVER DOMAIN "vdi.net" 9
ENABLE EBIPA SERVER 9
SET EBIPA SERVER 10.0.0.10 255.0.0.0 10
SET EBIPA SERVER GATEWAY NONE 10
SET EBIPA SERVER DOMAIN "vdi.net" 10
ENABLE EBIPA SERVER 10
SET EBIPA SERVER 10.0.0.11 255.0.0.0 11
SET EBIPA SERVER GATEWAY NONE 11
SET EBIPA SERVER DOMAIN "vdi.net" 11
ENABLE EBIPA SERVER 11
SET EBIPA SERVER 10.0.0.12 255.0.0.0 12
SET EBIPA SERVER GATEWAY NONE 12
SET EBIPA SERVER DOMAIN "vdi.net" 12
ENABLE EBIPA SERVER 12
SET EBIPA SERVER 10.0.0.13 255.0.0.0 13
SET EBIPA SERVER GATEWAY NONE 13
SET EBIPA SERVER DOMAIN "vdi.net" 13
ENABLE EBIPA SERVER 13
SET EBIPA SERVER 10.0.0.14 255.0.0.0 14
SET EBIPA SERVER GATEWAY NONE 14
SET EBIPA SERVER DOMAIN "vdi.net" 14
ENABLE EBIPA SERVER 14
SET EBIPA SERVER NONE NONE 14A
SET EBIPA SERVER GATEWAY 10.65.1.254 14A
SET EBIPA SERVER DOMAIN "" 14A
SET EBIPA SERVER 10.0.0.15 255.0.0.0 15
SET EBIPA SERVER GATEWAY NONE 15
SET EBIPA SERVER DOMAIN "vdi.net" 15
ENABLE EBIPA SERVER 15
SET EBIPA SERVER 10.0.0.16 255.0.0.0 16
SET EBIPA SERVER GATEWAY NONE 16
SET EBIPA SERVER DOMAIN "vdi.net" 16
ENABLE EBIPA SERVER 16

#Set Enclosure Bay IP Addressing (EBIPA) Information for Interconnect Bays
#NOTE: SET EBIPA commands are only valid for OA v3.00 and later
SET EBIPA INTERCONNECT 10.0.0.101 255.0.0.0 1
SET EBIPA INTERCONNECT GATEWAY NONE 1
SET EBIPA INTERCONNECT DOMAIN "vdi.net" 1
SET EBIPA INTERCONNECT NTP PRIMARY 10.1.0.2 1
SET EBIPA INTERCONNECT NTP SECONDARY 10.1.0.3 1
ENABLE EBIPA INTERCONNECT 1
SET EBIPA INTERCONNECT 10.0.0.102 255.0.0.0 2
SET EBIPA INTERCONNECT GATEWAY NONE 2
SET EBIPA INTERCONNECT DOMAIN "vdi.net" 2
SET EBIPA INTERCONNECT NTP PRIMARY 10.1.0.2 2
SET EBIPA INTERCONNECT NTP SECONDARY 10.1.0.3 2
ENABLE EBIPA INTERCONNECT 2
SET EBIPA INTERCONNECT 10.0.0.103 255.0.0.0 3
SET EBIPA INTERCONNECT GATEWAY NONE 3
SET EBIPA INTERCONNECT DOMAIN "" 3
SET EBIPA INTERCONNECT NTP PRIMARY NONE 3
SET EBIPA INTERCONNECT NTP SECONDARY NONE 3
ENABLE EBIPA INTERCONNECT 3
SET EBIPA INTERCONNECT 10.0.0.104 255.0.0.0 4
SET EBIPA INTERCONNECT GATEWAY NONE 4
SET EBIPA INTERCONNECT DOMAIN "" 4
SET EBIPA INTERCONNECT NTP PRIMARY NONE 4
SET EBIPA INTERCONNECT NTP SECONDARY NONE 4
ENABLE EBIPA INTERCONNECT 4
SET EBIPA INTERCONNECT 10.0.0.105 255.0.0.0 5
SET EBIPA INTERCONNECT GATEWAY NONE 5
SET EBIPA INTERCONNECT DOMAIN "vdi.net" 5
SET EBIPA INTERCONNECT NTP PRIMARY 10.1.0.2 5
SET EBIPA INTERCONNECT NTP SECONDARY 10.1.0.3 5
ENABLE EBIPA INTERCONNECT 5
SET EBIPA INTERCONNECT 10.0.0.106 255.0.0.0 6
SET EBIPA INTERCONNECT GATEWAY NONE 6
SET EBIPA INTERCONNECT DOMAIN "vdi.net" 6
SET EBIPA INTERCONNECT NTP PRIMARY 10.1.0.2 6
SET EBIPA INTERCONNECT NTP SECONDARY 10.1.0.3 6
ENABLE EBIPA INTERCONNECT 6
SET EBIPA INTERCONNECT 10.0.0.107 255.0.0.0 7
SET EBIPA INTERCONNECT GATEWAY NONE 7
SET EBIPA INTERCONNECT DOMAIN "" 7
SET EBIPA INTERCONNECT NTP PRIMARY NONE 7
SET EBIPA INTERCONNECT NTP SECONDARY NONE 7
ENABLE EBIPA INTERCONNECT 7
SET EBIPA INTERCONNECT 10.0.0.108 255.0.0.0 8
SET EBIPA INTERCONNECT GATEWAY NONE 8
SET EBIPA INTERCONNECT DOMAIN "" 8
SET EBIPA INTERCONNECT NTP PRIMARY NONE 8
SET EBIPA INTERCONNECT NTP SECONDARY NONE 8
ENABLE EBIPA INTERCONNECT 8
SAVE EBIPA

#Uncomment following line to remove all user accounts currently in the system
#REMOVE USERS ALL

#Create Users - add at least 1 administrative user
ADD USER "admin"
SET USER CONTACT "Administrator"
SET USER FULLNAME "System Admin"
SET USER ACCESS ADMINISTRATOR
ASSIGN SERVER 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,1A,2A,3A,4A,5A,6A,7A,8A,9A,10A,11A,12A,13A,14A,15A,16A,1B,2B,3B,4B,5B,6B,7B,8B,9B,10B,11B,12B,13B,14B,15B,16B "Administrator"
ASSIGN INTERCONNECT 1,2,3,4,5,6,7,8 "Administrator"
ASSIGN OA "Administrator"
ENABLE USER "Administrator"

#Password Settings
ENABLE STRONG PASSWORDS
SET MINIMUM PASSWORD LENGTH 8
#Session Timeout Settings
SET SESSION TIMEOUT 1440

#Set LDAP Information
SET LDAP SERVER ""
SET LDAP PORT 0
SET LDAP NAME MAP OFF
SET LDAP SEARCH 1 ""
SET LDAP SEARCH 2 ""
SET LDAP SEARCH 3 ""
SET LDAP SEARCH 4 ""
SET LDAP SEARCH 5 ""
SET LDAP SEARCH 6 ""
#Uncomment following line to remove all LDAP accounts currently in the system
#REMOVE LDAP GROUP ALL
DISABLE LDAP

#Set SSO TRUST MODE
SET SSO TRUST Disabled

#Set Network Information
#NOTE: Setting your network information through a script while
#      remotely accessing the server could drop your connection.
#      If your connection is dropped this script may not execute to conclusion.
SET OA NAME 1 VDIOA1
SET IPCONFIG STATIC 1 10.0.0.255 255.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0
SET NIC AUTO 1
DISABLE ENCLOSURE_IP_MODE
SET LLF INTERVAL 60
DISABLE LLF

#Set VLAN Information
SET VLAN FACTORY
SET VLAN DEFAULT 1
EDIT VLAN 1 "Default"
ADD VLAN 21 "VDI"
ADD VLAN 29 VMOTION
ADD VLAN 93 PUB_ISCSI
ADD VLAN 110 "MGMT_VLAN"
SET VLAN SERVER 1 1
SET VLAN SERVER 1 2
SET VLAN SERVER 1 3
SET VLAN SERVER 1 4
SET VLAN SERVER 1 5
SET VLAN SERVER 1 6
SET VLAN SERVER 1 7
SET VLAN SERVER 1 8
SET VLAN SERVER 1 9
SET VLAN SERVER 1 10
SET VLAN SERVER 1 11
SET VLAN SERVER 1 12
SET VLAN SERVER 1 13
SET VLAN SERVER 1 14
SET VLAN SERVER 1 15
SET VLAN SERVER 1 16
SET VLAN INTERCONNECT 1 1
SET VLAN INTERCONNECT 1 2
SET VLAN INTERCONNECT 1 3
SET VLAN INTERCONNECT 1 4
SET VLAN INTERCONNECT 1 5
SET VLAN INTERCONNECT 1 6
SET VLAN INTERCONNECT 1 7
SET VLAN INTERCONNECT 1 8
SET VLAN OA 1
DISABLE VLAN
SAVE VLAN

DISABLE URB
SET URB URL ""
SET URB PROXY URL ""
SET URB INTERVAL DAILY 0
Essential Series
Ideal for basic, task-oriented applications and terminal services.
Features: simple and affordable; Marvell Smart ARM-based architecture with HP ThinPro; basic multimedia support (lower resolution, smaller window size); essential peripheral support.
Model: t5325

Mainstream Series
Enhanced features, mainstream use.
Features: choice of Microsoft Windows Embedded Compact (CE) 6.0 or HP ThinPro; media player; terminal emulation; wide peripheral support; secure USB compartment.
Models: t5540 (CE 6.0), t5545 (HP ThinPro)

Flexible Series
Rich performance for almost any server-based or remote computing environment.
Features: powerful, flexible, innovative; choice of Microsoft Windows Embedded Standard (WES) or HP ThinPro; next-generation Intel Atom N280 processor (select models); integrated wireless 802.11a/b/g/n (select models); highly configurable operating system with support for local applications; secure USB compartment and PCIe/PCI expansion option; write filter and firewall; broad peripheral support; dual monitor support.
Models: t5740 (WES), t5745 (HP ThinPro), t5630w (WES)

Mobile thin clients
13.3-inch diagonal LED-backlit HD anti-glare display; integrated wireless; support for local applications.
Model: 4320t
Each HP Thin Client ships with HP Device Manager, which can simplify configuration and administration.
HP Device Manager
HP Device Manager allows you to track, configure, upgrade, clone, and manage up to thousands of thin clients with ease. For greater business agility, Device Manager can dramatically simplify device deployment, task automation, compliance management, and policy-based security management. Benefits include:
- Simplified client administration through automated management tools and administrative templates
- Cloning and grouping of devices, policy enforcement, task automation, and system reports
- Update scheduling and status reporting for all thin clients in the enterprise
- Encrypted traffic and enhanced security by issuing certificates between thin clients, gateways, and servers

For more information on Device Manager, refer to http://www.hp.com/go/thinclient. In addition, for organizations with a mix of thin and traditional desktop clients, HP Client Automation Standard software can provide a single management tool for an entire desktop environment. For more information, refer to http://www.hp.com/go/clientautomation.
Volume Name                         Size (GB)
Appendix G Networking
Figure G-1 provides an example of networking to the core.
QTY  Description
1    10642 G2 (42U) Side Panels (set of two) (Graphite Metallic)
1    HP 10642 G2 Front Door
4    HP 8.3kVA Modular PDU 40 Amp Core Only
2    HP Two C-13 PDU Extension Bars
6    HP 16A IEC320-C19 to C20 2m/6ft PDU cord, Grey
SAN
QTY  Part No.  Description
1    BM480A    HP StorageWorks P4800 SAN
Services
QTY  Part No.              Description
1    UE602E                HP BladeSystem c7000 Infrastructure Installation and Startup Service for Blade Hardware and Insight Control Software, Electronic
1    UE603E                HP BladeSystem c7000 Enhanced Network Installation and Startup Service, Electronic
1    HA124A1-58H (UJ714E)  Base P4000 setup 1-4 nodes - Same site/building
1    HA124A1-58J (UJ715E)  Base P4000 setup 1-4 nodes - Multiple sites
1    HA124A1-58K (UJ716E)  Additional 1-4 nodes for an existing P4000 configuration
Thin Clients
QTY  Part No.      Description
660  VU902AT#ABA   HP t5740 Thin Client Intel N280 Atom 2GF/2GR wifi
     VU900AA#ABA   HP t5740 Thin Client Intel N280 Atom 2GF/2GR
     VU899AT#ABA   HP t5740 Thin Client Intel N280 Atom 2GF/1GR
660  NM360A8#ABA   HP Compaq LA1905wg 19" Widescreen LCD Monitor
660  EM870AA       HP Quick Release Kit
Software
QTY  Part No.     Description
6    507575-B21   VMware View Premier 100 User
6    507576-B21   VMware View Premier 10 User
660  T9950AA      HP Client Automation Enterprise App Mgr Thn Cli 1-999 SW LTU
660  T9955AA      HP Client Automation Enterprise OS Mgr Thn Cli 1-999 SW LTU
1    TB722AA      HP Client Automation Enterprise PC Mgmt&Migr Ste 1-999 SW LTU
1    417688-B23   HP Insight Control for c-Class BladeSystem, 16 licenses
                  HP Insight Control for VMware vCenter (included with HP Insight Control)
AppSense documentation and resources, AppSense, http://www.appsense.com AppSense and VMware View, http://appsense.com/solutions/vmware.aspx
Copyright 2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries. Adobe and Acrobat are trademarks of Adobe Systems Incorporated. 4AA1-9256ENW, Created June 2010