
Nutanix on HPE® ProLiant®

Nutanix Best Practices

Version 2.1 • February 2019 • BP-2086



Copyright
Copyright 2019 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws.
Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other
marks and names mentioned herein may be trademarks of their respective companies.


Contents

1. Executive Summary
   1.1. Trademark Disclaimer
2. Nutanix Enterprise Cloud Overview
   2.1. Nutanix Acropolis Architecture
3. HPE ProLiant Servers
   3.1. Network Topology
   3.2. ProLiant Management and Monitoring
   3.3. ProLiant Host Upgrades
4. General Best Practices
   4.1. Common Networking Best Practices
   4.2. Cluster Expansion
   4.3. Node Replacement
5. ProLiant Best Practices
   5.1. ProLiant Networking
6. Conclusion
Appendix
   Best Practice Checklist
   References
   About Nutanix
   Trademark Notice
List of Figures
List of Tables


1. Executive Summary
Nutanix runs on HPE® ProLiant® rack-mount servers. Intended for datacenter administrators
responsible for procuring, designing, installing, and operating Nutanix on HPE ProLiant, this
best practices document can help steer the decision-making process when deploying Nutanix.
This guide does not provide advice or guidance on technical issues related solely to
HPE ProLiant servers; contact HPE directly with any issues that do not relate to
Nutanix software.
HPE ProLiant servers provide infrastructure that supports business objectives and growth. The
Nutanix Enterprise Cloud streamlines enterprise datacenter infrastructure by integrating server
and storage resources into a turnkey system. When running Nutanix on HPE ProLiant, you no
longer need a SAN, which reduces the number of devices to purchase, deploy, and maintain and
improves speed and agility.
In this document, we address system life cycle operations that require special handling in the
ProLiant environment, such as network cabling, firmware upgrades, and node replacement.
Where multiple options are available, we provide the information you need to decide between
them. We also highlight configurations and features that may be common in the field but that we
don’t recommend for Nutanix environments.
The Nutanix Field Installation Guide for HPE ProLiant Servers offers step-by-step installation
instructions; this guide supplements those instructions. We do not, however, intend to supplant
or supersede any guidance, instructional manuals, or directives from HPE regarding ProLiant
servers, and you should direct questions regarding the same to HPE.

Table 1: Document Version History

Version Number Published Notes


1.0 June 2017 Original publication.
2.0 February 2018 Gen10 updates.
2.1 February 2019 Updated Nutanix overview.

1.1. Trademark Disclaimer


© 2019 Nutanix, Inc. All rights reserved. Nutanix®, the Enterprise Cloud Platform™ and the
Nutanix logo are trademarks of Nutanix, Inc., registered or pending registration in the United
States and other countries. HPE® and ProLiant® are the registered trademarks of Hewlett-Packard Development LP and/or its affiliates. Citrix Hypervisor® is a trademark of Citrix Systems,
Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and
Trademark Office and in other countries. Windows® and Hyper-V™ are registered trademarks
or trademarks of Microsoft Corporation in the United States and/or other countries. Linux® is
the registered trademark of Linus Torvalds in the U.S. and other countries. VMware ESXi™ is a
registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
All other brand names mentioned herein are for identification purposes only and may be the
trademarks of their respective holder(s). Nutanix is not associated with, sponsored by, or endorsed
by Hewlett-Packard.


2. Nutanix Enterprise Cloud Overview


Nutanix delivers a web-scale, hyperconverged infrastructure solution purpose-built for
virtualization and cloud environments. This solution brings the scale, resilience, and economic
benefits of web-scale architecture to the enterprise through the Nutanix Enterprise Cloud
Platform, which combines three product families—Nutanix Acropolis, Nutanix Prism, and Nutanix
Calm.
Attributes of this Enterprise Cloud OS include:
• Optimized for storage and compute resources.
• Machine learning to plan for and adapt to changing conditions automatically.
• Self-healing to tolerate and adjust to component failures.
• API-based automation and rich analytics.
• Simplified one-click upgrade.
• Native file services for user and application data.
• Native backup and disaster recovery solutions.
• Powerful and feature-rich virtualization.
• Flexible software-defined networking for visualization, automation, and security.
• Cloud automation and life cycle management.
Nutanix Acropolis provides data services and can be broken down into three foundational
components: the Distributed Storage Fabric (DSF), the App Mobility Fabric (AMF), and AHV.
Prism furnishes one-click infrastructure management for virtual environments running on
Acropolis. Acropolis is hypervisor agnostic, supporting three third-party hypervisors—ESXi,
Hyper-V, and Citrix Hypervisor—in addition to the native Nutanix hypervisor, AHV.

Figure 1: Nutanix Enterprise Cloud


2.1. Nutanix Acropolis Architecture


Acropolis does not rely on traditional SAN or NAS storage or expensive storage network
interconnects. It combines highly dense storage and server compute (CPU and RAM) into a
single platform building block. Each building block delivers a unified, scale-out, shared-nothing
architecture with no single points of failure.
The Nutanix solution requires no SAN constructs, such as LUNs, RAID groups, or expensive
storage switches. All storage management is VM-centric, and I/O is optimized at the VM virtual
disk level. The software solution runs on nodes from a variety of manufacturers that are either
all-flash for optimal performance, or a hybrid combination of SSD and HDD that balances
performance with additional capacity. The DSF automatically tiers data across the
cluster to different classes of storage devices using intelligent data placement algorithms. For
best performance, algorithms make sure the most frequently used data is available in memory or
in flash on the node local to the VM.
To learn more about the Nutanix Enterprise Cloud, please visit the Nutanix Bible and
Nutanix.com.


3. HPE ProLiant Servers


The HPE ProLiant server combines compute, memory, and storage in a number of form factors.
Each rack-mount server chassis holds one Nutanix node containing both compute and storage.
Purchase HPE ProLiant servers from the Nutanix-specified list of supported server models and
components. After you have procured compatible hardware, install Nutanix software on the
ProLiant servers.
This document addresses the following ProLiant server models:
• DL360 Gen10 8SFF
• DL380 Gen10 12LFF
• DL380 Gen10 24SFF
• DL360 Gen9 8SFF
• DL380 Gen9 8SFF
• DL380 Gen9 12LFF
• DL380 Gen9 24SFF
Nutanix supports ProLiant server models as they are tested. For a complete list of supported
servers, see the HPE ProLiant Hardware Compatibility List (HCL). For Nutanix installation
instructions, refer to the Nutanix Field Installation Guide for HPE ProLiant Servers. The following
table outlines the supported hypervisors for each server model.

Table 2: Hypervisor Support

Model AHV ESXi Hyper-V


DL360 Gen10 8SFF X X Not supported
DL380 Gen10 12LFF X X Not supported
DL380 Gen10 24SFF X X Not supported
DL360 Gen9 8SFF X X Not supported
DL380 Gen9 8SFF X X Not supported
DL380 Gen9 12LFF X X Not supported
DL380 Gen9 24SFF X X Not supported


3.1. Network Topology


ProLiant servers operate much like existing Nutanix server models, such as NX, Dell XC, and
Lenovo HX. Each ProLiant server connects to a top-of-rack (ToR) switch through an HPE
560FLR-SFP+ or HPE 640FLR-SFP+ adapter and 10 or 25 GbE cables. A single 1 GbE cable for
out-of-band (OOB) management connects to the dedicated Integrated Lights-Out (iLO) connector
port. iLO is a baseboard management controller (BMC), similar to the IPMI management
interface on NX servers, and provides a web and SSH interface for monitoring and managing the
physical components.

Figure 2: ProLiant Network Topology

The dedicated iLO port can connect either to the same ToR switches or to a set of dedicated
management switches. Nutanix recommends separating the management network from the data
network as shown in the figure above, but this configuration is not required.

3.2. ProLiant Management and Monitoring


Connecting to a server’s iLO web interface allows you to manage each server independently
and view details such as fan speed, temperature, and hardware logs. You can also access the
remote console through the iLO interface to view the on-screen state of the server. You can use
the Nutanix Prism web interface to monitor the state of ProLiant servers, but more advanced
server management operations still require an iLO connection.


3.3. ProLiant Host Upgrades


The HPE Smart Update Manager (SUM) manages the BIOS and various firmware versions of
ProLiant servers. To update each server’s components, follow an HPE-supported method for
running SUM in offline mode. Examples of supported methods include localhost updating by
booting from ISO or remote initiating from Linux® and Windows® operating systems and HPE
OneView. Because you install drivers as part of HCL-specified ESXi and AHV images, run SUM
in offline advanced mode and choose the Downgrade Firmware installation option to upgrade or
downgrade only firmware components. Each Service Pack for ProLiant (SPP) release (specified
in the HCL) includes a version of SUM you can use to deploy the SPP components. You can also
download the latest version of SUM from the HPE Support Center.

Note: Remote mounting the SPP ISO using iLO virtual media functionality requires
an iLO Advanced license.

For more information on SPP and SUM, as well as a complete list of available firmware
components, view the release notes for your specific SPP at the HPE website. Check the
Nutanix HPE ProLiant HCL for the recommended and tested firmware versions and hardware
components.


4. General Best Practices


When using Foundation to image a cluster with VMware ESXi, use the HPE-customized ESXi
ISO image that includes HPE-specific device drivers. Nutanix nodes use a whitelist of hypervisor
ISOs based on the file hash, and this whitelist includes the custom HPE ESXi ISO.
The Nutanix HPE ProLiant HCL specifies the firmware and BIOS versions recommended for
ProLiant deployments using Nutanix. Be sure to use the firmware and BIOS versions specified
in this guide for all components of the ProLiant server. If you need a firmware upgrade that falls
outside the specified versions, please contact Nutanix Support before performing the upgrade.
When performing physical server maintenance using HPE tools such as SUM or OneView,
ensure that you shut down no more than one node in a Nutanix cluster at a time. Because
HPE tools, and not Nutanix tools, conduct host management operations, the process needs
coordination between the hardware and software layers. As in any other Nutanix deployment,
administrators perform hypervisor and AOS upgrades through the Nutanix Prism interface. It is
particularly important to closely align with the tested and supported hypervisor and AOS versions
listed in the HCL in ProLiant deployments.
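Before shutting a node down with HPE tools, it helps to confirm that the cluster can currently tolerate a node outage. To our understanding, nCLI exposes this status as shown below; verify the exact subcommand syntax against the documentation for your AOS version.

```shell
# From any CVM: report how many node failures the cluster can tolerate right now.
ncli cluster get-domain-fault-tolerance-status type=node
```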

4.1. Common Networking Best Practices


In all Nutanix deployments, the CVM and hypervisor management network adapters must share
the same network broadcast domain and subnet. You must have connectivity in the same layer 3
network between all CVMs and hypervisors in the same Nutanix cluster. The host management
adapters do not have to be in this same subnet, but placing them in the same network simplifies
network address design.
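As a quick sanity check of this adjacency, the standard `hostips` and `svmips` aliases available on any CVM can drive a simple reachability loop. The `check_reachable` helper below is our own illustration, not a Nutanix tool:

```shell
# Hypothetical helper: ping each address once and report reachability.
check_reachable() {
  for ip in "$@"; do
    if ping -c 1 -W 2 "$ip" > /dev/null 2>&1; then
      echo "$ip reachable"
    else
      echo "$ip UNREACHABLE"
    fi
  done
}

# On a CVM, check every hypervisor host and every CVM in the cluster:
# check_reachable $(hostips) $(svmips)
```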

Storage Traffic Between CVMs in ESXi


In ESXi hosts, you can create a port group named CVM Network that prefers vmnic0 as active
and vmnic1 as standby to ensure minimal latency between Nutanix CVMs. Connect vmnic0 in
all physical servers to Switch-A. Connect vmnic1 in all servers to Switch-B. In this configuration,
traffic between CVM nodes in the same rack moves over the same switch without needing to
traverse the upstream switches. You can configure traffic for additional port groups and VMs to
use the other switch (B on vmnic1) as active to separate CVM traffic from user VM traffic at the
physical switch level. You can also select other port group configuration options such as load-
based teaming or originating virtual port ID for guest traffic, depending on the VM requirements.
The following vSphere networking configuration diagram shows the CVM Network failover order.


Figure 3: CVM Network Active Adapter Selection

Configure the Guest Network as shown in the following diagram or use any other suitable
teaming strategy for this port group.

Figure 4: Guest Network Active Adapter Selection


Use the following commands from any CVM to set the active and standby adapters of all ESXi
hosts in a Nutanix cluster for the CVM Network and Guest Network port groups:
for i in `hostips`; do echo $i; ssh root@$i 'esxcli network vswitch standard portgroup policy failover set -a=vmnic0 -p="CVM Network" -s=vmnic1'; done
for i in `hostips`; do echo $i; ssh root@$i 'esxcli network vswitch standard portgroup policy failover set -a=vmnic1 -p="Guest Network" -s=vmnic0'; done
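To confirm the policy took effect on every host, the matching `get` subcommand of esxcli can be wrapped in the same loop pattern. The generator function below is our own sketch: it only prints the per-host commands so you can review them before piping the output to `bash` on a CVM.

```shell
# Sketch: emit (do not run) one verification command per host and port group.
gen_failover_checks() {
  for host in "$@"; do
    for pg in "CVM Network" "Guest Network"; do
      echo "ssh root@$host 'esxcli network vswitch standard portgroup policy failover get -p \"$pg\"'"
    done
  done
}

# On a CVM: gen_failover_checks $(hostips) | bash
```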

Storage Traffic Between CVMs in AHV


In the default AHV network configuration with two uplinks, configure one adapter from the AHV
host as active and one as backup. You can’t select the active adapter in such a way that it
persists between host reboots. When multiple uplinks from the AHV host connect to multiple
switches, ensure that adequate bandwidth exists between these switches to support Nutanix
CVM replication traffic between nodes. Nutanix recommends redundant 40 Gbps or faster
connections between switches, which you can achieve with a leaf-spine configuration or direct
interswitch link.
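To see which adapter is currently active on each AHV host, Open vSwitch's `ovs-appctl bond/show` can be run through the same SSH loop the ESXi examples use. The bond name `br0-up` is the default on recent AOS releases (older releases used `bond0`), so adjust it to match your environment:

```shell
for i in `hostips`; do echo $i; ssh root@$i 'ovs-appctl bond/show br0-up'; done
```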

4.2. Cluster Expansion


Expanding a Nutanix cluster on HPE ProLiant servers currently requires two steps:
1. Use the standalone Foundation VM to image the new server with the correct hypervisor and
AOS version. Image the node, but do not add it to any cluster during the Foundation process.
2. Use Prism or the Nutanix command-line interface (nCLI) in the existing ProLiant cluster to
discover the newly imaged node and add it to the cluster. Node discovery uses IPv6 multicast
traffic between Nutanix CVMs, so configure the upstream network devices to allow this traffic
between nodes. Additionally, Nutanix recommends placing the CVMs and hypervisor hosts in
the native, or default, untagged VLAN. If the CVMs are in a tagged VLAN, place the node you
want to add in this same VLAN before discovery.

Tip: With ProLiant servers, you must use the Foundation VM instead of CVM-
based cluster expansion to support bare metal imaging. For ease of use, deploy the
Foundation VM on the existing Nutanix cluster before adding nodes.
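Before step 2, you can confirm from an existing CVM that the newly imaged node is visible over IPv6 multicast. The `discover-nodes` subcommand shown here is our reading of the nCLI syntax and may vary by AOS version:

```shell
# From any CVM in the existing cluster: list nodes available for cluster expansion.
ncli cluster discover-nodes
```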

4.3. Node Replacement


Node replacement consists of node addition and node removal.
• Node addition: Node addition for ProLiant servers uses the same two-step process described
in the cluster expansion section above.
• Node removal: To remove the old node from the cluster, follow the Nutanix Prism Web
Console Guide.


5. ProLiant Best Practices


The following best practices apply to HPE ProLiant servers.
Perform physical server configuration directly using the iLO interface. During normal operation,
be sure to perform all firmware, BIOS, and iLO upgrades on ProLiant servers using the SPP and
SUM for the corresponding physical server following the Nutanix HPE HCL. Firmware updates for
HPE ProLiant servers are not currently available through the Prism interface.
Each SPP release (specified in the HCL) includes a version of SUM you can use to deploy the
SPP components. You can also download the latest version of SUM from the HPE Support
Center.
The following table summarizes where you perform various tasks in ProLiant on Nutanix
deployments. Basic hardware monitoring consists of simple values such as system performance
and system alerts. Basic hardware management involves tasks such as simple server restarts.
Advanced management and logging monitors values such as fan speed, temperature, or detailed
hardware boot logs. You perform these tasks in the Prism web interface, in the hypervisor
interface, or directly in the iLO.

Table 3: Management Responsibilities

Responsibilities Prism Hypervisor iLO


Basic hardware monitoring and alerts X X X
Basic hardware management X X
Advanced hardware management and logs X

5.1. ProLiant Networking


The following diagram shows the preferred networking configuration for ProLiant servers. The
10 or 25 GbE HPE 560FLR-SFP+ or 640FLR-SFP+ LOM NICs connect to two separate ToR
switches. A separate OOB connection to the dedicated iLO connector port provides access to
iLO for management. Administrators can create this connection either through a dedicated set of
management switches or through the same ToR switch that the 10 or 25 GbE ports use. Nutanix
recommends using a separate OOB management network for fault tolerance and high availability.


Figure 5: ProLiant ESXi Network Detail

We recommend a leaf-spine network architecture to eliminate oversubscription, providing
maximum throughput and scalability for east-west traffic. Choose a line-rate, nonblocking ToR
switch that provides high throughput and low latency between nodes in the Nutanix cluster.
Configure the ToR switch ports facing the ProLiant servers using the vendor-recommended
configuration for server ports. Use a configuration like portfast or edge to ensure that the server-
facing port transitions immediately to the spanning tree forwarding state. Additionally, configure
ports to automatically detect and negotiate speed and duplex. Configure the CVM and hypervisor
VLAN of the Nutanix nodes as untagged or native in the ToR switch.
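As an illustration only, a Cisco NX-OS-style server-facing port embodying these recommendations might look like the fragment below. The interface name, VLAN ID, and exact keywords are hypothetical and vendor-specific, so follow your switch vendor's documentation:

```
interface Ethernet1/1
  description nutanix-node-01
  switchport mode trunk
  switchport trunk native vlan 10
  spanning-tree port type edge trunk
  no shutdown
```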
Perform Foundation node installation using a dedicated VM running on a laptop or other
infrastructure device in the network. Place the Foundation VM on a network that can reach the
iLO interface of all servers and the CVM and hypervisor interfaces.
AHV networking is similar to ESXi networking, as shown in the following diagram. The cabling
for the ProLiant servers is identical, and all of the same management recommendations apply.
Nutanix recommends using the active-backup load balancing algorithm for simplicity, but to use
both uplink network adapters for additional throughput, use balance-slb load balancing or LACP
instead. For more information, review the AHV Networking best practices guide.
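If you do opt for balance-slb, the change can be made with the `manage_ovs` utility from a CVM, one node at a time. The bridge and bond names below are the AHV defaults, and the exact flags may differ by AOS release, so verify against the AHV Networking guide first:

```shell
# Run on each CVM in turn; applies to the local AHV host's default bond.
manage_ovs --bridge_name br0 --bond_name br0-up --bond_mode balance-slb update_uplinks
```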

Figure 6: ProLiant AHV Network Detail


6. Conclusion
Nutanix can turn your existing compute into web-scale storage. The Nutanix Enterprise Cloud
allows a clustered pool of HPE ProLiant servers to act as a centrally managed compute and
storage system that offers high performance, redundancy, and ease of management not found in
traditional SAN-based infrastructures.
Nutanix provides shared storage without the SAN. We have validated and tested the Nutanix
on HPE ProLiant best practices in this guide for functionality and performance, providing
administrators the tools they need to generate successful, error-free designs for their
environments.
Combining ProLiant rack-mount servers with the Nutanix Enterprise Cloud gives administrators
the flexibility to build a compute and storage infrastructure on the hardware of their choice.


Appendix

Best Practice Checklist


General Best Practices
• Refer to the Nutanix HPE ProLiant HCL to find a complete set of compatible hardware,
firmware, and software to use in a Nutanix on ProLiant deployment.
⁃ Contact Nutanix Support before using firmware versions not on the HCL.
• Upgrade firmware versions using HPE utilities.
⁃ Ensure that you shut down no more than one Nutanix node at any given time when using
HPE utilities such as SUM.
• Perform hypervisor and AOS upgrades through Prism.
• Expand the Nutanix cluster using a two-step process:
⁃ Image the node using a Foundation VM.
⁃ Add the new node through Prism.
• Replace nodes using the node removal and node addition processes.

Common Networking Best Practices


• Allow IPv6 multicast traffic between nodes.
• Place CVM network adapters in the same subnet and broadcast domain as the hypervisor
management network adapter.
⁃ Placing the iLO in this same network is optional but can simplify network design.
• Place the Nutanix CVM adapter and hypervisor host adapters in the native or default untagged
VLAN.
• In ESXi, create a port group for all CVMs that prefers the same ToR switch.
⁃ For all other port groups, use Route based on originating virtual port ID for the standard
vSwitch and Route based on physical NIC load for the distributed vSwitch.
• In AHV, use the default active-backup mode for simplicity.
⁃ Only use balance-slb to use the capacity of both uplink adapters if required.
⁃ Refer to the AHV Networking best practices guide for more advanced networking
configurations.


• Use a dedicated Foundation VM on a network with access to the iLO OOB, CVM, and
hypervisor network.
⁃ Run this VM on a laptop connected to the ToR switch or on an existing virtual infrastructure.
Nutanix recommends deploying this Foundation VM in the cluster for easy node expansion.

ProLiant Best Practices


• Upgrade firmware through the iLO interface.
⁃ Following the Nutanix Field Installation Guide for HPE ProLiant Servers, use SUM to
upgrade firmware in offline mode on the server to HCL-compatible levels.
• Perform physical server hardware configuration using iLO.
• Perform day-to-day server management using Prism and the hypervisor interface.
• Use iLO for hardware-specific management and monitoring for items such as fan speed,
temperature, and detailed hardware logs.

Networking
• Use HPE 560FLR-SFP+ or 640FLR-SFP+ LOM NICs.
⁃ Connect two NIC ports to two separate ToR switches for fault tolerance.
• Use a leaf-spine network that provides a line-rate, nonblocking connection between all Nutanix
nodes.
• Configure server-facing ToR switch ports as edge or server ports, using a configuration similar
to that demonstrated in KB 2455.
• Configure ToR switch ports to carry the CVM and hypervisor VLAN as untagged, or native.
• Connect the dedicated iLO port for OOB management access.
• Use a dedicated OOB management network if possible.
⁃ This management network must connect to the primary data network for Foundation to
work properly.

References
1. Nutanix Field Installation Guide for HPE ProLiant Servers
2. Nutanix HPE ProLiant Hardware Compatibility List
3. Nutanix Prism Web Console Guide
4. Nutanix vSphere Networking Best Practices
5. Nutanix AHV Best Practices Guide
6. Nutanix AHV Networking Best Practices Guide


About Nutanix
Nutanix makes infrastructure invisible, elevating IT to focus on the applications and services that
power their business. The Nutanix Enterprise Cloud OS leverages web-scale engineering and
consumer-grade design to natively converge compute, virtualization, and storage into a resilient,
software-defined solution with rich machine intelligence. The result is predictable performance,
cloud-like infrastructure consumption, robust security, and seamless application mobility for a
broad range of enterprise applications. Learn more at www.nutanix.com or follow us on Twitter
@nutanix.

Trademark Notice
© 2019 Nutanix, Inc. All rights reserved. Nutanix®, the Enterprise Cloud Platform™ and the
Nutanix logo are trademarks of Nutanix, Inc., registered or pending registration in the United
States and other countries. HPE® and ProLiant® are the registered trademarks of Hewlett-
Packard Development LP and/or its affiliates. Citrix Hypervisor® is a trademark of Citrix Systems,
Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and
Trademark Office and in other countries. Windows® and Hyper-V™ are registered trademarks
or trademarks of Microsoft Corporation in the United States and/or other countries. Linux® is
the registered trademark of Linus Torvalds in the U.S. and other countries. VMware ESXi™ is a
registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
All other brand names mentioned herein are for identification purposes only and may be the
trademarks of their respective holder(s). Nutanix is not associated with, sponsored by, or endorsed
by Hewlett-Packard.
Disclaimer: This guide may contain links to external websites that are not part of Nutanix.com.
Nutanix does not control these sites and disclaims all responsibility for the content or accuracy
of any external site. Our decision to link to an external site should not be considered an
endorsement of any content on such site.


List of Figures

Figure 1: Nutanix Enterprise Cloud
Figure 2: ProLiant Network Topology
Figure 3: CVM Network Active Adapter Selection
Figure 4: Guest Network Active Adapter Selection
Figure 5: ProLiant ESXi Network Detail
Figure 6: ProLiant AHV Network Detail


List of Tables

Table 1: Document Version History
Table 2: Hypervisor Support
Table 3: Management Responsibilities
