
June 20, 2024

Dell VxRail™ System Technical FAQ

This VxRail Technical FAQ describes technical details related to VxRail features and
functionality and should be used as a companion to the VxRail General FAQ.

Table of Contents

VxRail End of Sales Life (EOL) for SmartDPUs
VxRail End of Sales Life (EOL) for 15th Generation Nodes
VxRail Lot 9 Compliance & End of Sales Life (EOL) for VxRail E665/F/N in EMEA
What’s New in VxRail
    What’s new in VxRail 7.0.520?
    What’s new in VxRail 8.0.212?
    What’s new in VxRail 7.0.510?
VxRail Design
    VMware Technology
        vSAN Express Storage Architecture (ESA)
VxRail HCI System Software
    Installation and Deployment
    System Management
    Lifecycle Management
    CloudIQ for VxRail
    RESTful API
VxRail Hardware
    VxRail on PowerEdge Servers
    Platform models
        VD-4000
    Processors
    Drives
    Connectivity: On-board networking, additional networking, and fibre channel support
    Memory
    Intel Optane Persistent Memory
    GPU
Security
Networking
    SmartFabric Services for VxRail
Deployment Options
    VxRail satellite nodes
    VxRail dynamic nodes
        Dynamic nodes with external storage array as primary storage
        VxRail Dynamic AppsON
        Dynamic nodes with VMware vSAN cross-cluster capacity sharing as primary storage
    VxRail with vSAN ESA
    VCF on VxRail
    2-node vSAN Cluster
    Stretched Cluster
    Customer-deployable VxRail
Ecosystem support
    External storage
    VxRail Management Pack for Aria Operations
Delivery Options
    Integrated Rack
Sales
    Licensing
    Tools
    Training
    End of Sales Life (EOL)
        End of Sales Life (EOL) for 14th Generation Nodes
    Support Services
    Deploy Services
Solutions
Competition

Dell Technologies – Internal Use – Confidential


VxRail End of Sales Life (EOL) for SmartDPUs


Question: What should I know about VxRail SmartDPU End of Sales Life?
Answer: VxRail is announcing the EOL of the Dell DPUs, with no transition to new DPUs.
This decision aligns with the overall business performance of DPU solutions on
VxRail and follows the PowerEdge EOML announcement for the same DPUs.

Question: Which DPUs are affected?


Answer: This EOL applies to NVIDIA BlueField-2 and Pensando DPUs, and also extends to
applicable Enablement kits and APOS Cuskits.

Question: Which VxRail platforms are affected?


Answer: The following VxRail 15G nodes: E660F, P670F, V670F.

Question: What is the EOL date for SmartDPUs?


Answer: The targeted EOL date is May 29th, 2024.

VxRail End of Sales Life (EOL) for 15th Generation Nodes


Question: What should I know about VxRail 15th Generation End of Sales Life?
Answer: Dell Technologies is announcing the EOL of Intel and AMD-based VxRail platforms
built on 15th generation PowerEdge servers.

Question: Which nodes will EOL?


Answer: The following Intel nodes: E660, E660F, E660N, P670, P670F, P670N, S670
Answer: The following AMD nodes: E665, E665F, E665N, P675, P675F, and P675N.

Question: What are the EOL dates for Intel platforms?


Answer: The E660/F/N, P670F/N, V670/F and S670 have the following global EOL dates:
• End of Life (EOL): September 3rd, 2024
• End of Expansion (EOE): August 31st, 2027
• End of Standard Support (EOSS): August 31st, 2029
• Early EOL of E665/F/N in EMEA is covered in the next section regarding Lot 9

Question: What are the EOL dates for AMD platforms?


Answer: The E665/F/N AMD platforms have separate EOL dates for EMEA and non-EMEA
regions. Refer to the Lot 9 Compliance section of this FAQ for additional details of
the previously communicated early EMEA AMD EOL announcement.
Answer: The global EOL dates for the E665/F/N platforms in non-EMEA countries are:
• End of Life (EOL): June 4th, 2024
• End of Expansion (EOE): May 31st, 2027
• End of Standard Support (EOSS): May 31st, 2029


Question: What Intel platforms should customers transition to?


Answer: Customers should transition from the P670/F/N to the VP-760.
Answer: Customers should transition from the E660/F/N to the VE-660.
Answer: In Q3CY2024, VxRail is planning to release the VS-760, a 16th generation
successor to the S670.

Question: What AMD platforms should customers transition to?


Answer: VxRail is planning to release AMD platforms based on PowerEdge 16th Generation
servers in Q2CY2024: the VP-7625 and VE-6615.
Answer: Customers should transition from the P675/F/N to the VP-7625.
Answer: Customers should transition from the E665/F/N to the VE-6615.

Question: What happens to support contracts that exceed the EOSS dates?
Answer: Once EOSS dates are coded, entitlements quoted past the EOSS date are
terminated, and the unused portion of the standard support contract quoted beyond
the EOSS date is credited back to the customer automatically.

Question: What actions should I take?


Answer: Customers should start taking steps to migrate off these platforms. Contact
customers to let them know of the EOL of these platforms and transition to 16th
generation VxRail.
Answer: 15th generation VxRail began shipping in July 2021. Many of our customers who
bought VxRail two years ago are due for a refresh. Customers that refresh with 16th
generation VxRail are better able to support both current and emerging workloads
with significant performance gains.
Answer: Mixing VxRail 15th and 16th generation platforms is supported. To do this, 15G
clusters will need to be updated to the software version that supports all
generations.
Answer: Newer VxRail platforms are more powerful, so you need fewer of them to run the
same workloads. This reduces complexity and saves resources, money, and time.
Answer: VxRail on the latest 16th generation PowerEdge servers, along with additional
platforms and automation, gives our customers a portfolio of solutions to automate
and simplify their landscape. While discussing a refresh with your customers,
remind them of the time they save with automated LCM, the simplicity of a single
operating model, and VxRail innovations such as dynamic and satellite nodes, VCF
on VxRail, and APEX Services on VxRail, and explore what other workloads they
can expand onto VxRail.

VxRail Lot 9 Compliance & End of Sales Life (EOL) for VxRail E665/F/N in EMEA
Question: What is Lot 9, and what steps are taken to ensure VxRail is compliant?
Answer: The ErP Lot 9 regulation introduces requirements for servers with one or two
processor sockets, limiting power consumption in the idle state and setting
minimum requirements for power supply efficiency. Starting January 1, 2024, the
ErP Lot 9 regulation restricts the sale of Platinum PSUs and imposes the use of
Titanium power supplies in all VxRail products shipping into CE countries. More
information, including a list of affected countries, can be found here.

Question: How are the Platinum PSUs affected by the Lot 9 regulation?
Answer: Beginning November 6th, 2023, all Platinum PSU offerings will EOL across sales
and ordering tools, in affected EMEA CE countries only.
Answer: The Lot 9 requirements apply to VxRail nodes, not to APOS upgrades. If an
affected node was shipped before January 1, 2024, the customer can upgrade the
existing PSU with another Platinum PSU, at any time.
Answer: Service spares are unaffected. PSUs will be replaced with the same level of power
efficiency. If the PSU to be replaced is a Platinum PSU, the service part will be a
Platinum PSU, even after January 1, 2024.

Question: Are all VxRail models available with Titanium PSUs?


Answer: No. VxRail E665, E665F, and E665N do not support Titanium PSUs. As a result,
they will not be allowed to be shipped or sold into affected EMEA countries.

Question: How are the VxRail E665/F/N models affected by the Lot 9 regulation?
Answer: There are no Titanium PSUs that are compatible with the VxRail E665/F/N.
Answer: E665/F/N will EOL across EMEA sales and ordering tools on Nov 6th, 2023.

Question: Does the Lot 9 regulation affect the selling or shipment of Platinum PSUs or
VxRail E665/F/N in non-CE countries?
Answer: No, Titanium PSUs are not mandated in non-CE countries.
Answer: Platinum PSUs will continue to ship normally in non-CE countries (Americas/APJ,
and EMEA regions where CE is not required).
Answer: E665/F/N nodes will continue to ship normally in the Americas and APJ regions.

What’s New in VxRail


What’s new in VxRail 7.0.520?
Question: What are the availability dates of this release?
Answer: VxRail 7.0.520 is available for cluster updates on June 19th. Nodes can be
re-imaged with VxRail 7.0.520 on June 26th. Factory shipment and the ordering
path with VxRail 7.0.520 begin on July 16th.

Question: What are the highlights of this release?


Answer: VxRail 7.0.520 provides support for VMware vSphere 7.0 U3q. For more
information, see the VMware vSphere 7.0 Release Notes.
VxRail 7.0.520 includes support for the following features:
• Identifier for VxRail software release types


• Serviceability improvements for faster response time


• Backporting of VxRail 8.x features
• Reduced node image package
• Stronger TLS/SSH ciphers
For a more comprehensive list of features added in VxRail 7.0.520, see the VxRail
7.0 release notes.

Question: What are the supported update paths?


Answer: Clusters running any previous VxRail 7.0 software version can update directly to
VxRail 7.0.520. Clusters running VxRail 4.5 or 4.7 require a multi-hop update,
using a previous VxRail 7.0 software version as an intermediate hop. For detailed
guidance on update options from a specific VxRail software version, use the SolVe
procedure generator.
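The update-path rule above can be sketched as a small helper. This is an illustrative sketch only, not a Dell tool; the function name and messages are assumptions:

```python
# Illustrative sketch (not a Dell utility): applies the update-path rule
# described above to decide whether a cluster can move straight to
# VxRail 7.0.520 or needs an intermediate hop through a prior 7.0 release.
def update_path(current_version, target="7.0.520"):
    major_minor = ".".join(current_version.split(".")[:2])
    if major_minor == "7.0":
        return f"direct update to {target}"
    if major_minor in ("4.5", "4.7"):
        return f"multi-hop: update to a prior 7.0 release, then to {target}"
    return "check the SolVe procedure generator"

print(update_path("7.0.450"))  # direct update to 7.0.520
print(update_path("4.7.541"))  # multi-hop: update to a prior 7.0 release, then to 7.0.520
```

For any version outside the cases the FAQ covers, the sketch falls back to the SolVe procedure generator, which remains the authoritative source for update paths.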

Question: What are the supported hardware platforms?


Answer: VxRail platforms based on PowerEdge 14G, 15G, and 16G hardware can run
VxRail 7.0.520. Only VxRail 15G Intel-based and VxRail 16G (Intel and AMD)
platforms are available for ordering.

Question: What are VxRail software release types?


Answer: Starting with VxRail 7.0.520, VxRail is distinguishing its software releases by
release types: feature and patch releases. A feature release is a VxRail software
release that introduces new features and hardware support. A patch release is
primarily focused on providing support for emergency patches for VMware
software, PowerEdge hardware, and VxRail software to address critical issues. The
release type identifier helps users quickly determine the level of urgency with which
they should assess the need to update their clusters. While updating to the latest
feature release allows users to take advantage of the latest capabilities, a patch
release may have a more critical impact on their operations and organizational
security posture.
Both feature and patch releases have the same LCM experience.

Question: How can I identify VxRail feature releases and patch releases?
Answer: Feature releases, such as VxRail 7.0.520, are VxRail releases that introduce new
capabilities and hardware support. Moving forward, release numbers that end in
zero are identified as feature releases, while release numbers that end in one
through nine are identified as patch releases. For example, a future patch release
based on VxRail feature release 7.0.520 can have a release number of VxRail 7.0.52[1-9].
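The numbering convention can be expressed as a one-line check (an illustrative sketch, not a Dell utility; it assumes the convention applies from 7.0.520 onward as stated above):

```python
# Sketch of the release-numbering convention described above: a build
# number ending in 0 marks a feature release; 1 through 9 marks a patch.
def release_type(version):
    return "feature" if version.split(".")[-1].endswith("0") else "patch"

print(release_type("7.0.520"))  # feature
print(release_type("7.0.521"))  # patch
```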

Question: What are the serviceability improvements that have been added to this
release?
Answer: The service request ticket creation capability in the VxRail Manager UI has been
enhanced to simplify the customer experience and speed up the time to resolve a
service request ticket. There is a new input field for creating a service request
ticket. The new field, Issue Type, allows a user to select from a set list which
product area requires attention. Based on the issue type, VxRail Manager will
automatically collect relevant logs and package them into a bundle. When Dell
Support picks up the ticket and requests log information, the customer can simply
send the prepared log bundle instead of gathering the logs themselves. This new
feature helps capture log data while the issue may still be present and reduces the
turnaround time for ticket resolution by automating the log collection.
Answer: VxRail event handling now includes part numbers for memory and disk/drive errors
and slot number information for battery errors. By adding this information, dial
home events can provide actionable data back to Dell Support or service providers
who can auto-dispatch parts in response.
Answer: VxRail event handling now covers scenarios in which the VxRail Manager certificate
is about to expire within 30 days or has already expired. These scenarios now
trigger an alarm, a vCenter Server event, and a dial home event that is sent to Dell
Support.

Question: Which VxRail 8.x features have been backported to VxRail 7.0.520?
Answer: Password management from VxRail Manager has been backported to VxRail
7.0.520. Introduced in VxRail 8.0.210, this feature streamlines password
management for iDRAC root and vCenter Server management accounts so that
password updates can be done in a single workflow from the UI or API.
Answer: The USB-based version of the node imaging management tool is available in
VxRail 7.x starting with VxRail 7.0.520. The USB option provides a solution for
users who do not want to connect a laptop to the local network to reimage a node,
or who are not familiar with using the VxRail API to perform the operation. The
USB option is also the only way to re-image a VD-4000 witness because that
platform lacks an iDRAC interface.

Question: What are the details of the reduced node image package?
Answer: The reduced node image package includes all the contents of the full node image
package minus the VxRail Manager and vCenter Server installation files. For
VxRail 7.0.520, the reduced node image package is less than 6GB compared to the
full package of almost 21GB.

For customers looking to re-image a node to add to an existing cluster, or to
re-image a failed node, a VxRail Manager instance (and a vCenter Server instance,
if on-cluster and VxRail-managed) already exists on the cluster. Customers can
save time by downloading the smaller node image package, reducing the node
re-image time by almost 40% by skipping installation of the VxRail Manager and
vCenter Server instances.

Question: How can a user reimage a node with the smaller node image package?
Answer: Using the node image management utility or the VxRail API, users can set a
selectable parameter to skip copying the VxRail Manager and vCenter Server
installation files when transferring the image to the target node. This use case
applies to customers who already have a full image in their repository; either
method can transfer the reduced image to the target node.

The other option is to download a reduced node image package from the Dell
Support website. Starting with VxRail 7.0.520, a reduced node image package is
posted there along with the other release contents. This option can be used to
build the node image ISO for the USB-based version of the node image
management tool.

Question: What are the security updates in VxRail 7.0.520?


Answer: In response to customer concerns about weak TLS/SSH cipher suites, the ciphers
have been hardened and aligned to the Dell Secure Infrastructure Requirements
v4.0, which are standardized across the entire Dell portfolio. This design decision
provides consistent security capabilities across Dell products. The Dell Secure
Infrastructure Ready Requirements follow the best practices and recommendations
of the U.S. National Institute of Standards and Technology (NIST SP 800-57 Part 1
Rev. 5) and the National Security Agency's Commercial National Security
Algorithm Suite 1.0 and 2.0.
Answer: Updates to and cluster deployments with VxRail 7.0.520 will automatically apply the
cipher suite on VxRail Manager. Users will have to manually apply the cipher suite
on vCenter Server, ESXi, and iDRAC. Disallowed or deprecated cipher suites will
be removed by a future VxRail cluster update.

What’s new in VxRail 8.0.212?


Question: What are the availability dates for this release?
Answer: VxRail 8.0.212 is available for cluster updates on June 4th, 2024. The node image
package is available on June 18th, 2024.

Question: What changes does this release include?


Answer: VxRail 8.0.212 provides support for VMware ESXi 8.0 Update 2c, VMware vCenter
8.0 Update 2b, updated firmware, and security fixes. For more information about
the supported VMware releases, you can review the VMware ESXi and vCenter
release notes. For more information about the release details for VxRail 8.0.212,
you can review the VxRail Release Notes.

What’s new in VxRail 7.0.510?


Question: What are the availability dates for VxRail 7.0.510?
Answer: The RTS date for VxRail 7.0.510 is May 15th, 2024. There is no RTW.

Question: What are the key features and highlights of this release?
Answer: VxRail 7.0.510 is a hardware-only release, introducing 16th generation VxRail
AMD-based all-flash and all-NVMe platforms with support for vSphere 7.0 Update
3o.
Answer: These new VxRail VE-6615 and VP-7625 nodes are powered by 4th Generation
AMD EPYC processors (known as Genoa) and include the following features:
• AMD 4th Generation EPYC processors, with up to 96 cores per socket, up to
two sockets total, representing a 50% increase in core count over 15G.


• The VE-6615 supports a single AMD EPYC processor with up to 84 cores. The
VP-7625 supports up to two AMD EPYC processors with up to 96 cores per
socket.
• These new platforms support both all-flash and all-NVMe storage options with
vSAN OSA.
• All-flash VE-6615 and VP-7625 support RI and MU SAS/vSAS/SATA drives in
sizes of up to 7.68TB. The all-flash VE-6615 can achieve up to 61.44 TB of total
storage per node, and the all-flash VP-7625 can achieve up to 161.28TB of
storage per node.
• All-NVMe VE-6615 and VP-7625 support RI NVMe drives in sizes of up to
15.36TB. The all-NVMe VE-6615 can achieve up to 122.88 TB of total storage
per node, and the all-NVMe VP-7625 can achieve up to 322.56TB of storage
per node.
• The VE-6615 supports up to 12 DIMMs of DDR5 memory, in DIMM sizes up to
256GB. The VP-7625 supports up to 24 DIMMs of DDR5 memory, in DIMM
sizes up to 128GB. A per node capacity of 3TB at speeds of up to 4800MT/s is
achievable with both the VE-6615 and VP-7625.
• The VP-7625 does not support use of a 256GB DIMM due to server thermal
capacity limitations.
• PCIe Gen 5 which provides additional PCIe lanes (up to 128 total) and double
the throughput of PCIe Gen4.
• These platforms sport the new BOSS-N1, which brings two main improvements
over the BOSS-S2 card introduced with 15G. First, it uses a mirrored pair of
960GB M.2 drives. Second, the M.2 interface has been upgraded to NVMe. It
retains the hot-pluggability introduced with the BOSS-S2: the ability to
disconnect and replace a failed M.2 drive without needing to power off and
open the server.
• Support for up to six single-wide and two double-wide GPUs.
• These 16th generation VxRail AMD-based VE-6615 and VP-7625 platforms will
be supported for Greenfield deployments only at launch.
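As a sanity check, the quoted per-node storage maximums are consistent with simple drive-count arithmetic. Note the drive counts of 8 and 21 below are inferred from the stated totals, not taken from a spec sheet:

```python
# Back-of-envelope check of the stated per-node storage maximums.
# Drive counts (8 for VE-6615, 21 for VP-7625) are inferred from the
# quoted totals above, not from official configuration documents.
configs = {
    "VE-6615 all-flash": (8, 7.68),    # (drive count, TB per drive)
    "VP-7625 all-flash": (21, 7.68),
    "VE-6615 all-NVMe": (8, 15.36),
    "VP-7625 all-NVMe": (21, 15.36),
}
for name, (count, size_tb) in configs.items():
    print(f"{name}: {count} x {size_tb} TB = {count * size_tb:.2f} TB")
```

This reproduces the 61.44 TB, 161.28 TB, 122.88 TB, and 322.56 TB figures stated above.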

Question: Which storage type is supported with the VE-6615 and VP-7625?
Answer: Both all-flash and all-NVMe storage options are available with the VE-6615 and VP-
7625 platforms.
Answer: All-Flash is available on both the VE-6615 and VP-7625. NVMe storage is available
on the VE-6615 and is also available with the dual CPU configuration of the VP-
7625. NVMe is not supported on the single CPU configuration of the VP-7625.

Question: Do these nodes support VMware vSAN ESA?


Answer: No. The VE-6615 and VP-7625 platforms support vSAN OSA only at RTS with
vSphere 7.0 Update 3o.

Question: Are these platforms customer-deployable?


Answer: No, not at launch. Customer-installable deployment is planned for post-RTS.


Question: Can I upgrade from a Single to a Dual CPU configuration with the VP-7625
APOS?
Answer: No. Upgrading from a single to a dual CPU configuration is not supported APOS.

Question: Can I purchase VMware perpetual licensing with the VE-6615 or VP-7625 at
RTS?
Answer: No. Please review the VxRail Ordering and Licensing Guide to review up-to-date
licensing information.

Question: Can these new nodes be added to an existing cluster?


Answer: The rules for mixing platforms have not changed for VxRail; however, these VxRail
AMD platforms are for greenfield deployments only. Adding 16G AMD nodes to
existing OSA clusters will become available post-RTS. Dell employees can access
the 6 Month Roadmap for the latest information.

Question: What about the rest of the VxRail portfolio? Are additional 16G platforms
forthcoming?
Answer: The VxRail platform strategies are reviewed on the 6 Month Roadmap.

Question: What are the key features of the 4th Generation AMD EPYC processor?
Answer: This new generation of processor delivers up to 96 Zen4 cores and twelve
4800MT/s memory channels per AMD processor. The introduction of PCIe Gen 5
provides twice the bandwidth of PCIe Gen 4.

Question: Can I upgrade the processors in my current VxRail cluster (14G or 15G
hardware) to 4th Gen AMD EPYC?
Answer: No. It is not possible to upgrade existing VxRail AMD nodes with 2nd or 3rd
Generation AMD processors with the new 4th Generation AMD EPYC processors.

Question: What are the memory configuration rules for this new AMD chipset?
Answer: Each processor socket can be configured with 4, 6, 8, 10 or 12 DIMMs of equal
size. For best performance, populate all 12 DIMM slots in each processor (1 DPC
only) to achieve a memory speed of 4800 MT/s. Mixed DIMMs are not supported.
Refer to the Memory Configuration Rules slide in the Technical Reference Deck
and Ordering Configurations guide for details.
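The population rule above can be sketched as a simple validity check (illustrative only; not a Dell configuration tool, and the function name is an assumption):

```python
# Minimal sketch of the DIMM population rule stated above: 4, 6, 8, 10,
# or 12 DIMMs of equal size per processor socket, with no mixed sizes.
def valid_dimm_config(dimm_sizes_gb):
    return (len(dimm_sizes_gb) in (4, 6, 8, 10, 12)
            and len(set(dimm_sizes_gb)) == 1)  # all DIMMs equal size

print(valid_dimm_config([128] * 12))     # True  (1 DPC, reaches 4800 MT/s)
print(valid_dimm_config([128] * 5))      # False (unsupported DIMM count)
print(valid_dimm_config([64, 128] * 4))  # False (mixed DIMMs not supported)
```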

Question: What are the significant hardware changes from 15G?


Answer: For a deep dive on hardware changes, review the VxRail Owner's Manuals, which
cover platform details. From a high-level VxRail perspective, the changes are:
• Increased core count, from 64 to 96 cores per socket (up to 192 cores in a
dual-socket configuration)
• PCIe Gen 5, which provides double the bandwidth of PCIe Gen 4
• DDR5 memory, increasing memory bandwidth from 3200 MT/s to 4800 MT/s,
with DIMM sizes up to 256GB and up to 3TB of memory per node
• A larger BOSS card, with dual 960GB drives and an NVMe interface


Question: How can these new nodes be deployed?


Answer: The VE-6615 and VP-7625 can be deployed with vSAN OSA as standard vSAN
HCI clusters, stretched clusters and 2-node vSAN clusters.
Answer: These platforms cannot be deployed as dynamic node clusters or as satellite
nodes.
Answer: Customer-installable deployments are planned for post-RTS.
Answer: Refer to the roadmap for timing of support with VCF with VxRail.

Question: Does this release support all the features of the 7.0.480 release?
Answer: Yes, it supports all the features of the 7.0.480 release, with an update path to
vSphere 8.x planned for future release. Refer to the 6 Month Roadmap for the
latest information.

VxRail Design
Question: Are VxRail systems achieving six 9’s of availability?
Answer: VxRail 2- to 4-node clusters configured with N + 1 redundancy, and 4- to 16-node
clusters configured with N + 2 redundancy are designed for 99.9999% hardware
availability, which equates to less than 1 minute of unplanned downtime per year.
When used with additional included software features that provide further high
availability, like fault domains or stretched cluster, VxRail can achieve greater than
6 x 9’s availability at the per VM level.
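For reference, the 99.9999% figure converts to downtime per year as follows:

```python
# 99.9999% availability expressed as unplanned downtime per year.
minutes_per_year = 365.25 * 24 * 60           # 525,960 minutes
downtime = minutes_per_year * (1 - 0.999999)  # unavailable fraction
print(f"{downtime:.2f} minutes of downtime per year")  # 0.53 minutes
```

That is, six 9's of availability allows roughly half a minute of unplanned downtime per year, consistent with the "less than 1 minute" statement above.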

Question: How do VxRail systems scale up?


Answer: Storage can scale up by adding more capacity disks to an existing disk group or by
adding additional disk groups. Additional memory can be added, provided it follows
balanced configuration guidelines.

Question: How do VxRail systems scale out?


Answer: After initial deployment as a 2 or 3 node cluster, VxRail clusters scale near-linearly
up to 64 nodes, in one node increments, pooling additional compute, storage,
virtualization, and management resources. New nodes can be added with just one
click, and will first undergo a version compatibility check to ensure expansion
prerequisites are met. A target node running an older version of VxRail HCI system
software than the cluster it is joining will receive the appropriate composite update
(as outlined in the target version release notes) before the expansion procedure is
initiated. Existing cluster workloads will automatically optimize and rebalance in
accordance with the newly added resources.

Question: How does VxRail load balance storage when a node is added?
Answer: VxRail can rebalance storage assuming there is available slack space to do so.
While DRS (if licensed) will handle moving VMs, vSAN will not rebalance data to
the drives of the newly added node unless a capacity drive has reached 80% full. If
any capacity drive in the cluster has reached 80% full, vSAN will automatically


rebalance the cluster, until the space available on all capacity drives is below the
80% threshold. You can manually start a rebalance from the storage perspective,
which may be beneficial as the timing of it can be controlled. See vSAN
documentation for additional details.
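The trigger and stop conditions described above can be sketched as follows (a simplified illustration; vSAN's actual reactive rebalancing engine weighs more factors than raw fill level):

```python
# Hedged sketch of the reactive rebalance trigger described above
# (illustrative only; not vSAN's actual rebalancing implementation).

REBALANCE_THRESHOLD = 0.80  # rebalance kicks in at 80% full

def needs_rebalance(drive_fill_levels: list[float]) -> bool:
    """Automatic rebalance starts once ANY capacity drive crosses 80% full."""
    return any(fill > REBALANCE_THRESHOLD for fill in drive_fill_levels)

def rebalance_done(drive_fill_levels: list[float]) -> bool:
    """Rebalancing continues until ALL capacity drives are back below 80%."""
    return all(fill < REBALANCE_THRESHOLD for fill in drive_fill_levels)

# A newly added node's empty drives do not, by themselves, trigger data movement:
assert not needs_rebalance([0.55, 0.60, 0.0])   # new node's drive at 0% fill
assert needs_rebalance([0.85, 0.60, 0.0])       # one drive over 80% -> rebalance
```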

Question: Can I mix different VxRail nodes in a cluster?


Answer: Yes, except for mixing vSAN ESA nodes with vSAN OSA nodes, VxRail clusters
can expand and accommodate mixed node types, with the following caveats:
• All current nodes and target nodes must run the same VxRail software version
• A 2-node VxRail cluster can be expanded only if the nodes are running 7.0.130
or later.
• A 3-node VxRail cluster requires the first 3 nodes to be homogeneous (same
series type and configuration), but can accommodate mixed node types
thereafter. Note that the homogeneous configuration requirement applies only
to components managed by VxRail LCM – CPUs, disks, networking, and
memory must be the same in the first three nodes of a cluster.
• Clusters cannot contain a mix of hybrid nodes and all flash/NVMe nodes
• Clusters cannot contain a mix of Intel nodes and AMD nodes (this is a VMware
limitation).
• Clusters cannot contain a mix of nodes with different base networking
connectivity (nodes must all be either 10GbE or 25GbE)

Question: What are the supported VxRail high availability configuration options when
using 3-node and 4-node deployments?
Answer: For vSAN OSA, with a 3-node vSAN cluster configuration there are three physical
hosts, and the data and witness components are distributed across all three nodes.
This configuration provides a lower entry point for customers, however, there are
trade-offs with respect to functionality and data protection. This configuration can
only support failures to tolerate (FTT) = 1, with RAID-1. It does not have spare
resources to self-heal. If a customer desires the resiliency of automatic self-healing
from a component failure, then a 4-node minimum is required. In a four-node
configuration, should one node fail, the minimum of three nodes remains to meet
the requirement of FTT = 1, RAID-1. A four-node configuration also supports
failures to tolerate (FTT) = 1 with RAID-5 (Erasure Coding). However, the
self-healing resilience is lost, as four nodes is the minimum required to support
RAID-5 (Erasure Coding). Five nodes would be required to support this self-healing
resilience.
Answer: For vSAN ESA, customers should use the RAID-5 storage policy for better space
efficiency with performance. vSAN ESA RAID-5 can be configured with a 2+1
scheme for 3- or 4-node clusters.
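The minimum node counts implied by this answer can be summarized in a small lookup sketch (illustrative only; consult vSAN documentation for the full policy matrix):

```python
# Illustrative lookup table for the vSAN OSA minimums discussed above
# (not an exhaustive policy matrix).

OSA_POLICY_MIN_NODES = {
    # policy: (minimum nodes, minimum nodes for self-healing after one failure)
    "FTT=1, RAID-1":      (3, 4),
    "FTT=1, RAID-5 (EC)": (4, 5),
}

def supports_self_heal(policy: str, nodes: int) -> bool:
    """Self-healing needs one spare node beyond the policy's minimum."""
    return nodes >= OSA_POLICY_MIN_NODES[policy][1]

assert not supports_self_heal("FTT=1, RAID-1", 3)       # 3-node: no spare to heal
assert supports_self_heal("FTT=1, RAID-1", 4)           # 4-node: can self-heal
assert not supports_self_heal("FTT=1, RAID-5 (EC)", 4)  # RAID-5 needs 5 to heal
```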

Question: What hardware components in a VxRail system are customer upgradeable
(CRU), and which are field upgradeable (FRU)?
Answer: Please refer to the VxRail Ordering and Licensing Guide for a list of upgradable
components.


Question: Which VxRail software components are not included in a full stack update?
Answer: VxRail LCM ensures a continuously validated version set of VMware software,
VxRail software, drivers, and firmware, but does not include GPU drivers and FC
HBA firmware. However, a user can consolidate GPU drivers and FC HBA
firmware into the same cluster update for a faster update. Customer-installable
software such as vRealize Log Insight and RecoverPoint for VMs are also updated
separately; refer to SolVe procedures and VMware documentation for update
instructions. Some update paths are only available between certain VxRail software
versions; refer to the target version release notes to ensure valid update paths.

Question: What is the typical hardware and software EoL/EoS policy?


Answer: Dell’s hardware policy ensures that End-of-Sale is announced 3 months in
advance. From a software perspective, our policy is to offer N and N-1 versions of
VxRail and ESXi out of the factory.

Question: What is the difference between RTQ, RTW and RTS?


Answer: RTQ means Ready to Quote and that sales teams can begin quoting activities (in
specific catalogs only). RTW means Release to Web and the VxRail software for
existing clusters can be updated. RTS means Release to Ship from factory and
new clusters and net new nodes can be ordered. Refer to the VxRail Ordering and
Licensing Guide for additional details.

VMware Technology
Question: Do VxRail systems offer data reduction capabilities?
Answer: Yes. Only all-flash VxRail system configurations offer a variety of data efficiency
services, including deduplication, compression, and RAID 5/6 erasure coding.
Hybrid configurations are not supported. These data reduction capabilities require
vSAN Advanced, Enterprise, or Enterprise Plus licensing. Customers with a
subscription license will have the vSAN Enterprise license included in the package.

Question: What is the best practice for when to enable data efficiency services?
Answer: If customers plan to use deduplication and compression, or compression only, it is
best to activate them at time of deployment, as when enabled, each disk group on
each host needs to be rebuilt. See VMware documentation for additional guidance.

Question: Are data reduction capabilities supported on hybrid configurations?


Answer: No, deduplication, compression, and RAID 5/6 erasure coding require all-flash
configuration.

Question: Are data reduction services usable with both vSphere and vSAN encryption?
Answer: There is no impact to vSAN encryption when using data services, including
deduplication and compression, as encryption occurs after dedupe and
compression. vSphere Encryption will significantly limit any benefits of dedupe and
compression, as in this instance, encryption occurs before dedupe and
compression.
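The effect of the ordering can be demonstrated with a toy example. A XOR keystream stands in for real encryption here (vSAN uses proper ciphers; this only shows why already-encrypted data defeats compression):

```python
# Toy demonstration of why encryption order matters for data reduction.
# A XOR keystream stands in for real encryption - illustrative only.
import random
import zlib

plaintext = b"virtual machine block data " * 400  # highly compressible

def toy_encrypt(data: bytes, seed: int = 42) -> bytes:
    """XOR with a pseudo-random keystream: output looks random, like ciphertext."""
    rng = random.Random(seed)
    return bytes(b ^ rng.randrange(256) for b in data)

# vSAN order: compress first, then encrypt -> compression savings are kept.
compressed_then_encrypted = toy_encrypt(zlib.compress(plaintext))

# vSphere VM encryption order: encrypt first -> ciphertext barely compresses.
encrypted_then_compressed = zlib.compress(toy_encrypt(plaintext))

assert len(compressed_then_encrypted) < len(plaintext) // 10   # big savings
assert len(encrypted_then_compressed) > len(plaintext) * 0.9   # almost none
```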


Question: Is any of the VMware software running on the VxRail node transferrable?
Answer: If the VMware software was purchased via Bring Your Own License/Subscription
(BYOS) option from Broadcom, the software is transferrable. However, eOEM
licenses are non-transferrable from the hardware they were purchased with.

Question: How is vCenter Server deployed with VxRail?


Answer: A VxRail-managed vCenter server manages the VxRail cluster on which it was
deployed, and starting with VxRail 7.0.130, can also manage other VxRail clusters
(including dynamic node clusters, 2-node and stretched clusters).
Answer: A customer-managed vCenter can manage multiple VxRail clusters and other
vSphere or vSAN clusters. Note that LCM operations are the responsibility of the
customer in this deployment.
Answer: Please refer to the vCenter Deployment Scenarios Technical Webinar and the
vCenter Server Planning Guide for more information.

Question: Does VxRail support VMware Host Profiles?


Answer: No.

Question: What are the recommended VMware configuration limits VxRail can support?
Answer: Refer to the VMware Configuration Limits to obtain information on ESXi host
maximums and other details.

vSAN Express Storage Architecture (ESA)

Question: What is vSAN ESA?


Answer: vSAN ESA is an optimized version of vSAN that exploits the full potential of the
very latest in hardware and unlocks new capabilities. vSAN ESA was previously
referred to as vSAN 2.0; that terminology should no longer be used. ESA is the
appropriate shorthand, while OSA refers to the Original Storage Architecture
of vSAN.

Question: What are the key features in vSAN ESA?


Answer: vSAN ESA builds on the existing vSAN stack to leverage the power of multi-core
processors, faster and larger-capacity memory, and NVMe technology to introduce
new capabilities.
• The new Log Structured File System allows for faster ingestion of VM data to
eliminate tradeoff of performance for space efficiency and security when
compression and encryption data services are enabled.
• The new Log Structured Object Manager and data structure introduces
adaptive data resiliency that maintains storage policy settings while looking to
improve space efficiency as a cluster’s node count scales up or down.
• All-NVMe storage pool simplifies storage device management for performance
and workload balance while significantly cutting down performance impact of an
individual failed storage device.


Question: What are the data services available in vSAN ESA?


Answer: Compression is part of the default storage policy and applied per VM, and not a
cluster-wide setting as in OSA, thus compression will only be applied to virtual
machines assigned the default policy. To disable compression, change the storage
policy assigned to the virtual machine to one without compression enabled.
Changing the compression storage policy of a virtual machine will not change
existing data, as it only applies to new writes.
Answer: Data-at-rest encryption can be enabled during cluster deployment. vSAN ESA
encryption supports either an external Key Management Server (KMS) or the
Native Key Provider. Check the VMware Compatibility Guide for a list of supported
external KMS. It is recommended to use TPM with encryption to persist the
availability of the key during a host reboot to avoid having to connect to the key
provider for key retrieval. A customer has to decide whether to enable encryption at
the time of cluster deployment. Once enabled, encryption cannot be disabled at a
later point.

Question: How does vSAN ESA provide better performance for data services?
Answer: The data services are applied at the time of ingest which avoids the IO amplification
penalties experienced with needing to compress and decompress data when
processing between cache and capacity layers in vSAN OSA. With vSAN ESA,
data is immediately compressed and from then on, it is designed to process
compressed data down the stack which reduces CPU cycles and network
bandwidth usage.
Answer: When encryption is enabled, it is encrypting the compressed data which lessens
the impact to CPU cycles and network bandwidth usage. vSAN ESA also avoids
the need to decrypt and re-encrypt data when passed between cache and capacity
layers which is needed in vSAN OSA.

Question: What are the improvements to erasure coding in vSAN ESA?


Answer: Besides faster application of data services at ingestion, the log structured file
system eliminates the performance penalty for erasure code writes. In vSAN OSA,
an erasure code write, such as RAID-5 or RAID-6, requires read-modify-write
overhead because it has to re-calculate the parity before the write
acknowledgement. vSAN ESA can issue a write acknowledgement once the write
is mirrored. Full stripe writes are efficiently done asynchronously while the FTT is
maintained throughout.
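A simplified I/O-count model illustrates the difference for a single small (sub-stripe) write. The numbers are illustrative, not measured device I/O:

```python
# Simplified model of the write-path difference described above
# (illustrative I/O counts, not measured device behavior).

def osa_raid5_small_write_ios() -> int:
    """vSAN OSA read-modify-write: read old data + old parity, then
    write new data + new parity = 4 device I/Os before acknowledgement."""
    return 2 + 2

def esa_small_write_ack_ios(ftt: int = 1) -> int:
    """vSAN ESA acknowledges once the write is mirrored in its durable log:
    FTT+1 copies, with the full RAID-5/6 stripe written asynchronously later."""
    return ftt + 1

assert osa_raid5_small_write_ios() == 4
assert esa_small_write_ack_ios(ftt=1) == 2  # ack after 2 log copies, not 4 I/Os
```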

Question: What is adaptive RAID-5 in vSAN ESA?


Answer: vSAN ESA can dynamically adjust between the two schemes: 2+1 and 4+1, with
best effort to ensure that there is a spare storage device per best practices. This
intelligence offloads storage management as the cluster size scales up and down.
A cluster with as few as 3 nodes can start with a RAID-5 2+1 scheme which
provides better space efficiency than a RAID-1 using vSAN OSA. When the cluster
scales up to 6 nodes, vSAN ESA automatically converts to a 4+1 scheme for better
space efficiency.
Conversely, when a cluster with a 4+1 RAID-5 scheme reduces node count from 6
to 5 nodes, vSAN ESA dynamically reconfigures to a 2+1 scheme to ensure
resiliency and one spare storage device.
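The scheme selection and its capacity overhead can be sketched as follows (illustrative; the actual vSAN ESA logic also weighs spare capacity and best-practice spare devices):

```python
# Sketch of the adaptive RAID-5 behavior described above (illustrative only).

def esa_raid5_scheme(node_count: int) -> str:
    """Pick the RAID-5 stripe scheme for a given cluster size: 6+ nodes can
    host a 4+1 stripe plus a spare; smaller clusters use 2+1."""
    if node_count < 3:
        raise ValueError("RAID-5 needs at least 3 nodes")
    return "4+1" if node_count >= 6 else "2+1"

def capacity_overhead(scheme: str) -> float:
    """Raw-to-usable multiplier: (data + parity) / data."""
    data, parity = (int(x) for x in scheme.split("+"))
    return (data + parity) / data

assert esa_raid5_scheme(3) == "2+1"      # small cluster: 1.5x overhead
assert esa_raid5_scheme(6) == "4+1"      # scale-up converts to 1.25x overhead
assert esa_raid5_scheme(5) == "2+1"      # shrinking from 6 to 5 reconfigures back
assert capacity_overhead("2+1") == 1.5   # vs. 2.0x for RAID-1 mirroring
assert capacity_overhead("4+1") == 1.25
```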


Question: What are the benefits of an all-NVMe storage pool?


Answer: With the elimination of discrete cache and capacity drives, all drives in the cluster
make up a single storage pool. Each drive is its own fault domain. In the event of a
storage drive failure, resynchronization is significantly minimized.
In a single storage pool, all drives contribute to cache and capacity. Balancing
performance and capacity across the storage pool is easier. The same storage
policies for vSAN OSA are used in vSAN ESA.

Question: What are the license requirements for vSAN ESA?


Answer: vSAN ESA requires vSAN Advanced, Enterprise, or Enterprise Plus license
editions. If using encryption, vSAN Enterprise or Enterprise Plus license is required.
The vSAN Enterprise license is included with the vSphere Foundation subscription.

Question: How does the introduction of vSAN ESA impact the value of vSAN OSA?
Answer: Both vSAN ESA and OSA architecture co-exist in vSAN 8.0. The flexibility of two
architectures can allow customers to use vSAN in even more use cases.
There are many applications that still perform well using vSAN OSA. There may be
applications that are not yet qualified to run on ESXi 8.0.
For customers with no near-future plans to refresh hardware, there is not a rush to
switch over to vSAN ESA.
For customers who are considering a tech refresh, it can make sense to position
vSAN ESA to begin their exploration into identifying benefits of running their
applications in this new architecture.

Question: Are there vSAN OSA features that vSAN ESA does not support?
Answer: vSAN ESA does not support the following features:
• Granular storage policies per VMDK. ESA policies are applied at the VM level.
• Deduplication

Question: Do the scalable, high performance native snapshots in vSAN ESA have any
negative impact to existing tools that use VMware snapshots?
Answer: No. This new snapshot architecture does not change the way in which 3rd-party
VADP backup solutions, SRM, or vSphere Replication interact with snapshots.
These solutions should all see improved performance.

Question: Doesn’t all mixed-use NVMe make vSAN ESA expensive? Will it deliver
sufficient additional performance to justify extra cost of mixed-use NVMe?
Answer: The changes in ESA will impact how storage is consumed. It is expected that ESA
with erasure coding will perform the same as, if not better than, mirroring on OSA. This
will deliver significant capacity savings, in addition to increased capacity savings
from improved compression – which is part of the default storage policy, and
reduced acquisition costs as no cache drives are needed.

Question: Where can I find more information about vSAN ESA?


Answer: Please use the following VMware resources:


• vSAN 8 ESA FAQ
• Blog on Introduction to the vSAN ESA
• Blog on Adaptive RAID-5 Erasure Coding
• Blog on Delivering RAID-1 Performance with RAID-5/6

VxRail HCI System Software


Installation and Deployment
Question: How can a customer use their own VDS to deploy a VxRail cluster?
Answer: The customer needs to create the VDS before deploying the VxRail cluster. In the
VDS, port groups for VxRail system traffic including vCenter Server need to be
defined along with optional NIC teaming policies. Using either First Run or VxRail
API, user inputs the JSON configuration file and assigns the uplinks to the port
groups which is then validated before the cluster is deployed.

Question: Does VxRail still require multicast?


Answer: IPv6 multicast is used by the VxRail Manager loudmouth service and is required for
automatic node discovery. VxRail nodes can be added manually in the event of
customer IPv6 network limitations.

Question: Do VxRail systems address customer encryption and security needs?


Answer: Yes, VxRail leverages data-at-rest encryption and data in-transit encryption as part
of its security capability set. vSAN’s version of data at rest encryption is native to
HCI and is set at the cluster level (supporting hybrid, all-flash and stretched
clusters) for broad protection. Data in-transit encryption keys are auto-generated
with a FIPS 140-2 compliant algorithm, and can be enabled for all-flash nodes in
conjunction with deduplication, compression, and data at rest encryption.
Alternatively, vSphere encryption can be used for VM-level encryption.

Question: Where should the Key Management Server (KMS) be hosted?


Answer: With vSphere and vSAN 7 Update 2, VMware introduces the support for a Native
Key Provider feature which simplifies key management for environments using
VMware encryption. For vSAN, the embedded KMS is ideal for Edge or 2-Node
topologies and is a great example of VMware’s approach to intrinsic security. This
also works with ESXi Key Persistence to eliminate dependencies.

System Management
Question: How are the VxRail systems managed?
Answer: All VxRail hardware maintenance and LCM activities can be managed from within
vCenter with VxRail Manager. For day-to-day VM management, customers manage
the VMware stack on VxRail directly from the vSphere Web Client.


Question: Is there a management interface that ties in all VMware and storage
management into one portal across all VxRail clusters a customer might
have?
Answer: vRealize Automation (optional) allows for management and orchestration of
workloads across VxRail clusters. It also provides a unified service catalog that
gives consumers an App Store ordering experience to make requests from a
personalized collection of application and infrastructure services. All physical
system management requires VxRail Manager.

Lifecycle Management
Question: What is synchronous release (commonly referred to as simultaneous
shipment or SimShip)?
Answer: There is an agreement between Dell VxRail and VMware that for every express
patch, quarterly patch, and major update of ESXi and vSAN software, VxRail will
deliver a supporting software release within 30 days of VMware GA. Best effort is
given to express patches to deliver even more quickly (sometimes in a few
business days). This objective is to provide customers confidence that they can
invest in VxRail while knowing they can quickly reap the benefits of the latest
software features and promptly address security vulnerabilities identified and fixed
by VMware. Refer to this synchronous release commitment KB article for more
information.

Question: What are some common things that would cause a delay to the agreed upon
commitment?
Answer: Holidays, factory shutdowns, and most often engineering findings during validation
might impact the 30-day commitment. Rather than release against a software
version with critical issues still present, engineering may choose to defer to a
subsequent software release/version with proper fixes and often assists our
partners to deliver those fixes faster. You can reach out to your regional Storage
Center of Competence (CoC) Product Line Manager (PLM) to get updates when
there is a delay.

Question: What are the different elements in VxRail LCM update process that makes it
unique and differentiated for customers?
Answer: Refer to the VxRail Techbook for a detailed explanation of the update process, its
features, and enhancements introduced over time. For a presentable overview of
the VxRail LCM experience, refer to the VxRail Customer Presentation or the
VxRail Technical Overview Presentation.

Question: What is the update path for VxRail?


Answer: VxRail update paths differ with every release. Some VxRail software versions allow
updates only along specific update paths, while others are restricted to certain node
types, or are entirely unsupported. To ensure version compatibility, please refer to
the Dell VxRail Release Notes for the VxRail system software that you wish to
update to.

Question: Can a node be downgraded to a prior version of code?


Answer: Yes, but any request to do so must be verified with VxRail engineering on a case-
by-case basis. Refer to KB article 000020460 for additional guidance.

Question: Should customers use software, other than VxRail Manager to perform
updates?
Answer: No, VxRail Manager is the sole source for VxRail lifecycle management, cluster
compatibility, software updates, and version control. VMware software tools such
as vSAN Config Assist and Updates, vSphere Update Manager (VUM) or Dell EMC
OpenManage are not supported for performing VxRail updates.

Question: What is pre-update health check?


Answer: The VxRail pre-update health check, or pre-check as the VxRail Manager UI refers
to it, has been an important tool for users to determine the overall health of their
clusters and assess their readiness for a cluster update. The output of this report
brings awareness to users of troublesome areas and provides users with
information such as Knowledge Base articles to resolve the issues.
Health check automatically runs every 24 hours. This tool relies on an LCM pre-
checks file. The VxRail team frequently updates the LCM pre-checks file with new
checks to improve the quality of the health check. It is best practice to run the latest
version of the LCM pre-checks file.
Clusters that are connected to the Dell cloud via the Secure Connect Gateway
automatically scan the Dell Support website to check for an updated LCM pre-
checks file to download and apply for the next pre-check run.
Unconnected clusters require users to manually check the Dell Support website for
a new version of the LCM pre-checks file. If there is a newer version, users can
download the file onto a local client and upload the file to VxRail Manager.

Question: What is update advisor?


Answer: The update advisor is a planning tool that provides VxRail users a list of the
possible update paths available for their cluster on the System Updates page of the
VxRail Manager UI. They can select each option to generate an update advisor
report that provides a consolidated readout comprised of the target update path
information, drift analysis detailing components that need to be updated, pre-
update health check report, and user-managed component report. This planning
tool is designed to simplify the update planning experience for users.

Question: What is compliance checker?


Answer: The compliance checker allows VxRail users to run a scan on the VxRail stack to
generate a drift detection report against the Continuously Validated State running
on their cluster. VxRail users can use this feature to periodically check for any
version drift. The compliance checker is configured to run daily, though the user
can initiate it on demand.

Question: What are the recent improvements made to reduce cluster update failures
and increase efficiency of performing a cluster update?
Answer: In 7.0.480, there are improvements to the user experience for LCM pre-check and
the update advisor report for unconnected clusters. Automation capabilities
remove the need for users to upload the LCM pre-checks file via the VxRail


Manager CLI. The VxRail Manager UI has been enhanced to allow users to upload
the LCM pre-checks file and installer metadata file to generate the update advisor
report, which embeds the LCM pre-check report. VxRail 7.0.480 also introduced the
capability for users to optionally reboot nodes in sequential order to further verify
the nodes are in good standing before a cluster update.

Question: What is a smart bundle?


Answer: The term ‘smart bundle’ refers to a space-efficient LCM bundle that can be
downloaded from the Dell cloud. A space-efficient bundle is created by first
performing a change analysis of the VxRail Continuously Validated State currently
running on a cluster versus the target VxRail Continuously Validated State that a
user wants to download for their cluster. The change analysis determines the delta
of install files in the full LCM bundle that is needed by the cluster to download and
update to the target version.
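Conceptually, the change analysis is a diff of component versions between the running and target Continuously Validated States. The sketch below is illustrative only (component names and versions are made up):

```python
# Hedged sketch of the smart bundle change analysis described above:
# diff the running Continuously Validated State against the target state
# and keep only the components that changed. Component names are invented.

def smart_bundle_delta(current: dict[str, str], target: dict[str, str]) -> set[str]:
    """Return the components whose install files must ship in the bundle."""
    return {name for name, version in target.items()
            if current.get(name) != version}

running = {"esxi": "7.0U3g", "vsan": "7.0U3", "ptagent": "2.8", "bios": "1.5.1"}
target  = {"esxi": "7.0U3n", "vsan": "7.0U3", "ptagent": "2.9", "bios": "1.5.1"}

# Only the changed components are downloaded in the space-efficient bundle:
assert smart_bundle_delta(running, target) == {"esxi", "ptagent"}
```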

Question: How is VxRail compatible with vSphere Lifecycle Manager (vLCM)?


Answer: vLCM compatibility provides the ability for VxRail to use the vLCM framework to
perform cluster updates with VxRail-provided Continuously Validated States. By
leveraging the vLCM API set, the use of the vLCM framework is mostly transparent
to the VxRail user as all interactions to update the VxRail software remain within
the VxRail Manager LCM experience.

Question: Why would a customer choose to enable vLCM?


Answer: Consolidating other VMware software updates into one job can offer significant time
savings. Separate updates cause multiple boot cycles to occur. If consolidated, all
those updates can be reduced to one boot cycle. NSX-T and vSphere with Tanzu
are integrated with vLCM such that their VIBs can be included into the cluster
image for remediation. By making VxRail LCM compatible with vLCM, VxRail users
can tap into this integration to potentially realize significant time savings. For this
use case, the VxRail user would need to use vLCM or NSX-T manager directly to
configure the cluster image. For VCF on VxRail users, this use case is not
supported until vLCM is supported.
Answer: For VxRail dynamic node clusters, users can take advantage of parallel
remediation to update their nodes in parallel, potentially reducing cluster update
times significantly.
Answer: By aligning with the vLCM framework, VxRail is in position to take advantage of or
enhance future vLCM capabilities.

Question: How does vLCM compatibility impact the VxRail advantage in LCM over vSAN
ReadyNodes?
Answer: At its core, the VxRail advantage remains. VxRail’s Continuously Validated States
is what provides the operational simplicity and certainty our VxRail users value to
confidently evolve their clusters through hardware and software changes over time.
The practice of Continuously Validated States helps offload customers’ IT
resources and decision-making responsibilities. vLCM compatibility changes how
the cluster update is executed. However, it is the planning and preparation LCM
features that differentiates VxRail from vSAN ReadyNodes: VxRail-driven
experience vs. customer-driven experience.


Answer: vLCM compatibility can also add to the VxRail advantage, as its process can now
be more clearly differentiated from that of vSAN ReadyNodes. During baseline
image creation, a vSAN ReadyNode user would need to go through several manual
steps to build the image: deploying and configuring the hardware support manager
plugin to vLCM, deploying the driver and firmware depot, identifying each
component firmware and driver package in the stack, exporting the packages to
vCenter Server, and creating the cluster profile to establish the baseline image. For
a VxRail user, VxRail already has the baseline image in the form of the
Continuously Validated State. It’s a 3-step wizard. The VxRail Manager VM acts as
the hardware support manager plugin and can automatically port the Continuously
Validated State already on its VM into the vLCM framework. Similarly, building the
desired state image is just as manual of a process for a vSAN ReadyNode user
while the VxRail user has a much more streamlined, automated process because
of Continuously Validated States. The use of vLCM APIs in VxRail’s implementation
of vLCM compatibility allows the user experience to be an automated one within
VxRail Manager.

Question: Can the NVIDIA GPU VIB be added to the cluster image?
Answer: Users can include the NVIDIA GPU VIB when customizing their cluster update.
Customers are still responsible for acquiring the files and checking for compatibility,
as this is not part of the VxRail Continuously Validated State. Clusters running VxRail
7.0.350 or later support this feature with vLCM mode enabled. Starting with VxRail
7.0.450, clusters using legacy LCM mode also have this feature.

Question: What is the sequential node reboot feature?


Answer: The sequential node reboot feature is a new cluster update planning tool that users
can use as a test run, where nodes are rebooted one at a time to confirm
the nodes are in good standing without interrupting service. Nodes that fail to
reboot accordingly can be investigated before scheduling a cluster update.
Answer: From previous customer conversations, we understand that some customers prefer
to perform a sequential node reboot to surface issues that may not be detected by
LCM pre-checks before performing a cluster update. This practice can be very
beneficial when the cluster has not been frequently updated and the nodes have
not been rebooted for a long time.
Answer: The Cluster Hosts page in the VxRail Manager UI has been enhanced to allow
users to select from a table which nodes they want rebooted. The table also
has a new column to show when the node was last rebooted. Before performing the
sequential node reboot, cluster-level and node-level pre-checks are performed to
ensure good health before placing the nodes in maintenance mode one at a time.

Question: What considerations should be taken before using the sequential node
reboot feature?
Answer: The node reboot sequence can be scheduled to run at a later time.
Answer: Supported cluster types are standard (3+ nodes), dynamic node, and stretched
clusters.
Answer: The node must have a vSphere license that supports vSphere DRS. Unlike during
a VxRail cluster update, VxRail cannot temporarily enable DRS for nodes that are
not entitled to DRS (i.e., vSphere Standard edition) to automatically move
workloads to maintain service before placing the node into maintenance mode.
Answer: If the node fails to reboot, the user will be presented the option to retry the reboot or
skip the node in the sequence.

Question: How does partial cluster update work?


Answer: At this time, the feature is only available via the API. Within the input parameters
for the cluster update API call, the user can enter the specific host names on which
to perform the cluster update. Per VMware guidance, the full cluster update should be
completed within one week as some Day 2 management operations are disabled
while in a partially upgraded state.

CloudIQ for VxRail


Question: What are the autonomous, multi-cluster management features in VxRail HCI
System Software that are available in CloudIQ?
Answer: • Global visualization – Users have a centralized topology of all their VxRail clusters
in one global virtualization view. Cluster resource utilization (CPU, memory,
capacity, network), health scores, and alerts are available in a virtualization context.
VxRail clusters are organized under Datacenters and vCenter Servers as you would
find in the vCenter Server UI. The virtualization view, under the Monitor section in
CloudIQ, provides a Summary tab for cluster information, an Alerts tab for reported
health alerts, and a VMs tab for an inventory of VMs running on the VxRail clusters.
• Simplified health scores – The health of cluster components is aggregated into
a health score for the cluster. Users can quickly assess the state of their clusters
and identify clusters that require troubleshooting. Users can drill down into
problem clusters to pinpoint the primary issue and view the accompanying
Knowledge Base article to remediate the issue.
• Advanced reporting – CloudIQ users can monitor CPU, memory, disk, and network
performance and utilization metrics at a cluster level. Further drilldown into
individual nodes is available with the Report Browser feature, which allows for
custom line-chart reports that are available for export.
• Future capacity planning – VxRail also uses infrastructure machine learning to
project future usage so that users have better insight into their current usage and
projected IT resource needs.
• Lifecycle management – LCM planning and execution capabilities can be
conducted across multiple clusters with a single workflow. On-demand pre-update
cluster health checks (LCM pre-checks) can determine whether a cluster is ready
for an update. CloudIQ can then orchestrate update bundle downloads onto the
VxRail clusters. Once a bundle is staged on the VxRail Manager VM on the cluster,
a user can initiate the execution of a cluster update.
• Role-based access control – Integration with vCenter Server role-based access
allows users to regulate access and the privilege to perform lifecycle management
operations. CloudIQ can register with the vCenter Servers so that privileges such
as LCM pre-checks, update bundle download and staging, and cluster update can
be managed using vCenter role-based access and enforced by CloudIQ.

Question: How do VxRail customers use CloudIQ?
Answer: Customers can access CloudIQ for their VxRail clusters via the web portal:
https://cloudiq.dellemc.com/. Using CloudIQ does not require additional hardware
or software on a customer’s VxRail cluster. It is entirely consumed via a
cloud-based web portal which provides a single global view of the customer’s
VxRail environment.
Users are required to have an Online Support account and establish a connection
with Dell Technologies using the connectivity agent on the VxRail cluster. The
connectivity agent is a software-based secure communication channel for remote
support activities so that VxRail clusters can securely send telemetry data to Dell
Technologies cloud. VxRail clusters need to be registered to the Online Support
account for CloudIQ to report them.

Question: How does VxRail HCI System Software provide multi-cluster management
capabilities for CloudIQ users?
Answer: A microservice, called adaptive data collector, runs on VxRail HCI System Software
to aggregate metrics from the vSAN cluster and VxRail system. The metrics are
packaged and sent to the VxRail repository in the Dell Technologies cloud via the
connectivity agent. Within the CloudIQ platform in the Dell Technologies cloud,
infrastructure machine learning is used to produce reporting and insight to enable
users to improve serviceability and operational efficiencies. LCM operations are
available via the web portal and the operation requests are sent to the clusters via
the same connectivity agent so that the tasks are executed locally by VxRail HCI
System Software.

Question: What else should I know about the cluster update operation?
Answer: Running the cluster update operation first requires setup work. Customers need to
configure role-based access control and store VxRail infrastructure credentials for
cluster updates. Role-based access control allows a customer to permit select
individuals to perform lifecycle management operations by leveraging roles and
privileges. VxRail infrastructure credentials management is a supporting feature
that further streamlines cluster updates at scale. Like a cluster update via VxRail
Manager, root account credentials for vCenter Server, Platform Services Controller,
and VxRail Manager are required. CloudIQ users can save credentials at the initial
setup and they can automatically populate during a cluster update operation. This
benefit is magnified when performing multi-cluster updates.
Answer: CloudIQ initiates the cluster update operation but the actual update operation is
performed locally on the cluster itself. CloudIQ is responsible for tracking and
reporting the operation.
Answer: Cluster update is only supported for standard clusters (3 or more nodes), dynamic
node clusters, and management clusters for satellite nodes. It is not available for 2-
node clusters, stretched cluster configurations, and satellite nodes.

Question: How secure is it to perform cluster updates from CloudIQ?


Answer: VxRail has implemented multiple security measures.
• Role-based access control via vCenter limits the users who can manage the
credentials manager and who can execute a cluster update.

• A user’s vCenter Server credentials are used to verify permission to perform a
cluster update. A customer can have a small set of users who have knowledge of
the root account credentials and can manage the infrastructure credentials, and
a larger set of users who are privileged to execute a cluster update but cannot
manage the infrastructure credentials.
Answer: All credentials are stored on the clusters themselves and not in the cloud. To
execute a cluster update from CloudIQ, the user has to be on the company network
(whether on-site or via VPN).

Question: How does role-based access control work?
Answer: Role-based access control on CloudIQ leverages vCenter Server role-based
access. To enable vCenter role-based access, CloudIQ users have to enter the
credentials for the vCenter Servers that manage their VxRail clusters. In this
process, CloudIQ registers its privileges with vCenter’s role-based access. From
vCenter, users can manage role-based access control. Once configured, access to
CloudIQ and authorization for certain CloudIQ for VxRail privileges will be enforced
by vCenter role-based access.
CloudIQ for VxRail privileges registered on vCenter include: download software
bundle, execute health check, execute update, and manage update credentials.

Question: How can customers and sellers experience CloudIQ?


Answer: Customers and sellers can experience the CloudIQ capabilities via the CloudIQ
simulator: https://cloudiq.dellemc.com/simulator.

Question: Is this available for dark sites that have no access to the internet?
Answer: VxRail uses a connectivity agent for secure data transfer and requires an internet
connection. VxRail clusters that do not have internet access will not be able to use
CloudIQ.

Question: Do we collect customer data and is the customer cluster data secure?
Answer: Dell Technologies does not collect customer data through CloudIQ.
While the data collector service does aggregate machine data relative to the
cluster, software, hardware topology, and performance, users can be assured that
there is no customer or personal data collected.
The Dell Technologies remote connectivity mechanism assures data transfer
between customer site and Dell Technologies is secure. For more information, refer
to this VxRail security white paper.
Though the customer cluster metadata is not anonymized when it is stored in the
data lake, the data cannot be mapped to a customer account without Dell
Technologies support services. If the customer does have concerns that the
topology data may contain sensitive VM names, they have the option of not using
the multi-cluster management functionality by turning off the collection service.
Access to the cluster metadata is restricted to the VxRail engineering team.

Question: How are these multi-cluster management features in VxRail HCI System
Software licensed for use in CloudIQ?

Answer: Except for the cluster update feature, the capability set is licensed as part of VxRail
HCI System Software which comes standard for every VxRail node. To enable the
cluster update feature, there is an add-on license on top of the standard software
license which is called VxRail HCI System Software SaaS active multi-cluster
management.

Question: Can a customer evaluate cluster update feature in their own environment
before purchasing the add-on license?
Answer: Yes, trial licenses are available for the customer to evaluate the add-on license
functionality on a cluster for a limited period of time. Their sales account
representative would need to submit an RPQ. Contact the VxRail Product
Management team to understand the restrictions that come with the evaluation
license.

Question: What is the process of ordering and applying the add-on license?
Answer: The VxRail HCI System Software add-on license is available for purchase via Dell
sales tools. The add-on license is applied on a per-node basis. In order to execute
a cluster update from CloudIQ, all nodes in the cluster must have this add-on
license. The licenses are associated with the service tags of each node at the time of
order, and the license entitlements for each node are stored internally at Dell.
CloudIQ gathers information from this database to enable the appropriate
functionality for each node.
The add-on license is not transferable to another VxRail node. Once the node is
entitled with the add-on license, it cannot be disassociated from the node.

Question: When purchasing the add-on license after point of sale, which term-based
option should I choose?
Answer: The term-based option should align with the remaining term length of the hardware
support contract for the node and rounded up to the next year. One of the tools to
find this information is https://quality.dell.com/search/. For example, if there are 30
months remaining in the support contract, select the 3-year term add-on license.
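The rounding rule in this answer is simple arithmetic; the sketch below just restates it in code and is not a Dell sales tool:

```python
import math

def addon_license_term_years(months_remaining):
    """Round the remaining hardware support term up to the next whole year."""
    if months_remaining <= 0:
        raise ValueError("support contract has no remaining term")
    return math.ceil(months_remaining / 12)

# The FAQ's example: 30 months remaining -> choose the 3-year term license.
term = addon_license_term_years(30)
```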

Question: What VxRail functionality is available to a cluster if the nodes within have
mixed software licenses?
Answer: Unless all nodes in the VxRail cluster have the add-on licenses, CloudIQ defaults to
the functionality provided by the standard VxRail HCI System Software license for
that cluster.

RESTful API
Question: What are the VxRail RESTful API capabilities?
Answer: The VxRail API provides customers and partners/system integrators a full set of
capabilities to automate “Day 1” (cluster deployment), “Day 2” (cluster operations
and LCM updates) and collection of system information. The latest API
documentation is available at the Dell Technologies Developer Portal:
https://developer.dell.com/apis/5538/.
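As a hedged illustration of driving the VxRail API from a script, the snippet below builds (but does not send) a basic-auth request for system information; the `/rest/vxm/v1/system` path, host name, and credentials are assumptions to verify against the API documentation for your VxRail release:

```python
import base64
import urllib.request

def build_vxrail_request(vxm_host, username, password,
                         path="/rest/vxm/v1/system"):
    """Construct an authenticated request against VxRail Manager.

    The path shown is assumed to return basic system information; confirm
    it (and the auth scheme) in the official VxRail API reference.
    """
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        f"https://{vxm_host}{path}",
        headers={"Authorization": f"Basic {token}",
                 "Accept": "application/json"},
    )

req = build_vxrail_request("vxm.example.local",
                           "administrator@vsphere.local", "secret")
# urllib.request.urlopen(req) would execute the call against a live system.
```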

Question: Does VxRail support cover custom-built automation (e.g., PowerShell scripts,
Ansible playbooks) using VxRail API?
Answer: VxRail support covers public API built into VxRail (VxRail API), the official VxRail
API PowerShell Modules and VxRail script packages created by VxRail
Engineering available for download from the Dell Technologies Support site.
Support for any custom-built automation leveraging VxRail API should be provided
by the party developing and implementing this solution for the Customer (e.g., Dell
Services/Consulting, VMware PSO, 3rd party system integrator, Customer) – it is
not covered by VxRail support.
Answer: VxRail 7.0.010 offers an API-driven deployment option for a VxRail system as a
part of public VxRail API. Like the GUI-driven deployment, the use of this API
requires professional services engagement from either Dell Services or an
authorized/certified partner. With VxRail 7.0.240 or later, this requirement can be
waived via RPQ.

Question: VxRail can be deployed via a “Day 1” REST API. Can customers use this, and
is there an RPQ required?
Answer: “Day 1” cluster deployment capabilities were introduced in VxRail API in VxRail
7.0.010. API-driven deployment is another way of deploying the VxRail cluster,
providing customers with more choice. The deployment restrictions are the same
as for the standard, GUI-driven deployment. API-driven deployment does not
remove the need for professional services to provide customers with the best
experience (the same applies to using this API from the VxRail API PowerShell
Modules and Ansible Modules). Dell Technologies and certified partners deliver
professional services for the cluster deployment.
For more information about the customer-deployable option and requirements,
please check the following section of this Technical FAQ: Customer-Deployable
VxRail.

VxRail Hardware
VxRail on PowerEdge Servers
Question: Which VxRail nodes are available on 16th Generation PowerEdge servers?
Answer: The Intel-based VxRail nodes available on 16G include the VE-660 and VP-760,
based on the PowerEdge R660 and R760 platforms. Note that VxRail platform
model numbers are now aligned to PowerEdge model numbers to easily identify
the underlying parent platform. The suffix for storage type has been removed
from the platform model number. The storage type can be selected in the ordering
path.

Question: Which VxRail nodes are available on 15th Generation PowerEdge servers?
Answer: The Intel-based VxRail nodes available on 15G include: the E660, E660F, E660N,
P670F, P670N, V670F, S670, and VD-4000 based on the PowerEdge R650, R750
and XR-4000.
Answer: The AMD-based VxRail nodes available on 15G include: the P675F, P675N, E665,
E665F and E665N, based on the PowerEdge R7515 and R6515.

Question: If a customer purchases VxRail nodes without TPM, can it be added APOS?
Answer: Yes. However, the VxRail APOS ordering path does not contain a TPM part (the
VxRail APOS component list is not an exhaustive list) so it is recommended to use
the PowerEdge APOS part for TPM in this situation.

Question: Can customers upgrade TPM from 1.2 to 2.0?


Answer: Yes. When TPM is installed and enabled it is tied to the motherboard
cryptographically. If a customer wants to move from TPM 1.2 to 2.0, they need to
disable TPM in BIOS, replace the TPM with the new module, then re-enable TPM
in the BIOS. This workflow will not require replacing the motherboard.

Platform models
VD-4000

Question: Does the VD-4000 support vSAN Express Storage Architecture?


Answer: Yes, starting with VxRail 8.0.210.

Question: Does a VD-4000 satellite node provide RAID or storage device redundancy?
Answer: The VD-4000 does not support a PERC, and therefore, there is no storage
redundancy; each node stands on its own. Note that PERCs are optional, and not a
requirement, for any VxRail node deployed as a satellite node.

Question: What license is required for the VD-4000 nodes?


Answer: VD-4000 nodes support the same license options as other VxRail node models.
See the VxRail Licensing section for details.

Question: What license is required for the VD-4000w witness?


Answer: The VD-4000w (physical hardware) requires separate vSphere licensing, but does
not have to match the license edition of the node itself. See the VxRail Licensing
section for details. No license is necessary for the OVA witness.
Answer: vSphere licensing for the witness requires a subscription license for 1 processor
with 16 cores.

Question: Does VxRail LCM the VD-4000w?


Answer: Yes, VxRail will LCM the physical witness hardware.

Question: Does the VD-4000 support VxRail 8.0.x?


Answer: Yes, starting with VxRail 8.0.210.

Question: My customer has 1GbE switches. Can they deploy VD-4000 in 1GbE
environments?
Answer: In a 2-node configuration, the 10GbE or 25GbE ports can be connected back to
back to handle vSAN traffic. SFP transceivers can be used to auto-negotiate all
other traffic down to 1GbE.

Processors
Question: What processors are available on VxRail?
Answer: 4th Generation Intel Xeon Scalable processors, single or dual, from 8 to 56 cores
each.
3rd Generation Intel Xeon Scalable processors, single or dual, from 8 to 40 cores
each.
Single 2nd or 3rd Generation AMD EPYC processors with up to 64 cores.
Intel and AMD processors cannot be placed in the same vSphere cluster.

Question: What is a Xeon D processor?


Answer: The Xeon D is a brand of x86 system-on-a-chip based on the same architecture as
Xeon Ice Lake CPUs. Unique to the Xeon D line is its emphasis on low power
consumption and integrated hardware blocks such as a network interface
controller. The Xeon D is exclusively available on the VD-4000 in 4- to 20-core
options.

Question: Does VxRail have a dual socket AMD offering?


Answer: No. VxRail offers AMD processors in single-socket configuration on the E665 and
P675, configurable with a 2nd or 3rd Gen AMD EPYC with up to 64 cores.

Question: What are the CPU swap and upgrade rules?


Answer: All CPU upgrades are required to go through the RPQ process. Please refer to this
KB article for caveats and process details.
Depending on the existing VxRail configuration and the processor requested,
changes to additional components such as fans, heatsinks, and PSUs may be
required. In addition, there is no ‘standard’ service offering for processor swaps.
Swaps must be performed by a Partner or through ProDeploy Additional
Development Time SKUs. An RPQ is required for all processor swaps.
• CPU upgrades between Intel generations (e.g., Intel Ice Lake to Intel Sapphire
Rapids) are not supported.
• Swapping/replacing an existing CPU with one of the same generation with a
higher clock speed or more cores is supported, and requires an RPQ.
• CPU swaps that require additional licensing are supported, for example, when a
processor upgrade crosses the 32-core boundary (such as replacing a 32-core or
lower processor with a 48-core or 64-core processor).
• Populating the 2nd socket on a dual-socket motherboard, where the node
originally shipped with a single processor, is not supported.
• CPU swapping and upgrades of AMD nodes, including upgrading from 2nd Gen
to 3rd Gen AMD EPYC processors, is supported, and all upgrades require an RPQ.
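The 32-core boundary mentioned above follows from VMware's per-CPU licensing model, in which each license covers up to 32 physical cores; the helper below is an illustrative calculation, not official licensing guidance:

```python
import math

def vsphere_licenses_per_cpu(cores, cores_per_license=32):
    """Licenses needed for one processor when each license covers 32 cores."""
    return math.ceil(cores / cores_per_license)

# Upgrading a 28-core CPU to a 48-core CPU crosses the boundary:
before = vsphere_licenses_per_cpu(28)  # one license suffices
after = vsphere_licenses_per_cpu(48)   # a second license is now required
```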

Drives
Question: What cache drive options are available?

Answer: In order of overall performance, from highest to lowest, the following cache drive
types are available: Optane, NVMe, MU NVMe, WI SAS, and MU SAS.
Answer: Generally, higher-performing drives cost more; however, the performance gains
from NVMe relative to the increased cost of the overall solution can make them
very attractive. Note that pricing changes frequently.

Question: Will larger cache drives have a greater portion of the drive usable for write
cache?
Answer: For clusters running vSAN OSA, the write cache buffer size is 600GB, though
larger capacity drives will extend drive life. vSAN 8.0 OSA increases the buffer
capacity to 1.6TB for all-flash clusters.

Question: Is it possible to mix different cache drives in the same node?


Answer: For some nodes, depending on the drive backplane, this is possible. However,
uniformity is preferred, though not required, as it provides predictable
performance; different types of cache drives can have greatly varying performance.
This ESG technical validation white paper highlights these differences.

Question: Does Dell replace SSD drives that wear out?


Answer: Yes, ProSupport and ProSupport Plus include drive replacements for solid-state
drive wear-outs.

Question: Can a customer re-use drives from a decommissioned node in a newer node?
Answer: No, this is not supported.

Question: What all-flash capacity drives are available?


Answer: In order of overall performance, from highest to lowest, the following capacity drive
types are available: NVMe, SAS, vSAS and SATA SSD.
Answer: NVMe delivers the most performance, but at a cost premium. The needs of many
high-performance workloads can be met with SAS, particularly when paired with
NVMe cache drives, and two or more disk groups per node.
Answer: SATA drives are the least expensive flash option; they are suitable for general-
purpose VMs and their workloads. vSAS is available at a similar price point to
SATA, but with improved performance.

Question: Does VxRail offer hybrid configurations?


Answer: VxRail continues to offer hybrid configurations, with spinning NL-SAS or SAS drives
for capacity and flash drives for the cache tier. These are available in the P, E, and
S series. Ensure that the customer’s performance needs can be met with this
configuration.

Question: We no longer sell the capacity drive needed to expand my customer’s
diskgroup. What are my options?
Answer: In such circumstances where an equivalent drive is no longer available, a different
drive can be used. Starting with 7.0.210, adding a larger drive to an existing
diskgroup is supported. See the Mixing disk guidelines in a node for VxRail slide in
the Technical Reference Deck for additional information.

Question: How does the implementation of 24Gbps SAS drives affect VxRail clusters?
Answer: The 24Gbps drives have a 14G VxRail software dependency of 7.0.370 or newer,
and a 15G VxRail software dependency of 7.0.405 or newer. Customers that do not
update to these software versions will not be able to add these drives or nodes with
these drives to their clusters.
Answer: The drive industry is shifting to standardize production of 24Gbps SAS drives,
hence the decision to EOL 12Gbps and replace them with 24Gbps variants. There
will not be 24Gbps SAS WI drives on offer as the industry has shifted to NVMe for
this segment. In situations where this changeover would introduce mixed cache
speeds in a node or cluster, know that this is permitted, but adjust performance
expectations to that of the slowest cache drive. See the Mixing disk guidelines in a
node for VxRail slide in the Technical Reference Deck for additional information.
Answer: VxRail 15G and 16G leverage 12Gbps SAS controllers, so no performance gains
from these faster drives should be expected. Customers whose performance needs
exceed this limitation should be encouraged to explore NVMe configurations.

Connectivity: On-board networking, additional networking, and fibre channel support


Question: What networking does VxRail support?
Answer: VxRail supports 10GbE, both SFP+ and BaseT, and 25GbE SFP28, with a variety
of dual and quad port on-board cards from various vendors. Networking can be
expanded with the addition of dual and quad port PCIe network cards, including a
dual port 100GbE NIC. 1GbE is supported with the restrictions described below.

Question: My customer is ready to buy, has 10GbE networking, but plans on upgrading
to 25GbE. How do I best position them for this?
Answer: 25GbE SFP28 is compatible with 10GbE SFP+, and will negotiate down to 10GbE,
with the correct optics. This would enable your customer to refresh their VxRail
environment today, configuring them with 25GbE network cards connected to their
existing 10GbE SFP+ network switches. Then in the future, upgrade the switches
to 25GbE, and gain the additional bandwidth. This does not apply to 10GbE BaseT.
Answer: The inverse is also true. They can upgrade the switches to 25GbE first. The 10GbE
SFP+ network cards in their VxRail node can use this new switch fabric, but at the
slower 10GbE speed.

Question: Can I expand my VxRail systems by adding additional network cards to
support additional traffic bandwidth or traffic configurations?
Answer: Yes. Refer to the Hardware Configurations Guide for supported network expansion
configurations.

Question: What network cards support vSAN over RDMA?


Answer: Certified options include offerings from Mellanox and Intel.

Question: How does VxRail handle LCM of FC HBA?


Answer: In VxRail 7.0.240, FC HBA drivers and firmware can be added to the cluster update
process for a more efficient singular update cycle. This customization is available
using VxRail LCM or vLCM. However, customers remain responsible for testing
and validation, as the FC HBA is still not part of the Continuously Validated State
provided by VxRail. In addition, customers are responsible for managing,
upgrading, and supporting their external storage arrays.
Answer: Customers may install VM/VIB/Drivers to operationalize the use of the external
storage as required.

Question: What are the considerations when using 1GbE networking on VxRail?
Answer: As the network is the backplane for all vSAN storage traffic, the reduced bandwidth
impacts performance and scaling. Because of this, the following limitations are imposed:
• Single-processor configurations only
• Maximum cluster size of 8 nodes
• Hybrid configurations only; all-flash or NVMe nodes are not supported
• Four network ports required on each node
• Requires the use of four 10GbE BaseT ports, which will negotiate down to 1GbE
Note that with today’s increasingly powerful and dense hardware, some customers’
needs may be met with a heavily configured 2-node cluster.
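The restrictions above lend themselves to a quick sanity check during sizing; the following is a hypothetical helper (the node fields are invented for the example), not a VxRail validation tool:

```python
def violations_for_1gbe_cluster(nodes):
    """Return a list of 1GbE restriction violations for a proposed cluster.

    Each node is a dict such as {"sockets": 1, "storage": "hybrid", "ports": 4}.
    """
    problems = []
    if len(nodes) > 8:
        problems.append("maximum cluster size of 8 nodes exceeded")
    for i, node in enumerate(nodes):
        if node["sockets"] != 1:
            problems.append(f"node {i}: single-processor configurations only")
        if node["storage"] != "hybrid":
            problems.append(f"node {i}: hybrid configurations only")
        if node["ports"] < 4:
            problems.append(f"node {i}: four network ports are required")
    return problems

ok_cluster = [{"sockets": 1, "storage": "hybrid", "ports": 4} for _ in range(3)]
bad_cluster = [{"sockets": 2, "storage": "all-flash", "ports": 2}]
```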

Memory
Different processor architectures have different memory rules and configurations to achieve
optimal performance. These are summarized here, and covered in more detail in the Technical
Reference deck and Orderability Guide.

Question: What are the memory rules for Ice Lake / Intel 3rd Generation?
Answer: Eight memory channels, with support for two DIMMs per memory channel. For
maximum 3200 MT/s performance populate all channels, e.g. 8 or 16 DIMMs per
processor. Populating with 4 DIMMs is supported, but provides reduced memory
performance. Capacities range from 64GB to 2TB per processor.
Answer: Near-balanced memory configs, where mixed DIMM capacities are used to more
closely match requirements, are supported, providing 384GB, 640GB, and 768GB
per-processor options.
Answer: Support for Persistent Memory 200 Series, with capacities ranging from 256GB to
4TB per processor depending on mode.
Answer: Note that RDIMMs and LRDIMMs cannot be mixed. Persistent Memory can mix
with either.

Question: What are the memory rules for AMD 2nd or 3rd Gen EPYC?
Answer: Eight memory channels, with support for two DIMMs per memory channel. For best
performance configure one DIMM per memory channel for a max of 8 DIMMs,
resulting in 3200 MT/s of bandwidth. For maximum capacity configure two DIMM
per memory channel for a max of 16 DIMMs, doubling capacity, but at a reduced
bandwidth of 2933 MT/s. Capacities range from 64GB to 2TB per processor.

Answer: E665 has a maximum capacity of 1TB as it is not configurable with the 128GB
LRDIMM. The E665N is limited to eight 64GB DIMMs for a max of 512GB.
Answer: Configuring with 4 DIMMs is supported but recommended only with 32 cores or
fewer.
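The AMD population rules above can be encoded as a small lookup — a sketch restating the FAQ's numbers, with the supported-but-reduced 4-DIMM case omitted since its resulting speed is not stated here:

```python
def amd_epyc_memory_speed_mts(dimms_per_processor):
    """Expected memory speed for the balanced AMD EPYC configurations above.

    8 DIMMs (one per channel) keeps full 3200 MT/s bandwidth; 16 DIMMs
    (two per channel) doubles capacity but drops to 2933 MT/s.
    """
    speeds = {8: 3200, 16: 2933}
    if dimms_per_processor not in speeds:
        raise ValueError("balanced configs populate 8 or 16 DIMMs per processor")
    return speeds[dimms_per_processor]
```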

Intel Optane Persistent Memory


Question: What is VxRail’s response to the Intel announcement that they plan to cease
future Optane (memory and drive) development?
Answer: VxRail will continue to offer Intel Optane memory and drives with availability of
existing Optane supply through the life of the 14G and 15G product line. Optane
memory and drives are not offered on 16G VxRail platforms.
Answer: For full details refer to 411 - VxRail Intel Optane Update.docx

Question: What should I know about Intel Optane Persistent Memory (PMem) on
VxRail?
Answer: PMem is Intel’s storage class memory product, and is used in combination with
traditional DRAM. It can be used in Memory Mode to provide a larger amount of
system memory, up to 4TB per processor (3TB on P580N). It can be used in
Application Mode to provide non-volatile RAM or block storage with RAM like
performance. See the Technical Reference deck and DfD - Intel Optane Persistent
Memory for additional details.
Answer: Only supported with 2nd or 3rd Gen Intel Xeon Scalable processors.
Answer: Only available at POS; not available via APOS.
Answer: vSphere Memory Monitoring and Remediation (VMMR) can be used to
troubleshoot memory bottlenecks between system memory and Intel Optane
Persistent Memory at the host and VM level.

Question: Does PMem require a particular VMware license?


Answer: PMem requires vSphere Enterprise which is provided with the vSphere Foundation
subscription.

Question: Does VxRail support PMem in both memory mode and app-direct?
Answer: Both Memory Mode and App-Direct are supported, but only in particular
configurations. These are documented in the hardware-config spreadsheet.
Answer: Memory Mode and App-Direct mode cannot be used at the same time, only one
mode or the other can be used.

Question: Are there any caveats for Intel Optane PMem?


Answer: Yes. If a VM using PMem in App Direct Mode crashes, it will be restarted on the
same host. However, if the host crashes and the application did not copy the data
to another PMem device, the VM cannot be restarted. This issue is being worked
on as the technology evolves.

GPU
Graphics processing units (GPUs) are compute accelerators that are beneficial to all types of VDI
environments and AI/ML data science workloads. The VP and V Series continue to be the
primary platforms for GPUs in the VxRail family. They support the broadest choice of GPUs
and the most GPUs per node. Some of these GPUs are available on other VxRail platforms.

Question: Which GPU is suited for a particular workload?


Answer: Refer to the VxRail Technical Reference deck for a detailed breakdown on GPUs
and associated workload use cases.

Question: Are GPUs included in the LCM process?


Answer: GPUs are not part of VxRail’s Continuously Validated States because Dell is not
authorized to publish NVIDIA’s software and firmware. However, users can acquire
the files separately and consolidate the update of GPU software and firmware into
the VxRail cluster update workflow for a more streamlined maintenance window.

Question: Can I mix GPUs?


Answer: Mixing of GPUs within a node is not supported, by VxRail or PowerEdge.
Answer: Mixing of GPUs within a cluster is permitted; however, different GPUs use different
vGPU profiles. A virtual machine or VDI gold image that is configured for one vGPU
profile will not run on a GPU that does not support that profile. This can mean that
even though there are sufficient other compute resources for the virtual machine,
vSphere cannot power it on because the physical GPU it requires is not available.
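The placement constraint can be pictured with a small filter; the host and vGPU profile names below are made up for illustration:

```python
def hosts_that_can_power_on(vm_profile, host_profiles):
    """List hosts whose physical GPUs offer the vGPU profile a VM is configured for.

    host_profiles maps a host name to the set of vGPU profiles its GPUs support.
    A VM pinned to one profile cannot power on elsewhere, even with free CPU/RAM.
    """
    return [host for host, profiles in host_profiles.items()
            if vm_profile in profiles]

cluster = {
    "node-01": {"profile-2q"},  # hypothetical vGPU profile names
    "node-02": {"profile-4q"},
}
eligible = hosts_that_can_power_on("profile-2q", cluster)
```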

Question: Which GPUs are supported on which VxRail nodes?


Answer: Refer to the VxRail Technical Reference deck, Max GPU count by node slide, for
configuration details.

Security
Question: What are DISA STIGs, and do I need them?
Answer: Defense Information Systems Agency (DISA) Security Technical Implementation
Guides (STIGs) are the configuration standards for Dept of Defense (DOD)
Information Assurance (IA) and IA-enabled devices/systems. The STIGs contain
technical guidance to “lock down” information systems/software that might
otherwise be vulnerable to a malicious computer attack. To receive Approval to
Operate (ATO), a VxRail customer must first lock down (or harden) in accordance
with applicable DISA STIGs.

Question: Does VxRail provide DISA STIG compliant hardening guidelines and scripts?
Answer: Yes, VxRail provides the VxRail STIG Hardening Package to harden VxRail
systems to comply with DISA STIG requirements, in support of the NIST
Cybersecurity Framework. In addition, VxRail provides security configuration
guidance for protecting the system post-deployment.

Question: What is the VxRail STIG Hardening Package?

Dell Technologies – Internal Use – Confidential 33


Internal Use - Confidential
June 20, 2024

Answer: The VxRail STIG Hardening Package includes scripts and the VxRail STIG
Hardening Guide provides manual steps to harden VxRail systems in compliance
with relevant Department of Defense (DoD) Security Technical Implementation
Guidelines (STIG) requirements. The package supports standard VxRail clusters
running 7.0.131 or later.

Question: How do customers get the VxRail STIG hardening package?


Answer: Two options exist for users to implement the STIG Hardening Package: A self-
service option where customers can download the package from dell.support.com;
and a Dell Technologies Service option that is available through a custom
deployment service. Refer to this KB article for more information.

Question: Can customers pay for Dell to help with the VxRail STIG hardening?
Answer: Yes, there is an optional ProDeploy for VxRail Security STIG Hardening Add-on
offer available in the sales tools. Note – the STIG add-on offer is not available in all
regions and is not currently available for VCF on VxRail.

Question: Where can I find detailed information about VxRail security design and
assurances?
Answer: Please see the VxRail Comprehensive Security by Design white paper that covers
best practices, integrated and optional security features, and proven techniques.
Answer: Every VxRail provides the highest levels of security to enable customers to build
and sustain a compliant and cost-effective cybersecurity solution for Federal,
financial services, healthcare, cloud computing, and other industry sectors.

Question: What are the differences between vSAN encryption and virtual machine
encryption?
Answer: Virtual machine encryption, also known as vSphere Encryption, is enabled on a per-
VM basis, whereas vSAN encryption is enabled for the entire cluster datastore.
vSAN data-at-rest encryption is the better option when the concern is media theft,
and it still allows data reduction to be applied. VM Encryption is the better option
when the concern is a rogue administrator, but it eliminates the benefit of
deduplication because it randomizes the data. Both forms of encryption require an
external KMS. Read https://kb.vmware.com/s/article/2148947 for more details.

Question: Would a customer ever utilize both vSphere and vSAN encryption?
Answer: There are a few scenarios where customers may choose to encrypt some critical
VMs with vSphere Encryption for the benefits described above, primarily to protect
against network intrusion or a rogue administrator. Using both encryption methods
increases CPU overhead because data is then encrypted (and decrypted) twice.

Question: Is a Key Management Server (KMS) required when using encryption with
VxRail?
Answer: Yes. A KMIP-compliant KMS is required for either vSphere or vSAN encryption.
vSphere Native Key Provider, HyTrust, or any other vSphere-compatible KMS is
recommended. The KMS should never be hosted on the same cluster for which it
manages the encryption keys.
Answer: For vSAN data-in-transit encryption, a KMS is not needed.


Question: Does VxRail support multi-factor authentication?


Answer: VxRail supports two-factor authentication (2FA) with RSA SecurID. Users can log
in to VxRail via the vCenter plugin when the vCenter is configured for RSA 2FA.

Question: Does VxRail transmit management traffic securely over the network?
Answer: Yes, VxRail requires management traffic to be transmitted over HTTPS using TLS
1.2. VxRail Manager, vCenter, and iDRAC all disable the HTTP interface, thus
preventing management traffic from being transmitted in the clear.
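
The client-side effect of this policy can be illustrated with a short Python sketch. This is an assumption-laden example, not VxRail code: it simply builds a client TLS context that refuses anything below TLS 1.2, so a handshake against an endpoint offering only older protocols (or plain HTTP) would fail.

```python
import ssl

def strict_client_context() -> ssl.SSLContext:
    """Build a client-side TLS context that refuses anything below TLS 1.2,
    mirroring the transport policy VxRail enforces on its management
    interfaces (HTTPS only, TLS 1.2)."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

# A connection opened with this context (for example, via
# http.client.HTTPSConnection(host, context=ctx)) would fail the handshake
# if the server offered only TLS 1.0/1.1 -- which is the desired behavior
# when validating a management endpoint.
ctx = strict_client_context()
print(ctx.minimum_version)
```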

Question: Does VxRail support ESXi Lockdown Mode?


Answer: Yes, Lockdown Mode is supported with the following restrictions: the only
supported Lockdown Mode is "Normal", and the VxRail Management, Platform
Services, and root accounts must all be added to the "Exception Users" list.

Question: Does VxRail have Common Criteria certification?


Answer: Yes, VxRail has an EAL 2+ certification that provides customers with the assurance
that our process of specification, implementation and evaluation of VxRail security
has been conducted in a rigorous, standard and repeatable manner for its
environment.

Question: Is VxRail IPv6 Ready?


Answer: Yes, VxRail meets all USGv6 Compliance and Interoperability testing, including
IPv6 Ready Standards.

Question: How do VxRail features align with the NIST Cybersecurity Framework?
Answer: Refer to the VxRail Comprehensive Security by Design paper for details.

Question: Is Dell CloudLink supported with VxRail and vSphere 8.0?


Answer: As of March 31, 2023, Dell CloudLink licenses are sold exclusively to PowerFlex
customers as part of a refocusing effort. Refer to this SharePoint page for details.
Existing VxRail 7.x customers running CloudLink today will continue to have
support until 2026 for all use cases.
Existing VxRail 7.x customers running CloudLink who upgrade to VxRail 8.0 and do
NOT have PowerFlex will still be able to use CloudLink. Support for vSphere 8.0 is
included with CloudLink 8.x.
Newly deployed VxRail 7.x or VxRail 8.x customers with no PowerFlex will not be
able to deploy or use CloudLink.

Networking
Question: Where can I find detailed information about VxRail network configuration?
Answer: Refer to the VxRail Network Planning Guide.

Question: Does VxRail provide Layer 3 (L3) Network support?


Answer: Yes, refer to the VxRail Network Planning Guide for more details.


Question: Do VxRail systems include Top of Rack switch networking?


Answer: No, VxRail systems can use the customer’s existing network or a new switch for
connectivity between nodes. ProDeploy Suite for VxRail does not include
installation of switches but does include planning for connectivity work required for
the VxRail node deployment.

Question: Can VxRail be configured for network redundancy?


Answer: Network card redundancy allows users to configure multiple connections across
multiple cards for system traffic for increased availability and active/active
connections for increased performance.
Answer: Support for multiple VDS’s allows users to separate system traffic for additional
security. Combined with network card redundancy, multiple VDS’s can be
configured across multiple network cards. This feature is available using either a
VxRail-managed or customer-managed vCenter Server.
Answer: Dynamic link aggregation for increased throughput is supported for active/active
connections with customer-managed VDS as of VxRail 7.0.130. Active/active
connections with VxRail-managed VDS is supported as of VxRail 7.0.240.

Question: What are the requirements for configuring link aggregation?


Answer: Setup of the VDS and link aggregation group (LAG) are done from vSphere, while
peering and cabling of the links need to be on the switch.
Answer: A minimum of four Ethernet ports is required per VxRail node. At least two ports
are used for non-LAG uplinks (discovery, VxRail management, and vCenter), and
at least two ports are used for the LAG carrying vSAN, vMotion, and customer VM
traffic. Use of multiple NICs is supported. All ports must be the same speed.
Answer: For Day 1 deployment, link aggregation can be configured with customer-managed
VDS. You can use the Configuration Portal to build the JSON file. Either the First
Run deployment wizard or VxRail API can be used.
Answer: For Day 2 deployment, customers can configure link aggregation using their
VxRail-provided VDS with a SolVe procedure.
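
The uplink layout described above can be sketched as a small configuration fragment. This is a hypothetical illustration only: the field names below are assumptions for readability, not the authoritative schema, which should always be generated with the Configuration Portal.

```python
import json

# Hypothetical sketch of the LAG-related portion of a Day 1 JSON file.
# Field names are illustrative assumptions -- the authoritative file must be
# built with the VxRail Configuration Portal.
lag_config = {
    "non_lag_uplinks": ["vmnic0", "vmnic1"],   # discovery, VxRail management, vCenter
    "lag": {
        "uplinks": ["vmnic2", "vmnic3"],       # vSAN, vMotion, customer VM traffic
        "mode": "LACP",
    },
}

# All uplink ports must run at the same speed; a simple sanity check:
port_speeds_gbps = {"vmnic0": 25, "vmnic1": 25, "vmnic2": 25, "vmnic3": 25}
assert len(set(port_speeds_gbps.values())) == 1, "all uplink ports must match in speed"

print(json.dumps(lag_config, indent=2))
```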

Question: What use cases are suitable for link aggregation on VxRail?
Answer: Based on performance testing, read-intensive applications with large block sizes
such as video streaming and Oracle SAS benefit most from link aggregation.


Question: Is LACP supported for link aggregation?


Answer: Yes. LACP for dynamic LAG must also be supported on the switch; check the
VMware compatibility matrix. The vSphere VDS load-balancing hashing options for
LACP are TCP/UDP, IPv4, IPv6, and MAC.

Question: What are the requirements for configuring VxRail system traffic across two
VDSs?
Answer: A minimum of 4 physical ethernet ports are required. They can be from one NIC or
spread across two NICs for NIC-level redundancy. You can use the Configuration
Portal to build the JSON file. In this release, this feature can only be deployed using
VxRail API.
Answer: For Day 1 deployment, it can be implemented on a newly created VxRail-provided
VDS. It can also be implemented on a newly or existing customer-managed VDS.
Answer: There is also support for a Day 2 conversion from one VDS to two VDSs.

Question: Do VxRail systems configured for SFP+, SFP28 interfaces ship with
compatible cables or transceivers?
Answer: No, VxRail systems do not ship with SFP+/SFP28 cables or transceivers – they are
specific to each switch type and are best provided separately so that they match. If
the customer is using SFP+/SFP28/Twinax/optic cables and transceivers, purchase
those compliant to the switch vendor specifications and specifications for the Intel
NICs on VxRail. Additional information on optics and what is included in the
ordering path can be found in the Ordering and Licensing Guide.

Question: Does VxRail provide Layer 3 (L3) Network support?


Answer: Users may create a single cluster across multiple racks or create stretched clusters
over L3, rather than running Layer 2 over a Layer 3 network. L3 networking is
supported with the following considerations:
• The vCenter and VxRail Manager must be on the first rack, because system
VMs will lose network connectivity when moved from the first rack unless the
multi-rack deployment is on the same VLAN.
• While VxRail networks, by default, share the same IP gateway as management
traffic, users can utilize the vSAN gateway override to place vSAN traffic on
routable network segments.
• When creating a new network segment, the user is prompted to identify the
initial network segment where the system VMs reside and to input the vSAN
gateway for the new network segment.
Answer: A maximum of six racks per VxRail cluster is allowed.
Answer: If a customer does not want to use the L3 feature, no action is needed; vSAN will
continue using the default TCP/IP stack.

Question: How are VxRail systems connected to the customer network?


Answer: Each node of a VxRail system is connected to a top-of-rack (TOR) switch, by either
SFP28, SFP+ (Twinax or optic) or RJ45 ports (CAT 6 cables) for cluster
interconnect and uplinks to the customer network. See the latest VxRail Networking
Guide for further details on supported VxRail networking configuration options.

Question: Do VxRail systems include Top of Rack switch networking?


Answer: No, VxRail systems use the customer’s existing network or a new switch for
connectivity between nodes. A 1GbE, 10GbE, or 25GbE top-of-rack (TOR) switch
that supports IPv4, IPv6, and multicast (only on the ports VxRail connects to) is
required. IGMP Snooping and an IGMP Querier are required for 10GbE and
preferred for 1GbE.

Question: Can VxRail be configured for network card redundancy?


Answer: VxRail 7.0.010 enables support for customer-managed VDS with network card
redundancy for increased availability and active/active connections for increased
performance.
Answer: VxRail 7.0.100 extends this to the VxRail-provided VDS.

Question: What is multi-homing?


Answer: Multi-homing enables VxRail Manager to simultaneously connect to an external
management network for administrative purposes and an internal network for
device discovery. Multicast is still required on the internal network.

SmartFabric Services for VxRail


Question: What are SmartFabric Services (SFS)?
Answer: With SmartFabric Services included with Dell SmartFabric OS10 in integrated
mode, customers can quickly and easily deploy and automate data center
networking fabrics for VxRail, reducing the risk of misconfiguration at the same
time. This integration enables VxRail nodes and switches to automatically discover
each other. After the auto-discovery, VxRail fully configures the switch fabric to
support the VxRail cluster. When new nodes are added to the cluster, they are
automatically discovered as well. SFS supports multi-rack, single-site
deployments for the “core” VxRail use case, automating a leaf-spine fabric with up
to 20 switches.
Answer: With SmartFabric Services included with Dell SmartFabric OS10 in decoupled
mode, setup of the networking fabrics for VxRail is not automated. Configuration is
done using the OMNI (OpenManage Network Integration) tool before deploying the
VxRail cluster or when adding or removing nodes from an existing cluster. This
mode is selected if customers desire more VxRail cluster management autonomy
to follow VMware ESXi releases more closely than SmartFabric Services can allow,
or they seek more standardization of their SmartFabric Services network fabric
management across all their Dell storage products.
Answer: SmartFabric Services with VxRail in decoupled mode is the sole option for clusters
running VxRail 7.0.450 or 8.x. Customers with clusters already using integrated
mode need to convert to decoupled mode before updating to VxRail 7.0.450 or
later, or to VxRail 8.x.


Question: What are the options to deploy a network fabric based on SmartFabric
Services?
Answer: Any type of network planning, design or configuration effort is out of scope for a
VxRail deployment services engagement. Professional services are recommended,
but not required, for a deployment of a SmartFabric-based network. In order to
provide a predictable, high-quality customer experience for the entire Dell solution,
professional services are encouraged for both VxRail and SmartFabric. Dell
publishes a deployment guide for SmartFabric Services for customers who opt not
to utilize Dell professional services.

Question: Where can I find more information about SFS for VxRail?
Answer: There are several resources where you can find more information:
• VxRail General FAQ and VxRail Technical FAQ (this document) – both contain
a section dedicated to SFS
• Dell EMC Networking SmartFabric Services for VxRail Solution Brief
• Deployment Guide: Dell EMC Networking SmartFabric Services Deployment
with VxRail
• VxRail Appliance Technical Overview deck

Question: Are there any limitations in terms of number of switches, racks, VxRail
versions, clusters supported by SFS multi-rack?
Answer: VxRail 7.0 releases are supported by SFS versions 1.3 and later, while VxRail 8.0
releases are supported by SFS versions 3.2 and later. Up to 64 nodes in a single
cluster are supported (a vSphere limitation), expandable across 6 physical racks. A
switch fabric based on SFS can support a single managed fabric consisting of up to
20 switches in 9 racks (if 2 spines and 18 leaves are used). Any Dell switch
supported with SmartFabric Services for VxRail can be used. A pair of leaf
switches must be deployed in every rack in the switch fabric, and two spine
switches are required for expansion beyond a single rack. SFS currently does not
support VCF on VxRail, NSX, 2-node, or stretched VxRail/vSAN clusters.
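
The fabric-sizing rule above (a leaf pair per rack, a spine layer for multi-rack growth, 20 switches maximum) can be sketched in Python. This is only the arithmetic as described here; actual fabric validation is performed by SFS itself.

```python
def sfs_fabric_size(racks: int, spines: int = 2) -> int:
    """Return the switch count for an SFS leaf-spine fabric: two leaf
    switches per rack, plus the spine layer once the fabric extends
    beyond a single rack. SFS supports at most 20 switches per fabric."""
    leaves = 2 * racks                              # a pair of leaf switches per rack
    total = leaves + (spines if racks > 1 else 0)   # spines needed only beyond one rack
    if total > 20:
        raise ValueError(f"{total} switches exceeds the 20-switch SFS fabric limit")
    return total

print(sfs_fabric_size(9))   # 18 leaves + 2 spines = 20, the documented maximum
```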

Question: What is an SFS personality?


Answer: The SFS personality enables the functionality and supported configuration on the
switch fabric. Starting with OS 10.5.0.5 SFS has two personalities for VxRail:
• VxRail Layer 2 (L2) Single Rack personality – the original (legacy) SFS
personality, generally available in OS10 releases from 10.4.1.4 to 10.5.0.5, that
automates configuration of a single pair of ToR (or leaf) switches with only
layer 2 upstream connectivity for VxRail clusters;
• Layer 3 (L3) Fabric personality – the new SFS personality, available as of OS
10.5.0.5, automates configuration of a leaf-spine fabric in a single-rack or multi-
rack fabric, and supports both layer 2 and layer 3 upstream connectivity;
Answer: Note that for all new single rack and multi-rack SFS deployments, Dell EMC
requires using the L3 Fabric personality instead of the VxRail L2 Single Rack
personality.


Question: How does SFS Layer 3 (L3) Fabric personality (SFS for multi-rack VxRail)
work?
Answer: SFS uses BGP-EVPN to stretch L2 networks across the L3 leaf-spine fabric,
leveraging hardware VTEP functionality. This allows for the scalability of L3
networks with the VM mobility benefits of an L2 network. For example, the nodes in
a VxRail cluster can reside on any rack within the SmartFabric network, and VMs
can be migrated from a node in one rack to another without manual network
configuration.

Question: Can “legacy” single-rack SFS deployments be upgraded to a multi-rack SFS
Layer 3 (L3) Fabric personality?
Answer: Not currently. SFS for VxRail deployed with OS10 versions prior to OS10.5.0.5 is
limited to a single rack deployment with L2 upstream connectivity only. Those
deployments can be updated to OS10.5.0.5 but will continue operating with the
VxRail L2 Single Rack personality and cannot be expanded to a multi-rack
deployment at this time.

Question: Where can the minimum requirements, features, deployment options, and
other details for VxRail deployments with SFS be found?
Answer: Please consult the deployment guide for the up-to-date list of minimum
requirements.

Question: Can customers switch between manual and automated services?


Answer: No. The auto-switch feature (SmartFabric) or manual switch configuration method
must be chosen at the time of purchase.

Question: Can I enable SFS personality on a supported switch, that wasn’t configured
with SFS?
Answer: Yes, you can change the switch operating mode to Smart Fabric Mode, but it will
erase most of the switch/fabric configuration. Only basic settings, such as
management IP address, management route, hostname, NTP server and IP of the
name server are retained. Similarly, switches can be reconfigured to a Full Switch
Mode (“manual” / no SFS), but this operation deletes the existing switch
configuration.

Question: How are Dell EMC PowerSwitches ordered?


Answer: Please consult VxRail Ordering Guide for information about ordering of VxRail with
SFS.

Question: What software is used to manage SmartFabric?


Answer: Professional Services will install the OMNI plug-in after the VxRail cluster is built.
The OMNI plug-in is the only supported method for Day-2 switch administration.
The new SFS GUI introduced in the OS10.5.0.5 is intended for initial deployment
only.

Question: How can I update the SmartFabric OS10? Is it a part of VxRail LCM?
Answer: SFS OS updates on switches can be done using the OMNI vCenter plug-in and
they are not a part of an automated VxRail LCM today.


Deployment Options
VxRail satellite nodes
Question: What are VxRail satellite nodes?
Answer: VxRail satellite nodes are a low-cost, single-node extension for existing VxRail
customers. These customers have seen the benefit of the simplicity, scalability,
and automation of VxRail. They want to extend that benefit beyond the core data
center, but with a smaller footprint and lower cost than a 2-node cluster can
deliver, and are willing to accept a lower level of resiliency.
Answer: VxRail satellite nodes run the same VxRail HCI System Software as VxRail with
vSAN and VxRail dynamic nodes, providing a common operating model from core
to edge. Like VxRail dynamic nodes, satellite nodes do not use vSAN.

Question: What are the key use cases for satellite nodes?
Answer: Key use cases are customers with locations that have no high-availability
requirements, have a less strict SLA than the core data center, have application
workloads that are not compute, memory, or storage intensive, or where high
availability and SLA requirements can be met by other means, e.g., at the
application layer.
Answer: Typical use cases would be:
• Retail and ROBO customers with distributed edge sites
• Telco 5G far edge sites at cell towers
• Test/Dev and Legacy Application Workloads

Question: What is the hardware configuration for satellite nodes?


Answer: Satellite node hardware configurations are based on the VP-760, VE-660, E660/F,
V670F and VD-4000 platforms. All configuration options for these platforms are
available for satellite nodes, with the addition of a PERC H755 RAID controller
(except for the VD-4000, storage-dense variant of the VP-760, and all-NVMe
variants of the VE-660 and VP-760). Using the PERC H755 provides the ability for
data and virtual machines to be protected from disk failures, a feature not available
with the HBA.

Question: What is the minimum node count for a satellite node cluster?
Answer: Satellite nodes are not deployed as part of a cluster, and cannot be converted for
use in a cluster. Satellite nodes are standalone hosts which are remotely managed
from a VxRail management cluster.

Question: What is a VxRail management cluster?


Answer: A VxRail management cluster is an existing or newly deployed VxRail with vSAN
cluster running 7.0.300 or above which has the role of managing VxRail satellite
nodes, in addition to its regular VxRail duties.
Answer: Up to 500 satellite nodes can be managed by a single VxRail management cluster.
Answer: The maximum supported latency between the VxRail management cluster and
satellite nodes is 200 ms. As latency approaches 200 ms, some operations are
less optimized.
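
The 200 ms ceiling above lends itself to a simple monitoring check. The sketch below is illustrative (node names and measurements are hypothetical); it flags nodes that exceed the supported maximum and those close enough to it that operations may be less optimized.

```python
def check_satellite_latency(latencies_ms: dict[str, float], limit_ms: float = 200.0):
    """Classify satellite nodes by round-trip latency to the VxRail
    management cluster against the 200 ms supported maximum."""
    over = {n: l for n, l in latencies_ms.items() if l > limit_ms}
    near = {n: l for n, l in latencies_ms.items() if limit_ms * 0.8 <= l <= limit_ms}
    return over, near

# Hypothetical measurements gathered by the customer's own monitoring:
over, near = check_satellite_latency({"edge-01": 45.0, "edge-02": 185.0, "edge-03": 230.0})
print(over)   # unsupported: above the 200 ms maximum
print(near)   # supported, but some operations are less optimized
```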


Question: How is LCM performed with satellite nodes?


Answer: LCM of satellite nodes is like the LCM process for other VxRail node types, but
driven from the managed folder object, and not a VxRail cluster object. Satellite
node LCM bundles do not contain updates for VxRail Manager or vCenter, but just
the firmware, drivers and applications supported for components on a satellite
node, reducing the amount of data needing to be sent.
Answer: Starting in VxRail 7.0.480, VxRail checks whether the contents of the node update
require a node reboot. If not, maintenance mode and service interruption can be
avoided. Engineering testing has shown that update time has reduced from 15
minutes to 2 minutes with this new feature.
Answer: Satellite nodes, unlike clusters, cannot migrate VMs to another node; therefore,
these VMs need to be powered off. This action can be performed in advance by the
customer or can be performed by VxRail Manager. If performed by VxRail Manager,
the VMs are powered back on after the node exits maintenance mode once LCM is
completed. The transfer of the LCM bundle and the update operation can be done
separately to reduce the maintenance window needed to complete a satellite node
update.
Answer: Up to 20 satellite nodes can LCM in parallel; additional nodes beyond 20 will be
queued.
Answer: LCM of satellite nodes can be performed via VxRail Manager or VxRail API.
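
The 20-node parallelism rule above amounts to splitting the fleet into update waves. A minimal Python sketch (node names are hypothetical):

```python
def lcm_batches(nodes: list[str], parallel: int = 20) -> list[list[str]]:
    """Split satellite nodes into LCM waves: up to 20 update in parallel,
    and any remainder is queued into subsequent waves."""
    return [nodes[i:i + parallel] for i in range(0, len(nodes), parallel)]

# 45 hypothetical satellite nodes yield three waves: 20, 20, then 5 queued.
nodes = [f"satellite-{i:03d}" for i in range(45)]
print([len(wave) for wave in lcm_batches(nodes)])
```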

Question: How are virtual machines and application availability handled in the event of
node failure or node LCM?
Answer: Satellite nodes are intended for use cases where planned or unplanned downtime
is acceptable. In use cases where downtime cannot be tolerated, availability must
be handled at the application layer. VxRail includes vSphere Replication, which can
be used to protect virtual machines on satellite nodes.

Question: How are satellite nodes licensed?


Answer: See the VxRail Licensing section for details.

Question: Can VxRail satellite nodes operate at extreme temperatures?


Answer: Yes, but only the ruggedized VD-4000 is capable of operating at extreme
temperatures of 23°F to 131°F (-5°C to 55°C).

Question: Does deployment of satellite nodes require services?


Answer: Satellite nodes are customer deployable. The experience is similar to adding a
node to an existing VxRail cluster. Alternatively, ProDeploy Suite for VxRail does
include satellite node deployment.

VxRail dynamic nodes


Question: What are VxRail dynamic nodes?
Answer: VxRail dynamic nodes are compute-only VxRail nodes that do not have internal
drives. They require external storage, either from a remote datastore from a VxRail
cluster via VMware vSAN cross-cluster capacity sharing (formerly known as HCI
Mesh) or a Dell storage product. They run VxRail HCI System Software so that
VxRail clusters running vSAN and dynamic node clusters have a consistent
VMware and LCM experience.

Question: What is Dynamic AppsON?


Answer: Dynamic AppsON is the branding for VxRail dynamic nodes with PowerStore-T,
and refers to the tighter integration the two have with each other compared to
other Dell storage offerings, such as PowerStore LCM integration in VxRail
Manager.

Question: What are the external storage options for dynamic nodes?
Answer: Dynamic nodes have two external storage options: Dell storage product or VxRail
cluster using VMware vSAN cross-cluster capacity sharing for primary storage. The
primary datastore must have at least 900GB of storage capacity to host the VxRail
Manager VM.

Question: Which Dell storage products are supported with VxRail dynamic nodes?
Answer: Dynamic nodes require Dell storage. PowerFlex, PowerStore-T, PowerMax, Unity
XT, and VMAX are supported.

Question: Are third party storage array offerings supported with VxRail dynamic nodes
for primary storage?
Answer: No.

Question: What are the key use cases for dynamic nodes?
Answer: For customers looking to improve the economics of their HCI deployment, VMware
vSAN cross-cluster capacity sharing allows them to scale compute and storage
asymmetrically to better meet their companies’ IT demands while saving on vSAN
license costs where possible. Deploying VxRail dynamic nodes with vSAN cross-
cluster capacity sharing as the primary storage ensures customers have the same
LCM experience in their client clusters as they do with their server clusters running
VxRail HCI System Software. Dynamic nodes enable customers to lower
subscription costs by avoiding additional vSAN capacity subscription licensing.
Answer: Customers can better address data-centric workloads, for example in financial
services and medical verticals, that may still run on traditional three-tier
infrastructure by tightly coupling external storage arrays with dynamic nodes in their
VCF on VxRail environment. Users can add dynamic nodes, creating new workload
domains and utilizing external storage as primary storage with PowerFlex,
PowerStore-T, PowerMax, Unity XT, and VMAX in a VCF on VxRail environment.
Answer: Dynamic nodes can use external storage arrays as primary storage. This provides
flexibility to take advantage of Dell storage arrays’ strong feature set while providing
the same VxRail operational model in the compute layer to address more
workloads.

Question: Is it possible to use VxRail dynamic nodes for SAP HANA?


Answer: Yes, VxRail dynamic nodes are supported as a solution for SAP HANA
deployments under the SAP HANA TDI program provided all the underlying
components, certified server and certified storage are supported and listed on the
SAP HANA hardware directory and the VMware vSphere conditions for SAP HANA
support are all followed. VMware vSAN cross-cluster capacity sharing is not
supported for SAP HANA, therefore VxRail dynamic nodes are supported for SAP
HANA deployments only with certified external storage. VxRail dynamic nodes are
not supported under the SAP HCI program.

Question: What is the hardware configuration for dynamic nodes?


Answer: Dynamic nodes are available on the VE-660, VP-760, E660F, P670F, and V670F
platforms. All configuration options for these platforms, except cache and capacity
drives, are available for dynamic nodes. On the ordering path, select ‘None’ for
internal storage disk drive options.

Question: What is the minimum node count for a dynamic node cluster?
Answer: The minimum node count to deploy a dynamic node cluster is two.

Question: Can a cluster have a mix of VxRail dynamic nodes and VxRail nodes running
vSAN?
Answer: No. A VxRail cluster running vSAN cannot add dynamic nodes.

Question: How are dynamic nodes licensed?


Answer: See the VxRail Licensing section for details.

Question: Do sales of VxRail dynamic nodes count toward VxRail revenue?


Answer: Yes.

Question: Can dynamic nodes be configured to use external storage array and VMware
vSAN cross-cluster capacity sharing?
Answer: Yes. However, one has to be primary storage while the other one is secondary
storage. The primary storage hosts the VxRail Manager VM.

Dynamic nodes with external storage array as primary storage

Question: What protocols are supported for external storage array connectivity?
Answer: FC, FC-NVMe, iSCSI, NFS, NVMe-oF, and NVMe-oF/TCP are available currently.
Please refer to the VxRail Roadmap (https://vxrail.is/6monthroadmap) for future
connectivity support.

Question: Does VxRail also lifecycle manage the external storage array?
Answer: Unless VxRail is in a Dynamic AppsON configuration, management of the external
storage array is done separately.

Question: Is the FC HBA part of the VxRail LCM?


Answer: Starting with VxRail 7.0.240, the user has the option to include FC HBA driver and
firmware updates in the VxRail LCM workflow for a combined single update cycle.
However, the user still retains the responsibility to perform the testing and
validation for the FC HBA. This capability is available with vLCM enablement or
VxRail LCM.


Question: What is the deployment process to configure dynamic nodes with external
storage array as primary storage?
Answer: For fibre-attached storage, the deployment process includes VxRail Day 1 bring-up
as well as setup done on the storage array and FC switch.
Similar to setting up FC-attached storage for ESXi clusters, zoning must be done in
advance of the VxRail Day 1 bring-up and storage needs to be already provisioned
from the storage array to the nodes. A VMFS datastore of at least 900GB in
capacity must also be created and zoned to each node in the cluster ahead of time.
If there are multiple VMFS datastores, VxRail will choose the largest one.
Once those elements are in place, the user can run the VxRail Day 1 bring-up. The
wizard is largely the same as for VxRail HCI clusters except for a few areas: the
user selects the new cluster type (dynamic node cluster) and specifies that it will
connect to an FC storage array for primary storage, and there is no setup of vSAN
traffic. The Day 1 bring-up will migrate the VxRail Manager VM from the internal
BOSS card to the datastore.
Answer: For NFS or iSCSI-attached storage, setup requires post Day 1 activity because IP
connectivity needs to be established on the cluster before storage can be mounted
and the VxRail Manager VM can be migrated onto the primary datastore.
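The datastore-selection behavior described above can be sketched as follows. This is an illustration only; the actual VxRail Manager logic is internal, and the function below is hypothetical.

```python
def pick_primary_datastore(vmfs_datastores):
    """Pick the primary datastore per the documented behavior: a VMFS
    datastore of at least 900 GB must be provisioned and zoned to every
    node, and when several qualify, VxRail chooses the largest one.

    vmfs_datastores maps datastore name -> capacity in GB."""
    candidates = {name: gb for name, gb in vmfs_datastores.items() if gb >= 900}
    if not candidates:
        raise ValueError("no VMFS datastore of at least 900 GB is provisioned")
    # When multiple VMFS datastores qualify, the largest one is selected.
    return max(candidates, key=candidates.get)

# Example: vmfs-02 wins because it is the largest qualifying datastore.
print(pick_primary_datastore({"vmfs-01": 800, "vmfs-02": 2048, "vmfs-03": 1024}))
```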

Question: Can an external storage array be connected to multiple VxRail dynamic node
clusters?
Answer: Yes. An external storage array can be the primary storage resource to multiple
dynamic node clusters which is an example of scaling compute and storage
independently.

Question: What storage features of a Dell storage array are supported as primary
storage for dynamic nodes or as secondary storage for VxRail vSAN
nodes?
Answer: Provided the storage array OS/firmware is at the level published in the E-Lab
support matrix for the corresponding ESXi level on the VxRail nodes, the features
of the storage array are supported. For specific storage array features, follow the
recommended procedures and best practices published by the storage platform
and/or VMware. The exception is that there must be more than one node per site;
that is, two-node stretched metro clusters are not recommended and require an
RPQ.

Question: Is there external storage array management integrated into VxRail Manager?
Answer: Starting with VxRail 7.0.480, VxRail Manager UI can report primary datastore
capacity (total, used, available, utilization), system serial number, and storage
protocol used on PowerStore, PowerMax, PowerFlex, and Unity XT. The feature is
restricted to storage using FC, NVMe/FC, or iSCSI protocols. This feature is
dependent on Virtual Storage Integrator (VSI) plugin v10.2 or later to acquire the
information from the storage system.

Question: Can PowerStore MetroVolume be used with VxRail dynamic nodes?


Answer: Yes, for standard vSphere deployments only. VCF on VxRail deployments require
an approved RPQ. While it is possible to use PowerStore MetroVolume in a
vSphere Metro Stretched Cluster with VxRail dynamic nodes, VxRail does not
provide any automation or specialized procedures for configuring dynamic nodes
with PowerStore MetroVolume. Users should follow the standard manual
configuration steps required to connect the ESXi hosts, provision the storage, and
configure the PowerStore MetroVolume. There is no integrated reporting or
alerting within VxRail Manager for MetroVolume operations or status. The user
needs to use the native vSphere and PowerStore tools for monitoring and
managing the metro cluster and metro volume. Also, the PowerStore LCM capability
within VxRail Manager is limited to one array. For the second array, the user needs
to use PowerStore Manager, the CLI, or the VSI plugin for LCM.

VxRail Dynamic AppsON

Question: What is the PowerStore update feature on VxRail Manager?


Answer: VxRail 7.0.450 introduces the ability for VxRail users to optionally perform a
PowerStore update from the VxRail Manager plugin on vCenter. This capability is
only offered in a Dynamic AppsON configuration, which consists of a VxRail
dynamic node cluster, formed by two or more nodes, and a PowerStore T storage
array, running OS version 3.0.0 or later, that provides the primary storage for the
cluster.

Question: How is the PowerStore update feature for Dynamic AppsON implemented?
Answer: Performing PowerStore update from VxRail Manager requires communication with
the Virtual Storage Integrator (VSI) plugin on vCenter. In VSI v10.2, a private API
has been created for exclusive use by VxRail Manager to perform PowerStore
lifecycle management via VSI. VxRail Manager uses the VSI private API to run the
PowerStore LCM pre-check and update and retrieve PowerStore version
information.

Question: What are the PowerStore LCM operations that can be run from VxRail
Manager?
Answer: From VxRail Manager, VxRail users can upload a PowerStore bundle from a local
client onto PowerStore Manager. VxRail users can then run the pre-check that is
packaged in the bundle. When ready, they can initiate the PowerStore LCM update
operation and monitor its progress. At any point in time, VxRail users can view the
PowerStore version from the cluster system page and physical views on VxRail
Manager.

Question: What are the prerequisites to using this functionality?


Answer: PowerStore LCM operations from VxRail Manager are only available for a
PowerStore T storage array being used as primary storage for a VxRail dynamic
node cluster. All storage types except NFS datastores are supported. The VxRail
dynamic node cluster must be running VxRail 7.0.450 or later. VSI must be
deployed on the same vCenter Server managing the dynamic node cluster. The
VSI must be running a minimum version of 10.2.
Answer: For PowerStore version reporting on VxRail Manager, only configurations using
Fibre Channel connectivity are supported.


Answer: A VxRail user must also enter PowerStore administrator credentials to carry out
the PowerStore update workflow, from uploading the bundle and running the
pre-check to initiating the update.

Question: Why are there storage protocol limitations for the PowerStore LCM
integration?
Answer: There is a limitation in the VSI API for NFS datastores that prevents VxRail from
retrieving the requisite array information to initiate the PowerStore LCM operations.
Answer: For the PowerStore version reporting limitation, you can submit an RPQ for other
storage protocols besides NFS.

Question: What other considerations should a user be aware of?


Answer: The VSI VM should be deployed and configured prior to using the PowerStore LCM
functionality from VxRail Manager. Configuration includes registering the VSI plugin
with vCenter Server, registering the PowerStore array with VSI, and adding the
VxRail management account for access to PowerStore via VSI. VxRail Manager
also requires a valid certificate from VSI and PowerStore.
Answer: While a VxRail user can now perform PowerStore LCM operations from VxRail
Manager, the execution of these operations is performed by the PowerStore
Manager. Troubleshooting failed PowerStore LCM operations is limited from VxRail
Manager and likely requires the user to separately access PowerStore Manager to
view reports, logs, and other aspects of the array.

Dynamic nodes with VMware vSAN cross-cluster capacity sharing as primary storage

Question: What is the deployment process to configure dynamic nodes with VMware
vSAN cross-cluster capacity sharing as primary storage?
Answer: VxRail cluster deployment automates the Day 1 operations: cluster build, vSAN
network setup, mounting of the remote datastore, and migration of the VxRail
Manager VM to the remote datastore. After the VxRail cluster deployment is
completed, the dynamic node cluster is ready for Day 2 operations.
Answer: For the automated Day 1 operations, the user is required to use the VxRail API.
There are storage parameters that need to be populated in the JSON cluster
deployment file to identify the server cluster and remote datastore. For networking,
the vSAN VMK IP for each node, the portgroup, the VLAN ID, and optionally a
vSAN gateway need to be provided in the same JSON file.
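A minimal sketch of what those storage and networking parameters might look like in the JSON cluster deployment file. All key names below are illustrative placeholders, not the actual VxRail API schema; consult the VxRail API documentation for the exact field names.

```python
import json

# Illustrative storage and vSAN-network parameters for a dynamic node
# cluster using vSAN cross-cluster capacity sharing. Every key name here
# is a hypothetical stand-in for the real JSON deployment schema.
deployment_params = {
    "storage": {
        "server_cluster": "vsan-server-cluster-01",  # cluster sharing its capacity
        "remote_datastore": "remote-vsanDatastore",  # remote datastore to mount
    },
    "nodes": [
        {
            "hostname": "dyn-node-01",
            "vsan_vmk_ip": "192.168.50.11",  # vSAN VMK IP for this node
            "portgroup": "vsan-pg",
            "vlan_id": 50,
            "vsan_gateway": "192.168.50.1",  # optional override gateway
        },
    ],
}
print(json.dumps(deployment_params, indent=2))
```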

Question: What is a vSAN gateway?


Answer: Before VxRail 7.0.480, vSAN traffic shared the same default gateway as other
management traffic. While this is not an issue for Layer 2 networking, it can
become an issue for clusters with nodes across different subnets, as the static
routing table for each node becomes complex to manage.
VxRail 7.0.480 enables the use of a vSAN gateway, also known as an override
gateway, which was introduced in VMware vSphere 7.0 U3o. With a vSAN
gateway, Layer 3 network management is greatly simplified, as the static routing
table can be replaced by setting the vSAN gateway on the vSAN VMK of each node.


VxRail dynamic node clusters with VMware cross-cluster capacity sharing are a
good example of where vSAN gateways can be very beneficial, as the Layer 3
networks of server clusters and client clusters can become very complex.
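To see why the override gateway simplifies Layer 3 management, consider a rough back-of-the-envelope comparison. This is illustrative arithmetic only, not a VxRail tool.

```python
# Without an override gateway, each node may need a static route for every
# remote vSAN subnet it must reach; with the vSAN gateway, each node only
# carries its single gateway setting on the vSAN VMK.
def static_route_entries(nodes, remote_subnets):
    # one route per remote subnet, maintained on every node
    return nodes * remote_subnets

def gateway_entries(nodes):
    # one vSAN gateway setting per node
    return nodes

nodes, subnets = 8, 3
print(static_route_entries(nodes, subnets))  # 24 static routes to maintain
print(gateway_entries(nodes))                # versus 8 gateway settings
```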

Question: What is the maximum distance for shared storage between client and server
clusters?
Answer: The maximum latency is 5ms round-trip time.
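A simple way to sanity-check that budget against measured round-trip times (for example, samples gathered with a ping utility between clusters); this helper is a hypothetical sketch, not a VxRail utility.

```python
def within_latency_budget(rtt_samples_ms, budget_ms=5.0):
    """Return True when the worst observed round-trip time stays inside
    the 5 ms budget for vSAN cross-cluster capacity sharing."""
    return max(rtt_samples_ms) <= budget_ms

print(within_latency_budget([1.2, 3.4, 4.9]))  # True: worst case is 4.9 ms
print(within_latency_budget([1.2, 6.1]))       # False: 6.1 ms exceeds the budget
```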

Question: Do the vSAN cross-cluster capacity sharing client and server clusters have to
be managed by a common vCenter Server?
Answer: No. As of VMware ESXi 8.0 U1, client and server VxRail clusters can be managed
by separate vCenter Servers, regardless of whether they are VxRail-managed or
customer-managed. The vCenter Servers can be linked via VMware Enhanced
Linked Mode or exist as standalone instances. However, vSAN stretched clusters
and 2-node vSAN clusters still require a common vCenter Server.

Question: What considerations are there for cluster expansion?


Answer: Cluster expansion is supported via the usual workflows. The expansion node needs
to be on the same subnet as the client cluster.

VxRail with vSAN ESA


Question: What are the minimum requirements to support vSAN ESA on VxRail?
Answer: VxRail supports vSAN ESA on the VE-660, VP-760, E660N, P670N, VD-4510c,
and VD-4520c equipped with NVMe drives. In October 2023, VxRail introduced a
revised set of minimum hardware requirements that enable customers with
moderate performance needs and smaller data centers or edge environments to
benefit from the enhanced efficiency, reduced total cost of ownership, and smaller
failure/maintenance domains provided by ESA. It is crucial to use the sizer tool to
accurately determine whether the hardware specifications are sufficient for customer
needs. VxRail can deploy vSAN ESA with the following minimum requirements:
• Minimum of four RI or MU TLC NVMe, or M.2 NVMe drives per node
(two drive minimum for VD-4510c and VD-4520c)
• Minimum 10GbE networking
• Minimum of dual 8-core Intel processors
(single 16-core Intel Xeon-D processor for VD-4510c and VD-4520c)
• Minimum of 128GB of memory
• Intel Optane Persistent Memory support in App-Direct mode only
• GPU support
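The standard-platform minimums above can be expressed as a simple pre-check. This is a sketch with illustrative field names; the VxRail sizer tool remains the authoritative check, and the relaxed VD-4510c/VD-4520c minimums noted above are not modeled here.

```python
# vSAN ESA minimums for standard VxRail platforms, per the list above.
ESA_MINIMUMS = {
    "nvme_drives": 4,       # RI or MU TLC NVMe, or M.2 NVMe drives per node
    "nic_speed_gbe": 10,    # minimum 10GbE networking
    "cpu_sockets": 2,       # dual Intel processors
    "cores_per_socket": 8,  # 8-core minimum per processor
    "memory_gb": 128,       # minimum memory
}

def meets_esa_minimums(node_spec):
    """True when every attribute meets or exceeds its stated minimum."""
    return all(node_spec.get(k, 0) >= v for k, v in ESA_MINIMUMS.items())

node = {"nvme_drives": 4, "nic_speed_gbe": 25, "cpu_sockets": 2,
        "cores_per_socket": 16, "memory_gb": 256}
print(meets_esa_minimums(node))  # True: this spec clears every minimum
```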

Question: What are the configuration limitations for VxRail with vSAN ESA?
Answer: VxRail with vSAN ESA does not support the following configuration options:
• Mixing of NVMe drive sizes, endurance ratings, or vendors
• Single-socket processors (except the Intel Xeon-D processor on VD-4510c and


VD-4520c)
• AMD processors

Question: What are the deployment options for VxRail with vSAN ESA?
Answer: VxRail with vSAN ESA is only supported for greenfield opportunities. Brownfield
scenarios that involve repurposing VxRail nodes running vSAN OSA to run vSAN
ESA require reimaging and redeployment. This is a disruptive process that may
require migration of workloads to another cluster before decommissioning the
VxRail nodes running vSAN OSA. Nodes may need to be reconfigured to meet the
vSAN ESA hardware requirements before they are reimaged and ultimately
redeployed into a VxRail cluster running vSAN ESA. Due to all these factors, an
approved RPQ is required to carefully evaluate each customer’s situation and
ability to successfully perform this conversion.
Answer: VxRail clusters with vSAN ESA can be managed by either VxRail-managed or
customer-managed vCenter Server 8.0. There is flexibility such that the vCenter
Server 8.0 instance can also manage VxRail clusters running vSAN OSA 7.0 or
OSA 8.0.
Answer: VxRail with vSAN ESA can be deployed as a standard vSAN cluster (3+ nodes), 2-
node vSAN cluster, or a stretched cluster.

Question: What are the deployment limitations for VxRail with vSAN ESA?
Answer: At this time, the following deployment options are not available for VxRail with
vSAN ESA:
• Mixing VxRail 15G and 16G nodes in the same cluster
• Re-purposing existing VxRail nodes
• Mixing vSAN ESA and OSA nodes in the same cluster
• Using vSAN cross-cluster capacity sharing on 2-node clusters
• 2-node vSAN cluster that shares a witness with a 2-node vSAN OSA cluster

Question: Can virtual machines and workloads be moved between vSAN ESA and OSA
clusters?
Answer: Yes, virtual machines and their workloads can be migrated between vSAN ESA
and OSA clusters using shared-nothing vMotion, as is the case between any
vSphere clusters.

VCF on VxRail
Question: Where can I learn more about VCF on VxRail?
Answer: Find more information in the VCF on VxRail Technical FAQ and white paper.


2-node vSAN Cluster


Question: Where can I find more information about 2-node vSAN Cluster configuration
with VxRail?
Answer: Refer to the VxRail Architecture Overview Guide.

Question: What constitutes a VxRail 2-node cluster?


Answer: 2-node deployments are supported on all VxRail E, P, V, D, and S series models
under the following conditions:
• 2-node configurations require a witness node at the same or a different
location.
• The 2-node cluster must use four network ports for base connectivity and
Witness Traffic Separation (WTS).
Answer: 2-node clusters running VxRail 7.0.130 or later support cluster expansion.

Question: Can 2-node implementations support top of rack switching?


Answer: Yes. Configuring a ToR switch for 2-node is allowed and gives customers alternate
connectivity choices beyond direct connection.

Question: My customer would like to remove one node from a VxRail 3-node cluster to
make a 2-node cluster. Is this possible?
Answer: No. A 2-node cluster must be a newly defined configuration. Therefore, removing a
single node from a cluster to form a 2-node configuration is not supported.

Question: Can a VxRail-managed vCenter be deployed on a 2-node cluster?


Answer: Yes.

Question: What are the vSAN licensing options for the 2-node cluster?
Answer: See the VxRail Licensing section for details.

Question: What are nested fault domains or cluster secondary resiliency?


Answer: Nested fault domains, also known as cluster secondary resiliency, add extra
resiliency to 2-node clusters. This enables 2-node clusters to withstand additional
failures; for example, the failure of a disk or disk group in one node when the other
node is already in a failed or offline state. To support this capability, each node
needs at least three disk groups.

Stretched Cluster
Question: What should I know about Stretched Clusters in VxRail?
Answer: Please see the VxRail Architecture Overview Guide.


Customer-deployable VxRail
Question: How is VxRail customer-deployable?
Answer: For existing customers with experienced technical resources, VxRail has
introduced capabilities to enable customers to pre-configure their VxRail clusters
with a web-based configuration portal and to self-deploy their clusters with these
configurations using the VxRail deployment wizard, RESTful API, or the Offline
Deployment Tool.

Question: Should any customer self-deploy their own VxRail clusters?


Answer: No. While the self-deploy option can be ordered with any supported VxRail node,
the default choice is for Dell Professional Deployment Services to deploy the
cluster. The self-deploy option is designed for existing VxRail customers who have
had considerable experience with VxRail infrastructure planning, networking
planning, and virtual infrastructure management. The self-deployment process still
requires extensive planning and preparation beyond what VxRail has introduced to
simplify the pre-configuration and deployment workflow with the configuration portal
and VxRail deployment wizard.
Answer: Deploying a VxRail cluster is still a complex procedure that requires the sales team
to diligently vet which customers are a fit for this capability. Customers who have
chosen to self-deploy but ultimately require assistance from the Dell support team
to deploy their clusters will be directed to purchase Deployment services. Correctly
positioning and selling the customer-deployable option will lead to the best
customer experience.

Question: What is the VxRail configuration portal?


Answer: The VxRail configuration portal is a web-based user interface designed to guide the
customer in building the configuration for the cluster that will be deployed. The
customer creates a project and adds to it the VxRail nodes that will form the cluster.
The customer will provide cluster configuration settings for resources such as top of
rack switches, networking, vSphere distributed switches, vCenter Server, VxRail
Manager, ESXi hosts, and virtual networks. From these settings, the wizard
generates a JSON configuration file which will be used by any of the available
VxRail deployment tools to apply the settings during cluster deployment.
Customers should review with their virtualization and networking teams all
necessary guides to plan the VxRail cluster configuration settings and networking
architecture before using the configuration portal. Those guides can include:
vCenter Server planning guide, VxRail networking guide, and VxRail administration
planning guide.

Question: Can customers self-deploy a VxRail cluster with any VxRail node?
Answer: All VxRail 14th generation, VxRail 15th generation AMD-based and Intel-based
models, and VxRail 16th generation models are customer self-deployable. VxRail
nodes must be running a minimum VxRail software version of 7.0.410. Expect new
platforms going forward to support the self-deploy option unless expressly stated
otherwise.
Answer: Customers can repurpose supported nodes that are already deployed in a cluster.
Customers can re-image the nodes using the node image management tool with a
supported version. Then they can self-deploy the re-imaged nodes into a new
cluster.

Question: How can a customer re-image a VxRail node?


Answer: There are three options for customers to re-image a node.
1. Using a Windows-based or Linux-based node image management tool on a
client laptop connected to the local network of the target nodes, customers
can transfer the image from the client to the nodes via the iDRAC interface
to initiate the imaging operation.
2. Customers can use the VxRail API to run the node image management utility,
which transfers the image from a repository accessible by the in-use VxRail
Manager VM to the target nodes via the iDRAC interface. Once transferred,
the imaging operation can be initiated.
3. The USB-based version of the node image management tool allows customers
to build and transfer the ISO image onto a USB drive so that the target node
can boot from the USB drive and perform the imaging operation. Customers
can use this option to re-image the VD-4000 witness node, which lacks an
iDRAC interface.

Question: Which VxRail cluster types are not customer self-deployable?


Answer: Dynamic node clusters, stretched clusters, and clusters used in VCF on VxRail are
not supported.

Ecosystem support
External storage
Question: Can I use VxRail systems to access external storage?
Answer: Yes, Dell storage arrays are supported as primary storage for VxRail dynamic
nodes where the VxRail Manager VM will run from an external datastore.
PowerStore, PowerMax, Unity XT, VMAX, and PowerFlex are the supported Dell
storage arrays. Third-party storage arrays are not supported.
Answer: For secondary storage use cases, VxRail systems can utilize external iSCSI and
NFS datastores, in addition to Fibre Channel storage.

Question: Can VxRail external storage include any vendor storage array or type?
Answer: Yes, only as secondary storage. External storage can be connected via FC. It is up
to the customer to verify that the FC HBA card, driver, and firmware are qualified
with their storage array.

Question: What native file services are supported for VxRail?


Answer: vSAN 7.0 introduces native file services supporting both NFS and SMB, which are
available with VxRail. It is important to know that the vSAN File Services VM is not
managed by VxRail LCM and needs to be updated separately from a cluster
update.


Answer: vSAN 8.0 supports NFS and SMB. With vSAN 8.0 U2, ESA fully supports vSAN file
services.

VxRail Management Pack for Aria Operations


Question: What is VxRail Management Pack for Aria Operations?
Answer: The VxRail Management Pack is an additional free-of-charge software pack that
can be installed onto Aria Operations to provide VxRail cluster awareness. Without
this Management Pack, Aria Operations can still detect vSAN clusters but cannot
discern that they are VxRail clusters. The Management Pack consists of an adapter
that collects distinct VxRail events, analytics logic specific to VxRail, and three
custom dashboards. These VxRail events are translated into VxRail alerts on Aria
Operations so that users have helpful information to understand health issues
along with a recommended course of resolution. With custom dashboards, users can
easily go to VxRail-specific views to troubleshoot issues and make use of existing
Aria Operations capabilities in the context of VxRail clusters.

Question: How are Aria Operations and SaaS multi-cluster management different?
Answer: Both products have visibility into the health status, health events, and topology and
provide resource metrics charting, anomaly detection, and capacity forecasting of
the VxRail clusters. However, Aria Operations and SaaS multi-cluster management
are designed to solve different customer problems. Aria Operations focuses on the
management and optimization of the virtual application infrastructure for the
complete SDDC stack as well as hybrid/public cloud. SaaS multi-cluster
management focuses on active multi-cluster management of customer’s entire
VxRail footprint from a centralized point. It does not manage the virtualized
workload running on the VxRail clusters.
Answer: Aria Operations can be installed on-premises or consumed as a cloud-based
service. SaaS multi-cluster management is a cloud-based service.

Question: When should I position VxRail Management Pack for Aria Operations vs.
SaaS multi-cluster management?
Answer: The Management Pack itself is free of charge but requires that the customer
purchase Aria Operations licensing entitlements in order to use it. Customers
already using Aria Operations can benefit by adding VxRail context into their
existing monitoring, troubleshooting, and optimization activities of their virtual
infrastructure.
Answer: SaaS multi-cluster management is part of VxRail HCI System Software and does
not require an additional license for its feature set, except for active management. It is
suitable for the majority of the VxRail customer base. It requires VxRail clusters to
be able to connect to Dell cloud via the connectivity agent. Customers looking to
more efficiently manage their VxRail clusters at scale and leverage operational
intelligence to simplify administration can benefit from SaaS multi-cluster
management.

Question: Where can I find more information on VxRail Management Pack for Aria
Operations?


Answer: Please refer to the VxRail Management Pack for vRealize Operations User Guide.

Delivery Options
Integrated Rack
Question: How does the delivery option called VxRail Integrated Rack Services differ
from the existing delivery option?
Answer: Customers who decide on the existing delivery option are looking for flexibility in
networking and racking options. Customers who choose the VxRail integrated rack
deployment option choose to have Dell Technologies rack and stack the VxRail
appliances and optionally other customer selected networking and other desired
infrastructure in a Dell Technologies 2nd Touch Facility prior to shipping. For non-
Dell supplied 3rd party products, the customer will be responsible for procuring and
shipping the products to a Dell Technologies 2nd Touch Facility for rack integration.
There are new ProDeploy Rack Integration for VxRail offers available in the sales
tool for easy quoting of factory rack integration work (previously it required a
customized quote).

Question: What rack design configurations are available for VxRail Integrated Rack?
Answer: With flexible Dell Technologies 2nd Touch Facility factory services, customers
have options for the rack and networking components they would like used.
Customers can purchase a rack from Dell Technologies or from our Dell
Technologies partner, APC, or supply their own consigned 3rd-party rack.
Customers also have options for network switches. Customers can purchase
Dell EMC PowerSwitch with OS10 EE switches from Dell Technologies, or
they can supply their own consigned 3rd-party switches. Any 3rd-party consigned
items supplied by the customer must be purchased separately by the customer
outside of Dell Technologies. Support for those components would be provided by
the component vendor and not by Dell Technologies. So, depending on which
components are used for the system, customers have a choice of what support
experience they would like to have for their infrastructure.
Note: Rack Delivery services are only orderable through DSA/Gii Ordering tools.

Question: How do customers order VxRail Integrated Rack?


Answer: When a VxRail Integrated Rack deliverable is desired, VxRail nodes, Dell EMC
switches, and GiS services must be ordered using the DSA/Gii ordering tools.

Question: Are the fixed rack design configuration service templates only for VCF on
VxRail? Could they also be used for VVD and/or other VxRail use cases?
Answer: Fixed rack design configurations are no longer being offered and are EOL. All rack
integration services for VxRail must now be purchased as custom rack integration
services engagements. These can be quoted by working with your local Dell EMC
services sales specialist just as you would for other professional services offerings.

Question: What networking/scale requirements will be supported (single rack only,
multi-rack) for GiS services?


Answer: The GiS services will support individual rack configurations only; however,
customers can order multiple racks. Racks will include ToR switches and
management switches (when required).

Question: Can Dell sell Panduit racks for integration in the 2nd Touch Facility?
Answer: Panduit is not in the Dell price book catalog, nor is it orderable in DSA/Gii. It must
be customer-consigned material if a customer desires to use Panduit racks.

Question: Are deployment/installation services still required with the order of VxRail
Integrated Rack services?
Answer: Once the custom rack arrives in a customer datacenter, the typical onsite VxRail
installation and deployment services engagement begins. These VxRail installation
and deployment services are required regardless of whether a customer chooses to
have their physical infrastructure pre-racked and stacked at the Dell Technologies
2nd Touch Facility prior to it arriving at their datacenter. The standard VxRail or VCF on
VxRail deployment and installation services would be performed by Dell
Technologies or partner professional services teams to configure the environment
per the designed physical site survey requirements.

Sales
Licensing
Question: What is the general licensing guidance now that perpetual licensing is end of
life?
Answer: VxRail clustered nodes, such as vSAN and dynamic nodes, require VMware
vSphere Foundation subscription at a minimum. VMware Cloud Foundation
subscription could also be used.
VxRail non-clustered hardware, such as satellite nodes and the embedded witness in
the VD-4000, requires vSphere Standard subscription at a minimum. VMware
vSphere Foundation or VMware Cloud Foundation subscriptions could also be
used.
VMware Cloud Foundation on VxRail requires VMware Cloud Foundation
subscription.
Answer: vSphere, vSAN, vCenter Server, vSphere with Tanzu, and Aria Operations licenses
are included in vSphere Foundation and VCF subscriptions.

Question: Where can I find more information about ordering and licensing for VxRail?
Answer: Refer to Dell Sales Tool Ordering and Licensing guide.

Question: Where can customers go to find licenses ordered from Dell?


Answer: The Dell Digital Locker is where customers can go to find their licenses.

Question: Where can I find more information about VMware licensing?


Answer: Refer to the vSphere Product Line Comparison and the VMware Subscription PnP
FAQ Guide (VMware Partner portal) for additional information. You can also go to
the internal SharePoint for VMware Subscriptions with VxRail.


Question: What is VCPP and does VxRail support it?


Answer: VCPP is the VMware Cloud Provider Program, and it is supported with VxRail.

Question: Are RecoverPoint for Virtual Machines (RP4VM) licenses included with the
purchase of a VxRail?
Answer: Yes, except for VxRail satellite nodes. Standard support includes 5 licenses per
node (E, P, V, D, and S Series) and 15 licenses per VxRail G Series chassis. There
is a limitation with RP4VM that prevents support for standalone hosts such as
VxRail satellite nodes.

Question: Is the RP4VM starter pack moving to a 1-year subscription instead of a
per-VM starter pack?
Answer: No. RP4VM licensing covers the ESXi sockets where protected VMs reside, and
only “production” ESXi sockets are counted. RP4VM starter packs remain a
perpetual license.

Question: Where can I find more information on RP4VM licensing?


Answer: Find the RP4VM FAQ as well as ordering and licensing information in the Dell EMC
RecoverPoint for Virtual Machines Ordering and Licensing Guide.

Question: How are stretched cluster deployments licensed?


Answer: VxRail stretched cluster deployments support either vSAN Enterprise or Enterprise
Plus licensing.

Question: Do we support shutting off cores in the BIOS to help customers stay in
compliance with software licensing?
Answer: Open an SR or request an RPQ to ensure we can properly support BIOS changes.

Tools
Question: Can the EIPT (Enterprise Infrastructure Planning Tool) be used for VxRail?
Answer: Yes, for specific power, cooling, weight, dimensions, etc., refer to the EIPT Tool.

Question: Is there a sizing tool available for VxRail systems?


Answer: Yes, there is the online VxRail Sizing Tool: https://vxrailsizing.emc.com/.

Question: Is there documentation for racking VxRail systems?


Answer: The Enterprise Systems Rail Sizing and Rack Compatibility Matrix provides
mounting features and key dimensions of the rack rails used for mounting many
Dell enterprise systems and peripheral devices in a rack enclosure.

Question: Where can I direct product enhancement requests?


Answer: VxRail enhancement requests can be submitted via the VxRail | ISG Request For
Enhancements (RFEs) Portal.

Question: Where can I direct questions related to VxRail platform security?


Answer: Security requests and questions can be submitted in the Security & Customer Trust
portal.

Training
Question: What technical resources can I use to learn about VxRail?
Answer: VxRail Bootcamp series
Answer: VCF on VxRail bootcamp series

Question: What resources can I use to stay up to date on VxRail?


Answer: http://vxrail.is/ignite.

Question: Where can I access additional VMware specific training content?


Answer: From VMware’s Partner Connect page you can access Partner University under the
Training tab. To self-register for the VMware Partner Connect Portal, visit:
https://vmware.secure.force.com/PartnerForms/PC_UserSelfRegistration
Answer: VMware’s The Core page (https://core.vmware.com/) is VMware’s home for
technical guidance on the core technologies (VCF, vSphere, and vSAN) that provide
modern cloud infrastructure.

End of Sales Life (EOL)


End of Sales Life (EOL) for 14th Generation Nodes

Question: What should I know about VxRail 14th Generation nodes End of Sales Life?
Answer: VxRail 14th Generation nodes are no longer for sale as of May 9th, 2023. Below are
other relevant dates for existing VxRail 14th Generation nodes.
• End of Expansion (EOE): April 28th, 2028
• End of Standard Support (EOSS): April 30th, 2028
Answer: The P580N dates are as follows:
• End of Life (EOL): February 5th, 2024
• End of Expansion (EOE): January 31st, 2029
• End of Standard Support (EOSS): January 31st, 2029

Question: What happens to support contracts that exceed the EOSS dates?
Answer: Once EOSS dates are coded, entitlements quoted past the EOSS date are
terminated, and the unused portion of the standard support contract quoted beyond
the EOSS date is credited back to the customer automatically.
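The proration described above can be sketched with simple date arithmetic. This is an illustrative estimate only: it assumes straight daily proration, and the function name, dates, and price are hypothetical, not Dell's documented credit formula.

```python
from datetime import date

def prorated_credit(contract_start: date, contract_end: date,
                    eoss_date: date, total_price: float) -> float:
    """Estimate the credit for the unused portion of a support contract
    quoted beyond the End of Standard Support (EOSS) date.

    Assumes simple daily proration; the actual Dell credit calculation
    may differ.
    """
    if eoss_date >= contract_end:
        return 0.0  # contract ends on or before EOSS; nothing to credit
    total_days = (contract_end - contract_start).days
    unused_days = (contract_end - eoss_date).days
    return round(total_price * unused_days / total_days, 2)

# Hypothetical example: a 2-year contract that runs 9 months past the
# 14th Generation EOSS date of April 30, 2028.
credit = prorated_credit(date(2027, 1, 31), date(2029, 1, 31),
                         date(2028, 4, 30), 7300.0)
```

In this sketch, only the days between the EOSS date and the contract end are credited back; a contract that ends before EOSS yields no credit.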

Support Services
Question: Does the purchase of VMware Extended Support impact the support length of
the associated VxRail hardware?
Answer: No, the support length of the associated VxRail hardware remains unchanged. Note
that hardware support for VxRail nodes running on Quanta platforms ended on
September 30, 2022, and hardware support for VxRail nodes running on PowerEdge
13G platforms ended on May 31, 2023.

Question: Does ProSupport Suite provide code upgrades by Dell Support for the
customer?
Answer: Yes, but it depends on the ProSupport Suite level purchased by the customer. If the
customer purchases ProSupport Next Business Day, code upgrades by Dell Support
are not available; all other ProSupport offers include them. Customers can also
perform their own code upgrades. Note that VCF on VxRail differs from VxRail in
that ALL ProSupport Suite for VCF on VxRail offers include code upgrades by Dell
Support.

Deploy Services
Question: Are ProDeploy Suite offers mandatory?
Answer: No, but they are highly recommended to ensure the best deployment experience for
the customer. Customers can deploy their own VxRail nodes, but should only do so
if they have prior installation experience. ProDeploy Suite for VxRail offers are sold
per node and can be sold with onsite or guided hardware deployment and with
remote or onsite configuration. ProDeploy Plus for VxRail is the highest level of
deployment, providing a ‘white-glove’ onsite hardware installation and onsite
configuration experience; it is the default option for VxRail in the sales tools.

Question: Are there ProDeploy offers for factory rack integration?


Answer: Yes, ProDeploy Rack Integration for VxRail offers are available in the sales tools for
regions that have a 2T facility. These offers are standardized to expedite quoting
(previously each quote had to be customized, which delayed sales cycles).

Solutions
Question: Is VxRail SAP HANA certified?
Answer: Yes. As outlined in the VxRail SAP HANA Design Guide, SAP HANA is fully
validated and supported on vSphere 8.0 and vSAN 8.0; and vSphere 7.0U3, with
vSAN 7.0U3 or VxRail dynamic nodes, based on the following platforms:
• All-flash dual-socket VP-760 and VE-660
• All-NVMe quad-socket P580N
• All-NVMe dual-socket P670N, E660N, and E560N
• All-Flash dual-socket P670F, E660F, P570F, E560F, and D560F

Competition
Question: Where do I get additional information about positioning VxRail systems
against the competition?
Answer: See the VxRail system battle cards in the VxRail Enablement Center.
Answer: Visit the competitive Klue site.
