
Cisco UCS X-Series with

Intersight
Deployment Workshop

Participant Guide
Americas Headquarters: Cisco Systems, Inc., San Jose, CA
Asia Pacific Headquarters: Cisco Systems (USA) Pte. Ltd., Singapore
Europe Headquarters: Cisco Systems International BV Amsterdam, The Netherlands
Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco website at
http://www.cisco.com/go/offices.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of
Cisco trademarks, go to this URL: http://www.cisco.com/c/en/us/about/legal/trademarks.html. Third-party trademarks that are mentioned are
the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other
company. (1110R)
DISCLAIMER WARRANTY: THIS CONTENT IS BEING PROVIDED “AS IS” AND AS SUCH MAY INCLUDE TYPOGRAPHICAL, GRAPHICS,
OR FORMATTING ERRORS. CISCO MAKES AND YOU RECEIVE NO WARRANTIES IN CONNECTION WITH THE CONTENT PROVIDED
HEREUNDER, EXPRESS, IMPLIED, STATUTORY OR IN ANY OTHER PROVISION OF THIS CONTENT OR COMMUNICATION BETWEEN
CISCO AND YOU. CISCO SPECIFICALLY DISCLAIMS ALL IMPLIED WARRANTIES, INCLUDING WARRANTIES OF MERCHANTABILITY,
NON-INFRINGEMENT AND FITNESS FOR A PARTICULAR PURPOSE, OR ARISING FROM A COURSE OF DEALING, USAGE OR
TRADE PRACTICE. This learning product may contain early release content, and while Cisco believes it to be accurate, it falls subject to the
disclaimer above.
Cisco UCS X-Series with Intersight
Deployment Workshop Overview
• Cisco UCS X-Series Solution Architecture
• Cisco UCS X-Series Hardware Components & Traffic Flow
• Cisco UCS X-Series and Intersight Installation Considerations
• Cisco UCS X-Series Management through IMM

Workshop Description
The Cisco UCS X-Series with Intersight Deployment
Workshop will enable Cisco customers and internal and partner SEs/FEs to deploy the UCS X9508 chassis, X210c compute nodes, UCS 6400 Series Fabric Interconnects, and options. Workshop participants will
also learn how to prepare the UCS X-Series hardware
components for discovery by Intersight Managed
Mode (IMM). The workshop also includes information
on how the UCS solution architecture has evolved to
the X-Series and how to manage solution components
with IMM.

What You Will Learn

Participants going through this workshop will learn how to:
• Articulate the solution-level architecture of the UCS X-Series solution, the role of IMM, and the key differentiators of the X-Series solution from previous UCS architectures.
• Identify the function of each hardware component in the UCS X-Series solution and the communication and traffic flow between components.
• Enumerate the key requirements to successfully deploy a UCS X-Series solution with IMM.
• Identify the key features of Cisco IMM as they are accessed to manage a UCS X-Series deployment.
Target Audience

This workshop will benefit installation engineers and other personnel who are responsible for installing, troubleshooting, and providing first-level support for a UCS X-Series implementation in customer networks.

Duration

2-3 hours over 2-3 days
Cisco UCS X-Series with Intersight
Deployment Workshop
Section 1
Solution Architecture

Thank you for joining this training session on UCS X-Series deployment.

Agenda
1. Cisco UCS 1.0 Cloud & Compute Foundations
2. Introducing Cisco UCS® X-Series with Intersight™
3. Managing Cisco UCS 2.0 with Cisco Intersight


Cisco UCS X-Series with Intersight Deployment Workshop Page 1


Copyright 2022 Cisco System, All rights reserved.
UCS Benefits
Hardware Consolidation
• Cisco UCS 5108 Blade Server Chassis significantly reduces the rack-server footprint in the data center
Hardware Abstraction
• Server Profiles create a virtual identity for a server
• Identity can move between hardware platforms

Unified Fabric
• Fabric Interconnect support for Ethernet, Fibre Channel, and Fibre Channel over Ethernet (FCoE) allows for flexible deployments


What is Unified Computing System (UCS)?


Some of the key benefits that UCS brings to the data center compute market are hardware
consolidation and unified fabric. By consolidating hardware, the customer can reduce the size
of the footprint to deliver compute with less cabling and less disruption to that cabling
infrastructure as the customer proceeds through the equipment's lifecycle. This reduction is
achieved through a unified fabric that allows for flexible configurations using Ethernet, FC, FCoE, or a combination of these - a single fabric focused on bandwidth rather than technology.

Hardware consolidation in the unified fabric is the underlying foundation of a hardware


abstraction model. It sits on top of these components to allow the customer to deliver
applications independent of specific hardware. To accomplish this:
• Define the pools and policies needed for the physical hardware.
• Create and apply the Server Profile to a compute node.
• Instantiate the hardware identity required for application function.
• Hardware consolidation
- Cisco UCS 5108 Blade Server Chassis significantly reduces the size of the rack
server footprint in the data center
• Hardware abstraction
- Server Profile allows virtual server identity
- Identity can move from one hardware platform to another
• Unified Fabric
- Ethernet, Fibre Channel (FC), and Fibre Channel over Ethernet (FCoE) on the
fabric interconnect (FI)
- Flexible deployments

Cisco UCS X-Series with Intersight Deployment Workshop Page 2


Copyright 2022 Cisco System, All rights reserved.
Unified Computing System (UCS) FIs provide a southbound unified connection to UCS servers in
either blade chassis or rack form factor, and then upstream into data center or enterprise
networks and FCoE storage networks. The solution also supports configurations for directly
attaching Fibre Channel or Ethernet storage devices to the FIs without the need for intervening
switches.
• Consolidation achieved through the unified fabric
• Connection source:
- Servers to the fabric interconnects (FIs) use only Ethernet
- FIs use Ethernet to the LAN Nexus switches
- Fibre Channel or FCoE are used between the FIs and the SAN switches or directly to
devices
• Intersight Software as a Service (SaaS) and appliance options are available for managing
the deployment
• Intersight offers management options for UCSM managed domains

Cisco UCS X-Series with Intersight Deployment Workshop Page 3


Copyright 2022 Cisco System, All rights reserved.
The ability to move the identity of a compute node from one hardware server to another,
through hardware abstraction is a significant feature of Cisco UCS. This has a number of
benefits such as reducing down time during a hardware failure event. Hardware Abstraction
maps a compute node identity to hardware through a server profile. The compute node’s
identity consists of addresses used by the network to identify the server hardware. Examples of
these identities would be the MAC address of networking hardware on the compute node, or
world-wide port name/world-wide node name (WWPN/WWNN) to identify the storage
networking on a compute node.
Server profiles are a collection of policies that individually represent specific technology
components of server identity. This allows attributes of the server identity and the technology
to be abstracted, instantiated, applied, and then moved to a different physical device as needed
by the customer requirements or in a disaster recovery scenario.
• Holds the server identity and policies such as BIOS and boot policies
• Control boot process and BIOS options without entering server BIOS
• Associating a server profile configures identity and policy information
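As a conceptual illustration only (the pool names, policy names, and structure below are hypothetical and do not reflect the actual UCSM or Intersight object schema), the idea of a server profile as pooled identities plus policy references can be sketched in a few lines of Python:

# Conceptual sketch only: a server profile as pooled identities plus policy
# references. Names and values are hypothetical, not a real UCS schema.
server_profile = {
    "name": "esx-host-01",
    "identity": {
        "uuid": "pool:uuid-pool-A",          # UUID drawn from a pool
        "vnic_macs": ["pool:mac-pool-A"],    # MAC addresses drawn from a pool
        "vhba_wwpns": ["pool:wwpn-pool-A"],  # WWPNs drawn from a pool
        "wwnn": "pool:wwnn-pool-A",
    },
    "policies": {
        "bios": "virtualization-tuned",      # reference to a BIOS policy
        "boot": "san-boot-primary",          # reference to a boot policy
        "lan_connectivity": "prod-vlans",
        "san_connectivity": "vsan-100",
    },
}

def associate(profile, compute_node):
    """Associating the profile stamps the identities and policy settings onto
    the physical node; disassociating and re-associating moves the identity."""
    compute_node.update(profile)
    return compute_node

print(associate(server_profile, {"slot": "chassis-1/blade-3"}))

Because the identity lives in the profile rather than in the hardware, moving the profile to a spare node in a failure scenario carries the MAC addresses, WWPN/WWNN values, and boot configuration with it.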

Cisco UCS X-Series with Intersight Deployment Workshop Page 4


Copyright 2022 Cisco System, All rights reserved.
In addition to the policies and profiles, Cisco UCS Manager (UCSM) addresses additional management requirements for the underlying UCS infrastructure components: fabric interconnects (FIs), chassis, fans, power supplies, and other supporting equipment in the blade chassis or rack servers. Protocols such as SNMP handle monitoring, Return Material Authorizations (RMAs), inventory, and connecting to network management applications.

In UCSM, switch elements, such as FI, include port and traffic management.
Compute nodes have many internal devices: mezzanine cards, storage disks, and the internal
management interface called the CIMC (Cisco Integrated Management Controller). The CIMC is
used to communicate hardware identity information to the FIs for inventory and management
of identities.

Cisco UCS X-Series with Intersight Deployment Workshop Page 5


Copyright 2022 Cisco System, All rights reserved.
Managing the Current UCS Deployment
[Figure: Interfaces, management processes, and managed endpoints. An active and a subordinate Cisco UCS Manager instance on the fabric interconnects form a redundant management plane, synchronizing firmware and configuration data and providing multi-protocol support, to manage switch, chassis, blade server, and rack server elements.]

This figure illustrates a managed UCS deployment. Managed endpoints include switching
chassis and compute node components, which are discovered, cataloged, and managed by UCS
Manager in an XML database. The API is what gives the customer access to the XML database.
The database information is also translated to HTML for GUI access and searchable via the CLI
on the console.
Standalone deployments are an option, but this creates a single point of failure for
management and data transmission.
HA configuration, firmware, policies and profiles are all controlled through the UCS Manager
interface.
• UCSM provides three ways to connect to the hardware and control the different
deployment elements
- GUI (HTTP/HTTPS)
- Console (CLI)
- API (Application Programmable Interface); see the sketch after this list
• Administrators decide the connection type
• Connect to the active fabric interconnect - UCSM is active on one FI in a cluster deployment
• The subordinate will not offer the option to log in because the UCSM cluster uses a one-way synchronization mechanism to prevent “split-brain” scenarios
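To make the API option concrete, here is a minimal sketch of a UCSM XML API session in Python; the hostname and credentials are placeholders, and certificate verification is disabled only for brevity. It logs in with the aaaLogin method, resolves an object from the XML database by its distinguished name, and logs out.

# Minimal sketch of a UCSM XML API session; hostname and credentials are
# placeholders, and TLS verification is disabled only to keep the example short.
import requests
import xml.etree.ElementTree as ET

UCSM = "https://ucsm.example.com/nuova"   # XML API endpoint on the active FI

# aaaLogin returns a session cookie that all subsequent requests must carry.
login = '<aaaLogin inName="admin" inPassword="password" />'
resp = requests.post(UCSM, data=login, verify=False)
cookie = ET.fromstring(resp.text).attrib["outCookie"]

# Resolve the top-level "sys" object from the XML database by distinguished name.
query = f'<configResolveDn cookie="{cookie}" dn="sys" inHierarchical="false" />'
print(requests.post(UCSM, data=query, verify=False).text)

# Release the session.
requests.post(UCSM, data=f'<aaaLogout inCookie="{cookie}" />', verify=False)

The GUI and CLI ultimately surface the same object model; the API simply exposes it programmatically.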

Cisco UCS X-Series with Intersight Deployment Workshop Page 6


Copyright 2022 Cisco System, All rights reserved.
Managing the Current UCS Deployment (Cont.)

The UCS product portfolio has always included a robust range of devices and implementation
models. Traditionally, Cisco provided the B-Series with a high level of compute power but able to support only two drives.
The C-Series compute nodes have been the primary option for customers looking for a more
“dedicated” networking/storage solution. Some of the C-Series servers can support up to 24 drive bays with multiple drive options.
Cisco also has support for “pre-built” deployments such as VBlocks and FlexPods. In these
solutions, the customer selects what hardware to deploy. Then, Cisco will pre-build the
solution, box it, and ship it to the customer. This allows the customer to simply unbox the
solution, plug it in, and start their deployment.
With such a robust product portfolio, the customer needed a management solution that would
work for all deployments across multiple data centers. This was the driving factor to position
IMM and Intersight management at the center of the UCS deployment.

Cisco UCS X-Series with Intersight Deployment Workshop Page 7


Copyright 2022 Cisco System, All rights reserved.
UCSM Interface

This is a representation of the UCSM interface as it stands today. It has evolved over time,
beginning as a Java applet and then migrating to an HTML5 model. However, it remains a
software component of the fabric interconnect. This means that each domain, without UCS
Central, is managed on a per-domain basis. This can be very cumbersome in larger
deployments, especially across multiple data centers.

Cisco UCS X-Series with Intersight Deployment Workshop Page 8


Copyright 2022 Cisco System, All rights reserved.
Cisco UCS X-Series with Intersight Deployment Workshop Page 9
Copyright 2022 Cisco System, All rights reserved.
In this topic we will address the Cisco UCS Chassis and Fabric Interconnect evolution to the X-
Series.

Cisco UCS X-Series with Intersight Deployment Workshop Page 10


Copyright 2022 Cisco System, All rights reserved.
Over the last decade, Cisco has been focused on simplifying infrastructure and operations.
Some of the key advancements that have been made are as follows:
• Unified/Converged Fabric
- Reducing the data center footprint
• Stateless System (Server Profiles), Policies, Unified Management
- Ease of deployments and upgrades while supporting equipment replacement for
scalability

Cisco UCS X-Series with Intersight Deployment Workshop Page 11


Copyright 2022 Cisco System, All rights reserved.
Cisco Focus for Last Decade (Cont.)
• Resiliency and Performance for Mission Critical Apps
- Consistent performance while reducing downtime during hardware failures and
upgrades
• Solutions, Ecosystem Partnerships
- Simplifying the turn up of full application stacks
• Power and Cooling, Chassis and Blade Design
- Multi-generational CPU support for the same chassis

Cisco UCS X-Series with Intersight Deployment Workshop Page 12


Copyright 2022 Cisco System, All rights reserved.
Another focus for Cisco has been on the modernization of the application ecosystem. With
advancements in virtualization, management, and containerized systems, Cisco grew with the
industry. In doing so, the UCS X-Series will include the following:
• Multi-cloud integration such as Azure
• Next generation artificial intelligence/machine learning (AI/ML), and container support
• Prebuilt solutions such as FlexPod and FlashStack
• Application modernization running in parallel with vendors such as VMware

Cisco UCS X-Series with Intersight Deployment Workshop Page 13


Copyright 2022 Cisco System, All rights reserved.
Opportunities: Expand Support for Emerging Workloads
Blades continue to be the preferred choice for mission-critical enterprise apps (Oracle, SQL, SAP, large VDI installations, and so on), but blades have a limited number of local drives (DAS storage) and GPUs. As a result, blades have traditionally been unable to support some workloads:
• HCI software - racks and specialized racks; blades lack the needed number of drives and GPUs
• Big Data - racks; blades lack the needed number of drives
• Splunk - racks; blades lack the needed number of drives
• GPU workloads - specialized racks; blades lack the needed number of GPUs
• Cloud native - racks/cloud; the cloud makes it easier to use
Opportunity: expand supported workloads, especially at scale, where blades bring significant TCO favorability.


When diving into the application side of these deployments, below are some of the application areas Cisco would like to address:
• HPC (High Performance Computing) environments
• Big Data
• Splunk for management
• Higher Graphics Processing Unit (GPU) workloads
• Support for Cloud Native applications

Cisco UCS X-Series with Intersight Deployment Workshop Page 14


Copyright 2022 Cisco System, All rights reserved.
As always, Cisco maintains an eco-friendly approach to hardware. Some of the major
advancements that have been made in that area are as follows:
• High Watt CPU, GPU
• PCIe Gen4/5/6
• Compute Express Link (CXL)
• External Persistent Memory
• 100G/200G Networking
• New SmartNIC Features
• 64TB+ Gen5 nonvolatile memory express (NVMe) Drives
• Others…

Cisco UCS X-Series with Intersight Deployment Workshop Page 15


Copyright 2022 Cisco System, All rights reserved.
When sizing the deployment, looking at the application’s current and future needs is very
important. With that said, a highly visible and robust management solution is needed. Not only
can Intersight manage the hardware deployment, it can also monitor the deployment to ensure
optimal application performance. This not only includes general purpose applications such as
Big Data, it can also include mission critical applications such as SQL and SAP. The Intersight
management console is not only important for initial deployment, but throughout the lifecycle
of the data center.

Cisco UCS X-Series with Intersight Deployment Workshop Page 16


Copyright 2022 Cisco System, All rights reserved.
The evolution of the 5108 Chassis to the new 9508 Chassis:
• Communication between the server and I/O Modules (IOM).
• Communication between the IOM and the fabric interconnect (FI).
• Upgrades to the FIs over the years.
• It is also important to mention that the management of UCS will be changing.
UCS X-Series is an evolution of the UCS hardware architecture. One of the most direct ways of
seeing this is the commonality of the FIs between the original UCS architecture and the X-Series
architecture. Both can use the fourth-generation 6400 Series FIs.
In the earlier chassis, a pair of IOMs contained the network attachment points that connected the servers to the FIs. UCS X-Series continues to have a module to accomplish that function, now
called the IFM (Intelligent Fabric Module).

Cisco UCS X-Series with Intersight Deployment Workshop Page 17


Copyright 2022 Cisco System, All rights reserved.
Consolidating workloads into fewer platforms can accomplish:
• Flexible infrastructure “anywhere”
• Standardized on UCS for a completely unified data center infrastructure experience
• Eliminate complex, multi-module, and customized HW form factors
• De-risk overprovisioning and avoid under-provisioning to address modern workloads
• Structured data on databases, business applications, and web apps
• Unstructured data and data-intensive apps for big data, machine learning, and AI
• Next generation apps using offload engines
• Hyperconverged workloads
• Storage and network can be IO-shared across multiple server nodes
UCS allows for the abstraction of identity requirements into policies and profiles and then
applies those policies and profiles to the hardware that meets the application's needs. There
are a variety of blade and rack form factors meeting those requirements. There are limitations
in any given form factor, directly through size and indirectly through the available power or
cooling delivered in areas such as CPU, memory, storage, network bandwidth, and so on.
In dealing with these constraints, customers must identify hardware requirements for their
application often leading to an expansion of the number of form factors deployed in the data
center. UCS X-Series expands the chassis architecture capabilities by having additional
bandwidth or capacity in each area, thus reducing the sprawl of form factors in the data center.

Cisco UCS X-Series with Intersight Deployment Workshop Page 18


Copyright 2022 Cisco System, All rights reserved.
Expanding the chassis architecture capabilities enhances the ability to manage what is delivered
to a specific application without creating larger pools of stranded resources anytime an
application is deployed. Traditionally, this has been referred to as composability.
• Simplify composability as code
- Dynamically compose via Intersight; software-defined mix and match
- Decide in-chassis or in the domain; no longer locked into decisions at purchase
- All aspects of servers include composability
• PCIe fabric
- Only blade system to allow GPUs and accelerators
- Storage-class memory
- Storage goes beyond just Serial attached SCSI (SAS)

Cisco UCS X-Series with Intersight Deployment Workshop Page 19


Copyright 2022 Cisco System, All rights reserved.
A modular architecture begins with basic capabilities and can be expanded and modified over
time as application requirements change.
• Move beyond the limitations of pre-ordained system functionality that have led to complex and tough-to-run siloed compute architectures
• Software-define each node to perform workloads in a more cost-effective, easily
managed single architecture
• In the UCS evolution, capacity is dramatically expanded for compute, memory, network,
and storage

Cisco UCS X-Series with Intersight Deployment Workshop Page 20


Copyright 2022 Cisco System, All rights reserved.
The initial target of the new architecture is to readily deploy today's applications onto new
architectures, specifically reducing the need to select myriad rack-based architectures that do
not benefit from shared power, shared cooling, or shared fabrics with a chassis-based
architecture. Having a long-term roadmap of deploying tomorrow's applications without having
to have a major overhaul of hardware architecture is a key aspect of UCS X-Series.
• Protect investment - Cisco design is provisioned to accommodate next-gen and next-
trend capabilities, power and cooling, fabrics, and more
• UCS X-Series is future ready

Cisco UCS X-Series with Intersight Deployment Workshop Page 21


Copyright 2022 Cisco System, All rights reserved.
UCS X-Series is a major innovation in hardware and management, which is done through Cisco's
Intersight management products. Specifically, the new Intersight Managed Mode (IMM)
capability allows management of fabric interconnect-based chassis architectures and rack
servers attached to fabric interconnects. Policy and profile-based management that UCSM once
provided is now accomplished through the Intersight cloud or appliance-based solution.
• Entire management architecture reinvented – not just UCS fabric and chassis
• Intersight becomes the single point of management for UCS and beyond
• Simplify IT administration with a hands-off, concierge-level management experience
with Cisco Intersight
• Automate and off-load routine system management and governance
• Intelligent, cloud-based management with Cisco Intersight
- Simplify deployment and maintenance of infrastructure
- Analytics provide actionable intelligence for IT management
• UCS X9508 optimized for Intersight
- Global software-defined server profiles within a domain or multisite
- Analytics and Hyperconverged Infrastructure (HCI) ready
- Flexible Manageability: SaaS-Managed and Tethered Appliance Managed

Cisco UCS X-Series with Intersight Deployment Workshop Page 22


Copyright 2022 Cisco System, All rights reserved.
Cisco UCS X-Series with Intersight Deployment Workshop Page 23
Copyright 2022 Cisco System, All rights reserved.
Intersight Managed Mode (IMM) for UCS management can be implemented through cloud-attached SaaS or through an on-premises appliance model when required.

Cisco UCS X-Series with Intersight Deployment Workshop Page 24


Copyright 2022 Cisco System, All rights reserved.
The current fabric interconnect (FI)-based model of UCS management has
limitations/drawbacks:
• Limited scale due to object model living on FIs – FI resources limited
• Software packaging increasingly difficult as we drag along legacy with our expanding
product portfolio
• Feature velocity and unintended feature fixes slow to release due to software bundling
• Complex legacy API (xml based) and proprietary server policy automation
How does the customer benefit by moving the UCS architecture's management from the UCSM
in the FI to IMM in the Intersight cloud? One key is the limited scale of management available
through FI-based UCSM. You can only manage through a single UCSM that is physically
connected to that FI domain. Going beyond the single FI domain means that you must layer UCS Central on top of UCSM.
There are some complexities in aggregating and handling the data from a significant number of
UCS domains. These domains may be on different firmware versions, have different policy
models, and different application portfolios. Furthermore, updates, adding features, or fixing
bugs in UCSM is achieved through the management interfaces of the FIs, potentially requiring
undesirable reboots and traffic disruptions.
With UCSM, the ability to deploy new features is reduced: the customer's desire to maintain a stable environment for the application outweighs the desire or ability to consume new features and unintended feature fixes. UCSM does deliver an API-based management capability, but it is the legacy XML API.

Cisco UCS X-Series with Intersight Deployment Workshop Page 25


Copyright 2022 Cisco System, All rights reserved.
Customer Benefits of IMM
• Global profile and policy scale with Intersight SaaS
• Continuous integration/continuous delivery (CI/CD) feature velocity by hosting UCSM
functionality in Intersight – Feature Rollout being separated from FW
• Pinpoint patching by removing the need to distribute firmware in bundles
• Modern API using OpenAPI framework – facilitates integrations
• Industry standard server policy automation with Redfish
The UCS hardware delivers an API and management tools to utilize those APIs. This model
continues with the UCS X-Series architecture. Intersight is a state-of-the-art cloud-native
application. It moves to an open API model rather than legacy XML-based models – a key
benefit for customers who want to utilize APIs.
Moving from UCSM domain-based scale to global scale required an increase in the number of
endpoints that can be managed. With IMM, customers can accommodate thousands of
endpoints at a local or global level. Through IMM, Cisco can deliver new features weekly that
are non-disruptive to customer environments. Just like features, unintended feature fixes can
also be rolled out as needed and focused on specific points of defect without impacting other
services. Velocity expands with Intersight, and IMM applies that velocity to new architectures.
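As a rough sketch of what that OpenAPI-based interface looks like from a client's point of view (the resource path and query parameters below are illustrative; a production call must carry an HTTP-signature Authorization header generated from an Intersight API key, which the published SDKs handle and which is only indicated, not implemented, here):

# Illustrative sketch of an Intersight REST query. Running it as-is would fail
# authorization: the Authorization header must be an HTTP signature computed
# from an Intersight API key (normally produced by the Intersight SDKs).
import requests

BASE = "https://intersight.com/api/v1"

params = {
    "$select": "Name,Model,Serial",  # OData-style projection of fields
    "$top": 5,                       # limit the number of records returned
}
headers = {
    "Authorization": "<HTTP-signature generated from an API key goes here>",
}

resp = requests.get(f"{BASE}/compute/PhysicalSummaries", params=params, headers=headers)
print(resp.status_code)
print(resp.json() if resp.ok else resp.text)

The same resource-oriented pattern applies across the API, which is why integrations and automation tooling can be generated directly from the published OpenAPI specification.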

Cisco UCS X-Series with Intersight Deployment Workshop Page 26


Copyright 2022 Cisco System, All rights reserved.
Until now, the Enterprise Data Center was a very complex web of management and
infrastructure components. One of the major hurdles Cisco wanted to overcome was the
massive amount of management applications needed for a data center solution. Customers
were looking for a more streamlined management solution. We will discuss later how Cisco’s
Intersight SaaS dashboard or on-site appliance alleviates this issue.

Cisco UCS X-Series with Intersight Deployment Workshop Page 27


Copyright 2022 Cisco System, All rights reserved.
There are several major architectural changes in Cisco Intersight:
• OpenAPI
• Redfish object model schema
- https://www.dmtf.org/standards/redfish
• Single control plane - not dependent on embedded control plane
- UCS Manager scope was a single domain - embedded hardware manager
- UCS Central scope was UCS Manager - a manager of UCS Managers - built a
solution to manage our control planes
- UCS Central expanded the number of control planes
Intersight is more than just a replacement for UCSM/UCSC; it has a growing number of data
center management features and functions beginning at the platform level:
• Managing compute
• Managing chassis through orchestration
• Integrating third-party products
• Integrating storage and virtualization tools
• Artificial Intelligence-based predictive and application-centric predictive management
tools
With these tools, customers can manage their data center hardware compute and their
applications as they move into cloud environments. The hybrid cloud is a key focus for the
Intersight product portfolio.

Cisco UCS X-Series with Intersight Deployment Workshop Page 28


Copyright 2022 Cisco System, All rights reserved.
Intersight Managed Mode is not a one-for-one replacement for UCSM, but it is an alternative
moving forward. It is the next generation mechanism through which policy and profile-based
management is applied to UCS hardware.
IMM expands the capabilities of UCSM by encompassing global policy and profile definition
previously accomplished through UCS Central, as well as orchestration capabilities historically
accomplished through UCS Director.
Intersight is an expanding portfolio of applications and features for management. It provides
everything from endpoint management to OS deployment, as well as orchestration, storage
integration, application-level sizing, cost analysis, and optimization.

Cisco UCS X-Series with Intersight Deployment Workshop Page 29


Copyright 2022 Cisco System, All rights reserved.
In this illustration, the white boxes represent items common between the existing UCS
architecture and the new architecture. The blue boxes represent new components that are
unique to Intersight-based management. The biggest change you don't see is that there is a box
missing in the prior architecture. The box for UCS manager no longer exists.
Management now lives outside of the FI and outside of the entire fabric for the SaaS cloud-
based deployment model within the Intersight architecture, which brings some key benefits in a
UCS domain's operational lifecycle. The connection to Intersight is through device connectors.
In rack servers there is a single device connector, which makes the connection to Intersight.
Cisco publishes very clear requirements for network connectivity for the device connector to
communicate with Intersight.
In a chassis architecture there are many more endpoints, each with its own device connector, yet customers can still manage those connections quickly and securely. In this model, device connectors that are downstream of the fabric interconnect device connector are referred to as child device connectors. One of the key benefits of Intersight Managed Mode is that the customer does not have to claim all of those device connectors directly into Intersight. The customer can claim the parent device connector in the FI through the FI’s local management interface. Once
claimed, Intersight automatically claims all of the child device connectors as those devices are
discovered through the FI. If the FI is unclaimed out of Intersight, Intersight will automatically
unclaim all of those child device connectors.

Cisco UCS X-Series with Intersight Deployment Workshop Page 30


Copyright 2022 Cisco System, All rights reserved.
UCS component additions to support IMM (Cont.)
Additionally, the child device connectors cannot be claimed directly in Intersight. There's no
mechanism for creating a stranded or orphaned device connector on southbound components behind a fabric interconnect. A key benefit is a single management communication path between
the fabric interconnect and Intersight. All of the management connections through the other
endpoints in the architecture are contained through that one connection, making it much
simpler to secure and monitor that management environment. All of those connections are
encrypted and secure.
The cloud model is not the only deployment model for Intersight. Intersight can be deployed on
a local appliance for customers who cannot utilize a cloud-based management solution.
• With management code moved to Intersight, feature updates and bug fixes do not get
deployed to the fabric interconnect (FI).
• FI firmware is reduced to NXOS code and online upgradable communications modules
that do not impact the fabric.
• This greatly reduces the frequency of fabric disruptive actions. FI firmware can be
managed similar to other network devices.
• DC (Device Connector) and EP (Endpoint) proxies allow Intersight to reach server CIMC
and VIC and chassis IOM/IFM management interfaces through the fabric interconnect
device connector.
• This avoids the need to claim every device directly in Intersight. A single claim allows
discovery of an entire fabric domain.
• On the X Series compute node, the Endpoint Proxy is unnecessary, because Redfish is
aggregated within the compute node by the CIMC.
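Since the last point above notes that the compute node's CIMC aggregates Redfish, it may help to see what a standards-based Redfish query looks like. The sketch below is illustrative: the management address and credentials are placeholders, and whether an endpoint is reachable directly depends on how the deployment is managed. The service-root and Systems paths come from the DMTF Redfish standard.

# Illustrative Redfish walk; host and credentials are placeholders, and TLS
# verification is disabled only to keep the example short.
import requests

BMC = "https://cimc.example.com"   # management controller exposing Redfish

session = requests.Session()
session.auth = ("admin", "password")
session.verify = False

# The service root advertises the collections it hosts.
root = session.get(f"{BMC}/redfish/v1/").json()

# Follow the Systems collection and print a couple of standard properties.
systems = session.get(f"{BMC}{root['Systems']['@odata.id']}").json()
for member in systems["Members"]:
    system = session.get(f"{BMC}{member['@odata.id']}").json()
    print(system.get("Model"), system.get("PowerState"))

Because Redfish is an industry standard, the same pattern works against any compliant management controller, which is what makes Redfish-based policy automation portable.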

Cisco UCS X-Series with Intersight Deployment Workshop Page 31


Copyright 2022 Cisco System, All rights reserved.
UCS X-Series does not support management through UCSM. UCSM does not have the models and other metadata necessary to manage the new X-Series. UCS X-Series can only be
managed through Intersight Managed Mode within Intersight. This is going to be significant in
customer environments both for Brownfield and Greenfield deployments. Within Brownfield,
there will be limits to the mixing of X-Series with the existing UCS architecture. Cisco publishes
these limitations within the Intersight documentation.
This is a big leap in bringing the full capabilities of Intersight to UCS deployments.

Additionally, when a customer does a Greenfield deployment, even if it is uniquely X-Series, it


still represents a new management environment. This requires the transition from UCS
management to Intersight. This can be done on existing deployments already running UCSM.
This is also true in deploying new fabric interconnect domains into Intersight Managed Mode.
UCS X-Series chassis can be mixed with the original 5108 chassis. Both M5 blade servers and M5 rack servers are currently supported, as well as the newly introduced M6 rack servers, blade servers in the 5108 chassis, and X-Series components.
• All X9508 chassis and nodes (M6 and beyond) must be managed through Cisco IMM.
• UCS domains will be set up in IMM mode, thus all HW in those domains will need to be
on the IMM supported HW list.

Cisco UCS X-Series with Intersight Deployment Workshop Page 32


Copyright 2022 Cisco System, All rights reserved.
Intersight Managed Mode is required with UCS X-Series (Cont.)
• Possible Mixed Domain Example – Intersight IMM:
• 6454 or 64108 FIs
• X9508 Chassis with M6 compute nodes
• California 5108 Chassis with B200 M6 Blades
• California 5108 Chassis with M5 Blades (2204/2208/2408 IOMs and 4th Gen VICs)
• FI Attached C-Series M5/M6 Servers

Cisco UCS X-Series with Intersight Deployment Workshop Page 33


Copyright 2022 Cisco System, All rights reserved.
Intersight can be deployed with an appliance option, although the preferred method, based on
the benefits of cloud-based management, is the Intersight.com SaaS offering.
Cisco offers two appliance-based deployment models for those customers whose security
requirements don’t allow use of cloud-based management. Intersight can be deployed as a
virtual machine in standard VM architectures in two different ways. One is a connected virtual
appliance. The appliance has a device connector, which connects to Intersight and allows for
updates of metadata and firmware. It also allows for outbound delivery of tech support logs and alert
notifications, supporting some of the automation features of Intersight: connected TAC,
proactive RMA, and security and field notice advisories.
For customers who can support no connection out of the data center into a cloud, Cisco offers
the private virtual appliance model, which is referred to as air gapped. It requires that the
device connector on the endpoints be able to reach the virtual appliance on-prem. One of
Intersight management's common features is that the end user, the administrator or consumer,
never requires a direct network connection to the managed endpoints. They only have to reach
the Intersight connection interface no matter where it is hosted, whether on Intersight.com or
a private virtual appliance.

Cisco UCS X-Series with Intersight Deployment Workshop Page 34


Copyright 2022 Cisco System, All rights reserved.
New Private Virtual Appliance Deployment Option (Cont.)
• Intersight has appliance options in addition to cloud SaaS model
• Suited to specific security/compliance requirements
• Same user experience as the cloud model - open APIs and continuous enhancements
• Feature restrictions
• License requirements vary
• VMware ESXi 6.5+, Microsoft Windows Hyper-V Server 2016 and 2019 supported for
Virtual Appliance

Cisco UCS X-Series with Intersight Deployment Workshop Page 35


Copyright 2022 Cisco System, All rights reserved.
Welcome to Section 2, where we will cover the Cisco UCS X-Series hardware components and traffic
flow, specifically the topics in this Agenda.

Cisco UCS X-Series with Intersight Deployment Workshop Page 36


Copyright 2022 Cisco System, All rights reserved.
UCS X-Series uses the existing 4th generation fabric interconnects (FIs) in the 6400 Series: either the 6454 or the 64108. Cisco Intersight is the management platform that manages the solution software. The chassis IOM (Input/Output Module) is now referred to as the Intelligent Fabric Module (IFM). Cisco uses the IFM to extend the Nexus fabric, the switching fabric offered by the FIs, into the chassis. The module is installed in the back of the chassis.
• Connects the X9508 to the Intelligent Fabric Module (IFM)
• IFMs have same function as traditional IOM (Input/Output Module)
• Allows servers to communicate to the world
• IFMs connect to the current FIs, allowing LAN/SAN communication for the servers
• Current FIs have Cisco UCS Manager (UCSM) to provision LAN/SAN interfaces and Server
Profiles
• Ability to control what VLANs/VSANs are configured on each port
• UCSM will be removed from the FIs and IMM will become the major configuration point

Cisco UCS X-Series with Intersight Deployment Workshop Page 37


Copyright 2022 Cisco System, All rights reserved.
In this section, we will begin by looking at the hardware at a high level and then dive down
into each component, starting with the system chassis. We will also look at power, cooling,
fabric, compute options, and available storage choices. Lastly, we will look at the network
and data flow.

Cisco UCS X-Series with Intersight Deployment Workshop Page 38


Copyright 2022 Cisco System, All rights reserved.
Cisco UCS X-Series with Intersight Deployment Workshop Page 39
Copyright 2022 Cisco System, All rights reserved.
The Cisco UCS X9508 Modular System Chassis and its components are part of the Cisco
Unified Computing System (UCS). The Cisco UCS X9508 Modular System Chassis is scalable
and flexible for the compute footprint of today and tomorrow. The chassis allows eight
compute nodes, six power supplies, two Ethernet fabric modules (Intelligent Fabric Modules),
two Flexible Expansion Modules (FEMs), and four system fans. The chassis is 7RU high and
mounts in a standard 19-inch data center rack.

• New 9508 chassis builds on the original UCS 5108 blade chassis
• Expands power and cooling capacity supporting future generations of processors, NICs,
accelerators, and memory technology
• Provides for a massive leap in per-blade and overall chassis network bandwidth
• A second fabric is enabled for capitalizing on processor connected fabrics, both present
and future
• Supports expanded node-local storage and offers in-chassis storage expansion
• Future-proofing for a long lifecycle
• Select and upgrade fabric modules independently per-chassis
• Pre-enabled for optical interconnects through direct, orthogonal fabric connections and
water cooling through open chassis with ample room for water pipes, both open and
closed loop

Cisco UCS X-Series with Intersight Deployment Workshop Page 40


Copyright 2022 Cisco System, All rights reserved.
The X9508 Chassis is completely open until populated. The midplane has been removed,
which was previously required to carry signals from the networking cards on the compute
nodes. No copper is used to transport the networking signals from the compute node to the
IFMs or the FEMs. This open chassis allows greater airflow.

Another disadvantage of a midplane in the chassis is that it is generally not easily field replaceable. This locks in whatever capabilities that midplane can deliver. Similar components will have different part
numbers when installed in various midplanes.

UCS X-Series is about handling the workload customers bring to it with precisely the right
technology to service application requirements, in addition to flexibility and simplicity. Also,
removing the midplane improves airflow, so fans do not have to spin as fast, requiring less power for cooling. Efficient cooling reduces the overall power requirement.

• Lack of an I/O midplane requiring network traces reduces airflow
impedance, significantly increasing cooling efficiency
• Minimal midplane for power and management signal distribution
• Fans provide higher cooling at reduced RPM = lower acoustics and higher reliability
• Improved cooling properties and advanced DC power distribution = higher power
efficiency
• Ample power headroom for high power CPU and GPU roadmap

Cisco UCS X-Series with Intersight Deployment Workshop Page 41


Copyright 2022 Cisco System, All rights reserved.
Blades are oriented vertically and the fabric modules are oriented horizontally. This allows
connection from each blade to each fabric module. Any blade can be pulled in or out and
guides ensure alignment with the fabric module in the back of the chassis.
• Modular LAN-on-Motherboard (mLOM) connectors mate directly to the fabric board,
which eliminates IO midplane.
• Similar connection from the Mezz connector to the X Fabric modules.
• When using a Mezz network interface a bridge board is used to route interfaces from
the Mezz to the IFM via the mLOM connectors.

Cisco UCS X-Series with Intersight Deployment Workshop Page 42


Copyright 2022 Cisco System, All rights reserved.
This illustration represents a full deployment with eight blades connected to two IFMs and
two FEMs. IFMs are at the top and the FEMs at the bottom. Each blade has one connector
going to each module for a total of 32 connections across the system.

• X9508 chassis supports two independent fabrics


• Top network fabric and bottom X Fabric can be independently upgraded with new
technologies

Cisco UCS X-Series with Intersight Deployment Workshop Page 43


Copyright 2022 Cisco System, All rights reserved.
The Cisco UCS X210c M6 Compute Node is connected to the fabric module. More accurately, it is the VIC, or network card, sitting on the compute node that makes the connection. This makes the fabric independent of the compute resource. Choose VIC and fabric modules to match each other.
In the initial release of Cisco UCS X-Series, the VIC capacity is 100 gigabits: four 25-gigabit connections across the two IFMs, that is, two by 25 gigabits per IFM, 100 gigabits total on the mLOM for network connectivity, independent of the compute node. A different choice of VIC and IFMs for network capabilities does not require a different compute node. Cisco UCS X-Series allows selection of components that fit the deployed workloads.

• Virtual Interface Card (VIC) or the network card makes the connection
• Fabric is independent of the compute
• Choose compatible VIC and fabric modules
• Initial release for X-Series
- VIC capable of 100-gigabit connection
- Four by 25-gigabit connections across the two IFMs
- Two by 25 gigabits per IFM
- 100 gigabits total for mLOM for network connectivity, independent of compute node
• Future compute nodes can be selected independent of the fabric

Cisco UCS X-Series with Intersight Deployment Workshop Page 44


Copyright 2022 Cisco System, All rights reserved.
This is a comparison between the 5108 chassis of the last ten-plus years and the Cisco UCS X9508 Modular System Chassis. There is a 1RU increase in height and two additional power supplies, each of which has more power than the original. Combined, that goes from 10 kilowatts to 16.8 kilowatts of total power available in the chassis, a better-than-50% increase in total power. The power supplies moved from 12-volt power distribution to 54-volt power distribution. Delivering the same amount of power at a higher voltage lowers the current through the power traces and wiring within the system.

In the back of the chassis, each of the four system fans is 100 millimeters in size. There are three fans within each fabric module, both IFM and FEM. At initial release, there is not an FEM with an actual fabric chip on it; those modules are carriers for the three fans that provide the cooling for the bottom of the chassis. The chassis' total airflow is 1000 CFM compared to 590 in the 5108 chassis, substantially increasing its overall capacity.

The biggest difference is in the networking lanes: the 5108 and the original series of blades through M5 were primarily based on 10-gigabit Ethernet KR lanes. On the 3rd generation fabric, lanes could be combined into a 40-gigabit link but were primarily focused at 10 gigabits. With UCS X-Series, they are now 25-gigabit lanes. In the future, those can be combined into higher
bandwidth with larger lanes. Another significant change on the compute nodes and
expansion fabric's potential is that PCIe Gen4 is now available.

Cisco UCS X-Series with Intersight Deployment Workshop Page 45


Copyright 2022 Cisco System, All rights reserved.
Key Differences Between 5108 and X9508 (Cont.)
• Compare existing 5108 chassis to UCS X9508 Modular System Chassis
- Size - 1U increase in height
- Power supplies
• Two additional power supplies
- Combined takes you from 10-kilowatts total power
available in the chassis to 16.8-kilowatts
- Better than 50% increase in total power
• Moved from 12-volt power distribution to 54-volt
• Same amount of power with higher voltage and lower
current
- Fans
• Each of the four system fans is 100 millimeters in size
• Each fabric module has three fans within that fabric module, both IFM and FEM
• Carriers for the three fans provide cooling for the bottom of the chassis
• Chassis' total airflow is 1000 cubic feet per minute (CFM) compared to 590 in the 5108 chassis
- Network Fabric
• 5108 chassis and original series of blades through M5
primarily based on 10-gigabit Ethernet KR lanes
• 3rd generation, fabric could be combined into a 40-gigabit link
primarily focused at 10-gigabit
• Now 25-gigabit lanes with X-Series
- X Fabric IO - PCIe Gen4 available

Cisco UCS X-Series with Intersight Deployment Workshop Page 46


Copyright 2022 Cisco System, All rights reserved.
Customer survey data indicates customer interest in using accelerators of some type: graphics processing units (GPUs), Field-Programmable Gate Arrays (FPGAs), or custom silicon to accelerate workloads. Cisco seeks to support accelerators for these applications. Machine learning is of interest to roughly one third of those surveyed, and about a fourth are applying acceleration to low-latency processing at a very high percentage. Accelerators tend to increase power usage, which in turn increases the need for adequate cooling.

• Machine learning essentially a third of the survey


• A fourth of low latency processing using acceleration modeling at a very high percentage
• Demand for acceleration devices will increase in the future
• Counterpoint: Not everything will be accelerated, customers still need a choice

Cisco UCS X-Series with Intersight Deployment Workshop Page 47


Copyright 2022 Cisco System, All rights reserved.
As CPU and Accelerator power demands rise the X-Series chassis will remain flexible enough
to support them through adaptable power and cooling solutions.

Shown is the five-year power consumption rate across the chassis' primary power
consumer—the CPU, which includes memory power usage. The amount of power drawn by
the memory isn’t insignificant, but it is tightly coupled to the CPU. Graphics Processing Units
(GPUs), which standalone from other acceleration technologies, are already very high-
powered. The last row is a generic row for accelerators, which might be FPGAs or a system
on a chip-based smart NIC. There are a variety of other accelerators for network functions or
security chips.

The increase in power drives the industry from air cooling to water cooling, a niche solution previously found primarily in places like High-Performance Computing (HPC) clusters, which is becoming more mainstream. The far right shows the top power-consuming devices in each category, which indicate there will be demand for liquid cooling in five to ten years. There will be a range of GPUs and a range of accelerators. The most demanding workloads on the highest-end SKUs are likely to be looking for water cooling.

This graphic shows a clear, growing demand for acceleration in approximately 30 percent of
use cases, but this leaves two-thirds of use cases without need.

• Five-year power consumption rate across the chassis' primary power consumer—the CPU

Cisco UCS X-Series with Intersight Deployment Workshop Page 48


Copyright 2022 Cisco System, All rights reserved.
• Top row combination of CPU and memory
• Power drawn by memory is coupled to the CPU
• GPUs have high power requirements
• Last row represents accelerators
• Increase in power drives air cooling to water cooling
• Clusters are becoming more mainstream
• Far-right shows demands for liquid cooling
• Range of GPUs and accelerators

Cisco UCS X-Series with Intersight Deployment Workshop Page 49


Copyright 2022 Cisco System, All rights reserved.
In this section, we will look at the hardware and software components that support Cisco’s
unified fabric.

Cisco UCS X-Series with Intersight Deployment Workshop Page 50


Copyright 2022 Cisco System, All rights reserved.
The current UCS architecture is using 4th generation FI, which are the 6454 and 64108. Both
offer a large number of 25-gigabit ports and a smaller quantity of 100-gigabit or 40-gigabit
ports, typically used for northbound connections to the data center fabric. These
connections will be connecting to your top-of-rack (TOR) solutions.
• 4th Generation UCS Fabric Interconnect (FI)
• Two form factors - 1RU 6454 and 2RU 64108
• Supports
• UCS IFM 9108-25G
• UCS VIC 14425 and VIC 14825
• UCS X-Series supported from firmware 4.1(2)
• 1/10/25/40/100-Gbps Ethernet and 8/16/32-Gbps FC ports
• UCS X-Series management only through Intersight

Cisco UCS X-Series with Intersight Deployment Workshop Page 51


Copyright 2022 Cisco System, All rights reserved.
UCS FI 6454
• 16 x UP (10/25G or 4/8/16/32G FC), 28 x 10/25GbE, 4 x 1/10/25GbE & 6 x 40GbE quad small form-factor pluggable (QSFP+) ports
• 3.82Tbps switching performance
• 1RU fixed form factor, two power supplies and four fans

Cisco UCS X-Series with Intersight Deployment Workshop Page 52


Copyright 2022 Cisco System, All rights reserved.
UCS FI 64108
• 16 x UP (10/25 or 4/8/16/32G FC), 72 x 10/25GbE, 8x 1/10/25 GbE & 12 x 40GbE
QSFP+ ports
• 7.42Tbps switching performance
• 2RU fixed form factor, two power supplies & three fans

Between the FI 6454 and FI 64108, the larger fabric interconnect roughly doubles the number of 10/25-gigabit ports and doubles the number of 40-gigabit uplink ports. There are 16 ports capable of Fibre Channel on both switches.

Cisco UCS X-Series with Intersight Deployment Workshop Page 53


Copyright 2022 Cisco System, All rights reserved.
Ports 45 to 48 on the smaller switches are 1-gigabit capable connections, with twice that
number on the larger switches. Knowing which ports support Fibre Channel can save you from some headaches.

Cisco UCS X-Series with Intersight Deployment Workshop Page 54


Copyright 2022 Cisco System, All rights reserved.
• Point out the mapping of specific capabilities by port.

Cisco UCS X-Series with Intersight Deployment Workshop Page 55


Copyright 2022 Cisco System, All rights reserved.
• Point out the mapping of specific capabilities by port.

Cisco UCS X-Series with Intersight Deployment Workshop Page 56


Copyright 2022 Cisco System, All rights reserved.
This is the same chart, just for the larger switch.
• Point out the mapping of specific capabilities by port.

Cisco UCS X-Series with Intersight Deployment Workshop Page 57


Copyright 2022 Cisco System, All rights reserved.
• Point out the mapping of specific capabilities by port

Cisco UCS X-Series with Intersight Deployment Workshop Page 58


Copyright 2022 Cisco System, All rights reserved.
• Point out the mapping of specific capabilities by port

Cisco UCS X-Series with Intersight Deployment Workshop Page 59


Copyright 2022 Cisco System, All rights reserved.
When you first look at the IFM, you see three fans that snap in and snap out. Although they are
not hot-swappable, they are easily replaceable because the IFM can be removed from the
chassis to replace the fans taking down only one side of the fabric. On the back, each IFM has
eight ports, and each port is capable of 10 or 25 gigabits. In this architecture, the fabric
interconnects (FIs) are 25 gigabit. The connections supporting the transceivers and the copper
or optical cables are 25 gigabit.

• Three fans snap in and snap out


• Not hot-swappable
• High reliability
• Each IFM has eight ports, each port is capable of 10 or 25 gigabits
• 1-8 ports cabled to FI, numbered 1-8, port groupings are based on space, no functional
difference
• In this architecture, FIs are 25 gigabit
• 25-gigabit connections supporting transceivers, copper or optical cables

Cisco UCS X-Series with Intersight Deployment Workshop Page 60


Copyright 2022 Cisco System, All rights reserved.
For those familiar with the original architecture, the IOMs located in the chassis host what is known as the chassis management controller. This continues with the IFMs on Cisco UCS X-Series. The Chassis Management Controller (CMC) is very similar, and the software is common. The Cisco UCS X-Series chassis is managed by Cisco Intersight Managed Mode (IMM).

• FPGA is secured
• Cannot be updated or altered without an image signed by Cisco
• The FPGA verifies its boot image
• If the image does not match the installed signatures, it will refuse to boot
• Hosts CMC for chassis management
• Equivalent function to IOM on first gen chassis

Cisco UCS X-Series with Intersight Deployment Workshop Page 61


Copyright 2022 Cisco System, All rights reserved.
This slide highlights both the standard fabric paths coming from the compute nodes through
the IFMs and up to the fabric interconnects (FIs). It provides a view of how the management
path works between the device connector and the FIs and Intersight. It also depicts
communication between the device connector, the FIs, and the endpoints downstream in the X-Series chassis. Different views of different aspects of the same information include
upstream bandwidth, downstream connectivity and bandwidth, and the management traffic
signaling.
In Intersight, each endpoint is equipped with its own DC.
To support Intersight use case, the notion of parent and child DC is used.
• In Intersight, parent DC is the FI-DC and child DCs are CIMC-DC and CMC-DC.
• The high-level idea is that the child DCs follow the parent DC into its account.
• The user only performs claim and un-claim operations on the parent DC.
• When the parent DC is claimed, the child DC is automatically claimed.
• Similarly, when the parent DC is unclaimed, the child DC is automatically unclaimed and
moved into the onboarding account along with the parent DC.
• Note that user cannot claim or un-claim child DC directly from Intersight.

Cisco UCS X-Series with Intersight Deployment Workshop Page 62


Copyright 2022 Cisco System, All rights reserved.
In this section, we will look at the Cisco UCS X210c M6 Compute Node. We will also compare the UCS X210c M6 and UCS B200 M5. Lastly, we will look at DRAM memory rules.

Cisco UCS X-Series with Intersight Deployment Workshop Page 63


Copyright 2022 Cisco System, All rights reserved.
This is the first look at the compute node that will be available with Cisco UCS X-Series, which will be known as the Cisco UCS X210c M6 Compute Node. The Cisco UCS X210c M6 Compute Node is a two-socket compute node using the Intel processor code-named Ice Lake, now known as the 3rd Generation Intel Xeon Scalable processor.

• Internal components of the server and the connection to the fabric modules
• Compute node - Cisco UCS X210c M6 Compute Node
• Two-socket node using the Intel processor code-named Ice Lake, known as the 3rd Generation Intel Xeon Scalable processor
• VIC and mezzanine are at the back of the compute node
• As with the existing B200 product line, there is 25-gigabit connectivity to both boards
• Flexible front mezzanine for storage
• Blank space allowed - not forced to pay for infrastructure to support drives
• Between the two CPUs is an additional module
• Existing M.2 hardware RAID controller is used as a boot drive

Cisco UCS X-Series with Intersight Deployment Workshop Page 64


Copyright 2022 Cisco System, All rights reserved.
This processor brings memory from 24 DIMMs to 32 DIMMs. It usually uses DDR4 DIMMs.
The 200-series of Optane persistent memory is now available coinciding with these systems.
Up to half of the DIMMs can be Optane along with DDR4 DIMMs. At the top of the picture,
which is the back of the compute node, are the VIC locations and a mezzanine. For those
familiar with the existing B200 product line, there is full 25-gigabit connectivity to both
boards, as well as the flexible front mezzanine used for storage.
• Processor memory from 24 Dual In-line Memory Modules (DIMMs) to 32 DIMMs
• Usually uses Double Data Rate 4 (DDR4) DIMMs
• 200 series of Optane persistent memory is available coinciding with these systems
• Up to half of the DIMMs can be Optane along with DDR4 DIMMs
• Expansion from a maximum of two drives to a maximum of six drives
• Increasing amount of local storage supported on a compute node
• Options available for Serial Attached SCSI (SAS) / Serial ATA (SATA) with RAID and blanks for those who do not need local storage

Cisco UCS X-Series with Intersight Deployment Workshop Page 65


Copyright 2022 Cisco System, All rights reserved.
This slide compares the 2nd Generation Xeon Scalable processors to the 3rd Generation Xeon
Scalable processors. There are now several speeds that vary across specific SKUs within the 3rd
generation product line, as well as secure extensions to the instruction architecture on the
processor for secure applications.
• 2nd Generation Xeon Scalable to 3rd Generation Xeon Scalable
• Numbers here are per socket, 16 DIMMs total
• Six terabytes max versus 12
• Same scaling for persistent memory
• Per-core performance, 1.4 to 1.5 on an equivalent 3rd gen
• Top-bin SKUs are moving from 28 cores to 40 cores
• Minimum core count has increased from four cores to eight
• UPI is the fabric that connects the two sockets on the blade together to exchange
memory transactions
• Three links on both generations, each link's bandwidth has about a 10% increase

Cisco UCS X-Series with Intersight Deployment Workshop Page 66


Copyright 2022 Cisco System, All rights reserved.
This is a quick comparison overall between the B200 M5 and the new Cisco UCS X210c M6
Compute Node. The Cisco UCS X210c M6 Compute Node does follow the M6 generation
right along with the M6 servers from the original UCS architecture. The processor is the 3rd
Generation Intel Xeon Scalable architecture.
The thermal design power (TDP) maximum is up to 270 watts, delivering up to 40 cores per
socket or 80 cores per compute node. Currently, memory modules are 32 DDR4 DIMMs,
going from a 2933 MHz maximum to a 3200 MHz maximum with a capacity of up to eight
terabytes. If you are going to install Optane persistent memory, you can install up to 16 of
those. Half of the DIMMs on the blade can be Optane, which would leave you with up to 16
DDR4. The maximum size on Optane is 512 GB per DIMM versus 256 GB per DIMM on the
DDR4.
There are six hot-swappable drives, SAS or NVMe, plus the two M.2 drives, for a maximum of
nearly 100 terabytes per compute node. The RAID controller is a newer generation that comes
on the SAS front mezzanine. The VIC model numbers are unique, now the 14000 series instead
of the 1400 series. They have 25-gigabit connections to each of the fabric modules. The total
fabric bandwidth on the blade is up to eight 25-gigabit connections, four per IFM, totaling 100
gigabits to each IFM. A quick check of this arithmetic follows below.
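
As a quick sanity check on those numbers, the per-node bandwidth can be worked out directly.
The small sketch below just restates the arithmetic from the figures above (eight 25-gigabit
connections, four per IFM); it is illustrative only.

```python
# Per-node fabric bandwidth for an X210c with the mLOM VIC plus the port expander/mezz VIC.
lanes_per_ifm = 4          # 25G lanes from the node to each IFM (with the mezz installed)
lane_speed_gbps = 25       # each lane is 25 Gb/s
ifms_per_chassis = 2       # IFM A and IFM B

per_ifm_gbps = lanes_per_ifm * lane_speed_gbps        # 100 Gb/s to each IFM
per_node_gbps = per_ifm_gbps * ifms_per_chassis       # 200 Gb/s total per compute node

print(f"Bandwidth to each IFM: {per_ifm_gbps} Gb/s")
print(f"Total fabric bandwidth per node: {per_node_gbps} Gb/s")
```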

Cisco UCS X-Series with Intersight Deployment Workshop Page 67


Copyright 2022 Cisco System, All rights reserved.
• Comparison between the B200 M5 and the new X210c M6
• X210c follows M6 generation from original UCS architecture
• 3rd Generation Intel Xeon Scalable processors
• Thermal design power (TDP) maximum is up to 270 watts
• Up to 40 cores per socket or 80 cores per compute node
• Currently, memory modules are 32 DDR4 DIMMs going from 2933 maximum to 3200
MHz maximum with a capacity of up to 8 terabytes
• Optane persistent memory | install up to 16
• Half of the DIMMs on the blade can be Optane, up to 16 DDR4
• Max size Optane is 512 GB per DIMM versus 256 GB per DIMM on the DDR4
• Six hot-swappable drives: SAS or NVMe, plus the two M.2 drives, for a maximum of nearly
100 terabytes per compute node
• RAID controller is new on the SAS front mezzanine
• VIC model numbers are unique
• 14000 series instead of 1400 series
• 25-gigabit connections to each of the fabric modules
• Total fabric bandwidth on blade is up to eight 25-gigabit connections - four per IFM
• Totaling 100 gigabits to each IFM

Cisco UCS X-Series with Intersight Deployment Workshop Page 68


Copyright 2022 Cisco System, All rights reserved.
Comparison of the B200 M6 to the UCS X210c M6 (code name Washington):
• Support for full TDP range of Ice Lake CPUs up to 40 cores @ 270W
• Up to 12TB total memory (4TB DDR4 + 8TB Barlow Pass PMM)
• Huge increase in disk capacity and bandwidth
• 8x 25GbE per node @ FCS, 2x 100GbE post FCS

Cisco UCS X-Series with Intersight Deployment Workshop Page 69


Copyright 2022 Cisco System, All rights reserved.
This is the host-side view of vNIC presentation for a single VIC. There are two additional vNICs
available when the port expander/VIC 14825 is added.

Cisco UCS X-Series with Intersight Deployment Workshop Page 70


Copyright 2022 Cisco System, All rights reserved.
This is a logical view of the node showing where the various connectors are located. One CPU
populates the socket underneath the heatsink; the socket for the rear CPU 2 is not populated.
The illustration displays the top of the blade with the back on the right. When the blade is
installed into the chassis, the blade's front will be to the left. The bottom right corner will
rotate and be on the top of the blade when it is inserted into the chassis. That is the location
of the mLOM connector. It has two connectors, one for the top IFM and one for the bottom
IFM, making the primary network connection. You can see that there is some room in the
middle of the board, near the Xeon CPUs, where the persistent memory technology is located.
There is room for the mLOM to expand into that area and the mezzanine area as well. So, an
mLOM can be the standard size, an extended size, which would allow the board to
consume all of this area, or a full size, which would cover the entire back of the blade. The
32 DIMM slots are also visible, and the TPM is located at the back of the blade under the area
where the mezzanine card would go.

Additionally, you can see that the front of the blade has an ample open space where the front
mezzanine can connect into the front mezzanine connectors and has room for drives or other
potential future options.

• Using the diagram show the layout of each component

Cisco UCS X-Series with Intersight Deployment Workshop Page 71


Copyright 2022 Cisco System, All rights reserved.
Looking at the front of the blade, you see six drives. One nice feature is that these drives and
their carriers are common with M5 and M6 rack servers. The standard form factor is used for
the hot-swap drives, whether SAS/SATA or NVMe.

Because M6 is several years newer than M5, the drives themselves are generally not
common, because you will see newer drives on M6. The carrier is still common between the M6
rack servers and the X-Series Cisco UCS X210c M6 Compute Node. They are not common with
the B200 M6 because the B200 M6 has a unique form factor for its drives. The other notable
difference for this blade is the new console connector. The form factor is USB-C, but it is
known as an OCuLink console port. The significant impact of that is that those thousands of
KVM dongles you have scattered all over the data center will now have a new model of KVM
dongle that you'll have to start populating. Of course, the blade will come with that adapter.
As usual, the adapter gives you a VGA connector, connectors for the keyboard and mouse,
and a serial console port.

Cisco UCS X-Series with Intersight Deployment Workshop Page 72


Copyright 2022 Cisco System, All rights reserved.
Cisco UCS X210c M6 Compute Node Front Panel (Cont.)

• Front of the blade has six drives


• Drives and their carriers are common with M5 and M6 rack servers
• Standard form factor used for the hot-swap drives, whether SAS/SATA or
nonvolatile memory express (NVMe)
• M6 is several years newer than M5, so the drives themselves are generally not common
• Carriers are common between the M6 rack servers and the X-Series X210c compute node
• Not common with the B200 M6 because the B200 M6 has a unique form factor for its
drives
• New console connector on blade
• Form factor is USB-C, but it is known as an OCuLink console port
• New model of KVM dongle

Cisco UCS X-Series with Intersight Deployment Workshop Page 73


Copyright 2022 Cisco System, All rights reserved.
You cannot mix different technologies, such as RDIMMs and LRDIMMs. You cannot mix 3DS
DIMMs and non-3DS DIMMs. The reason is that the 128 GB DIMMs available are non-3DS, and
the 256 GB DIMMs will be 3DS. The 64 GB load-reduction DIMMs cannot be mixed with these
DIMMs either. A simple illustrative check of these rules follows the list below.
• Each CPU has eight memory channels with 2 DPC
• Speed varies by CPU model, check specs to confirm
• For best performance populate eight or sixteen DIMMs per CPU
• No mixing
• Registered DIMM (RDIMM) and load-reduction DIMM (LRDIMM)
• LRDIMM 3DS (256GB) and non-3DS (128GB)
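
The mixing rules above lend themselves to a simple check. The sketch below is illustrative only;
the DIMM attributes are simplified, and the authoritative rules are in the M6 memory guide
referenced later in this section.

```python
# Illustrative check of the DIMM mixing rules described above.
# Not an official Cisco tool; capacities are in GB and the attributes are simplified.

def validate_dimm_population(dimms):
    """dimms: list of dicts like {"type": "RDIMM"|"LRDIMM", "capacity_gb": 64, "is_3ds": False}"""
    errors = []
    types = {d["type"] for d in dimms}
    if len(types) > 1:
        errors.append("Cannot mix RDIMMs and LRDIMMs")
    lrdimms = [d for d in dimms if d["type"] == "LRDIMM"]
    if {d["is_3ds"] for d in lrdimms} == {True, False}:
        errors.append("Cannot mix 3DS (256GB) and non-3DS (128GB) LRDIMMs")
    return errors or ["Population looks valid"]

# Example: a 128GB non-3DS LRDIMM mixed with a 256GB 3DS LRDIMM is rejected.
print(validate_dimm_population([
    {"type": "LRDIMM", "capacity_gb": 128, "is_3ds": False},
    {"type": "LRDIMM", "capacity_gb": 256, "is_3ds": True},
]))
```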

Cisco UCS X-Series with Intersight Deployment Workshop Page 74


Copyright 2022 Cisco System, All rights reserved.
DIMMs from M5 and earlier generation servers are not supported. The DIMMs are 3200
MHz, and they are specific to the M6 product line. They are common across the M6
product line and interchangeable amongst M6 rack servers and the B200 M6. They are not
interchangeable with M5 or earlier server architectures.
A general rule is to populate DIMMs across the CPUs identically. The processors will probably
handle non-recommended configurations, but memory performance would be impacted
substantially. When using persistent memory, all DDR4 DIMMs must be the same type and
size. For the full set of memory rules, refer to the M6 memory guide.

• M5 and earlier DIMMs not supported


• Populate slots across CPUs identically
• All DIMMs must be the same when used with persistent memory (PMEM)
• Refer to M6 Memory Guide:
https://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-
computing/ucs-c-series-rack-servers/c220-c240-b200-m6-memory-guide.pdf

Cisco UCS X-Series with Intersight Deployment Workshop Page 75


Copyright 2022 Cisco System, All rights reserved.
In this section, we will look at Cisco UCS X-Series storage options.

Cisco UCS X-Series with Intersight Deployment Workshop Page 76


Copyright 2022 Cisco System, All rights reserved.
These are the local drive options:
• M.2 module in between the two CPUs
• M.2 module accommodates two M.2 storage devices
• Device has a hardware RAID chip built into the module, supporting RAID 1 across two
M.2 drives
• Only supports building a single volume across those two drives
• Different storage options on the front mezzanine or the blank option

Cisco UCS X-Series with Intersight Deployment Workshop Page 77


Copyright 2022 Cisco System, All rights reserved.
These are the front hot-swap drive mezzanine options:
• Optional front mezzanine supports up to six hot swap drives
• SAS HW RAID or NVMe mezzanine
• SAS/SATA HW RAID
• 6x U.2 SAS, SATA, or NVMe
• Up to 4x NVMe (x4 PCIe Gen4)
• Just a bunch of disks (JBOD) or RAID 0,1,5,6,10
• 4GB cache
• SuperCap
• NVMe mezzanine
• 6x NVMe (x4 PCIe Gen4)
• RAID with Virtual RAID on CPU (VROC)

Cisco UCS X-Series with Intersight Deployment Workshop Page 78


Copyright 2022 Cisco System, All rights reserved.
The front mezzanine module is a drive cage that contains the compute node’s storage
devices. The front mezzanine storage module can include any of the following possible
physical configurations, the green being SAS/SATA drives.
The larger box on the left shows the combinations of drives available when you are using the
SAS/SATA mezzanine option.
On the right are the options when using the NVMe mezzanine, which supports only NVMe drives,
up to six of them. The SAS/SATA mezzanine supports anywhere from zero to four NVMe drives,
while the NVMe mezzanine supports from two to six NVMe drives. The SAS/SATA drives can be
configured into varying combinations of RAID volumes spanning multiple drives.

Cisco UCS X-Series with Intersight Deployment Workshop Page 79


Copyright 2022 Cisco System, All rights reserved.
The SAS/SATA module is known as the X10c mezzanine and RAID controller. It is based on the
Broadcom 3900 ASIC, code-named Aero. It does have a cache. There is a SuperCap included, and
neither the cache nor the SuperCap is optional; they are always included. The X10c RAID
controller supports six drives, up to four of which can be NVMe. RAID levels are up to RAID 10,
or JBOD mode for those who want to pass the drives through directly to the OS. The NVMe
drives do support Intel VROC, and they are all on the same root port, so they can all be in the
same VROC volume.
• PID: UCSX-X210C-RAIDF
• Broadcom 3900 ASIC (Aero) with 4GB cache
• Gen 4 PCIe x8 (~2GB/s per lane)
• SuperCap

The SAS/SATA controller supports self-encrypting drives (SEDs) with a key manager in either
local or remote mode. Key Management Interoperability Protocol (KMIP) support is a
post-initial-release feature.
• SAS/SATA module known as the X10c mezzanine and RAID controller
• Broadcom 3900 ASIC, code-named Aero
• Does have a cache
• SuperCap included
• X10c RAID controller supports six drives, up to four of which can be NVMe
• RAID levels are up to RAID 10, or JBOD mode for those who want to pass the drives
through directly to the OS
• NVMe drives do support Intel VROC, are all on the same root port, can all be in the
same VROC volume

Cisco UCS X-Series with Intersight Deployment Workshop Page 80


Copyright 2022 Cisco System, All rights reserved.
• SAS/SATA controller Self-Encrypting Drives (SEDs) are supported with a key manager in
either local or remote mode
• KMIP support is a post-initial-release feature

Cisco UCS X-Series with Intersight Deployment Workshop Page 81


Copyright 2022 Cisco System, All rights reserved.
X10c RAID Controller (Cont.)
• 6 U.2 Drives
• 4x SAS/SATA/NVMe + 2x SAS/SATA
• 1x 12Gb SAS or 1x 6Gb SATA Per Drive to RAID Controller
• 4x PCIe 4.0 NVMe Per Drive to CPU
• SAS/SATA - RAID levels 0, 1, 5, 6, 10, and JBOD (see the usable-capacity sketch after this list)
• NVMe – JBOD and Intel VROC Capable with Intel Drives
• SAS/SATA - SED with Local and Remote (KMIP) Support*
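
To put the supported RAID levels in context, here is a small sketch that estimates usable capacity
for each level given a drive count and drive size. These are the standard RAID capacity formulas,
not anything specific to the X10c controller, and the 7.6TB drive size in the example is arbitrary.

```python
# Approximate usable capacity per RAID level (standard formulas, illustrative only).

def usable_tb(level, drives, drive_tb):
    if level in ("JBOD", "RAID0"):
        return drives * drive_tb
    if level == "RAID1":                 # mirrored pair
        return drive_tb if drives == 2 else None
    if level == "RAID5":                 # one drive's worth of parity
        return (drives - 1) * drive_tb if drives >= 3 else None
    if level == "RAID6":                 # two drives' worth of parity
        return (drives - 2) * drive_tb if drives >= 4 else None
    if level == "RAID10":                # mirrored stripes, even drive count
        return drives * drive_tb / 2 if drives >= 4 and drives % 2 == 0 else None
    return None

# Example: six 7.6TB SAS/SATA drives on the front mezzanine.
for level in ("JBOD", "RAID0", "RAID5", "RAID6", "RAID10"):
    print(level, usable_tb(level, 6, 7.6), "TB usable")
```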

Cisco UCS X-Series with Intersight Deployment Workshop Page 82


Copyright 2022 Cisco System, All rights reserved.
This is the front mezzanine flipped over. It does have to be removed from the compute node.
The thumbscrews allow it to be removed. The SuperCap is on the bottom. A small connector
must be removed so that the SuperCap can be pulled out from its dedicated tray, allowing it
to be replaced as necessary.
• Front mezzanine flipped over
• Remove from the compute node with thumbscrews
• SuperCap is on the bottom
• Remove small connection to pull SuperCap out from dedicated tray | replace as
necessary

Cisco UCS X-Series with Intersight Deployment Workshop Page 83


Copyright 2022 Cisco System, All rights reserved.
This diagram conceptually shows the compute node, which has front mezzanine connectors.
The front mezzanine connector offers a total of 24 lanes of PCIe Gen4, eight of which go to
the RAID controller. The remaining 16 go to four of the drives giving four lanes per drive.
There are other options for having all of the drives as SAS/SATA or four of the drives as the
NVMe.
• Front mezzanine connector offers a total of 24 lanes of PCIe Gen4, eight of which go to
the RAID controller
• Remaining 16 go to four of the drives giving four lanes per drive
• There are other options for having all of the drives as SAS/SATA or four of the drives as
the NVMe.

Cisco UCS X-Series with Intersight Deployment Workshop Page 84


Copyright 2022 Cisco System, All rights reserved.
This version of the X10c front mezzanine is a pass-through controller instead of a RAID
controller. It consumes all 24 lanes of PCIe Gen4, which is roughly 2 GB/s per lane, giving an
8 GB/s theoretical limit per drive across its four lanes. Six drives total. It does support Intel
VROC as long as you are using Intel NVMe drives. The M6 drive family supports the next
generation Intel NVMe drives with PCIe Gen4 support for the highest performance. The
bandwidth arithmetic is sketched after the list below.
• X10c pass-through controller instead of a RAID controller
• Consumes 24 lanes of PCIe Gen4, roughly 2 GB/s per lane
• 8 GB/s theoretical limit per drive across four lanes
• Six drives total
• Supports Intel VROC when using Intel NVMe drives
• M6 drive family supports next generation Intel NVMe drives with PCIe Gen4 support
Cisco UCS X-Series with Intersight Deployment Workshop Page 85


Copyright 2022 Cisco System, All rights reserved.
Looking at a logical view, the front mezz connector available from the compute node mates
with the front mezz connector on the front mezz module itself. This makes use of all 24 lanes,
dividing them evenly across the six drive slots.
• Front mezz connector available from the compute node mates with the front mezz
connector on the front mezz itself
• Uses all 24 lanes dividing them evenly across six drive slots

Cisco UCS X-Series with Intersight Deployment Workshop Page 86


Copyright 2022 Cisco System, All rights reserved.
The M.2 Hardware RAID module sits between the two CPUs. The M.2 RAID module supports
one or two M.2 SSDs, configured either as JBOD or as a single RAID 1 volume across the two
drives. It does require UEFI boot mode for the compute node. This is referred to as the
boot-optimized RAID module, providing a boot source for servers that boot locally without
consuming one of the front drive slots.
• UCS-M2-HWRAID
• Supports one or two M.2 SSDs
• JBOD or RAID1 configuration
• Boot support
• Requires Unified Extensible Firmware Interface (UEFI) mode

Cisco UCS X-Series with Intersight Deployment Workshop Page 87


Copyright 2022 Cisco System, All rights reserved.
Cisco UCS X-Series with Intersight Deployment Workshop Page 88
Copyright 2022 Cisco System, All rights reserved.
This is an illustration of the VIC 14425 (mLOM). There is a connector for each IFM. The top one
goes to IFM 1, and the bottom one goes to IFM 2. This is the component that is coupled to
the IFM. The ASIC on this card must use protocols and signaling compatible with the IFM it
mates to on the other side; the two are interlocked components within the chassis. At
initial release, this is a 4th generation VIC, with 2 x 25 gigabit per fabric for 100 gigabit total for
the card.
Technologies supported include virtualized networking, newer storage protocols like FC-NVMe
and NVMe over fabrics, and network virtualization offloads such as VxLAN, Geneve, and
Network Virtualization using Generic Routing Encapsulation (NVGRE). The mezzanine card is the
14825. It is very similar to the mLOM card. However, it goes in the mezzanine slot offering 100
gigabits total throughput to the two IFMs. The mezzanine goes at the bottom, and it connects
to the fabric expansion modules.
• Direct fabric IO connections mate the Mezz connector to the X-Fabric modules at the
bottom rear of the chassis
• A bridge card is used to route mezzanine network signals to the IFM modules at the top
rear
• PCIe and network signals are routed through the mLOM card to the CPU and IFM by the
bridge
• 2 x x16 PCIe Gen4 connections (one to each CPU) routed to the OD connectors to the FEM
• Allows PCIe expansion while getting full network bandwidth
• Not forced to choose one or the other

Cisco UCS X-Series with Intersight Deployment Workshop Page 89


Copyright 2022 Cisco System, All rights reserved.
The mezzanine card extends network connectivity by using the mezzanine-to-mLOM bridge. The
bridge sits between the two boards: the two boards are installed first, and then the
bridge card goes on top afterward. Reverse the steps to remove only one of the boards. The
mezzanine VIC delivers its networking signals over the bridge to reach the IFM connectors on
the mLOM at the top of the blade, while its own connectors at the bottom go to the fabric
expansion modules.

• Direct fabric IO connections mate the Mezz connector to the X Fabric modules at the
bottom rear of chassis
• Route Mezzanine network signals to the IFM modules at the top rear using a bridge card
• Signals are routed through the mLOM card to the IFM by the bridge

Cisco UCS X-Series with Intersight Deployment Workshop Page 90


Copyright 2022 Cisco System, All rights reserved.
Throughput is available based on network options chosen for the compute node.
• mLOM is required, but mezzanine card is optional
• With just mLOM, 100 gigabit for compute node, 50 to each IFM
• 200 Gbps by adding the Mezz
• Speed per VIC adapter as seen by the OS shows as 50 gigabit
• Maximum single flow across the board is 25 gigabit
Pinning must result in at least one vNIC per port group to achieve maximum bandwidth. See
the following slides for port group details. Links in a port group are automatically port
channeled.
At the OS level, vNICs can have switch independent OS bonding/teaming to achieve
maximum bandwidth, subject to flow hashing.
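
The throughput options above can be summarized with a small calculation based on the figures
from this slide: 50 gigabits to each IFM with the mLOM alone, doubled when the mezzanine and
bridge card are added, with a 25-gigabit maximum for any single flow. The function name is
illustrative.

```python
# Per-node bandwidth depending on which VICs are installed (figures from this slide).
def node_bandwidth_gbps(mlom=True, mezz=False):
    if not mlom:
        raise ValueError("The mLOM VIC is required")
    per_ifm = 50            # Gb/s to each IFM with the mLOM alone (2 x 25G per port group)
    if mezz:
        per_ifm += 50       # the mezz/port expander adds another 2 x 25G per IFM
    return {"per_ifm_gbps": per_ifm, "total_gbps": per_ifm * 2, "max_single_flow_gbps": 25}

print(node_bandwidth_gbps(mlom=True))              # mLOM only: 100G total per node
print(node_bandwidth_gbps(mlom=True, mezz=True))   # mLOM + mezz: 200G total per node
```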

Cisco UCS X-Series with Intersight Deployment Workshop Page 91


Copyright 2022 Cisco System, All rights reserved.
Traffic flows

There are two port groups on the mLOM card. The mLOM receives its signals from CPU 1,
which allows a single-socket system to work with just the mLOM. The mezzanine
gets its signals from CPU 2 and cannot be used in a single socket CPU configuration. Each
port group has two 25-gigabit lanes going to the IFM. Each port group is a physical connector
going to the IFM.

• Two port groups on the mLOM card


• mLOM receives its signals from CPU 1
• Single socket system to work with a single mLOM
• Mezzanine gets signals from CPU 2, cannot be used in a single socket CPU configuration
• Each port group
• Two 25-gigabit lanes to the IFM
• A physical connector going to the IFM

Cisco UCS X-Series with Intersight Deployment Workshop Page 92


Copyright 2022 Cisco System, All rights reserved.
There is an option with the mezzanine card requiring a second CPU. The PCIe connection is
made through the mLOM connector and connects to the mezzanine by the bridge. Those
connections connect to the fabric module and the expansion module. Now, there are two
more port groups with four more 25-gigabit lanes. There is 100 gigabit total going to each
IFM composed of 25-gigabit lanes, which are seen two at a time, representing a 50-gigabit
device presented to the OS. The maximum individual flow is 25 gigabit.
Optional mezzanine card requiring a second CPU:
• PCIe connection through the mLOM connector and connects to the mezzanine by the
bridge. Those connections connect to the fabric module and the expansion module.
• Two more port groups with four more 25 Gigabit lanes.
• 100 Gigabit total going to each IFM composed of 25 Gigabit lanes, seen two at a time
representing a 50 gigabit device presented to the OS.
• Maximum individual flow is 25 gigabit.

Cisco UCS X-Series with Intersight Deployment Workshop Page 93


Copyright 2022 Cisco System, All rights reserved.
In this illustration, CPU 1 and CPU 2 each have a PCIe Gen4 link coming to the mLOM
connector. The connection from CPU 1 goes to the mLOM. The other connection from CPU 2
crosses the bridge and goes to the mezz adapter. That is two PCIe Gen4 connections. One
from each CPU goes into the mLOM connector, and one goes to the mezz. The mezz network
connections come back over the bridge connector and go to the physical connectors on the
mLOM. That is why the networking, even if you are using mLOM and a mezzanine, goes to
the IFMs at the top of the chassis.
A PCIe Gen4 connection from each CPU is routed to the mezz connector that the card plugs
into. Those lanes are routed to those connectors that mate to the flexible expansion module.
Today, if you buy a Cisco UCS X210c M6 Compute Node and deploy it with an mLOM and a
mezzanine, then in the future you can purchase flexible expansion modules with PCIe capability.
They will work without having to re-engineer your compute nodes. Each compute node has
two PCIe Gen4 connections, one from each CPU routed out the FEM connector into the FEM
device.
• CPU 1 and CPU 2 each have a PCIe Gen4 link coming to the mLOM connector
• CPU 1 connects to the mLOM
• Other CPU 2 connection crosses the bridge to the mezz adapter
• Two PCIe Gen4 connections
• One from each CPU goes into the mLOM connector then to the mezz
• Mezz network connections return over the bridge connector and to the physical
connectors on the mLOM
• A PCIe Gen4 connection from each CPU is routed to the mezz connector that the card
plugs into
• Lanes are routed to connectors that mate to the flexible expansion module

Cisco UCS X-Series with Intersight Deployment Workshop Page 94


Copyright 2022 Cisco System, All rights reserved.
• A Cisco UCS X210c M6 Compute Node deployed today with an mLOM and a mezzanine can,
in the future, use flexible expansion modules with PCIe capability
• They will work without having to re-engineer your compute nodes
• Each compute node has two PCIe Gen4 connections
• One from each CPU routed out the FEM connector into the FEM device

Cisco UCS X-Series with Intersight Deployment Workshop Page 95


Copyright 2022 Cisco System, All rights reserved.
The mLOM has one Gen4 x16 connection from each CPU, and the rear mezzanine has the same.
The VIC ASIC itself is Gen3; the slot presents Gen4, but the card consumes a Gen3 connection,
which is half the slot's maximum bandwidth. That is about 128 gigabits maximum, which is more
than enough to support the 100 gigabits that the card is capable of. The mezzanine receives its
connection from CPU 2 via the bridge and is Gen3. The front mezz is Gen4 x24. The expansion
module also has an x16 connection from each CPU. A quick check of the Gen3 x16 figure follows
the list below.
Total PCI Express (PCIe):
• mLOM has one Gen4 x16 from each CPU, and the rear is the same
• VIC ASIC is Gen3; it presents in a Gen4 slot but consumes a Gen3 connection, half the
maximum bandwidth
• About 128 gigabits maximum, enough to support the 100 gigabits that the card is
capable of
• Mezzanine receives its connection from CPU 2 via the bridge and is Gen3
• Front mezz is Gen4 x24
• Expansion module also has an x16 connection from each CPU
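
The "about 128 gigabit" figure can be checked with simple arithmetic: PCIe Gen3 runs at 8 GT/s
per lane with 128b/130b encoding, so a x16 link carries roughly 126 Gb/s of payload bandwidth,
comfortably above the 100 gigabits the VIC needs.

```python
# Quick check of the "about 128 gigabit" Gen3 x16 figure quoted above.
gen3_gtps_per_lane = 8.0            # GT/s per lane
encoding_efficiency = 128 / 130     # 128b/130b line encoding
lanes = 16

link_gbps = gen3_gtps_per_lane * encoding_efficiency * lanes
print(f"PCIe Gen3 x16 ~= {link_gbps:.0f} Gb/s")    # ~126 Gb/s, enough for a 100G VIC
```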

Cisco UCS X-Series with Intersight Deployment Workshop Page 96


Copyright 2022 Cisco System, All rights reserved.
This slide highlights the Trusted Platform Module (TPM) product ID (PID). The module is also
FIPS 140-2 compliant. The TPM is installed at the back of the blade, in the area under the
mezzanine card.
• TPM PID (product ID) is FIPS 140-2 compliant
• Installed at the back of the blade, under the mezzanine card area

Cisco UCS X-Series with Intersight Deployment Workshop Page 97


Copyright 2022 Cisco System, All rights reserved.
In this section, we will look at an overview of power, space, and cooling considerations
required for successful deployment of equipment.

Cisco UCS X-Series with Intersight Deployment Workshop Page 98


Copyright 2022 Cisco System, All rights reserved.
Cisco UCS complies with a number of stringent eco-design standards for re-use,
power consumption, and idle power efficiency.
UCS certifications are listed in appropriate registries or in Cisco documentation. Cisco is a
leading proponent for these standards.

Cisco UCS X-Series with Intersight Deployment Workshop Page 99


Copyright 2022 Cisco System, All rights reserved.
This is an example of an environmental specification table.
Refer to the proper hardware installation guide for the recommended operating
temperature range. There are normal and extended operating ranges up to 40 or 45 °C and
restrictions on how the system is used in those ranges.

Cisco UCS X-Series with Intersight Deployment Workshop Page 100


Copyright 2022 Cisco System, All rights reserved.
The power supplies have their own fans and are a self-contained cooling zone. Each delivers
15 CFM for its own use. The power supplies are on the front right, or the back left, of the
chassis and are arranged vertically. The fans are hot-swappable, offering N+1 and N+N
redundancy. The power input modules are the devices above and below the left-most
system fan.
The power input modules allow the same chassis SKU to be deployed for various power
requirements, AC or DC, independent of the power supplies. The power input module matches
the facility cabling to the chosen power supply.
There are four chassis fan slots across the back. The overall chassis can deliver about 1000
CFM of air from the front to the back. That is about 110 CFM per compute node and 40 CFM
per fabric module; a quick budget check against these figures follows the summary list below.
• The benefits of the higher voltage distribution are extended to the fans, further
improving system efficiency.
• Ample cooling overhead capacity for future growth provides support for evolving
workload demands.
• Each horizontal cooling zone is N+1.
• Maximum chassis airflow is approximately 1000 CFM
• Approximately 110 CFM per node, 40 CFM per fabric module and 15 CFM per power
supply unit (PSU)
• Hot-swappable N+1 redundant
• Large higher voltage system fans provide more airflow at lower speeds
• Lowers fan noise and reduces power consumption
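
A quick budget check against the approximate airflow figures quoted above is shown below. The
counts and per-component CFM values are the rough numbers from this slide, so treat the result
as illustrative rather than a specification.

```python
# Approximate chassis airflow budget using the figures quoted above (illustrative only).
cfm_per_node = 110
cfm_per_fabric_module = 40        # IFMs at the top plus FEMs or fan blanks at the bottom
cfm_per_psu = 15                  # PSUs are a separate, self-contained cooling zone

nodes, fabric_modules, psus = 8, 4, 6

main_zone_cfm = nodes * cfm_per_node + fabric_modules * cfm_per_fabric_module
psu_zone_cfm = psus * cfm_per_psu

print(f"Main cooling zones: ~{main_zone_cfm} CFM (chassis rated at roughly 1000 CFM)")
print(f"PSU cooling zone:   ~{psu_zone_cfm} CFM (handled by the PSUs' own fans)")
```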

Cisco UCS X-Series with Intersight Deployment Workshop Page 101


Copyright 2022 Cisco System, All rights reserved.
By segmenting cooling zones, the fans can be run at optimal speeds within each zone to improve
efficiency. This reduces fan power consumption for the required cooling.
• Power supplies are in their own cooling zone
• Large range of vertical and horizontal zones
• The cooling zones are a combination of the four chassis fans plus the four fabric modules
- Two above and two below
- IFMs and the FEMs have fans within the module themselves
- Fans are replaceable but they are not hot-swappable

Cisco UCS X-Series with Intersight Deployment Workshop Page 102


Copyright 2022 Cisco System, All rights reserved.
Generally, airflow through the chassis is front to back, and is not selectable. Air enters the
chassis through the compute nodes and power supply grills at the front of the chassis and
exits through the fan modules on the back of the chassis. Airflow combined with the fans and
the blades creates the airflow channels and zones, with the power supplies being their own
zones. The open chassis reduces the airflow impedance through the system allowing for a
significant increase in overall cooling capacity and efficiency.
• Airflow is front to back
• Open chassis reduces airflow impedance through the system
• Improves overall cooling capacity and efficiency

Cisco UCS X-Series with Intersight Deployment Workshop Page 103


Copyright 2022 Cisco System, All rights reserved.
The CMC is the chassis controller that monitors all the thermal sensors. It has a logic-sequence
state machine that measures sensors, performs evaluations, and decides how fast the chassis
fans should spin. This includes the main chassis fans and the fans located within the fabric
modules.
A higher performance chassis controller allows for an increase in monitoring sample
frequency. Power consumption changes with temperature change. The X-Series uses power
consumption data to initiate fan control, improving efficiency and reducing under- or over-
shoot in the cooling response. A conceptual sketch of this control loop follows the summary
list below.
• CMC monitors all thermal sensors
• Logic sequence state machine measures sensors, performs evaluations and decides
how fast the chassis fan should spin, Including main chassis fans and fans located
within the fabric modules
• Fans are variable
• Coordinated speeds ensure airflow through the chassis is smooth
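
The control behavior described above can be pictured with a deliberately simplified loop. This is
not the CMC's actual algorithm; the thresholds, step size, and sensor names are invented purely
for illustration.

```python
# Deliberately simplified illustration of a sensor-driven fan control loop.
# Thresholds and step sizes are invented; this is not the CMC's actual algorithm.

def next_fan_speed(current_pct, sensor_temps_c, target_c=70, step_pct=5,
                   min_pct=30, max_pct=100):
    """Raise fan speed if the hottest monitored sensor exceeds the target, else lower it."""
    hottest = max(sensor_temps_c.values())
    if hottest > target_c:
        return min(max_pct, current_pct + step_pct)
    return max(min_pct, current_pct - step_pct)

# Example iteration: a warm CPU sensor pushes the fans up one step.
readings = {"cpu1": 74, "cpu2": 68, "vic": 55, "node_inlet": 24}
print(next_fan_speed(current_pct=40, sensor_temps_c=readings))   # -> 45
```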

Cisco UCS X-Series with Intersight Deployment Workshop Page 104


Copyright 2022 Cisco System, All rights reserved.
The compute node has thermal sensors on the processors, the memory DIMMs, the VICs, the
drives, the controllers, and the system board. The measured room ambient temperature is
calculated from a sensor at the front of the blade as well.
More component sensors are monitored and used to drive fan speed settings than on
previous platforms. There are no faults within normal operating ranges. The system takes
the lowest measured temperature and assumes this is the data center temperature.
• Incremental and maximum fan speed increase thresholds are assigned to each component
• Thermal faults asserted when temperatures exceed normal operating limit
• NOT all sensors participate in fan speed control
- RoC, HDD, SSD or NVMe

Cisco UCS X-Series with Intersight Deployment Workshop Page 105


Copyright 2022 Cisco System, All rights reserved.
The fabric module has a wide range of thermal sensors and allows the CMC to identify the
most thermally challenging part of the system. This ensures that each part has enough
airflow or cooled air supplied to it to stay within its operating range.
• Incremental and maximum fan speed increase thresholds are assigned to each component
• Thermal faults asserted when temperatures exceed normal operating limit

Cisco UCS X-Series with Intersight Deployment Workshop Page 106


Copyright 2022 Cisco System, All rights reserved.
The power supply handles its cooling and has its own temperature sensors. It is fully self-
contained. The chassis management controller is not involved with setting the fan speed, but
it does monitor the fan. It can report fan failure, power supply failure, or power supply
thermal issues.
• PSU fan speed control independent of chassis fan speed control
• Incremental and maximum fan speed increase thresholds are assigned to each component
• Thermal faults asserted when temperatures exceed normal operating limit

Cisco UCS X-Series with Intersight Deployment Workshop Page 107


Copyright 2022 Cisco System, All rights reserved.
The chassis has both fan speed sensors and thermal sensors in the fans themselves. Alarms
are generated based on the selected operating environment range. There is post-FCS support
for ASHRAE extended operating ranges.
• Chassis inlet
• A pseudo sensor determined by the lowest node inlet temperature
• Sensor assigned a single dynamic maximum speed increase threshold
• Fan health has single maximum speed increase threshold
• Chassis has both fan speed sensors and thermal sensors

Cisco UCS X-Series with Intersight Deployment Workshop Page 108


Copyright 2022 Cisco System, All rights reserved.
The default fan speeds for UCS X-Series are expected to be adequate for virtually
any workload at the initial release. The Cisco UCS X-Series with Intersight dashboard also has
the ability to monitor all of the fan sensors.
• Local Fan Control Policy applied via a profile (chassis or server)
• Fan Control Policy changes are non-disruptive
• Policy defines the minimum allowable fan speed, based on inlet air temperature and
hardware configuration
• Default fan speeds for UCS X-Series are expected to be adequate for virtually any
workload at the initial release

Cisco UCS X-Series with Intersight Deployment Workshop Page 109


Copyright 2022 Cisco System, All rights reserved.
A future selectable power source type is available without requiring different chassis SKUs.
It can be field retrofitted for rare occasions when the source power changes. It provides high
efficiency 56V power distribution and substantial chassis power overhead for future needs.

Cisco UCS X-Series with Intersight Deployment Workshop Page 110


Copyright 2022 Cisco System, All rights reserved.
The X-Series can achieve the same volumetric airflow at a significantly reduced fan speed
and resulting power consumption. The chassis efficiency improvements provide even higher
flow increases at higher fan speeds. The X-Series infrastructure is laying the groundwork for
an extended platform life.
There are a couple of comparison points, using the M4 blade and the M5 blade. The chart shows
the airflow in the chassis and the blades' operating range; the blades must be properly cooled to
stay within that range. In the X-Series, an M5 blade that requires about 50% fan speed in the
existing chassis would require less than 40% fan speed. The higher volume airflow allows the fans to work at
slower speeds, which reduces the fans' power demand and improves the cooling efficiency.
• Airflow comparison between the existing chassis and the new X-Series chassis
• Comparison between the M4 blade and the M5 blade
• Chart represents the airflow in the chassis and the blades operating range
• Must be cooled appropriately to stay within the operating range
• In the X-Series, an M5 blade that requires about 50% fan speed would require less than 40%
fan speed
• Higher volume airflow allows the fans to work at slower speeds, reducing the fans'
power demand and improves cooling efficiency

Cisco UCS X-Series with Intersight Deployment Workshop Page 111


Copyright 2022 Cisco System, All rights reserved.
This illustration compares fan power against the amount of heat removed from the chassis. At a
given fan power, the X-Series removes more heat; to remove the same amount of heat, the
X-Series needs significantly less fan power.
More than 30% less fan power is required to remove the same amount of heat from the chassis,
and the X-Series has more than double the total heat-removal capacity of the existing chassis.
• Compares fan power and the amount of heat removed from the chassis
• At a given fan power, the X-Series removes more heat
• Removing the same amount of heat requires significantly less fan power

Cisco UCS X-Series with Intersight Deployment Workshop Page 112


Copyright 2022 Cisco System, All rights reserved.
The power calculator support for X-Series is now available. The power requirements should
be optimized based on actual system configuration prior to ordering.

Cisco UCS X-Series with Intersight Deployment Workshop Page 113


Copyright 2022 Cisco System, All rights reserved.
Cisco UCS X-Series with Intersight
Deployment Workshop
Section 3
Cisco UCS X-Series Installation Considerations

Welcome to this section of the UCS Deployment Workshop on Deployment Considerations for
the X-Series components.

In this topic, we will walk through some deployment options available.

Cisco UCS X-Series with Intersight Deployment Workshop Page 114


Copyright 2022 Cisco System, All rights reserved.
A typical deployment has several options or devices besides the Unified Computing System
(UCS) fabric components of the Fabric Interconnects (FIs). In this illustration, fiber channel
storage, often attached to a fabric, provides a common storage pool to accommodate the
devices in one or more domains. With UCS, connecting components is key in obtaining the
components' functionality and cabling to meet the devices' total bandwidth needs. The end
solution determines what connections you need.
• Customer specified / Custom implementation Model
• Deployments have several devices besides the UCS fabric components and the FIs.
• Fiber channel storage provides a common storage pool to accommodate devices in
one or more domain.
• Connecting components correctly is key in solution functionality and cabling to
meet the devices' total bandwidth needs.

Cisco UCS X-Series with Intersight Deployment Workshop Page 115


Copyright 2022 Cisco System, All rights reserved.
A FlexPod deployment uses fiber channel and Ethernet to connect to storage devices. Those
that do not have device protocol-specific connections have Ethernet connections to the fabric
interconnects (FIs). Storage devices support multiple protocols, such as Internet Small Computer
Systems Interface (iSCSI). While this is specific to a particular FlexPod deployment in a domain,
the same storage provides services to internal devices, other UCS domains, or even outside of
the UCS domains.
• Cookie Cutter/Off the Shelf deployment option
• FlexPod deployments use fiber channel and Ethernet to connect to storage
devices.
• UCS domain supports UCS X-Series chassis and may include C-Series servers.
• Those with no device protocol-specific connections have Ethernet connections to
the FIs.
• Storage devices support multi-protocol.
• FlexPod deployment domains use the same storage as internal devices, other UCS
domains, or outside of the UCS domains.

Cisco UCS X-Series with Intersight Deployment Workshop Page 116


Copyright 2022 Cisco System, All rights reserved.
The FlashStack with Cisco UCS X9508 Modular System Chassis with iSCSI design is a cookie
cutter/off-the-shelf deployment option. iSCSI is the basis of this FlashStack deployment; there is
no fiber channel, and all connectivity is over the network.
• Cookie Cutter/Off the Shelf deployment option – iSCSI
• iSCSI is the basis of a FlashStack deployment
• No fiber channel
• Complete network connectivity

Cisco UCS X-Series with Intersight Deployment Workshop Page 117


Copyright 2022 Cisco System, All rights reserved.
Looking at other options for storage, this is a fiber channel-specific design. The storage may be
used for a few very high-speed deployments such as NVMe over fiber channel. There is
no network cabling to the storage side. Nothing has been changed on the actual compute
portion of the deployment.
When planning the total solution, identify a complete list of devices and the protocols to
communicate. Then work your way through the cabling based on the total bandwidth required
across the device types.
• Cookie Cutter/Off the Shelf deployment option – Fiber Channel
• Storage might be common for nonvolatile memory express (NVMe) over fiber
channel
• No network cabling to the storage side
• Nothing changed on the compute deployment
• Identify the following while planning
• Complete list of devices and protocols
• Cabling requirements based on the total bandwidth required across the device
types

Cisco UCS X-Series with Intersight Deployment Workshop Page 118


Copyright 2022 Cisco System, All rights reserved.
The Fabric Interconnects and the X-Series chassis are not the only components in the total UCS
solution. There are several components in the solution, such as top of rack leaf switches, C-
Series servers, fiber channel storage arrays, and the storage devices themselves. While working
through the total solution, all devices need cabling and power. The complete solution must
satisfy the bandwidth and connectivity requirements of all these devices.
• Other X-Series solution components may include:
• Top of rack leaf switches
• C-Series servers
• Fiber channel storage arrays
• Storage devices themselves

Cisco UCS X-Series with Intersight Deployment Workshop Page 119


Copyright 2022 Cisco System, All rights reserved.
In this topic, we will walk through some things to consider as part of the installation.

Cisco UCS X-Series with Intersight Deployment Workshop Page 120


Copyright 2022 Cisco System, All rights reserved.
In this deployment example, fibre channel (FC) storage is being used. Shown in red are the
fiber channel storage links and the connected ports. This deployment uses four links per
Intelligent Fabric Module (IFM) to the fabric interconnect (FI).
The customer has identified that the deployment is going to use four links per IFM to the FI and
what interfaces are going to be used. The customer has also identified the uplink requirements
to the top of rack switches for network traffic.
In this topology, the customer has selected the 100G uplink option. Also, in this illustration, the
connection that the IFMs deliver traffic internally to the chassis is shown. Those are internal
connections that are connected between the IFMs and the compute nodes. There is no cabling
required to make these connections.
Looking at the IFMs, cables connect them to the first group of ports. Even though the ports are
physically separated, there is no logical or functional construct related to that physical
separation. It is simply eight ports connected to the IFM. In general, connect cables left to right,
port one through eight, as needed. All of the cables between the IFM and the FI will be
automatically port channeled by the software when the domain is configured.
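
For planning purposes, the aggregate uplink bandwidth per fabric in this example falls out of
simple arithmetic. The sketch below assumes 25G links between the IFM and the FI, which is an
assumption about this particular topology rather than a general rule.

```python
# Aggregate IFM-to-FI bandwidth per fabric for this example topology.
# Assumes 25G links between each IFM and its FI (an assumption for this sketch).
links_per_ifm = 4
link_speed_gbps = 25

port_channel_gbps = links_per_ifm * link_speed_gbps    # links are auto port-channeled
print(f"Per-fabric uplink from the chassis: {port_channel_gbps} Gb/s "
      f"({links_per_ifm} x {link_speed_gbps}G, port-channeled automatically)")
```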

Cisco UCS X-Series with Intersight Deployment Workshop Page 121


Copyright 2022 Cisco System, All rights reserved.
After deciding on the type of cabling to use and what links the deployment requires, the
customer will need to look at the specifics to be included in the solution. One of these
requirements would be the number of ports needed. Select either 1U or 2U fabric
interconnects (FIs) to supply the total number of ports required.
As a general rule of thumb, the customer selects the ports to use after selecting the cabling
types. By doing so, the customer knows what type of ports are being used for storage and
compute nodes.
• Provide the intelligence until Intersight Managed Mode (IMM) is integrated
• Specifics to the solution
• Port count requirements
• 1U or 2U FIs
• Select ports to use after selecting the cabling types

There are specific ports carved out for fiber channel, FI to IFM links, and the network uplinks.
Some ports support specific protocols or speeds that you must be aware of and adhere to. For
instance, if you are looking for fiber channel storage on the 1U, 6454 FI, the fiber channel is
available on ports 1 to 16.

Cisco UCS X-Series with Intersight Deployment Workshop Page 122


Copyright 2022 Cisco System, All rights reserved.
Intersight helps with configuring those ports for fiber channel mode when Intersight sets up the
domain. It is essential to understand the connectivity options in the fabric before choosing the
ports. You do not want to consume ports unnecessarily if you know that you might want to
expand the fiber channel capacity in the future. If that is the case, make sure that you do not use
more ports than necessary for the Ethernet protocol, because those same ports, one through
sixteen, can support either Ethernet or fiber channel as required. A simple port-planning sanity
check is sketched after the list below.
• 6454 Fabric Interconnect details
• Specific ports for fiber channel, FI to IFM links, and uplinks
• Some ports support specific protocols or speeds
• Understand connectivity options in the fabric before choosing the ports
• Only use the necessary ports for Ethernet protocol
• Ports, 1 through 16 support Ethernet or fiber channel
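
A port plan can be sanity-checked against the rule above (fiber channel only on ports 1 through
16 of the 6454). The helper below is hypothetical and encodes only that one rule; always confirm
port capabilities against the current FI documentation.

```python
# Hypothetical sanity check for a 6454 port plan: FC roles only on ports 1-16.
FC_CAPABLE_PORTS = range(1, 17)     # unified ports that can run Ethernet or FC

def check_port_plan(plan):
    """plan: dict of port number -> role, e.g. {1: "fc-storage", 17: "eth-uplink"}"""
    problems = []
    for port, role in plan.items():
        if role.startswith("fc") and port not in FC_CAPABLE_PORTS:
            problems.append(f"Port {port}: FC role not supported outside ports 1-16")
    return problems or ["Plan looks consistent with the FC port rule"]

print(check_port_plan({1: "fc-storage", 2: "fc-storage", 20: "eth-uplink", 49: "fc-storage"}))
```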

Cisco UCS X-Series with Intersight Deployment Workshop Page 123


Copyright 2022 Cisco System, All rights reserved.
Similarly, the 2U, 108-port fabric interconnect (FI) has the same options as the 6454 FI.
Fiber channel is supported on ports 1 through 16. 1-gigabit connectivity is limited to a few
specific ports on both models. The physical connections are small form-factor pluggable
(SFP28) connections for the 25-gigabit ports and QSFP connections for the 100-gigabit
ports, which affects the maximum speeds those ports can handle.
• 64108 Fabric Interconnect Details
• 2U, 108 port fabric interconnect have the same options as the 6454 FI
• Fiber channel is supported on ports 1 through 16
• 1-gigabit connectivity is limited to specific ports on both models
• SFP 28 connections for 25-gigabit ports
• Quad small form-factor pluggable (QSFP) connections for 100-gigabit ports

Cisco UCS X-Series with Intersight Deployment Workshop Page 124


Copyright 2022 Cisco System, All rights reserved.
In the front of the fabric interconnect (FI), there are some additional cabling requirements.
There are clustering connections called L1/L2 links with ports in the front of the FI that must be
cabled together. L1 to L1 and L2 to L2 connections are required for FIs in IMM mode.
Additionally, the FIs’ management port must be attached to the management fabric within the
data center. These uplinks ensure the FI device connector has a path to reach the Intersight
management point.
• Additional cabling requirements
• With Cisco UCS Manager (UCSM), clustering connections called L1/L2 links must be
cabled together
• L1 to L1 and L2 to L2 is required for fabrics in IMM mode
• FIs’ management port must be uplinked to the management fabric within the data
center
• Ensures FI device connector has a path to reach the Intersight management
point

Cisco UCS X-Series with Intersight Deployment Workshop Page 125


Copyright 2022 Cisco System, All rights reserved.
There is a serial console port on the front of the fabric interconnect. Serial console access is
required to configure the fabric interconnect into Intersight mode. The customer will configure
the initial network settings so that the device connector can reach the Intersight portal to claim
the device.
During the initial configuration, the setup wizard is used to select the IMM option of either UCS
Managed Mode or IMM Mode.
• Serial console port on the front of the fabric interconnect
• Serial console access required to configure fabric interconnect into Intersight
mode
• Configure initial network settings so that the device connector can reach the
Intersight portal to claim the device
• Console port used for initial configuration
• During initial configuration setup wizard is used to select the IMM option:
• UCS Managed Mode
• IMM Mode

Cisco UCS X-Series with Intersight Deployment Workshop Page 126


Copyright 2022 Cisco System, All rights reserved.
The fabric connections and the two IFMs are at the top of the chassis. The bottom of the chassis
holds either two FEMs or two blank modules, which contain the fans that deliver cooling to the
chassis. Based on the
number of power supplies used in the solution, the customer will identify the number of power
cables needed. Deploying all six power supplies is not always required. N + N is the desired
resiliency model for both the fabric and the power side. This configuration will provide full
redundant power to all of the compute nodes.
Once bandwidth and power requirements are determined, you will need to double the cabling
requirements to accommodate the N+N resiliency model. The UCS power calculator offers a
web tool to help determine the total power requirements, the number of power supplies, and
the cooling requirements necessary to operate the equipment. You can find the power
calculator at the following link:
https://ucspowercalc.cloudapps.cisco.com/public/index.jsp#listProject
• Fabric connections and two IFMs are at the top of the chassis
• FEMs or two blank modules require fans to cool the chassis
• Identify
• Number of power cables required
• Power supplies required - UCS power calculator web tool (a sizing sketch follows this
list)
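
As a rough illustration of N+N sizing (the UCS power calculator linked above is the authoritative
tool), the sketch below estimates how many supplies are needed to carry a given load and then
doubles that count for N+N. The per-PSU output figure is an assumed example value, not a
quoted specification.

```python
# Rough N+N power supply sizing sketch. Use the UCS power calculator for real sizing;
# the per-PSU output below is an assumed example value, not a quoted specification.
import math

def psus_for_n_plus_n(chassis_load_w, psu_output_w=2800):
    n = math.ceil(chassis_load_w / psu_output_w)   # supplies needed to carry the load
    return {"N": n, "total_with_N+N": 2 * n}

print(psus_for_n_plus_n(chassis_load_w=5200))   # e.g. {'N': 2, 'total_with_N+N': 4}
```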

Cisco UCS X-Series with Intersight Deployment Workshop Page 127


Copyright 2022 Cisco System, All rights reserved.
One of the last decisions to make is where to place the management. This can be done at any
point along the deployment sequence and is often determined well before the deployment.
IMM is available through Intersight.com using the software as a service (SaaS) model. IMM can
also be deployed in a virtual appliance, and that virtual appliance has two options. It can either
be deployed in connected mode, where the virtual appliance itself is claimed in Intersight to
exchange metadata and perform support functions. Such functions include delivering logs to TAC,
loading field notices and security advisories mapped to the hardware, and obtaining and
deploying firmware updates. The virtual appliance claims the devices themselves, and Intersight
claims the virtual appliance.
Suppose an external network connection is not available. In that case, the virtual appliance can
be deployed in private mode, or air-gapped, where it does not have any connection to Cisco's
network. Any metadata or firmware must be obtained manually, brought over to the appliance
in some manner, and deployed for support.
• IMM is available through Intersight.com using the software as a service (SaaS) model
• IMM can be deployed in a virtual appliance, and that virtual appliance has two options
• Connected
• Private or air-gapped

Cisco UCS X-Series with Intersight Deployment Workshop Page 128


Copyright 2022 Cisco System, All rights reserved.
Please refer to the appropriate installation guides when deploying a X-Series fabric solution. For
regulatory compliance and safety information, consult the installation guides for full details.
• Refer to all appropriate installation guides for full details
• Before you install, operate, or service the system, see the Regulatory Compliance and
Safety Information for Cisco UCS for important safety information
https://www.cisco.com/c/dam/en/us/td/docs/unified_computing/ucs/hw/regulatory/
RCSI-0423-book.pdf

Cisco UCS X-Series with Intersight Deployment Workshop Page 129


Copyright 2022 Cisco System, All rights reserved.
In this topic, we will look at specifics on installing and removing hardware components into the
chassis.

Cisco UCS X-Series with Intersight Deployment Workshop Page 130


Copyright 2022 Cisco System, All rights reserved.
While looking at the front of the chassis, the customer has two essential decisions to make:
how many power supplies do you need, and what are the compute demands for the solution.
Knowing this helps determine how many compute nodes to deploy.
Always use blanking panels to fill all remaining slots that are not occupied by either compute
nodes or power supplies. This is done automatically as part of the ordering process. Leaving
slots without blanking panels could result in improper cooling and airflow for the chassis.
• Power supplies needed
• Compute demands for the solution
• Blanking panels fill slots not occupied by either compute nodes or power supplies
• Done automatically as part of the ordering process

Cisco UCS X-Series with Intersight Deployment Workshop Page 131


Copyright 2022 Cisco System, All rights reserved.
Anyone familiar with UCS products or most state-of-the-art hardware is familiar with the
latching mechanisms. When the compute node is on its side, the top is the side with the lid.
When putting the compute node in the chassis, the top is going to be on the left. With the
compute node oriented this way, the mLOM will be at the top rear of the blade. This is where
the IFMs are, so the mLOM will be able to connect to the IFMs for networking.
• When the compute node is on its side, the top is the side with the lid.
• When putting the compute node in the chassis, the top is going to be on the left.
• With the compute node installed, the modular LAN on motherboard (mLOM) will be at the
top of the blade, adjacent to the IFMs.

Cisco UCS X-Series with Intersight Deployment Workshop Page 132


Copyright 2022 Cisco System, All rights reserved.
When installing a Compute Node:
• Keep blade server vertical and slide it into the chassis.
• Connectors are on the inside of the chassis.
• When blade server is almost installed, grasp ejector handles and arc them toward
each other.
• Blade powers up.
• Push ejectors until they are parallel with the face of the blade server.
• When the blade server is completely installed, retention latches click into place.

Cisco UCS X-Series with Intersight Deployment Workshop Page 133


Copyright 2022 Cisco System, All rights reserved.
On the front of the chassis is the power supply unit (PSU). Power supplies are long and heavy.
There is no requirement for which slots power supplies go in. However, for locations with no
power supply, you must use blanks to fill the space. The back of the chassis is divided into three
at the top and three at the bottom to help support N+N grid redundancy. You usually would
populate them similarly on the front, i.e., 1 and 4, then 2 and 5, and then finally 3 and 6.
• PSU is located on the front of the chassis.
• Power supplies are long and heavy.
• Power supplies can go in any available slot.
• Use blanks to fill unoccupied spaces.
• Back of chassis is divided into three at the top and three at the bottom to support N+N
grid redundancy.
• Populate on the front, i.e., slots 1 and 4, 2 and 5, and finally 3 and 6.

Cisco UCS X-Series with Intersight Deployment Workshop Page 134


Copyright 2022 Cisco System, All rights reserved.
You are now looking at the back of the chassis. The companion to the power supply is the
power entry module. The power entry module provides the interface between the power cords
coming from facilities and the power supply input. The power supply and the power entry
module match each other. For instance, an AC power supply will require a power entry module
that supports AC and, likewise, for DC. The ability to change the modular parts of the system
will keep the chassis relevant for future technologies.
• Companion to the power supply is the power entry module.
• Provides the interface between the power cords coming from facilities and the power
supply input.
• Power supply and power entry module match each other.
• Changing modular parts of the system helps to keep the chassis relevant to future
technologies.

Cisco UCS X-Series with Intersight Deployment Workshop Page 135


Copyright 2022 Cisco System, All rights reserved.
The PSU keying bracket attaches to the side of the chassis and prevents incompatible pairings.
It is specific to the type of power supply and power entry modules deployed.

Cisco UCS X-Series with Intersight Deployment Workshop Page 136


Copyright 2022 Cisco System, All rights reserved.
Intelligent Fabric Modules (IFMs) are installed in the rear of the chassis. They are always
deployed in pairs, so there are no IFM module blanks installed. Swing the ejector handles to the
open position. Placing one hand underneath the IFM, align the module with the empty IFM slot
on the rear of the chassis.

Cisco UCS X-Series with Intersight Deployment Workshop Page 137


Copyright 2022 Cisco System, All rights reserved.
To install the IFM, hold it level and slide it almost into the chassis until you feel some resistance.
This resistance is expected. It occurs when the connectors at the rear of the IFM contact the
socket inside the chassis. Grasp each of the ejector handles, and keeping them level, arc them
inward toward the chassis. This step seats the IFM connectors into the sockets on the midplane.
Push the ejector handles until both are parallel with the face of the IFM. Make sure the ejector
latch is fully inserted in the front panel.

To remove the Intelligent Fabric Module, pinch the interior end of both handles to disengage
the ejector latch. This step unlocks the module handles so that they can move. Keeping the
handles level, pull them towards you so that they arc away from the chassis. You might feel
some resistance as the IFM disconnects from the socket inside the chassis.

Slide the module about halfway out of the chassis, then place your other hand underneath the
IFM to support it. Continue sliding the IFM out of the chassis until it is completely removed.
Support the IFM and slide it out of the chassis.

This is an example of an IFM and the location of its three fans. There are no field-replaceable components on the IFM; the IFM itself is the field-replaceable unit.

The top of the chassis requires two IFMs. The bottom of the chassis does not require expansion modules; when an expansion module is not installed, use a rear module blank to provide fans. The fan locations are the same as on the IFM.

The fan modules have a slightly different latching mechanism. In the illustration, the person's
thumb is on the release latch. To install a fan module, hold the fan module with the handle at
the bottom and place your other hand underneath the fan module to support it. Next, align the
fan with the fan bay in the rear of the chassis. Gently slide the fan into the chassis until it is
flush with the face of the chassis. Make sure that the latch on the handle is engaged with the
chassis. When the fan module is almost completely installed, you might feel some resistance.
The resistance is normal, and it occurs when the connector at the rear of the fan contacts the
corresponding socket inside the chassis.

When removing the fan module, grasp the handle and press down on the release latch. Slide
the fan module partially out of the chassis and place your hand underneath it to support it.
When the fan disconnects from the midplane, it will power down.
• Grasp the fan module handle and push down on the release button
• Slide the fan module partially out of the chassis
• When the fan disconnects from the midplane, it powers down

Cisco UCS X-Series with Intersight
Deployment Workshop
Section 4
Cisco UCS X-Series Management through IMM

In this section we will look at a high-level view of Intersight Managed Mode.

Cisco Intersight is a management platform delivered as a service with embedded analytics for
your Cisco and 3rd party IT infrastructure. Intersight Managed Mode (IMM) is a new
architecture that manages the Unified Computing System (UCS) fabric interconnected systems.
It unifies the capabilities of the UCS systems and the cloud-based flexibility of Intersight, thus
unifying the management experience for the standalone and fabric interconnect (FI) attached
systems. IMM standardizes policy and operation management for the 4th generation FI and the
Cisco UCS M5 servers.
• Collection of features/functionality built into Intersight.
• Previously only supported standalone C-Series servers and HyperFlex (HX) customers
from cloud.
• Intersight can now push policies and server profile to fabric-interconnected devices
(previously done with UCS Manager (UCSM) or UCS Central).
• Previously provided dashboard alerts, firmware upgrades, and connected TAC.
• Moving forward, only way to manage Washington chassis and X-Series servers (not
supported under UCS Manager/UCS Central).
• Domain profiles - new construct, comes with Intersight Managed Mode (IMM), allows
for grouping of typical Day 0 configs that can be applied at domain level.
• Config moves from FI to cloud - allows for rapid update of cloud software without
necessarily upgrading associated FI’s and/or downstream components.
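Because IMM moves the configuration model into the cloud, everything shown in this section can also be driven through the Intersight REST API. The minimal sketch below lists the devices (such as fabric interconnects) whose device connectors have been claimed into an account. It assumes the community intersight-auth helper package for the API-key request signing, and the endpoint and field names are taken from the public API schema as illustrations; verify both against the Intersight API reference before relying on them.

    import requests
    from intersight_auth import IntersightAuth  # assumed community helper for API-key signing

    AUTH = IntersightAuth(
        api_key_id="<your-api-key-id>",        # generated under Settings > API Keys
        secret_key_filename="SecretKey.txt",
    )
    BASE = "https://intersight.com/api/v1"

    # List claimed device registrations (illustrative endpoint and fields).
    resp = requests.get(
        f"{BASE}/asset/DeviceRegistrations",
        auth=AUTH,
        params={"$select": "DeviceHostname,Pid,ConnectionStatus"},
    )
    resp.raise_for_status()
    for device in resp.json().get("Results", []):
        print(device.get("DeviceHostname"), device.get("Pid"), device.get("ConnectionStatus"))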

It is crucial to complete each step in the order shown here; do not skip any steps. Every step will be explained in detail as we move through the rest of this section.

The only way that you can manage the new UCS X-Series is by using Intersight with the FI in
IMM mode. X-Series is not supported by UCSM or UCS Central. Cisco Intersight uses adaptive
cloud-powered infrastructure management providing simplified configuration, deployment,
maintenance, and support.
Unlike the SaaS Intersight deployment, the virtual appliance models will not display some UI elements until at least one FI is claimed in Intersight Managed Mode.
• Can be used via Cloud / software-as-a-service (SaaS), or on-premises Connected Virtual
Appliance (CVA) or Private Virtual Appliance (PVA)
• PVA support for IMM delayed
• Using on-premises, some elements in UI not displayed until first Fabric Interconnect (FI)
is claimed

In this section, we will look at Intersight account level licensing.

Cisco Intersight uses a subscription-based license model with multiple feature tiers: Base, Essentials, Advantage, and Premier. A free 90-day trial license is available to customers who create an account, regardless of the tier. As more Intersight features are desired, organizations can license additional features at higher tiers.
• Can make any tier default tier by clicking “set default” at top right in UI
• New accounts have free 90-day trial option regardless of tier selected
Intersight IMM Supported Hardware:
https://www.intersight.com/help/supported_systems#supported_hardware_for_intersi
ght_managed_mode

The Essentials tier is necessary for IMM. If you choose to use the 90-day free trial, after 90 days Intersight functionality returns to Base, and any premium features become unavailable.

Existing users can work together with their Account Team to get IMM Trial Licenses added to a
customer’s Intersight Smart/Virtual account.

Intersight provides the capability to have multiple active license tiers within a single Intersight account. Each server can be licensed at a different tier depending on its specific requirements. The “Start Trial” button automatically grants a new Intersight account a 90-day license for servers at the Premier level. You will need to move servers from the free Base tier to the elevated license tier.

Selecting the gear icon at the top right of the Servers page opens the “Manage Grid” menu.
Once there, verify that “License Tier” is selected to view which Servers have which licenses right
from the main page itself.
• Click “Servers” and then gear at the top of page to “Manage Grid” and make sure
“License Tier” is selected to view tier

To select multiple servers, choose the ellipsis (…) menu located at the top left side of the page.
It is important to note that the ellipsis (…) menu on the left side of the page is for multi-server
actions, and the ellipsis (…) menu(s) on the right side of the page is used for single-server
action. Optionally, the default licensing tier to assign can be set in the licensing settings.

In this section we will look at how to access Intersight and some general Intersight features
relevant to managing an X-Series deployment. We do not cover all Intersight features.
Reference Intersight help or other Intersight training modules for comprehensive feature
coverage.

Using Google Chrome gives the best IMM access user experience. To create policies and
profiles, you will need to log in as an Admin and configure the setup correctly.

An Intersight account is required to use IMM. To create an account, connect to
https://Intersight.com. Once there, either create an account or log in to Intersight with your
credentials. Refer to the previous slides that review licensing options before starting the setup process.

In this section we will look at several dashboards available in Intersight.

Once logged in, Intersight provides dashboard views of multiple widgets that display
information about specific components.

From the UI Dashboard, you can view server and fabric interconnect (FI) health summaries,
server inventory, device versions, and so on. The dashboard is user-customizable, allowing you
to focus on the information and tasks relevant to your needs. You can create, customize, and
manage multiple dashboard views by adding, removing, or rearranging widgets on the
dashboard. To add more reporting/summaries, click the Add Widget button on the top right of
the dashboard to add a widget.

In the Widget Library, select the widget(s) that you want to pin to the dashboard. Some of
these include Faults and Summaries, Utilization Monitoring, and Licensing, as shown here.

Cisco Intersight allows you to look more deeply into a specific widget. As pictured here, the Server Health widget shows the specific device or policy that is affected.

It is helpful to see the license tier of each server. However, it is not shown by default. Selecting
the gear icon at the top right of the Servers page opens the Manage Grid menu. Once there,
verify that License Tier is selected so that you can view which servers have which licenses right
from the main page itself. Columns can be adjusted using a drag-and-drop method.

In this section we will look at configurations with IMM vs UCSM/UCSC.

Server Profiles look for policies at the same sub-organization level in which they are created, and if the policies are not found there, they resolve upward in the hierarchy to the root. Every policy type has a default policy, so if a mandatory policy is not defined in the Service Profile, the default will be selected and deployed automatically.

UCS Central enforces the same organizational hierarchy and policy resolution behavior as
UCSM. UCS Central also introduces an additional construct of Domain Groups. Each domain
belongs to a Domain Group, and that membership reflects in all global VLANs and VSANs. Keep
in mind that moving domains from one group to another is risky. You might move a domain to a
domain group that the VLAN or VSAN does not resolve to in the hierarchy, and in turn, outages
can occur.

IMM uses a flat organizational structure without a hierarchy. Modifying or changing organizations is non-disruptive to workloads. It is important to note, though, that server profile changes will not automatically be applied to deployed server workloads. There is a sequence of steps that needs to take place: when you modify a server profile, it moves to an “Undeployed Changes” status; you must deploy the server profile to push the changes down to the server; then, reboot the server to pick up the changes made.
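As a companion to this workflow, the deployment state of each server profile can also be checked programmatically. The minimal sketch below reads the ConfigContext of every server profile so you can spot profiles with pending (undeployed) changes; the field names follow the published server.Profile schema but should be verified against the API reference, and the intersight-auth helper package is assumed for authentication.

    import requests
    from intersight_auth import IntersightAuth  # assumed community helper for API-key signing

    AUTH = IntersightAuth(api_key_id="<your-api-key-id>", secret_key_filename="SecretKey.txt")
    BASE = "https://intersight.com/api/v1"

    # Report the configuration state of each server profile.
    resp = requests.get(
        f"{BASE}/server/Profiles",
        auth=AUTH,
        params={"$select": "Name,ConfigContext"},
    )
    resp.raise_for_status()
    for profile in resp.json().get("Results", []):
        context = profile.get("ConfigContext") or {}
        print(profile.get("Name"), context.get("ConfigState"))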

In this section we will look at domain profiles and policies.

With Cisco Intersight, all pools, policies, and profiles are stored within a single dashboard. There are also wizard-driven configurations that help the customer ensure they have configured all policies necessary for proper operation.

Some of the ID pools have value hints (MAC, World Wide Name [WWN]), and these values are
assigned when the policy that references a pool is attached to a profile. Values will be released
if the associated policy gets removed or if the pool changes in the policy. Pools will need to be
in the same organization for you to see and use them. The default organization is “default,”
eliminating complexity with the previous hierarchical structure of other management tools.

The settings required for the policies are marked with an asterisk (*). You can now name objects whatever you would like, even duplicating names if you prefer, because each managed object is referenced by a unique managed object ID (MOID). Because there are over 60 policies, a filter is provided, making it easy to narrow the list to whatever policy you are looking to apply.
• Easier in IMM to change object names
• Filter on left-hand side to reduce policies displayed
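Because objects are ultimately referenced by MOID rather than by name, automation typically resolves a name to its MOID first. The sketch below does that with an OData-style $filter query; the resource path (an Ethernet QoS policy here) and the policy name are placeholders, and the intersight-auth helper package is assumed for authentication.

    import requests
    from intersight_auth import IntersightAuth  # assumed community helper for API-key signing

    AUTH = IntersightAuth(api_key_id="<your-api-key-id>", secret_key_filename="SecretKey.txt")
    BASE = "https://intersight.com/api/v1"

    # Resolve a policy name to its MOID (example resource path and name).
    resp = requests.get(
        f"{BASE}/vnic/EthQosPolicies",
        auth=AUTH,
        params={"$filter": "Name eq 'jumbo-qos'", "$select": "Name,Moid"},
    )
    resp.raise_for_status()
    results = resp.json().get("Results", [])
    moid = results[0]["Moid"] if results else None
    print(moid)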

IMM Domain Profile:
• Is brand new with IMM
• An exciting enhancement, especially for larger customers
• Can clone one-to-many

The Domain Profile, a brand new construct with IMM, is a significant enhancement, especially
for larger customers. It allows cloning of one-to-many in a single operation. Profiles can be
cloned to deploy to adjacent Domains rapidly. The customer will create the required Domain
Policies, and in turn, build a Domain Profile.

Listed here are the IMM domain policies. Further content will cover how to create each one.
Port policy is the most commonly used policy. It is also important to note that System quality of
service (QoS) is a mandatory policy. Syslog and Simple Network Management Protocol (SNMP)
policies are optional.
• Port policy is the most commonly used policy
• System QoS is mandatory

Listed here are the IMM Domain Policies. Further content will cover how to create each one.

Enter a name for your Port policy, select the associated switch model, and then click Next. The Port policy configures the ports and port roles for the fabric interconnect (FI). Each FI has a set of ports in a fixed port module that you can configure. You can enable or disable a port or a port channel. The port policy is associated with a switch model.
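For reference, the same port policy object can be created through the API. The sketch below creates an empty port policy bound to a switch model in the default organization; the object type and property names (fabric.PortPolicy, DeviceModel) reflect the public schema but should be verified against the API reference, the model string is only an example, and the intersight-auth helper package is assumed.

    import requests
    from intersight_auth import IntersightAuth  # assumed community helper for API-key signing

    AUTH = IntersightAuth(api_key_id="<your-api-key-id>", secret_key_filename="SecretKey.txt")
    BASE = "https://intersight.com/api/v1"

    # Look up the 'default' organization MOID.
    org = requests.get(
        f"{BASE}/organization/Organizations",
        auth=AUTH,
        params={"$filter": "Name eq 'default'", "$select": "Moid"},
    ).json()["Results"][0]

    # Create a port policy tied to a specific fabric interconnect model.
    payload = {
        "Name": "DMZ-Ports",
        "DeviceModel": "UCS-FI-6454",  # example model; must match the target FI
        "Organization": {"ObjectType": "organization.Organization", "Moid": org["Moid"]},
    }
    resp = requests.post(f"{BASE}/fabric/PortPolicies", auth=AUTH, json=payload)
    resp.raise_for_status()
    print(resp.json()["Moid"])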

Use the blue slider to designate Fibre Channel (FC) ports from the left. As with previous
management tools, you can use the slider shown to select a range of ports.

To configure port roles, click the ports on the switch image. To select multiple ports, hold the Shift key; the selected ports display in blue. The port numbers appear under Selected Ports above the switch image. Click Configure to choose the role for the selected ports. For now, select only the ones that you want to change.

An example of an Appliance port would be one connected to a NetApp array.
• Select the port role type.
• Select Server from the drop-down menu.

This is an example of Server Ports configured.

For Uplink port channels, select Port Channel and then choose Create Port Channel.

From the Role type, select the port channel role type. Add the Port Channel ID and Admin
Speed. You can provide optional network policies such as Link Aggregation and Link Control.
Select the valid ports for the port channel. Click Save.

Select the Port Channels configured on the FIs. Click Save.

This is a graphical representation of the port configuration. The Port Policy is DMZ-Ports.

This is an example of VLANs. The top arrow points to where the VLANs and VSANs configuration
is located.

The arrow shows network connectivity, NTP, and system QoS.

To create a new UCS domain, select the UCS Domain Profiles tab and then click Create UCS
Domain Profile located at the top right of the page.

Enter a domain profile name and click Next.

When it comes to deploying the IMM domain profile, the customer can choose to Assign
Domain when they create the domain profile or choose Assign Later. If the customer decides to
assign later, edit the domain profile and assign it to a UCS domain before clicking Next and then
Save. To implement the changes, be sure to save your work.
Once assigned, select the domain profile in the list of profiles, use the ellipsis (…) to the right of
the row and then select Deploy. At the top of the screen, click Requests to view the workflow
to ensure the domain profile deploys successfully.
• Domain profile assigned during or after creation
• Dots to right side of a row provide access to specific actions for that row, while dots at
top left provide actions for all selected rows

As mentioned previously, you can either assign the domain to create the domain profile or
choose Assign Later. Then, follow the progress task list on the left side of the page.

In future releases, default server policies will be included, but they are intended to serve as templates to modify or save; they are not automatically deployed by default as they were in the previous management tools.
There are many different IMM Server Policies, but we will focus only on the core required or recommended ones. Some policies act as super policies that reference other, smaller policies; an example is the LAN Connectivity policy. There are no default policies today, though, so it is essential to note that if you neglect to include a policy in the server profile, it will not automatically deploy.

The graphic is showing an example of the different policy filters and the policies available for
each filter.

The following list describes the server policies that you can configure in Intersight.
• BIOS Policy
• Boot Order Policy
• IMC Access Policy
• LAN Connectivity Policy
• SAN Connectivity Policy
• Local User Policy
• Virtual KVM Policy
• Virtual Media Policy
We will take a look at how to set all of these up in the upcoming slides. Some UCS customers
may not be familiar with integrated management controller (IMC) access policy, as it comes
from the standalone C-series world. However, it is part of every type of server profile, so it is
important to discuss that with them.

LAN Connectivity Policy has several prerequisite pools or policies that must exist before you can
create the LAN Connectivity policy. You will need a MAC address pool, an Ethernet Network
Group policy, an Ethernet Network Control policy, Ethernet QoS policy, and Ethernet Adapter
policy.
• Policies listed must exist before creating a LAN Connectivity policy

Enter a name for the MAC pool.

You will then add MAC pool blocks. You can create as many blocks as needed per pool. The first three octets must comply with the Cisco OUI, but the last three octets can be customized. Each block can hold up to 1000 MAC addresses, and it is easy to add additional blocks where necessary.
• Create as many blocks as needed per pool
• Create 1 block for 1 pool, or create another ID pool of 1 block (lots of flexibility)
• You may need different ID pools for different organizations
• You cannot steer blocks to certain server profiles in Cisco Intersight, as was done in UCS Central
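The same MAC pool with one block can also be created through the API. The sketch below is a minimal example; the macpool.Pool object type and the From/Size block fields follow the public schema but should be verified against the API reference, the MAC prefix simply illustrates the Cisco OUI requirement, and the intersight-auth helper package is assumed.

    import requests
    from intersight_auth import IntersightAuth  # assumed community helper for API-key signing

    AUTH = IntersightAuth(api_key_id="<your-api-key-id>", secret_key_filename="SecretKey.txt")
    BASE = "https://intersight.com/api/v1"

    org = requests.get(
        f"{BASE}/organization/Organizations",
        auth=AUTH,
        params={"$filter": "Name eq 'default'", "$select": "Moid"},
    ).json()["Results"][0]

    # One block of 256 addresses starting from a Cisco OUI (00:25:B5) prefix.
    payload = {
        "Name": "mac-pool-a",
        "Organization": {"ObjectType": "organization.Organization", "Moid": org["Moid"]},
        "MacBlocks": [{"From": "00:25:B5:A0:00:00", "Size": 256}],
    }
    resp = requests.post(f"{BASE}/macpool/Pools", auth=AUTH, json=payload)
    resp.raise_for_status()
    print(resp.json()["Moid"])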

Enter a policy name for the Ethernet Network Group policy (VLANs).

On the Policy Details page, select either a single VLAN or a range of VLANs that you want to add to the vNIC. You can create multiple policies for the different VLANs or ranges used by the different vNICs found in the server profile. They will also have to be present in the Ethernet Network Group Policy and be uplinked from the fabric interconnects (FIs). Within the FIs you can have more than one native VLAN, but you cannot have multiple native VLANs going upstream. Once all of the entries are made, click Create.
• You cannot add comma-separated VLANs - enter either an individual VLAN or a range
• Within the FIs, you can have more than one native VLAN, but you can't have multiples going upstream
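For reference, the same VLAN definition can be expressed through the API as an Ethernet Network Group policy. The VlanSettings shape shown below (AllowedVlans as a range string plus a NativeVlan) mirrors the public schema but should be verified against the API reference; the VLAN numbers are placeholders and the intersight-auth helper package is assumed.

    import requests
    from intersight_auth import IntersightAuth  # assumed community helper for API-key signing

    AUTH = IntersightAuth(api_key_id="<your-api-key-id>", secret_key_filename="SecretKey.txt")
    BASE = "https://intersight.com/api/v1"

    org = requests.get(
        f"{BASE}/organization/Organizations",
        auth=AUTH,
        params={"$filter": "Name eq 'default'", "$select": "Moid"},
    ).json()["Results"][0]

    # Allow VLANs 100-110 on the vNIC, with VLAN 100 as native (example values).
    payload = {
        "Name": "prod-vlans",
        "Organization": {"ObjectType": "organization.Organization", "Moid": org["Moid"]},
        "VlanSettings": {"NativeVlan": 100, "AllowedVlans": "100-110"},
    }
    resp = requests.post(f"{BASE}/fabric/EthNetworkGroupPolicies", auth=AUTH, json=payload)
    resp.raise_for_status()
    print(resp.json()["Moid"])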

For the Ethernet Network Control Policy, just like the others, you will need to provide a policy
name.

On the Policy Details page, enable the CDP on an interface. Leave all other fields set to their
defaults unless told otherwise by the network administrator.
• Enable CDP (Cisco Discovery Protocol)
• Leave defaults as is

Enter a name for the Ethernet QoS policy.

Keep defaults as is unless told otherwise by the network administrator, or if modifying Class of
Service (CoS) or enabling Jumbo Frames end-to-end.
• Data center ethernet = Cisco’s particular implementation of Jumbo Frames
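If jumbo frames are required end to end, the Ethernet QoS policy is where the vNIC MTU is raised. A minimal sketch of the same policy via the API is shown below; the Mtu property on vnic.EthQosPolicy reflects the public schema but should be verified against the API reference, and the intersight-auth helper package is assumed.

    import requests
    from intersight_auth import IntersightAuth  # assumed community helper for API-key signing

    AUTH = IntersightAuth(api_key_id="<your-api-key-id>", secret_key_filename="SecretKey.txt")
    BASE = "https://intersight.com/api/v1"

    org = requests.get(
        f"{BASE}/organization/Organizations",
        auth=AUTH,
        params={"$filter": "Name eq 'default'", "$select": "Moid"},
    ).json()["Results"][0]

    # Jumbo-frame Ethernet QoS policy (MTU 9000); other settings left at defaults.
    payload = {
        "Name": "jumbo-qos",
        "Organization": {"ObjectType": "organization.Organization", "Moid": org["Moid"]},
        "Mtu": 9000,
    }
    resp = requests.post(f"{BASE}/vnic/EthQosPolicies", auth=AUTH, json=payload)
    resp.raise_for_status()
    print(resp.json()["Moid"])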

For the Ethernet Adapter policy, provide a name as with all other policies. Toward the middle of the screen, you will see a blue link labeled Select Default Configuration. Once clicked, it opens a menu on the right, where you select the correct operating system profile.
• Well tested with different operating system vendors
• Usually no need to make changes aside from selecting the correct OS profile

Once selected, you will see the selected operating system profile listed where the blue link was
before. Verify that it is correct, and then click Next.

With the Ethernet Adapter policy, do not change any of the defaults unless told otherwise. Properties will automatically fill in based on the operating system profile that you chose. Click Create.
• Properties are filled in automatically once the correct operating system profile is chosen
• You should not change any settings below those properties

For LAN Connectivity policy, provide a policy name and be sure to click FI Attached and then
click Next.
• Be sure to click FI Attached

It is recommended that you choose Auto vNICs Placement. If you select this option, vNIC placement is done automatically during profile deployment. This method is the simplest and hides the complexities from the end user; they are taken care of by the software. However, you can choose Manual vNICs Placement if you prefer. If you select this option, you must manually specify the placement for each vNIC. After making your selection, click Add vNIC.
• There are different choices here about how you want to build the policy
• Selecting Auto Placement round-robins the vNICs accordingly
• Simpler, and hides any complexities from the end user; they are taken care of by the software
• Click Add vNIC

Name the vNIC0, then add your MAC ID pool or choose Static. Choose the fabric port to which
the vNICs are associated. Then, attach previously built policies, and click Add.
• Provide vNIC0 with name (personal preference)
• Add MAC ID Pool or choose “Static”
• Common to have even numbers for your A Fabric and odd numbers for B Fabric
• Then attach previously built policies
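The vNIC added in this step corresponds to a vnic.EthIf object attached to the LAN Connectivity policy. The sketch below shows only a subset of the references the UI attaches (MAC pool, QoS policy, fabric side); property names follow the public schema but should be verified against the API reference, the MOID variables are placeholders resolved as shown earlier, and the intersight-auth helper package is assumed.

    import requests
    from intersight_auth import IntersightAuth  # assumed community helper for API-key signing

    AUTH = IntersightAuth(api_key_id="<your-api-key-id>", secret_key_filename="SecretKey.txt")
    BASE = "https://intersight.com/api/v1"

    # MOIDs resolved earlier with $filter queries (placeholders here).
    lan_policy_moid = "<lan-connectivity-policy-moid>"
    mac_pool_moid = "<mac-pool-moid>"
    qos_policy_moid = "<eth-qos-policy-moid>"

    # vnic0 on fabric A, MAC assigned from a pool; only a subset of references is shown.
    payload = {
        "Name": "vnic0",
        "Order": 0,
        "MacAddressType": "POOL",
        "MacPool": {"ObjectType": "macpool.Pool", "Moid": mac_pool_moid},
        "Placement": {"SwitchId": "A"},
        "EthQosPolicy": {"ObjectType": "vnic.EthQosPolicy", "Moid": qos_policy_moid},
        "LanConnectivityPolicy": {"ObjectType": "vnic.LanConnectivityPolicy", "Moid": lan_policy_moid},
    }
    resp = requests.post(f"{BASE}/vnic/EthIfs", auth=AUTH, json=payload)
    resp.raise_for_status()
    print(resp.json()["Moid"])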

Follow the same steps as you did with vNIC0 for VNIC redundancy by creating a new vNIC1
interface.
• Add our next vNIC (vnic1)
• Since it is an odd number, choose B Fabric
• Add MAC pool and attach previously built policies just as you did with vNIC0

Follow the same steps as you did with vNIC0 and vNIC1.
• vNIC2 is an even number, so here it is added to the A Fabric

Again, follow the same steps as you did with previous vNICs.
• Follow same steps as with vNIC1 and vNIC2 for any remaining vNICs

The vNICs you created should now be listed. Verify your settings before clicking Create.
• Here, there are four vNICs (A, B, A, B)
• In auto placement mode, vNICs are automatically distributed between adapters
• Spread vNICs across fabrics for performance and high availability
• Build vNICs in pairs for fault tolerance

Another way to verify that everything is where it should be is to open the policy in Edit Mode. Click Manual vNICs Placement and select the Graphic vNICs Editor button.
• You can use Manual Mode and provide the requisite slot ID of the adapter and the PCI order

Using the Graphic vNICs Editor, you can create and change vNICs right from here in a more visually clear way. It is also a great way to verify where each vNIC is located. However, please note that if you chose auto-placement first, clicking back to Manual Placement Mode removes the slot ID and PCI order that auto-placement set for you. If you make any changes in the Graphic vNICs Editor, be sure to click Save.
• Build and modify vNICs right from the graphical editor
• Great way to verify where each vNIC is located
• New feature
• Clicking back to Manual Mode wipes out the slot ID and PCI order that auto-placement set

For Storage Area Network (SAN) Connectivity policies, the flow is very similar to that of the LAN Connectivity policies. However, it does require sub-policies such as the Fibre Channel Network policy, Fibre Channel QoS policy, and Fibre Channel Adapter policy.
• Similar to the flow for the LAN Connectivity policy

To create a Server Profile, you can assign a server at the beginning of the creation wizard or assign it later. If you assign it later, you will have to edit the Server Profile, navigate back to that original option, and assign the server. Click Next. Choose the BIOS, Boot Order, and Virtual Media policies in the following step. Select the IMC Access, Local User, SNMP, Syslog (if needed), and Virtual KVM policies in the next step. Skip Step 5. Select the LAN and SAN Connectivity policies in Step 6. Review and deploy your new server profile.
• Instructor can show an example of a server profile in the UI
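Under the covers, each step of the wizard attaches policies to the server profile's policy bucket. The sketch below creates an FI-attached server profile and attaches two example policies; the TargetPlatform/PolicyBucket fields and the policy ObjectType names follow the public schema but should be verified against the API reference, the MOIDs are placeholders, and the intersight-auth helper package is assumed.

    import requests
    from intersight_auth import IntersightAuth  # assumed community helper for API-key signing

    AUTH = IntersightAuth(api_key_id="<your-api-key-id>", secret_key_filename="SecretKey.txt")
    BASE = "https://intersight.com/api/v1"

    org = requests.get(
        f"{BASE}/organization/Organizations",
        auth=AUTH,
        params={"$filter": "Name eq 'default'", "$select": "Moid"},
    ).json()["Results"][0]

    # Example policy MOIDs resolved earlier (placeholders).
    bios_moid = "<bios-policy-moid>"
    lan_moid = "<lan-connectivity-policy-moid>"

    payload = {
        "Name": "esx-host-01",
        "TargetPlatform": "FIAttached",
        "Organization": {"ObjectType": "organization.Organization", "Moid": org["Moid"]},
        "PolicyBucket": [
            {"ObjectType": "bios.Policy", "Moid": bios_moid},
            {"ObjectType": "vnic.LanConnectivityPolicy", "Moid": lan_moid},
        ],
    }
    resp = requests.post(f"{BASE}/server/Profiles", auth=AUTH, json=payload)
    resp.raise_for_status()
    print(resp.json()["Moid"])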

In order to create a server profile, first click Profiles. Be sure you are located under the UCS
Server Profiles tab. Click Create UCS Server Profile. Then, follow the Progress task list just as
you did when assigning domain earlier.

For server profiles there will be templates coming post GA. You can clone one-to-many server
profiles. Cloned server profiles take on the exact supporting policies as the original. If you make
a policy change used by cloned server profiles, then you will get a notification that says “Not
Deployed Changes” on all your profiles. It is also important to note that clones are not
templates.
• Most UCS customers well-versed in templates
• Cloned server profile acts much like template

If you make a disruptive server profile change, it will not automatically reboot the endpoint.
You will still need to deploy the profile again to pass down the changes to the endpoint server.
Then, reboot the server to pick up those disruptive changes. This is a safer, smoother way to
operate.
• In IMM world, if you make disruptive change, you will receive message “Not Deployed
Changes”

To deploy a server profile, you will want to select the profile in its profile list. Choose the ellipsis
(…) menu and click Deploy. You can verify that it deploys correctly via the Requests list at the
top of the page.
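Deployment can also be triggered programmatically by updating the profile's Action field, which is broadly the mechanism automation tooling uses. The sketch below PATCHes a profile by MOID; the "Deploy" action value follows the published server.Profile schema but should be verified against the API reference, and the intersight-auth helper package is assumed.

    import requests
    from intersight_auth import IntersightAuth  # assumed community helper for API-key signing

    AUTH = IntersightAuth(api_key_id="<your-api-key-id>", secret_key_filename="SecretKey.txt")
    BASE = "https://intersight.com/api/v1"

    profile_moid = "<server-profile-moid>"  # resolved earlier by name

    # Request deployment of the profile; progress then appears under Requests in the UI.
    resp = requests.patch(
        f"{BASE}/server/Profiles/{profile_moid}",
        auth=AUTH,
        json={"Action": "Deploy"},
    )
    resp.raise_for_status()
    print(resp.json().get("Action"))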

Just like server profiles consume server policies, chassis profiles consume chassis policies. Right
now, there are only two - the IMC Access policy and the SNMP policy. However, chassis profiles
will take on a more significant role in the future.
• Just like server profiles consume server policies, chassis profiles consume chassis
policies
• Right now, only two policies

Utilizing server profile templates is a good way to duplicate server profiles consistently. Using
templates will also reduce errors made when creating multiple server profiles.
To create a new template, navigate to Configure > Templates and click Create UCS Server
Profile Template.

On the General page, you will need to name the server profile template and select the server
from the Target Platform. Click Next.

On the Compute Configuration page, select the compute policies and click Next.

On the Management Configuration page, select the management configuration policies and
click Next.

On the Storage Configuration page, select the storage policies and click Next. Once all of the information is entered, you will be able to click Derive Profiles to derive server profiles from this template.

On the Summary page, review all of the information that was entered for accuracy. Click Derive
Profiles to derive server profiles from this template.

Under the Server Assignment area, you can choose Assign a Server, or choose Assign Server Later and assign a server to the server profile later. Lastly, you can create several profiles derived from that template, or derive them immediately against assigned servers.

There are several things that you can do with a template. First, you can derive profiles from that
template. When selecting the template, you will see all of the existing servers. You can also
create profiles.

There are several things that you can do with a template. First, you can Derive Profiles from that template. When selecting the template, you will see all of the existing servers and can create profiles. To do this, choose the number of profiles to derive and then select the organization. You can clone across orgs, but you cannot derive across orgs. The system generates a name, but you can change it to whatever you want. Click Next to create the additional server profiles.

You can also clone a single profile from one template to another template. Cloning allows you to reuse the same configuration for multiple templates, and each copy can be distinct. At this time, you cannot clone a template into another organization when the template is consuming policies from a given organization.

You can delete a template without deleting the server profiles that were derived from it.

You can edit a template at any time; follow the Progress menu to edit it. Any edit made in the template is reflected in the derived server profiles. For example, if you change the boot policy from local boot to SAN boot, the derived server profiles will show undeployed changes on those servers. Cloning has been around since the beginning of IMM: if you change a policy reference, the reference changes to a different policy MOID, and when you clone, you have to make that change on each one of the clones. Templates offer more flexibility: when you change something in a template, all of the server profiles created from that template change.

Let's look at server profiles. A server profile can be detached from a template by selecting Detach from Template if you do not want it to receive updates via the template. A detached server profile can always be reattached to a template, and a reattached profile takes on all of the template properties. This is safer because you are not automatically pushing changes down; it is a different construct architecture in which live changes are not pushed down. If you change something in a server profile, regardless of whether it is standalone or attached to a template, you will get undeployed changes. There will be additional capabilities in the future, including server resources and resource pools, which will help with deploying at scale. For an initial deployment, stand up a template for the servers, create one-to-many profiles from it, and leverage the pool for assignment to eventually automate a mass deployment.

To create a template from an unassociated profile, select a server profile that has not been assigned. If the profile is attached to a template, you must detach it to make it a free-standing profile before creating a new template from it; if you select the profile, the Template field shows whether one is attached. You can create a template from ordinary server profiles or from a gold-standard profile; creating a gold-standard profile helps you perform all of the operations with just one template.

In this section we show you how to submit Intersight feedback.

You can submit feedback within the UI, rating your experience from poor to excellent. You can
also leave a comment or enhancement in free-form text.

This is a presentation of the new UCS X-Fabric and 5th Generation Network Fabric hardware
and capabilities introduced in March 2022 for the UCS-X Modular System. Different pieces
include the Fifth Gen Fabric and a new Front Mezzanine component for the existing compute
node and accelerator node.

This training introduces the X-Fabric and its uses. The X-Fabric enables us to use the new X440p PCIe node, which initially provides support for NVIDIA GPUs. In addition to the NVIDIA GPUs and the PCIe node, we are also adding support for a multitude of NVIDIA GPUs (T4, A16, A40, and A100) on the accelerator node and the T4 on the compute node in the new Front Mezz. The last topic will be a discussion of the 5th Generation Fabric.

X-Series was released in 2021 with new modular abilities to accommodate more workloads. Now, in 2022, we are expanding on that capability, as we said we would, with support for GPUs and additional network fabric bandwidth and capabilities. This is all along the lines of bringing in more applications that have previously required specialized infrastructure or specialized rack servers because of the need for GPUs or specific network connectivity.

Here is the range of capabilities that are in the system and that we are adding over time. In this addendum, we will be discussing support for high-wattage GPUs and 100 Gb networking. We are continuing to fill out the continuum of technologies and capabilities within the UCS X-Series, which will provide a single place to run a wide variety of applications.

On the left of this image, in blue, are the capabilities we delivered at the beginning. On the
right is our expansion of new capabilities, with huge amounts of in-chassis networking, 100
gig end-to-end flows, and up to 24 GPUs in a single chassis supporting these new workloads.

The X-Fabric provides the new X9416 X-Fabric modules and the X440p PCIe nodes, on which you can install a variety of GPUs today. The X-Fabric as a whole consists of the XFM modules, which connect the compute nodes with the new PCIe nodes so that high-powered GPUs, or high quantities of GPUs, can be attached to servers. In the near term, X-Fabric is about providing a cable-less PCIe fabric within the chassis. In the future, it could be a cable-less CXL fabric. X-Series provides flexible PCIe fabrics, without the cables, in a modular architecture.

Previously, in the X-Series chassis, the two rear bottom module slots held XFM blanks, which provided only fans. These can be replaced with the new X9416 X-Fabric Modules (XFMs), which provide connectivity between pairs of adjacent slots in the front of the chassis: an x16 Gen4 PCIe link from each CPU runs through each X-Fabric module and into the adjacent slot, creating four pairs of two slots.
As an overview of the compute node layout, the CPUs run front-to-back in the middle, with rows of memory DIMMs alongside them. The CPUs are aligned with the four chassis fans, and the memory is aligned with the fabric modules.
With three cooling fans on these modules, the overall airflow pulls from the front of the chassis through the compute nodes, through the fabric modules, and out the back. Because the CPUs are the predominant heat generators in the system, it becomes clear why the chassis fans are in the back, with no other electronic circuitry in-line, so that maximum heat egress from the CPUs can be achieved.
The fabric modules then pull the heat from the memory as well as from any components that are on the fabric modules themselves. The power supplies have their own fans to complete the full cooling solution.

For customers who have an existing chassis with the X-Fabric module blanks installed today, there is no problem hot-swapping them in a running system for the new X9416 modules to provide support for the new PCIe nodes. Because there is no backplane in the chassis, when the X-Fabric modules are placed in the back, they connect directly to the compute nodes in the front; each X-Fabric module has a connector to each of the compute nodes.
If a node is removed and reinserted in the front, it mates and un-mates directly with its corresponding fabric modules as well, the network fabric being at the top and the X-Fabric being at the bottom. The greatest benefit here is that no configuration is required; no new policies or other settings are necessary in Intersight to configure and consume the new X-Fabric module.

When we look at the internal views of the X-Fabric module, we see it is fairly simple, with no active components in the PCIe path: just the three fans and a circuit board that provides traces between adjacent pairs of slots.
These are the same fans as on the IFM or on the original blank. Each XFM has eight connectors on the back, one to each node slot, which gives each node slot a stack of two connectors, one from each XFM.

Next, we will discuss how the compute node itself makes that connectivity, which is through
the X-Fabric Mezzanine card.
In the Mezzanine card, we have that distinction of the top versus the bottom. At the top, we
have the mLOM card, the VIC, providing the fabric connectivity. It looks very similar. It has
these same two connectors. When it's installed, its top connector connects to the top IFM on
the A-fabric, and its bottom connector connects to the B-side in IFM 2.
Similarly, on the bottom of the blade, we have the Mezzanine card, and it has the two
connectors. The one on the right will connect to XFM 1, which is at the top, and the bottom
one will connect to XFM 2 at the bottom. If it's a VIC, for instance, with an ASIC, the
connectors don't connect to the VIC. They just pass through the card to the connector that
goes to the compute node motherboard and directly to the CPUs as seen in the diagram.
From CPU1, there is a PCIe Gen4 x 16, and the same happens from CPU2, passing through
this board to those connectors.
There are some PCIe re-timers in this path. If this were a VIC card, there would be an ASIC here, and the Mezzanine card's X-Fabric connectors would not be used. The VIC 14825 mezzanine card has a bridge that connects it to the mLOM at the top. Even though the Mezz is at the bottom, it is a network card, so it has to talk to the IFM; that is done through the bridge card and not through the Mezzanine card's X-Fabric connectors.

The connector for the bridge is on the bottom of the board and cannot be seen in this diagram. You would use that VIC if you were using the 25G IFM and had expanded it to receive a full 200G, 100G per fabric (4 x 25G). But if you are not using that card, if you are doing 100G, or if you are using the new fifth-generation VIC (which is 100G to each fabric with only the mLOM), then you would use this new PCIe pass-through Mezz. It is basically the same card, populated only with re-timers, but it does not have an ASIC on it.

Intersight XFM Inventory
• XFM inventory is found in the Chassis view
• No Actions for the X9416


When you have installed the X-Fabric modules, they show up in the chassis inventory below the IO modules, which is where you find your IFMs. Now that you have the XFMs, you can see the pertinent details on them, such as the serial number. Below that, you see the fan modules and their operational state.

The XFM contains dual inline radial fans, so even though it is a single fan module, it has two fans, like the chassis fans.
In this diagram you can see how the PCIe nodes show up. Note that they appear on the server inventory page, where there is a new tab labeled GPUs. You can see the server in Slot 7 and the PCIe node in Slot 8, which means it is connected to the compute node in Slot 7. You can also see two GPUs, GPU 1 and GPU 4, which is an artifact of how the slots on the PCIe node are mapped.

Here you can see which card you have in the mezz slot on the compute node connecting to
the XFM.

Now that we’ve established the X-Fabric, it is made available through our new X440p PCIe
node. The front bezel is very basic, distinguishable from a blank because it doesn't have the
two thumb and finger holes to pull it out. Instead, it actually has the standard latching
mechanism and a Health and Power LED at the bottom.
This node provides either two or four PCIe slots - standard, non-proprietary slots for off-the-shelf PCIe devices - through two riser cards that are always present. You choose your risers when you order the node, either Type A or Type B, always two of the same type. In the diagram, you can see the two riser positions: 1A and 2A, or 1B and 2B at the bottom. Type A is for the high-power GPUs, supporting the NVIDIA A16, A40, or A100.
Type B supports single-wide, full-height, full-length cards. However, the GPUs we are supporting on it are the NVIDIA T4 Tensor Core GPUs, which are half-height, half-length and do not take up much space, so two slots are stacked. The key difference between the risers is that Type A has a single slot plumbed as x16, while Type B has two slots; the x16 is bifurcated into two x8 slots in those risers. That is all automatic and does not require any configuration; the firmware on the compute node recognizes the device and properly configures the PCIe topology as the server boots. Finally, NVIDIA T4 GPUs do not require a power cable; only the high-power GPUs do, and the power cables come with the risers. Because you must order the risers when you order the PCIe node, all of the cables should be immediately available.

One key item: NVIDIA does not support more than one model of GPU attached to a server at the same time. That is a restriction of their firmware and driver software. Because these PCIe connections are static, the PCIe node is always attached to the same compute node, and the server sees all of the GPUs in the PCIe node all the time; there is no dynamic configuration. Therefore, mixing GPU models in the nodes is not allowed at all, because otherwise we would be violating NVIDIA's mixing restrictions. Conceivably, in the future, we could introduce support for other types of PCIe cards, giving you, for example, a GPU in one riser and an FPGA card in a different riser.
In the diagram you can see what looks like a Mezzanine card with two fabric connectors. That is exactly what it is, but it is a fixed, static component of the PCIe node. You do not need to order it separately; it is integrated, giving you the physical connectivity.
You will also notice a mounting bracket at the top of the board, where the mLOM would be on a compute node. That space is reserved on the PCIe node; the architecture allows it to be used later for a PCIe device that also has the ability to talk out over the fabric, for example.
Overall, the X-Fabric provides two key use cases. One is that we have expanded the amount
of hardware we can connect to an individual server. The second is that we can have pools of
hardware or additional resources that we can dynamically assign to different servers and
potentially share among them.
This first generation of X-Fabric is focused on that first use case. By doing that, it keeps things simple: we do not need the dynamic functionality, a switching architecture, and all of the management logic and complexity around configuration. We no longer have to buy a rack server to carry that workload and can focus on building a bigger server in our modular chassis with off-the-shelf items, expanding the size of the compute nodes that can be deployed.
One thing to note is that there are no bulkhead connectors on the X-Fabric modules or, of course, on the PCIe nodes themselves, so your PCIe cards cannot have a bulkhead connector on them. There is no cabled architecture, so network cards or GPUs with display ports are not going to be supported. For now, you get an accelerated ability to push data into the PCIe nodes so that computation can be done there, and to pull the results back out of the card; that is the best way to think of this X-Fabric in its first release of functionality. Initially this is with GPUs, but of course other devices, such as FPGA-based accelerators and security devices, are all things that would be in consideration for future capability on the X-Fabric. Instead of putting compute into the expansion, we might also put storage into the expansion.
This covers the three server technology pillars of the data center: networking, which is
carried by the fabric at the top; compute, which is carried by the compute node and, optionally,
the GPUs on the expansion node; and storage, which is provided today on the front Mezz and
potentially in the future by a storage node added to the X-Fabric. The next step of
development in X-Fabric is storage expansion.
UCS X-Series GPU Portfolio
Here you can see more detail on the NVIDIA products that we are supporting, including which
use cases they apply to - compute or VDI virtual workstation workloads - and their capabilities,
including power consumption and size. We have three options for VDI, specifically
virtual workstation VDI, because that is where GPUs come into play.
The amount of frame buffer per server is used to determine the number of users that can
generally be accommodated per server with those GPUs.
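As a rough illustration of that sizing logic - not Cisco or NVIDIA guidance, and with placeholder numbers - the calculation looks like this:

```python
# Rough sizing sketch only: total frame buffer per server, divided by the vGPU
# profile assigned per user, bounds the number of VDI users per server.
# All numbers below are placeholders, not vendor sizing guidance.
def max_vdi_users(gpus_per_server: int, frame_buffer_gb_per_gpu: int,
                  profile_gb_per_user: int) -> int:
    total_frame_buffer = gpus_per_server * frame_buffer_gb_per_gpu
    return total_frame_buffer // profile_gb_per_user

# Example: two 16 GB GPUs with a 2 GB vGPU profile -> roughly 16 users per server.
print(max_vdi_users(gpus_per_server=2, frame_buffer_gb_per_gpu=16,
                    profile_gb_per_user=2))
```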
In this diagram you will note the topology of a fully populated X-Series chassis with four pairs
of compute nodes and X440p PCIe nodes in two different groups. In Group 1 on the left is
the high-power GPU riser with two NVIDIA GPUs. Then, in Group 2 on the right are the nodes
with Riser B, and four NVIDIA GPUs.
We are using X210c’s with the standard six drives and either two or four NVIDIA GPUs. On all
of these compute nodes, in pink, you can see some sort of an X-Fabric Mezz. There does
have to be a card in the Mezz slot, but it doesn't matter which kind it is. We have two cards
that can be purchased that go in that Mezz slot today that provide X-Fabric connectivity. Any
Mezz card we introduce in the future will continue to provide that.
As you can see, CPU1, with the blue traces, goes down to XFM1; on every slot there is a
connection from the slot into XFM1. Then there is a connection from CPU2 down to XFM2,
which connects to Riser 2 in the adjacent PCIe node. We are used to redundancy on the IFM
side, on the fabric side: you have the A-side and the B-side so that, if your profiles are
properly configured and the A-side fails, all your traffic fails over to the B-side.
But this is different with PCIe. Think of a typical rack server with two risers and multiple PCIe
slots - for example, Riser 1 has three PCIe slots and Riser 2 has three more. There is no
failover; there is only expandability. It gives device attachment to the CPUs on a fixed basis:
usually Riser 1 connects to CPU1 and Riser 2 connects to CPU2, and you populate them as
needed to meet whatever your IO requirements are. We are seeing the same thing here, but
with a great advantage.
X-Series, the modular architecture, is intended to bring workloads back from the rack. AI,
VDI, and analytics are seeing massive growth in the data center, and they are primarily
supported by rack servers. Customers chose racks because chassis-based, blade architectures
could no longer accommodate the unique needs of certain applications: when we needed
high-power GPUs in a server, our blade chassis couldn't do that, and we had to go buy rack
servers. Now it can, so you have fewer platform types proliferating in the data center and can
bring those workloads back into the modular chassis.
There is another benefit of the X-Series architecture. Previously, the blade architectures
could only support NVIDIA P6 GPUs, an MXM form factor intended for VDI. Since X-Series is a
great refresh alternative for the blade install base, the availability of the T4, A16, A40, and
A100 now also expands the capability available to the existing blade install base targeted for
refresh.
Because these GPUs are attached to the standard PCIe topology, they show up on the PCIe
tree for the compute node. The firmware is managed through the compute node just like
GPUs on a rack server. When you do a firmware upgrade on the compute node, if there is
new firmware for those GPUs, it will be updated right away along with the compute.
In the diagram you will see a more complex configuration. This is a 100% valid configuration:
a fully populated chassis, but not all slots are compute/PCIe-node pairs. An important
question comes up when you install the X-Fabric modules that connect the pairs of slots: are
you now limited to four compute nodes, each of which has to use a PCIe node? The answer is
no; you still have full flexibility.
In this example, in the first pair of slots, notice there is no Mezzanine card. Because only
compute was needed with no expansion, a Mezzanine card was not bought, and because there
is no Mezzanine card, there is no connection to the X-Fabric. This is not a problem, since no
connection is required. In the second pair of slots, notice a compute node with the new GPU
Mezzanine and two GPUs in that compute node. Again, this is not a problem, and the
compute nodes can be populated as needed.
The second pair of slots holds servers configured identically to the first pair, except they have
an X-Fabric Mezz. The reason for the X-Fabric Mezz might be that the owner used VIC 14825
Mezz cards because the end desire was 100G to both the A and B fabrics. The Mezzanine cards
provide that networking, but without a PCIe node, the PCIe link between the slots is never
brought up, so there is no connection between them. Those compute nodes cannot see each
other, even though the physical connectivity has actually been made.
In the third pair of slots, note a compute node actually paired with a PCIe node, along with
the X-Fabric Mezz that gets that connection brought up to the adjacent PCIe node.
Finally, in the far-right pair of slots, note both the GPU node with Riser B supporting four
GPUs and, on the compute node, the new GPU Mezz with two more GPUs. We are
supporting NVIDIA T4 GPUs in this configuration with Riser B, and the GPU Mezz also
supports the T4. This gives a total of six T4 GPUs attached to a single compute node.
The great advantage here is that there is no configuration necessary. Once the hardware is
populated, it will be recognized by the system with everything configured properly.
This diagram shows the block diagrams, which of course depend on which riser you have.
Riser A gives two x16s, a very simple topology: an x16 link comes from XFM1, which is
connected to CPU1 over on the compute node, and the same is done with CPU2, with very
straight connectivity. The result is one GPU per riser.
For Riser B, the only distinction is that the x16 gets bifurcated into two x8s. The result here is
two GPUs on each side. Now there are two GPUs on CPU1 in the PCIe node and two GPUs on
CPU1 in the front Mezz.
Note that there are four GPUs on CPU1 and two on CPU2. While this is not perfectly
balanced, with the current generation of processors and the number of PCIe lanes, this does
not cause a problem. PCIe lanes are not being oversubscribed; every device has full support
of dedicated PCIe lanes and plenty of horsepower in the CPUs to support them.
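To make that balance point concrete, here is an illustrative tally of the Riser B plus GPU front Mezz case described above (the device names are mine):

```python
# Illustrative bookkeeping of the GPU-to-CPU attachment described above.
attachments = [
    # (device, CPU, dedicated Gen4 PCIe lanes)
    ("PCIe node Riser 1B, GPU slot 1", "CPU1", 8),   # via XFM1
    ("PCIe node Riser 1B, GPU slot 2", "CPU1", 8),
    ("Front Mezz T4 #1",               "CPU1", 8),
    ("Front Mezz T4 #2",               "CPU1", 8),
    ("PCIe node Riser 2B, GPU slot 1", "CPU2", 8),   # via XFM2
    ("PCIe node Riser 2B, GPU slot 2", "CPU2", 8),
]

for cpu in ("CPU1", "CPU2"):
    gpus = [a for a in attachments if a[1] == cpu]
    lanes = sum(a[2] for a in gpus)
    print(f"{cpu}: {len(gpus)} GPUs, {lanes} dedicated PCIe lanes")
# CPU1: 4 GPUs, 32 lanes; CPU2: 2 GPUs, 16 lanes - unbalanced, but nothing is oversubscribed.
```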
GPUs will show up under the compute node inventory and include the PCIe node details.
The GPU details call out the model number, vendor, and firmware version among other
things.
On the compute side is the new front Mezz. This is not a new compute node, but simply a
different option for a slot that has, to date, been just storage-related. Now, however, it is both
storage- and compute-related, because it supports GPUs. It supports two half-height, half-
length, off-the-shelf PCIe cards in addition to two U.2 NVMe drives. It is NVMe only; there is
no SAS or SATA controller. It is only PCIe, and it does support Intel VROC, giving you the
ability to create a VROC-based RAID1 volume between those two drives.
The Mezzanine initially supports the NVIDIA T4 GPU. Though it just supports the T4, there is a
unique trait to this T4: it has a different PID from the GPU that goes into the PCIe
node. When you order the PCIe node and choose Riser B, you get the option of including
T4 GPUs that are 100% off-the-shelf. When you order the front Mezzanine, the GPU Mezzanine,
you also get the option of ordering one or two T4 GPUs, but they have this unique PID.
The reason is cooling. PCIe cards are generally cooled from front to back:
passive cards without a fan go in a server, and the air is pulled in the front of the server,
flows lengthwise over the card, and exits through the card's bulkhead plate. Because these
Mezzanine cards are installed vertically, the air travels from top to bottom - from the top of
the card through to the bottom, where the PCIe connector is.
Because of that, we populate them with a different heat sink that accommodates that
different airflow. Besides the different PID, they are otherwise 100% functionally identical:
no configuration is required, and they use the same drivers and the same firmware.
Your only concern is getting the card with the right heat sink on it.
In this diagram, note that, like all of the other front Mezz options, the GPU Mezz gets 24 lanes
of Gen4 PCIe from CPU1. That is split into 2 x4s for the two NVMe drives and 2 x8s for the
two GPUs, all within the compute node itself.
Intersight Front Mezz Inventory Details
• Details for GPUs are shown in the server inventory
This diagram shows how the PCIe nodes appear in Intersight. They are on the server
inventory page, under a new tab labeled GPUs. Here we are looking at the server in Slot 7,
and we see the PCIe node in Slot 8 of the chassis, which means it is connected to the compute
node in Slot 7. You can also note two GPUs, GPU 1 and GPU 4; this numbering is an artifact of
how the slots on the PCIe node map out.
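A tiny sketch of that slot pairing, assuming (as in this example) the PCIe node sits in the even-numbered slot of a pair and serves the compute node in the odd-numbered slot beside it:

```python
# Illustrative sketch of the slot pairing shown above (assumes the PCIe node
# occupies the even slot of a slot pair, as in the Slot 7 / Slot 8 example).
def paired_compute_slot(pcie_node_slot: int) -> int:
    if pcie_node_slot % 2 != 0:
        raise ValueError("In this sketch, PCIe nodes sit in the even slot of a pair")
    return pcie_node_slot - 1

print(paired_compute_slot(8))  # -> 7, the compute node served by this PCIe node
```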
The Fifth Gen fabric brings Cisco clearly back into the leadership position for networking in a
modular architecture, otherwise known as a blade architecture.
The new Fifth Gen Fabric Interconnect is the 6536. It provides a full 100G fabric end to end,
all the way from the top of the rack through the Fabric Interconnect uplinks, the fabric IO
module - the IFM in the case of X-Series - and down to the VIC on the blade. If you are
familiar with Nexus switches, it looks a lot like the 6336 it is based on: 36 ports of 100G.
A significant change is the new IFM going in the back of the chassis with eight 100G ports.
With eight 100G ports per IFM and eight blades, that means a dedicated 100G port per blade,
per fabric, going into the chassis.
Another change, on the compute node itself, is the new 100G VIC, which supports 2 x 100G
connections per VIC - again, 100G per fabric per compute node. This gives us 200G per server
today, and the new Beverly chip we are using on the VIC is actually capable of 200G
connections. We are reclaiming the leadership, and we intend to keep that leadership
as networking continues to advance in the data center.
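The headline bandwidth numbers follow directly from the port counts above; a quick sketch of the arithmetic:

```python
# Back-of-the-envelope math for the numbers above: eight 100G ports per IFM,
# two IFMs per chassis (A and B fabrics), and 2 x 100G per compute node with the 100G VIC.
ifm_uplink_gbps = 8 * 100            # per IFM
chassis_gbps = ifm_uplink_gbps * 2   # two IFMs per chassis
per_server_gbps = 100 * 2            # one 100G connection per fabric

print(chassis_gbps)      # 1600 Gbps = 1.6 Tbps per chassis
print(per_server_gbps)   # 200 Gbps per compute node today
```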
The goal of the Fifth Gen is to increase the peak flows that are available. Today, we have
100G per fabric per compute node, but at four-to-one oversubscription and 25G maximum
flows. With the Fifth Gen, we will have 100G maximum flows. The main use cases are
predominantly storage-related, using NVMe over Fabric or fiber channel storage.
UCS 6536 Fabric Interconnect
• 5th Generation UCS Fabric Interconnect (FI)
• 36x 100G Ethernet ports, 1 RU form factor
• 32x Ethernet ports (1/10/25/40/100 Gbps)
• 4x Unified ports
  • 4x 100G Ethernet (10/25/40/100) or
  • 16x 8/16/32G FC ports after breakout
• Support X9108-IFM-100G and X9108-IFM-25G
• Support IOM 2408 and FEX 93180YC-FX3
• Support UCS VIC 1400/14000 and 15000 Series
• Support M6 X210c, M5/M6 B- and C-Series
• Intersight Managed Mode at FCS with 4.2(2)
• Post-FCS support for UCSM, IOM 2304 (IMM/UCSM), VIC 1300 (UCSM)
The 6536 5th Generation Fabric Interconnect provides 7.4 terabits of bandwidth. It is 36 ports
of 100G that have been "fabric-interconnected." When fiber channel mode is used, the
unified ports are capable of the full 128G of bandwidth. The L2 module is in the back with the
console and management ports, in addition to the two power supplies and all of the fans.
In the X-Series chassis, we will support either the new 100G IFM or the existing 25G IFM. We
will also support the California 5108 chassis, initially with the 25G 2408 IOM. At initial
release, this will be Intersight-managed only, just like X-Series, but we will add UCSM support
post-FCS.
This standard eye chart shows which ports support which speeds and which wire
protocols. The most important thing to note is that fiber channel is on the last four
ports, making it 36 ports, not 32. Thus, we still have 32 ports for server and uplink
connections and four for our fiber channel connectivity.
If you truly wanted fully one-to-one, non-oversubscribed connectivity, you would have 16
ports for servers and 16 ports for uplinks. Then, you would use your server ports for
however much total bandwidth you needed per server. For full non-oversubscribed
connectivity, you would attach two chassis with eight cables each per pair of Fabric
Interconnects. This gives you an amazing amount of networking. We also have two ports
that can be taken all the way down to 1G for those customers who need it.
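A quick back-of-the-envelope sketch of that 1:1 layout, using only the numbers above:

```python
# Sketch of the non-oversubscribed layout described above: of the 32 general-purpose
# ports, 16 go to servers and 16 to uplinks, with each chassis cabled 8 x 100G per fabric.
server_ports, uplink_ports = 16, 16
cables_per_chassis_per_fi = 8

chassis_at_one_to_one = server_ports // cables_per_chassis_per_fi   # 2 chassis per FI pair
downlink_gbps = server_ports * 100   # 1600G toward the chassis
uplink_gbps = uplink_ports * 100     # 1600G toward the upstream network

print(chassis_at_one_to_one, downlink_gbps == uplink_gbps)  # 2 True -> 1:1, no oversubscription
```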
Internally, the new IFM is nearly identical to the 25G IFM: it has the same Cisco ASIC and all of
the same internal bandwidth, except that we now have 800G going upstream, via eight 100G
QSFP ports, instead of the 200G we had before with 8 x 25G.
For those familiar with the original architecture, the IOMs located in the chassis host what is
known as the Chassis Management Controller. This continues with the IFMs on Cisco UCS
X-Series: the Chassis Management Controller (CMC) is very similar, and the software is
common. The Cisco UCS X-Series chassis is managed by Cisco Intersight Managed Mode (IMM).
• The FPGA is secured; it cannot be updated or altered without an image signed by Cisco.
• The FPGA verifies its boot image; if the image does not match the installed signatures, it will refuse to boot.
• The IFM hosts the CMC for chassis management.
• This is equivalent in function to the IOM on the first-generation chassis.
The new VIC 15231 goes into the X-Series on the X210c today. It adds support for several new
offloads, some of them arriving in software post-FCS. However, the NVMe over Fabric
support - fiber channel NVMe - is available right away. This all comes in a single card at
2 x 100G: 100G to each of fabrics A and B.
These are the details for the breakout connectivity from the unified ports when they are set
to FC mode. You need the 128G QSFP module, an OM4 MPO-to-LC breakout cable, and the
endpoint switch SFPs. By doing this, you get four ports of 8G, 16G, or 32G per broken-out FI
port. This is configured in the Intersight Port Policy.
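To make the breakout arithmetic concrete, here is a small sketch; the helper function is illustrative, not an Intersight API:

```python
# Sketch of the FC breakout arithmetic described above: each broken-out unified
# FI port yields four FC ports at a single chosen speed.
FC_BREAKOUT_SPEEDS_G = (8, 16, 32)

def fc_breakout(fi_ports: int, speed_g: int) -> tuple[int, int]:
    """Return (FC port count, aggregate FC bandwidth in Gbps)."""
    if speed_g not in FC_BREAKOUT_SPEEDS_G:
        raise ValueError("FC breakout supports 8G, 16G, or 32G only")
    ports = fi_ports * 4
    return ports, ports * speed_g

print(fc_breakout(4, 32))  # all four unified ports at 4x32G -> (16, 512)
```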
This chart gives the throughput per compute node, depending on the combination of FI, IFM,
and VIC on the compute node. As an example, A would be your max configuration: 100G
flows, 100G throughput on the VIC to each fabric, and 100G for fiber channel on the vHBA.
The vNICs and the vHBAs will show up in the OS as 100G.
B is an example of a chassis that still has the 25G IFMs. The result is different: only 50G
per vNIC, with two vNICs per fabric.
C shows the Fourth Gen Fabric Interconnect with the 25G Fourth Gen VIC, using our existing
topology.
D shows the results of using both the VIC mLOM and the VIC Mezz, an older way of getting
200G, but with 25G flows through 50G vNICs.
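To compare the four examples at a glance, here is a small recap; the hardware descriptions are condensed from the paragraphs above, and anything not stated there is marked as such:

```python
# Recap of the A-D combinations above. Per-vNIC/vHBA speeds are the ones called
# out in the text; descriptions are condensed from the same paragraphs.
combos = {
    "A": {"hardware": "5th Gen FI + 100G IFM + 100G VIC",
          "vnic_gbps": 100, "vhba_gbps": 100},
    "B": {"hardware": "Chassis with 25G IFMs",
          "vnic_gbps": 50, "vhba_gbps": 50},
    "C": {"hardware": "4th Gen FI + 25G 4th Gen VIC",
          "vnic_gbps": 50, "vhba_gbps": 50},
    "D": {"hardware": "4th Gen VIC mLOM + VIC Mezz (200G total, 25G flows)",
          "vnic_gbps": 50, "vhba_gbps": None},  # vHBA speed not called out above
}

for name, combo in combos.items():
    print(name, combo["hardware"], "-", combo["vnic_gbps"], "G per vNIC")
```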
Configuration A, demonstrated previously, is the max configuration using the Fifth Gen Fabric
Interconnect, the Fifth Gen IFM, and the Fifth Gen VIC, making the vNICs and the vHBAs both
100G. Looking at the OS - here, vSphere networking - the result is two NICs, each at 100G.
In configurations B and C, where there is only a single 25G VIC, it does not matter whether
you use the Fifth Gen or Fourth Gen FI, or which IFM is in the chassis; the VIC is the limiter.
In each of these cases, the vNICs are 50G and the vHBAs are 50G.
In configuration D, both the VIC mLOM and the VIC Mezzanine are used, and it is all Fourth
Gen, so it does not matter what the FI or the IFM is.
This refresh of the PCIe topology for the compute node shows the addition of the new VIC,
which is Gen4 x16, as well as the connections to the X-Fabric modules, with a Gen4 x16 to
each X-Fabric module. It also shows the connections to the GPU Mezz: 2x Gen4 x8 and
2x Gen4 x4 to those devices.
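As a quick recap of the lane counts just listed, here is an illustrative tally (the names are mine, not Intersight inventory fields):

```python
# Condensed view of the compute node PCIe topology described above
# (Gen4 lanes per attachment point; illustrative bookkeeping only).
x210c_gen4_links = {
    "VIC mLOM": 16,
    "X-Fabric module 1": 16,
    "X-Fabric module 2": 16,
    "GPU front Mezz - GPU 1": 8,
    "GPU front Mezz - GPU 2": 8,
    "GPU front Mezz - NVMe 1": 4,
    "GPU front Mezz - NVMe 2": 4,
}
print(sum(x210c_gen4_links.values()), "Gen4 lanes shown in the diagram")
```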
With X-Fabric, there are no changes and no Intersight configuration needed. However, we do
need to do some configuration for the fabric, and there are some changes visible in Intersight
with the new support for the Fifth Gen. One is that, on the initial screen for the Domain Port
Policy, you have the option of a new hint for the Fifth Gen Fabric Interconnect: by indicating
you are building a Port Policy for a Fifth Gen FI, you will get the image for the Fifth Gen FI.
The Unified Port Configuration is now located on the right. The slider is the same, but in the
Fifth Gen it now moves in increments of one port at a time. Previously, each time you notched
the slider, it picked pairs of top and bottom ports. Now you can have just Port 36, or you can
add Port 35, Port 34, and Port 33 as you continue to move the slider.
In this example, the slider has been moved two notches, and you can see that it visually
breaks each of those ports into four individual ports. These are 128G ports; they are not
actually limited to 100G. We have a 128G breakout QSFP module that you attach a four-way
breakout optical fiber to; on the other end of the fiber, you attach either an 8G, 16G, or 32G
fiber channel SFP to plug into your SAN ports. It does support the full 128G broken out.
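As a rough illustration of the new slider behavior (one port at a time, starting from Port 36), here is a small sketch; the helper name and the assumption that conversion always starts at the highest-numbered port are mine, based on the example above:

```python
# Sketch of the 5th Gen unified-port slider behavior described above: FC ports are
# carved one at a time from port 36 downward (the previous generation converted
# ports in top/bottom pairs).
def fc_ports_from_slider(notches: int, last_port: int = 36) -> list[int]:
    return [last_port - i for i in range(notches)]

print(fc_ports_from_slider(1))  # [36]
print(fc_ports_from_slider(2))  # [36, 35] - the two ports broken out in this example
print(fc_ports_from_slider(4))  # [36, 35, 34, 33]
```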
A new step, Step 3, has been added to the port policy configuration for setting the breakout
options. On the Ethernet side, you can break out a port as either 4x10G or 4x25G.
The Fiber Channel (FC) ports can be broken out as 4x8G, 4x16G, or 4x32G. They have to be
broken out because the FI does not support 64G or 128G fiber channel; it supports only 8G,
16G, or 32G. In this example, both ports (Ports 35 and 36) are selected and are being
configured identically to 16G. When Set is clicked, the configuration is finished.
The port configuration on the Ethernet side gives you full flexibility. In this example, two
ports are being configured off of a breakout. You can configure each port of the breakout
individually, using a breakout cable that is either 4x10G or 4x25G.
Their roles can be configured independently as server, uplink, or appliance ports, depending
on what is being plugged in. Breakouts are also supported on the Fourth Gen FI for the six
100G ports on its far end; those can be broken out now as well with this change.
This diagram gives a look at the full breadth of capabilities offered by the new 6536 FI. This is
the Fabric Interconnect of Fabric Interconnects: it goes all the way from the X-Series with the
new IFMs at 100G - an end-to-end monster at 1.6 terabits of total bandwidth per chassis - to
the existing 25G X-Series. It also covers the 5108 chassis with B200 M5 or M6 blades at either
40G or 25G.
We can also see the C-Series rack servers connected through FEXs, such as the 93180YC-FX3
shown here, the 25G FEX, which can be uplinked at 100G to the Fabric Interconnect; with
those, you can use the 1400 Series VICs at 25G. Finally, direct-connect rack-mount C-Series
servers can run at either 25G or 100G up to the FI. This is an awesome array of capabilities
out of one model of Fabric Interconnect.