UCS-X - Includes - FullParticipationGuide - Addendum 1 - 05.16.22
Intersight
Deployment Workshop
Participant Guide
Americas Headquarters: Cisco Systems, Inc., San Jose, CA
Asia Pacific Headquarters: Cisco Systems (USA) Pte. Ltd., Singapore
Europe Headquarters: Cisco Systems International BV Amsterdam, The Netherlands
Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco website at
http://www.cisco.com/go/offices.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of
Cisco trademarks, go to this URL: http://www.cisco.com/c/en/us/about/legal/trademarks.html. Third-party trademarks that are mentioned are
the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other
company. (1110R)
DISCLAIMER WARRANTY: THIS CONTENT IS BEING PROVIDED “AS IS” AND AS SUCH MAY INCLUDE TYPOGRAPHICAL, GRAPHICS,
OR FORMATTING ERRORS. CISCO MAKES AND YOU RECEIVE NO WARRANTIES IN CONNECTION WITH THE CONTENT PROVIDED
HEREUNDER, EXPRESS, IMPLIED, STATUTORY OR IN ANY OTHER PROVISION OF THIS CONTENT OR COMMUNICATION BETWEEN
CISCO AND YOU. CISCO SPECIFICALLY DISCLAIMS ALL IMPLIED WARRANTIES, INCLUDING WARRANTIES OF MERCHANTABILITY,
NON-INFRINGEMENT AND FITNESS FOR A PARTICULAR PURPOSE, OR ARISING FROM A COURSE OF DEALING, USAGE OR
TRADE PRACTICE. This learning product may contain early release content, and while Cisco believes it to be accurate, it falls subject to the
disclaimer above.
Cisco UCS X-Series with Intersight
Deployment Workshop Overview
Cisco UCS X-Series Solution Architecture
Cisco UCS X-Series Hardware Components & Traffic Flow
Cisco UCS X-Series and Intersight Installation Considerations
Cisco UCS X-Series Management through IMM
Workshop Description
The Cisco UCS X-Series with Intersight Deployment Workshop will enable Cisco customers, internal SEs/FEs, and partner SEs/FEs to deploy the UCS X9508 chassis, X210c compute nodes, UCS 6400 Series Fabric Interconnects, and options. Workshop participants will also learn how to prepare the UCS X-Series hardware components for discovery by Intersight Managed Mode (IMM). The workshop also includes information on how the UCS solution architecture has evolved to the X-Series and how to manage solution components with IMM.
2-3 hours over 2-3 days
Cisco UCS X-Series with Intersight
Deployment Workshop
Section 1
Solution Architecture
Thank you for joining this training session on UCS X-Series deployment.
Agenda
2. Foundations: Introducing Cisco UCS® X-Series with Intersight™
3. Managing Cisco UCS 2.0 with Cisco Intersight
Unified Fabric
• The fabric interconnect carries Ethernet, Fibre Channel, and Fibre Channel over Ethernet (FCoE), allowing for flexible deployments
In UCSM, the switching elements, such as the FIs, handle port and traffic management.
Compute nodes have many internal devices: mezzanine cards, storage disks, and the internal
management interface called the CIMC (Cisco Integrated Management Controller). The CIMC is
used to communicate hardware identity information to the FIs for inventory and management
of identities.
This figure illustrates a managed UCS deployment. Managed endpoints include switching, chassis, and compute node components, which are discovered, cataloged, and managed by UCS Manager in an XML database. The API is what gives the customer access to the XML database. The database information is also translated to HTML for GUI access and is searchable via the CLI on the console.
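To make that access concrete, below is a minimal sketch of querying the UCS Manager XML API from Python. The /nuova endpoint, the aaaLogin/aaaLogout methods, and the computeBlade class are documented parts of the XML API; the hostname and credentials are placeholders, and certificate verification is disabled only to keep the lab sketch short.

# Minimal sketch of a UCS Manager XML API session (placeholder host and credentials).
import requests
import xml.etree.ElementTree as ET

UCSM = "https://ucsm.example.com/nuova"  # XML API endpoint on the cluster IP

# 1. Authenticate and obtain a session cookie.
login = requests.post(
    UCSM,
    data='<aaaLogin inName="admin" inPassword="password" />',
    verify=False,
)
cookie = ET.fromstring(login.text).attrib["outCookie"]

# 2. Resolve all objects of class computeBlade (the blade inventory).
query = ('<configResolveClass cookie="%s" classId="computeBlade" '
         'inHierarchical="false" />' % cookie)
inventory = requests.post(UCSM, data=query, verify=False)
for blade in ET.fromstring(inventory.text).iter("computeBlade"):
    print(blade.attrib.get("dn"), blade.attrib.get("model"), blade.attrib.get("serial"))

# 3. Log out to release the session.
requests.post(UCSM, data='<aaaLogout inCookie="%s" />' % cookie, verify=False)

The GUI and CLI expose the same object model; the API simply returns the underlying XML directly.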
Standalone deployments are an option, but this creates a single point of failure for
management and data transmission.
HA configuration, firmware, policies and profiles are all controlled through the UCS Manager
interface.
• UCSM provides three ways to connect to the hardware and control the different deployment elements:
- GUI (HTTP/HTTPS)
- Console (CLI)
- API (Application Programming Interface)
• Administrators decide the connection type
• Connect to the active fabric interconnect, where UCSM is active in a cluster deployment
• The subordinate will not offer the option to log in, because the UCSM cluster uses a one-way synchronization mechanism to prevent "split-brain" scenarios
The UCS product portfolio has always included a robust range of devices and implementation models. Traditionally, Cisco provided the B-Series, which offers a high level of compute power but supports only two drives.
The C-Series compute nodes have been the primary option for customers looking for a more "dedicated" networking/storage solution. Some C-Series servers can support up to 24 drive bays with multiple drive options.
Cisco also has support for “pre-built” deployments such as VBlocks and FlexPods. In these
solutions, the customer selects what hardware to deploy. Then, Cisco will pre-build the
solution, box it, and ship it to the customer. This allows the customer to simply unbox the
solution, plug it in, and start their deployment.
With such a robust product portfolio, the customer needed a management solution that would
work for all deployments across multiple data centers. This was the driving factor to position
IMM and Intersight management at the center of the UCS deployment.
This is a representation of the UCSM interface as it stands today. It has evolved over time, beginning as a Java applet and then migrating to an HTML5 model. However, it remains a software component of the fabric interconnect. This means that each domain, without UCS Central, is managed on a per-domain basis, which can be very cumbersome in larger deployments, especially across multiple data centers.
Opportunity: to expand supported workloads (especially at scale, where blades bring significant TCO favorability)
When diving into the application side of these deployments, below are some of the application
considerations Cisco would like to facilitate:
• HPC (High Performance Computing) environments
• Big Data
• Splunk for management
• Higher Graphics Processing Unit (GPU) workloads
• Support for Cloud Native applications
• New 9508 chassis builds on the original UCS 5108 blade chassis
• Expands power and cooling capacity supporting future generations of processors, NICs,
accelerators, and memory technology
• Provides for a massive leap in per-blade and overall chassis network bandwidth
• A second fabric is enabled to capitalize on processor-connected fabrics, both present and future
• Supports expanded node-local storage and offers in-chassis storage expansion
• Future-proofing for a long lifecycle
• Select and upgrade fabric modules independently per-chassis
• Pre-enabled for optical interconnects through direct, orthogonal fabric connections and
water cooling through open chassis with ample room for water pipes, both open and
closed loop
Another disadvantage of a midplane in the chassis is that it is generally not easily field-replaceable. This locks in whatever that midplane can support. Similar components will have different part numbers when installed in various midplanes.
UCS X-Series is about handling the workload customers bring to it with precisely the right technology to service application requirements, in addition to flexibility and simplicity. Also, removing the midplane improves airflow, so the fans do not have to spin as fast, which requires less power for cooling. Efficient cooling reduces the overall power requirement.
• Lack of an I/O midplane requiring network traces reduces airflow impedance, significantly increasing cooling efficiency
• Minimal midplane for power and management signal distribution
• Fans provide higher cooling at reduced RPM = lower acoustics and higher reliability
• Improved cooling properties and advanced DC power distribution = higher power
efficiency
• Ample power headroom for high power CPU and GPU roadmap
• The Virtual Interface Card (VIC), or network card, makes the connection
• Fabric is independent of the compute
• Choose compatible VIC and fabric modules
• Initial release for X-Series:
- VIC capable of a 100-gigabit connection
- Four 25-gigabit connections across the two IFMs
- Two 25-gigabit connections per IFM
- 100 gigabits total for the mLOM for network connectivity, independent of the compute node
• Future compute nodes can be selected independent of the fabric
In the back of the chassis, each of the four system fans is 100 millimeters in size. There are three fans within each fabric module, both IFM and XFM. At initial release, there is not an XFM with an actual fabric chip on it; those modules are carriers for the three fans that provide the cooling for the bottom of the chassis. The chassis' total airflow is 1000 CFM, compared to 590 CFM in the 5108 chassis, substantially increasing its overall cooling capacity.
The biggest difference is in the networking lanes: the 5108 and the original series of blades through M5 were primarily based on 10-gigabit Ethernet KR lanes. On the 3rd-generation fabric, those could be combined into a 40-gigabit link but were primarily focused on 10 gigabit. With UCS X-Series, the lanes are now 25 gigabit, and in the future they can be combined into higher-bandwidth links. Another significant change for the compute nodes and the expansion fabric's potential is that PCIe Gen4 is now available.
Shown is the five-year power consumption rate across the chassis' primary power consumer, the CPU, which includes memory power usage. The amount of power drawn by the memory isn't insignificant, but it is tightly coupled to the CPU. Graphics Processing Units (GPUs), which stand alone from other acceleration technologies, are already very high-powered. The last row is a generic row for accelerators, which might be FPGAs or a system-on-a-chip-based smart NIC. There are a variety of other accelerators for network functions or security chips.
The increase in power is driving the industry from air cooling to water cooling, a niche solution found primarily in places like High-Performance Computing (HPC) clusters that is becoming more mainstream. The far right shows the top power-consuming devices in each category, which indicate there will be demand for liquid cooling in five to ten years. There will be a range of GPUs and a range of accelerators. The most demanding workloads on the highest-end SKUs are likely to be looking for water cooling.
This graphic shows a clear, growing demand for acceleration in approximately 30 percent of use cases, but that leaves roughly two-thirds of use cases without that need.
Compared with the FI 6454, the FI 64108 doubles the number of 25-gigabit ports and doubles the number of 100-gigabit ports. There are 16 ports capable of Fibre Channel on both switches.
• FPGA is secured
• Cannot be updated or altered without an image signed by Cisco
• The FPGA verifies its boot image; if the image does not match the installed signatures, it will refuse to boot (see the conceptual sketch after this list)
• Hosts the CMC for chassis management
• Equivalent function to the IOM on the first-generation chassis
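To illustrate the general idea of signature-gated boot described above, here is a conceptual sketch only; it is not Cisco's implementation, and real secure boot uses asymmetric (public-key) signatures rather than the HMAC used here to keep the example self-contained. The key, image bytes, and function names are hypothetical.

# Conceptual sketch of signature-gated boot: refuse to run an image whose
# signature does not verify. Illustrative only; not Cisco's implementation.
import hashlib
import hmac

VENDOR_KEY = b"vendor-signing-key"  # hypothetical key; real designs use public-key crypto

def sign_image(image: bytes) -> bytes:
    # What the vendor does at build time: sign a digest of the image.
    return hmac.new(VENDOR_KEY, hashlib.sha256(image).digest(), hashlib.sha256).digest()

def boot(image: bytes, signature: bytes) -> None:
    # What the device does at boot: verify the signature before executing.
    if not hmac.compare_digest(sign_image(image), signature):
        raise RuntimeError("Image signature mismatch: refusing to boot")
    print("Signature valid: booting image")

firmware = b"...firmware contents..."      # placeholder image bytes
signature = sign_image(firmware)
boot(firmware, signature)                  # boots
try:
    boot(firmware + b"tamper", signature)  # altered image: refuses to boot
except RuntimeError as err:
    print(err)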
• Internal components of the blade server and the connection to the fabric modules
• Compute node - Cisco UCS X210c M6 Compute Node
• Two-socket node using Intel processors code-named Ice Lake, known as 3rd Generation Intel Xeon Scalable processors
• VIC and mezzanine are at the back of the compute node
• The existing B200 product line provides 25-gigabit connectivity to both boards
• Flexible front mezzanine for storage
• Blank space allowed - not forced to pay for infrastructure to support drives
• Between the two CPUs is an additional module
• The existing M.2 hardware RAID controller is used as a boot drive
Additionally, you can see that the front of the blade has an ample open space where the front
mezzanine can connect into the front mezzanine connectors and has room for drives or other
potential future options.
Because M6 is several years newer than M5, the drives themselves are generally not common, because you will see newer drives on M6. Drives are still common between the M6 rack servers and the Cisco UCS X210c M6 Compute Node. They are not common with the B200 M6, because the B200 M6 has a unique form factor for its drives. The other notable difference for this blade is the new console connector. The form factor resembles USB-C, but it is known as an OCuLink console port. The significant impact is that the thousands of KVM dongles you have scattered all over the data center will now need a new model of KVM dongle that you'll have to start populating. Of course, the blade will come with that adapter. As usual, it gives you a VGA connector, a couple of USB ports for the keyboard and mouse, and a serial console port.
SED drives are supported on the SAS/SATA controller, with the key manager supported in either local or remote mode. Key Management Interoperability Protocol (KMIP) support is a post-initial-release feature.
• SAS/SATA module known as the X10c mezzanine and RAID controller
• Based on the 3900-series controller, code-named Aero
• Does have a cache
• SuperCap included
• The X10c RAID controller supports up to six drives, of which up to four can be NVMe
• RAID levels up to RAID 10 are supported, as well as JBOD mode for those who want to pass the drives through directly to the OS
• NVMe drives support Intel VROC; they are all on the same root port and can all be in the same VROC volume
• Direct fabric IO connections mate the Mezz connector to the X Fabric modules at the
bottom rear of chassis
• Route Mezzanine network signals to the IFM modules at the top rear using a bridge card
• Signals are routed through the mLOM card to the IFM by the bridge
There are two port groups on the mLOM card. The mLOM receives its signals from CPU 1, which allows a single-socket system to work with just an mLOM. The mezzanine gets its signals from CPU 2 and cannot be used in a single-socket CPU configuration. Each port group has two 25-gigabit lanes going to the IFM, and each port group is a physical connector going to the IFM.
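As a quick check of the bandwidth arithmetic described above (a sketch using the lane counts from this section; the fully populated two-card total is an inference from those numbers):

# Per-card and per-node bandwidth, from the lane counts described above.
LANE_GBPS = 25            # 25-gigabit KR lanes on X-Series
LANES_PER_PORT_GROUP = 2  # two lanes in each port group
IFMS_PER_CHASSIS = 2      # one port group from each card to each IFM

def card_bandwidth_gbps() -> int:
    # Bandwidth for one card (mLOM or mezzanine) across both IFMs.
    return LANE_GBPS * LANES_PER_PORT_GROUP * IFMS_PER_CHASSIS

print(card_bandwidth_gbps())      # 100 Gbps for the mLOM alone
print(2 * card_bandwidth_gbps())  # 200 Gbps with both mLOM and mezzanine populated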
Welcome to this section of the UCS Deployment Workshop on Deployment Considerations for
the X-Series components.
There are specific ports carved out for Fibre Channel, FI-to-IFM links, and the network uplinks. Some ports support specific protocols or speeds that you must be aware of and adhere to. For instance, if you are looking for Fibre Channel storage on the 1U 6454 FI, Fibre Channel is available on ports 1 to 16.
• Clicking back to Manual Mode wipes out the slot ID and PCI order that auto-placement assigned
When you have installed the X-Fabric modules, they show up in the chassis inventory below the IO modules, which is where you find your IFMs. Now that you have the XFMs, you can see the pertinent details on them, such as the serial number. Below that, you see the fan modules and their operational state.
Here you can see more detail on the NVIDIA products that we're supporting, including which
use cases they apply to - compute or VDI virtual workstation workloads - and the capabilities,
including power consumption and size. We have three options around VDI, specifically
virtual workstation VDI, because that’s where GPUs come into play.
The amount of frame buffer per server is used to determine the number of users that can
generally be accommodated per server given those GPUs.
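As a rough illustration of that sizing approach (the GPU count, frame buffer sizes, and per-user profile below are hypothetical examples, not Cisco sizing guidance):

# Rough VDI sizing sketch: users per server estimated from total GPU frame
# buffer divided by the frame buffer profile assigned to each virtual
# workstation. All numbers are hypothetical examples.
def users_per_server(gpus_per_server: int, fb_per_gpu_gb: int, fb_per_user_gb: int) -> int:
    total_fb_gb = gpus_per_server * fb_per_gpu_gb
    return total_fb_gb // fb_per_user_gb

# Example: two GPUs with 24 GB of frame buffer each, 4 GB profile per user.
print(users_per_server(gpus_per_server=2, fb_per_gpu_gb=24, fb_per_user_gb=4))  # 12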
This diagram shows how the PCIe nodes will show up in Intersight. Note that they are on the server inventory page, and there will be a new tab labeled GPUs. We're looking at the server in Slot 7, and we see the PCIe node in Slot 8 of the chassis, which means it is connecting to the compute node in Slot 7. You can also note two GPUs, GPU 1 and GPU 4; this is an artifact of how the slots on the PCIe node map out.
The 6536 5th Generation Fabric Interconnect provides 7.4 terabits of bandwidth across 36 ports of 100G in a platform that has been "fabric-interconnected." When Fibre Channel mode is used, the unified ports are capable of full 128G bandwidth. The L2 module is in the back with the console and management ports, in addition to the two power supplies and all of the fans.