Cisco UCS B480 M5 Blade Server
OVERVIEW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
DETAILED VIEWS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
BASE SERVER STANDARD CAPABILITIES and FEATURES . . . . . . . . . . . . . . . . . 4
CONFIGURING the SERVER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
STEP 1 CHOOSE SERVER SKU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
STEP 2 CHOOSE CPU(S) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
STEP 3 CHOOSE MEMORY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
STEP 4 CHOOSE OPTIONAL FRONT MEZZANINE: Storage Controller or GPU . . . . . . . . . . . . 22
STEP 5 CHOOSE HARD DISK DRIVES or SOLID-STATE DRIVES (OPTIONAL) . . . . . . . . . . . . . . 25
STEP 6 CHOOSE REAR MEZZANINE CARDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
STEP 7 ORDER A TRUSTED PLATFORM MODULE (OPTIONAL) . . . . . . . . . . . . . . . . . . . . . . 35
STEP 8 CHOOSE OPTIONAL SECURE DIGITAL CARDS or M.2 DEVICES . . . . . . . . . . . . . . . . 36
STEP 9 ORDER INTERNAL USB 3.0 DRIVE (OPTIONAL) . . . . . . . . . . . . . . . . . . . . . . . . . . 38
STEP 10 CHOOSE OPERATING SYSTEM AND VALUE-ADDED SOFTWARE . . . . . . . . . . . . . . . 39
STEP 11 CHOOSE OPERATING SYSTEM MEDIA KIT (OPTIONAL) . . . . . . . . . . . . . . . . . . . . 44
STEP 12 CHOOSE SERVICE and SUPPORT LEVEL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
STEP 13 CHOOSE LOCAL KVM I/O CABLE* (OPTIONAL) . . . . . . . . . . . . . . . . . . . . . . . . . 52
SUPPLEMENTAL MATERIAL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
NETWORK CONNECTIVITY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
SPARE PARTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
UPGRADING or REPLACING CPUs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
DISCONTINUED EOL PRODUCTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
TECHNICAL SPECIFICATIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
OVERVIEW
Designed for demanding virtualization and database workloads, the UCS B480 M5 combines a large memory
footprint with four-socket scalability, leveraging the Intel® Xeon® Scalable processor family of CPUs. The
UCS B480 M5 supports 2666-MHz and 2933-MHz DDR4 memory and utilizes UCS Virtual Interface Card (VIC)
technology to achieve up to 160 Gb/s aggregate I/O bandwidth, all in a dense, full-width blade form factor.
The UCS B480 M5 maintains memory performance even as capacity grows and can support up to 6 TB of
memory without compromise to CPU speed or core count. Up to four UCS B480 M5 Blade Servers can be
installed in the UCS 5108 Blade Server Chassis.
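The 6 TB maximum follows directly from the slot count and the largest standard DIMM; a quick Python sketch of the arithmetic (assuming all 48 slots populated with 128 GB DIMMs):

```python
# Back-of-the-envelope check of the 6 TB maximum memory claim
# (assumption: all 48 slots filled with 128 GB DIMMs).
slots = 4 * 6 * 2          # 4 CPUs x 6 channels x 2 DIMMs per channel
max_dimm_gb = 128          # largest standard DIMM in the memory tables
total_gb = slots * max_dimm_gb
print(f"{slots} slots x {max_dimm_gb} GB = {total_gb} GB")  # 48 x 128 GB = 6144 GB (6 TB)
```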
The UCS B480 M5 Blade Server is centrally managed using Cisco UCS Manager. Cisco UCSM provides a single
management interface for all hardware and software components of a UCS domain. Cisco UCSM manages up
to 160 servers, using Single Connect Technology to consolidate network, storage, and management traffic.
With Cisco UCS Central, management can be extended globally to thousands of servers in multiple domains.
DETAILED VIEWS
Blade Server Front View
Figure 2 is a detailed front view of the Cisco UCS B480 M5 Blade Server.
Notes
1. For information about the KVM local I/O cable that plugs into the console connector (a cable is included with
every Cisco UCS 5108 blade server chassis accessory kit), see CHOOSE LOCAL KVM I/O CABLE* (OPTIONAL) on
page 52.
NOTE: The B480 M5 blade server requires UCS Manager (UCSM) to operate as part of
the UCS system.
■ The B480 M5 with the Intel® Xeon® Scalable processor family of CPUs requires
UCSM release 3.2(2) or later; with 2nd Generation Intel® Xeon® Scalable
processors, UCSM 4.0(4b) is required.
Capability/Feature Description
Blade server chassis: The full-width UCS B480 M5 Blade Server mounts in a Cisco UCS 5108 series blade server chassis.
CPU: Two or four Intel® Xeon® Scalable family CPUs, or two or four 2nd Generation Intel® Xeon® Scalable family CPUs
Mezzanine adapters:
■ One rear mLOM slot supports the Cisco VIC 1440 only.
NOTE: The Cisco VIC 1400 Series (1440 and 1480) is compatible with 6200 and 6300 Series Fabric Interconnects as well as the 2204XP, 2208XP, 2304, and 2408XP Fabric Extenders.
■ Two rear mezzanine slots support Cisco adapters and Cisco UCS Storage Accelerators or GPUs.
■ Two front mezzanine slots support GPUs, FlexStorage RAID controllers, and the FlexStorage Passthrough module.
Interfaces: One KVM console connector on the front panel (see STEP 13 CHOOSE LOCAL KVM I/O CABLE* (OPTIONAL), page 52)
Video: The Cisco Integrated Management Controller (CIMC) provides video using the Matrox G200e video/graphics controller.
Power subsystem: Integrated in the Cisco UCS 5108 blade server chassis.
Integrated management processor: The built-in Cisco Integrated Management Controller (CIMC) GUI or CLI interface enables you to monitor the server inventory, health, and system event logs.
ACPI: Advanced Configuration and Power Interface (ACPI) 4.0 Standard supported.
UCSB-B480-M5 UCS B480 M5 Blade Server with no CPU, memory, HDD, SSD, or adapter cards
UCSB-B480-M5-U UCS B480 M5 Blade Server without CPU, memory, drive bays, HDD, VIC adapter, or mezzanine adapters (UPG) (ordered standalone)
UCSB-B480-M5-CH DISTI: UCS B480 M5 Blade Server without CPU, memory, drive bays, HDD, VIC adapter, or mezzanine adapters
The base Cisco UCS B480 M5 blade server does not include the following components. They must
be selected during product ordering:
— CPUs
— DIMM memory
NOTE: Use the steps on the following pages to order servers with the
configurable components you want in your servers.
NOTE: The CPUs designated as Ix2xx are 2nd Generation Intel® Xeon® Scalable
processor family CPUs.
Select CPUs
Product ID (PID) | Clock Freq (GHz) | Power (W) | Cache Size (MB) | Cores | UPI1 Links (GT/s) | Highest DDR4 DIMM Clock Support (MHz)2 | Front GPU3 | Workload/Processor Type
Cisco Recommended Processors4 (2nd Generation Intel® Xeon® Processors)
UCS-CPU-I8276 2.2 165 38.50 28 3 x 10.4 2933 Y Oracle, SAP
UCS-CPU-I8260 2.4 165 35.75 24 3 x 10.4 2933 Y Microsoft Azure Stack
UCS-CPU-I6262V 1.9 135 33.00 24 2 x 10.4 2400 Y Virtual Server Infrastructure (VSI)
UCS-CPU-I6248 2.5 150 27.50 20 3 x 10.4 2933 Y VDI, Oracle, SQL, Microsoft Azure Stack
UCS-CPU-I6238 2.1 140 30.25 22 10.4 2933 Y SAP
UCS-CPU-I6234 3.3 130 24.75 8 10.4 2933 Y Oracle, SAP
UCS-CPU-I6230 2.1 125 27.50 20 3 x 10.4 2933 Y Big Data, Virtualization
UCS-CPU-I5220 2.2 125 24.75 18 3 x 10.4 2666 Y HCI
UCS-CPU-I5218 2.3 125 22.00 16 3 x 10.4 2666 Y Virtualization, Microsoft Azure Stack, Splunk, Data Protection
8000 Series Processor
UCS-CPU-I8280M 2.7 205 38.50 28 3 x 10.4 2933 N 2nd Gen Intel® Xeon®
UCS-CPU-I8280L 2.7 205 38.50 28 3 x 10.4 2933 N 2nd Gen Intel® Xeon®
UCS-CPU-I8280 2.7 205 38.50 28 3 x 10.4 2933 N 2nd Gen Intel® Xeon®
UCS-CPU-I8276M 2.2 165 38.50 28 3 x 10.4 2933 Y 2nd Gen Intel® Xeon®
UCS-CPU-I8276L 2.2 165 38.50 28 3 x 10.4 2933 Y 2nd Gen Intel® Xeon®
UCS-CPU-I8276 2.2 165 38.50 28 3 x 10.4 2933 Y 2nd Gen Intel® Xeon®
UCS-CPU-I8270 2.7 205 35.75 26 3 x 10.4 2933 N 2nd Gen Intel® Xeon®
UCS-CPU-I8268 2.9 205 35.75 24 3 x 10.4 2933 N 2nd Gen Intel® Xeon®
UCS-CPU-I8260Y 2.4 165 35.75 24/20/16 3 x 10.4 2933 N 2nd Gen Intel® Xeon®
UCS-CPU-I8260M 2.4 165 35.75 24 3 x 10.4 2933 Y 2nd Gen Intel® Xeon®
UCS-CPU-I8260L 2.3 165 35.75 24 3 x 10.4 2933 Y 2nd Gen Intel® Xeon®
UCS-CPU-I8260 2.4 165 35.75 24 3 x 10.4 2933 Y 2nd Gen Intel® Xeon®
UCS-CPU-8180M 2.5 205 38.50 28 3 x 10.4 2666 N Intel® Xeon®
UCS-CPU-8176M 2.1 165 38.50 28 3 x 10.4 2666 Y Intel® Xeon®
UCS-CPU-8170M 2.1 165 35.75 26 3 x 10.4 2666 Y Intel® Xeon®
UCS-CPU-8160M 2.1 150 33.00 24 3 x 10.4 2666 Y Intel® Xeon®
UCS-CPU-8180 2.5 205 38.50 28 3 x 10.4 2666 N Intel® Xeon®
UCS-CPU-8176 2.1 165 38.50 28 3 x 10.4 2666 Y Intel® Xeon®
UCS-CPU-8170 2.1 165 35.75 26 3 x 10.4 2666 Y Intel® Xeon®
UCS-CPU-8168 2.7 205 33.00 24 3 x 10.4 2666 N Intel® Xeon®
UCS-CPU-8164 2.0 150 35.75 26 3 x 10.4 2666 Y Intel® Xeon®
UCS-CPU-8160 2.1 150 33.00 24 3 x 10.4 2666 Y Intel® Xeon®
UCS-CPU-8158 3.0 150 24.75 12 3 x 10.4 2666 Y Intel® Xeon®
UCS-CPU-8156 3.6 105 16.50 4 3 x 10.4 2666 Y Intel® Xeon®
UCS-CPU-8153 2.0 125 22.00 16 3 x 10.4 2666 Y Intel® Xeon®
6000 Series Processor
UCS-CPU-I6262V 1.9 135 33 24 10.4 2400 Y 2nd Gen Intel® Xeon®
UCS-CPU-I6254 3.1 200 24.75 18 10.4 2933 N 2nd Gen Intel® Xeon®
UCS-CPU-I6252 2.1 150 35.75 24 10.4 2933 Y 2nd Gen Intel® Xeon®
UCS-CPU-I6248 2.5 150 27.50 20 10.4 2933 Y 2nd Gen Intel® Xeon®
UCS-CPU-I6246 3.3 165 24.75 12 10.4 2933 N 2nd Gen Intel® Xeon®
UCS-CPU-I6244 3.6 150 24.75 8 10.4 2933 N 2nd Gen Intel® Xeon®
UCS-CPU-I6242 2.8 150 22.00 16 10.4 2933 Y 2nd Gen Intel® Xeon®
UCS-CPU-I6240Y 2.6 150 24.75 18/14/8 10.4 2933 N 2nd Gen Intel® Xeon®
UCS-CPU-I6240M 2.6 150 24.75 18 10.4 2933 Y 2nd Gen Intel® Xeon®
UCS-CPU-I6240L 2.6 150 24.75 18 10.4 2933 Y 2nd Gen Intel® Xeon®
UCS-CPU-I6240 2.6 150 24.75 18 10.4 2933 Y 2nd Gen Intel® Xeon®
UCS-CPU-I6238M 2.1 140 30.25 22 10.4 2933 Y 2nd Gen Intel® Xeon®
UCS-CPU-I6238L 2.1 140 30.25 22 10.4 2933 Y 2nd Gen Intel® Xeon®
UCS-CPU-I6238 2.1 140 30.25 22 10.4 2933 Y 2nd Gen Intel® Xeon®
UCS-CPU-I6234 3.3 130 24.75 8 10.4 2933 Y 2nd Gen Intel® Xeon®
UCS-CPU-I6230N 2.3 125 27.5 20 10.4 2933 N 2nd Gen Intel® Xeon®
UCS-CPU-I6230 2.1 125 27.50 20 10.4 2933 Y 2nd Gen Intel® Xeon®
UCS-CPU-I6226 2.7 125 19.25 12 10.4 2933 Y 2nd Gen Intel® Xeon®
UCS-CPU-I6222V 1.8 115 27.5 20 10.4 2400 Y 2nd Gen Intel® Xeon®
UCS-CPU-6142M 2.6 150 22.00 16 10.4 2666 Y Intel® Xeon®
UCS-CPU-6140M 2.3 140 24.75 18 10.4 2666 Y Intel® Xeon®
UCS-CPU-6134M 3.2 130 24.75 8 10.4 2666 Y Intel® Xeon®
UCS-CPU-6154 3.0 200 24.75 18 10.4 2666 N Intel® Xeon®
UCS-CPU-6152 2.1 140 30.25 22 10.4 2666 Y Intel® Xeon®
UCS-CPU-6150 2.7 165 24.75 18 10.4 2666 Y Intel® Xeon®
UCS-CPU-6148 2.4 150 27.50 20 10.4 2666 Y Intel® Xeon®
UCS-CPU-6146 3.2 165 24.75 12 10.4 2666 Y Intel® Xeon®
UCS-CPU-6144 3.5 150 24.75 8 10.4 2666 Y Intel® Xeon®
UCS-CPU-6142 2.6 150 22.00 16 10.4 2666 Y Intel® Xeon®
UCS-CPU-6140 2.3 140 24.75 18 10.4 2666 Y Intel® Xeon®
UCS-CPU-6138 2.0 125 27.50 20 10.4 2666 Y Intel® Xeon®
UCS-CPU-6136 3.0 150 24.75 12 10.4 2666 Y Intel® Xeon®
UCS-CPU-6134 3.2 130 24.75 8 10.4 2666 Y Intel® Xeon®
UCS-CPU-6132 2.6 140 19.25 14 10.4 2666 Y Intel® Xeon®
UCS-CPU-6130 2.1 125 22.00 16 10.4 2666 Y Intel® Xeon®
UCS-CPU-6128 3.4 115 19.25 6 10.4 2666 Y Intel® Xeon®
UCS-CPU-6126 2.6 125 19.25 12 10.4 2666 Y Intel® Xeon®
5000 Series Processor
UCS-CPU-I5222 3.8 125 16.50 4 10.4 2933 N 2nd Gen Intel® Xeon®
UCS-CPU-I5220S 2.6 125 19.25 18 10.4 2666 Y 2nd Gen Intel® Xeon®
UCS-CPU-I5220 2.2 125 24.75 18 10.4 2666 Y 2nd Gen Intel® Xeon®
UCS-CPU-I5218B 2.3 125 22 16 10.4 2666 Y 2nd Gen Intel® Xeon®
UCS-CPU-I5218 2.3 125 22.00 16 10.4 2666 Y 2nd Gen Intel® Xeon®
UCS-CPU-I5217 3.0 115 11.00 8 10.4 2666 Y 2nd Gen Intel® Xeon®
UCS-CPU-I5215M 2.5 85 13.75 10 10.4 2666 Y 2nd Gen Intel® Xeon®
UCS-CPU-I5215L 2.5 85 13.75 10 10.4 2666 Y 2nd Gen Intel® Xeon®
UCS-CPU-I5215 2.5 85 13.75 10 10.4 2666 Y 2nd Gen Intel® Xeon®
UCS-CPU-5122 3.6 105 16.50 4 10.4 2666 Y Intel® Xeon®
UCS-CPU-5120 2.2 105 19.25 14 10.4 2400 Y Intel® Xeon®
UCS-CPU-5118 2.3 105 16.50 12 10.4 2400 Y Intel® Xeon®
UCS-CPU-5117 2.0 105 19.25 14 10.4 2400 Y Intel® Xeon®
UCS-CPU-5115 2.4 85 13.75 10 10.4 2400 Y Intel® Xeon®
Notes
1. UPI = Ultra Path Interconnect.
2. If higher or lower speed DIMMs are selected than what is shown in the table for a given CPU, the DIMMs will be
clocked at the lowest common denominator of CPU clock and DIMM clock.
3. System performance may be reduced for processors with greater than 165 W thermal design power (TDP) if operating
with a fan fault or above 32°C (89.6°F) system air inlet temperature, depending upon the application load. No front
GPU is allowed when using CPUs that dissipate greater than 165 W.
4. For details on memory support for processor classes and CPU modes, see Memory Support for CPU Classes and CPU
Modes on page 59.
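Note 2's clocking rule amounts to taking the minimum of the CPU's highest supported DIMM clock and the DIMM's rated clock; a minimal sketch (the function name and example values are illustrative):

```python
def effective_dimm_clock(cpu_max_mhz: int, dimm_rated_mhz: int) -> int:
    """Memory is clocked at the lower of the CPU's highest supported
    DIMM clock and the DIMM's own rated clock (note 2 above)."""
    return min(cpu_max_mhz, dimm_rated_mhz)

# A 2933-MHz-capable CPU with 2666-MHz DIMMs clocks memory at 2666 MHz:
print(effective_dimm_clock(2933, 2666))  # 2666
# A 2666-MHz CPU with 2933-MHz DIMMs also yields 2666 MHz:
print(effective_dimm_clock(2666, 2933))  # 2666
```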
CAUTION:
When any of the following high-frequency Intel® Xeon® Scalable processors, or
any CPU over 165 W, is installed in the server, the maximum operating (air
inlet) temperature must be 32°C (89.6°F) or lower. In addition, these CPUs
cannot be used with the front mezzanine GPU module.
—UCS-CPU-I8280M: Intel 8280M 2.7 GHz/205W
—UCS-CPU-I8280L: Intel 8280L 2.7 GHz/205W
—UCS-CPU-I8280: Intel 8280 2.7 GHz/205W
—UCS-CPU-I8260Y: Intel 8260Y 2.4 GHz/165W
—UCS-CPU-I8270: Intel 8270 2.7 GHz/205W
—UCS-CPU-I8268: Intel 8268 2.9 GHz/205W
—UCS-CPU-8180M: 8180M 2.5 GHz/205W
—UCS-CPU-8180: 8180 2.5 GHz/205W
—UCS-CPU-8168: 8168 2.7 GHz/205W
—UCS-CPU-I6254: Intel 6254 3.1 GHz/200W
—UCS-CPU-I6246: Intel 6246 3.3 GHz/165W
—UCS-CPU-I6244: Intel 6244 3.6 GHz/150W
—UCS-CPU-I6240Y: Intel 6240Y 2.6 GHz/150W
—UCS-CPU-I6230N: Intel 6230N 2.3 GHz/125W
—UCS-CPU-6154: 6154 3.0 GHz/200W
—UCS-CPU-I5222: Intel 5222 3.8 GHz/125W
Supported Configurations
■ Choose two identical CPUs from any one of the rows of Table 3 on page 8. CPU1 and CPU2
will be populated.
■ Choose four identical CPUs from any one of the rows of Table 3 on page 8. CPU1, CPU2,
CPU3, and CPU4 will be populated.
Notes
■ For 2-CPU systems, only 24 DIMM slots are functional (channels A - F for CPU1 and
G - M for CPU2). When 2 CPUs are installed, rear mezzanine connectors 1 (slot 2)
and 2 (slot 3) are not active.
■ See Table 10 on page 24 for supported adapter combinations in 2-CPU systems.
■ For 4-CPU systems, all 48 DIMM slots are functional (channels A - F for CPU1, G - M for CPU2,
N - T for CPU3, and U - Z for CPU4).
Memory is organized with six memory channels per CPU, with up to two DIMMs per channel, as
shown in Figure 3.
Figure 3 shows the 48 DIMM slots (6 memory channels per CPU, 2 DIMMs per channel,
in banks 1 and 2):
■ CPU 1: channels A - F (slots A1/A2 through F1/F2)
■ CPU 2: channels G - M, skipping I (slots G1/G2 through M1/M2)
■ CPU 3: channels N - T, skipping O (slots N1/N2 through T1/T2)
■ CPU 4: channels U - Z (slots U1/U2 through Z1/Z2)
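The slot naming above can be generated programmatically, which is handy when scripting inventory checks; a sketch that reproduces the 48-slot map (channel letters run A-Z with I and O skipped, matching Figure 3):

```python
# Generate the DIMM slot map: 24 channel letters (I and O skipped),
# 6 channels per CPU, 2 banks per channel.
letters = [c for c in "ABCDEFGHIJKLMNOPQRSTUVWXYZ" if c not in "IO"]
channels = {f"CPU{i + 1}": letters[i * 6:(i + 1) * 6] for i in range(4)}
slots = [f"{ch}{bank}" for chs in channels.values() for ch in chs for bank in (1, 2)]
assert len(slots) == 48  # 4 CPUs x 6 channels x 2 DIMMs per channel
print(channels["CPU2"])  # ['G', 'H', 'J', 'K', 'L', 'M']
```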
Select the memory configuration and whether or not you want the memory mirroring option.
The supported memory DIMMs and the mirroring option are listed in Table 4.
Product ID (PID) | PID Description | Voltage | Ranks/DIMM
2666-MHz DIMMs
UCS-MR-128G8RS-H 128 GB DDR4-2666-MHz TSV-RDIMM/8R/x4 1.2 V 8
UCS-MR-X64G4RS-H 64 GB DDR4-2666-MHz TSV-RDIMM/4R/x4 1.2 V 4
UCS-ML-X64G4RS-H 64 GB DDR4-2666-MHz LRDIMM/4R/x4 1.2 V 4
UCS-MR-X32G2RS-H 32 GB DDR4-2666-MHz RDIMM/2R/x4 1.2 V 2
UCS-ML-X32G2RS-H 32 GB DDR4-2666-MHz LRDIMM/2R/x4 1.2 V 2
UCS-MR-X16G1RS-H 16 GB DDR4-2666-MHz RDIMM/1R/x4 1.2 V 1
2933-MHz DIMMs
UCS-ML-256G8RT-H1 256 GB DDR4-2933-MHz LRDIMM/8Rx4/1.2v 1.2 V 8
UCS-ML-128G4RT-H 128 GB DDR4-2933-MHz LRDIMM/4Rx4 (16Gb) 1.2v 1.2 V 4
UCS-ML-X64G4RT-H 64 GB DDR4-2933-MHz LRDIMM/4Rx4 (8Gb) 1.2v 1.2 V 4
UCS-MR-X64G2RT-H 64 GB DDR4-2933-MHz RDIMM/2Rx4 (16Gb) 1.2v 1.2 V 2
UCS-MR-X32G2RT-H 32GB DDR4-2933-MHz RDIMM/2Rx4 (8Gb) 1.2v 1.2 V 2
UCS-MR-X16G1RT-H 16 GB DDR4-2933-MHz RDIMM/1Rx4 (8Gb) 1.2v 1.2 V 1
Intel® Optane™ DC Persistent Memory Products
UCS-MP-128GS-A0 Intel® Optane™ DC Persistent Memory, 128 GB, 2666 MHz
UCS-MP-256GS-A0 Intel® Optane™ DC Persistent Memory, 256 GB, 2666 MHz
UCS-MP-512GS-A0 Intel® Optane™ DC Persistent Memory, 512 GB, 2666 MHz
Intel® Optane™ DC Persistent Memory Operational Modes
UCS-DCPMM-AD App Direct Mode
UCS-DCPMM-MM Memory Mode
Memory Mirroring Option
N01-MMIRROR Memory mirroring option
Notes
1. The UCS-ML-256G8RT-H LRDIMM can be used only with 2nd Generation Intel® Xeon® scalable processor family
CPUs, not with Intel® Xeon® scalable processor family CPUs.
The DIMMs will be placed by the factory as shown in the following tables.

Total DIMMs | CPU 1 | CPU 2
8 | (A1, B1); (D1, E1) | (G1, H1); (K1, L1)
12 | (A1, B1, C1); (D1, E1, F1) | (G1, H1, J1); (K1, L1, M1)
16 | (A1, A2, B1, B2); (D1, D2, E1, E2) | (G1, G2, H1, H2); (K1, K2, L1, L2)
24 | (A1, A2, B1, B2, C1, C2); (D1, D2, E1, E2, F1, F2) | (G1, G2, H1, H2, J1, J2); (K1, K2, L1, L2, M1, M2)

Total DIMMs | CPU 3 | CPU 4
8 | (N1, P1); (R1, S1) | (U1, V1); (X1, Y1)
12 | (N1, P1, Q1); (R1, S1, T1) | (U1, V1, W1); (X1, Y1, Z1)
16 | (N1, N2, P1, P2); (R1, R2, S1, S2) | (U1, U2, V1, V2); (X1, X2, Y1, Y2)
24 | (N1, N2, P1, P2, Q1, Q2); (R1, R2, S1, S2, T1, T2) | (U1, U2, V1, V2, W1, W2); (X1, X2, Y1, Y2, Z1, Z2)
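In the factory placement above, bank 1 (slots ending in 1) always fills before bank 2; a sketch that encodes the CPU1/CPU2 rows and checks that property (slot lists transcribed from the table, so treat the encoding as illustrative):

```python
# Factory DIMM placement for the CPU1/CPU2 pair, keyed by total DIMM count
# (CPU3/CPU4 follow the same pattern with channels N-T and U-Z).
placement = {
    8:  ["A1", "B1", "D1", "E1", "G1", "H1", "K1", "L1"],
    12: ["A1", "B1", "C1", "D1", "E1", "F1",
         "G1", "H1", "J1", "K1", "L1", "M1"],
    16: ["A1", "A2", "B1", "B2", "D1", "D2", "E1", "E2",
         "G1", "G2", "H1", "H2", "K1", "K2", "L1", "L2"],
    24: ["A1", "A2", "B1", "B2", "C1", "C2", "D1", "D2", "E1", "E2", "F1", "F2",
         "G1", "G2", "H1", "H2", "J1", "J2", "K1", "K2", "L1", "L2", "M1", "M2"],
}
for count, slot_list in placement.items():
    assert len(slot_list) == count
    # A bank-2 slot is only used once its channel's bank-1 slot is populated.
    for s in (s for s in slot_list if s.endswith("2")):
        assert s[0] + "1" in slot_list
```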
NOTE: System performance is optimized when the DIMM type and quantity are equal
for both CPUs, and when all channels are filled equally across the CPUs in the server.
System Speeds
System speed is dependent on how many DIMMs are populated per channel and the CPU DIMM speed
support. See Table 5 for details.
Table 5 2666-MHz DIMM Memory Speeds with Different Intel® Xeon® Scalable Processors
DIMM and CPU Frequencies (MHz) | DPC | TSV-RDIMM (8Rx4) 128 GB (MHz) | TSV-RDIMM (4Rx4) 64 GB (MHz) | LRDIMM (4Rx4) 64 GB (MHz) | RDIMM (2Rx4) 32 GB (MHz) | LRDIMM (2Rx4) 32 GB (MHz)
Table 6 2933-MHz DIMM Memory Speeds with Different 2nd Generation Intel®Xeon® Scalable Processors
DIMM and CPU Frequencies (MHz) | DPC | LRDIMM (4Rx4) 128 GB (MHz) | LRDIMM (4Rx4) 64 GB (MHz) | RDIMM (2Rx4) 64 GB (MHz) | RDIMM (2Rx4) 32 GB (MHz)
DIMM Guidelines
■ System speed is dependent on the CPU DIMM speed support. Refer to Table 4 on page 14 for
DIMM speeds.
■ The B480 M5 server supports four different memory reliability, availability, and
serviceability (RAS) modes:
— Independent Channel Mode
— Mirrored Channel Mode
— Lockstep Channel Mode
— Rank Sparing Mode
NOTE: System performance is optimized when the DIMM type and quantity are equal
for both CPUs, and when all channels are filled equally across the CPUs in the server.
■ Memory can be configured in any number of DIMMs as pairs, though for optimal
performance, refer to the B480 Memory Guide at Cisco.com.
DCPMM Guidelines
■ DCPMMs require second generation Intel Xeon Scalable Family processors. First generation Xeon
Scalable processors do not support DCPMMs.
■ All installed DCPMMs must be the same size. Mixing DCPMMs of different capacities is not supported.
■ The use of 1Rx8 DIMMs with DCPMMs is not supported.
■ DCPMMs and DIMMs must be populated as shown in Table 7 (6 DIMMs per CPU with 2, 4, or 6 DCPMMs
per CPU, as shown).
Table 7 2nd Generation Intel® Xeon® Scalable Processor DIMM and DCPMM Physical Configurations (quad
socket)
DIMM to DCPMM Count per CPU 1 | iMC1 (Ch 2: F2 F1; Ch 1: E2 E1; Ch 0: D2 D1) | iMC0 (Ch 2: C2 C1; Ch 1: B2 B1; Ch 0: A2 A1)
6 to 2 DIMM DIMM DCPMM DIMM DIMM DIMM DCPMM DIMM
6 to 4 DIMM DCPMM DIMM DCPMM DIMM DIMM DCPMM DIMM DCPMM DIMM
6 to 6 DCPMM DIMM DCPMM DIMM DCPMM DIMM DCPMM DIMM DCPMM DIMM DCPMM DIMM
DIMM to DCPMM Count per CPU 2 | iMC1 (Ch 2: M2 M1; Ch 1: L2 L1; Ch 0: K2 K1) | iMC0 (Ch 2: J2 J1; Ch 1: H2 H1; Ch 0: G2 G1)
6 to 2 DIMM DIMM DCPMM DIMM DIMM DIMM DCPMM DIMM
6 to 4 DIMM DCPMM DIMM DCPMM DIMM DIMM DCPMM DIMM DCPMM DIMM
6 to 6 DCPMM DIMM DCPMM DIMM DCPMM DIMM DCPMM DIMM DCPMM DIMM DCPMM DIMM
DIMM to DCPMM Count per CPU 3 | iMC1 (Ch 2: T2 T1; Ch 1: S2 S1; Ch 0: R2 R1) | iMC0 (Ch 2: Q2 Q1; Ch 1: P2 P1; Ch 0: N2 N1)
6 to 2 DIMM DIMM DCPMM DIMM DIMM DIMM DCPMM DIMM
6 to 4 DIMM DCPMM DIMM DCPMM DIMM DIMM DCPMM DIMM DCPMM DIMM
6 to 6 DCPMM DIMM DCPMM DIMM DCPMM DIMM DCPMM DIMM DCPMM DIMM DCPMM DIMM
DIMM to DCPMM Count per CPU 4 | iMC1 (Ch 2: Z2 Z1; Ch 1: Y2 Y1; Ch 0: X2 X1) | iMC0 (Ch 2: W2 W1; Ch 1: V2 V1; Ch 0: U2 U1)
6 to 2 DIMM DIMM DCPMM DIMM DIMM DIMM DCPMM DIMM
6 to 4 DIMM DCPMM DIMM DCPMM DIMM DIMM DCPMM DIMM DCPMM DIMM
6 to 6 DCPMM DIMM DCPMM DIMM DCPMM DIMM DCPMM DIMM DCPMM DIMM DCPMM DIMM
App Direct Mode: The DCPMM operates as a solid-state disk storage device. Data is
saved and is non-volatile. Both DCPMM and DIMM capacities count towards CPU
tiering (the CPU capacity limit).
Memory Mode:1 The DCPMM operates as a 100% memory module. Data is volatile, and
DRAM acts as a cache for the DCPMMs. Only the DCPMM capacity counts towards CPU
tiering (the CPU capacity limit). This is the factory default mode.
Mixed Mode: DRAM acts as cache. Only the DCPMM capacity counts towards CPU
tiering (the CPU capacity limit).
Notes
1. For Memory Mode, the Intel-recommended DIMM to DCPMM capacity ratio in the same CPU socket is from 1:4 to
1:16.
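The 1:4 to 1:16 ratio in note 1 can be checked before ordering; a minimal sketch (the function name and example capacities are illustrative):

```python
def memory_mode_ratio_ok(dimm_gb: int, dcpmm_gb: int) -> bool:
    """Intel-recommended DIMM:DCPMM capacity ratio per socket for
    Memory Mode is between 1:4 and 1:16 (note 1 above)."""
    ratio = dcpmm_gb / dimm_gb
    return 4 <= ratio <= 16

# 6 x 32 GB DIMMs (192 GB) with 6 x 128 GB DCPMMs (768 GB) -> 1:4, recommended:
print(memory_mode_ratio_ok(192, 768))   # True
# 6 x 64 GB DIMMs (384 GB) with 2 x 128 GB DCPMMs (256 GB) -> below 1:4, not recommended:
print(memory_mode_ratio_ok(384, 256))   # False
```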
■ For each memory channel with both a DCPMM and a DIMM installed, the DCPMM is installed in
channel slot 2 (closest) and the DIMM is installed in channel slot 1.
■ To maximize performance, balance all memory channels.
■ In configurations with DCPMMs installed, memory mirroring is supported, with two restrictions:
■ Mirroring is only enabled on the DIMMs installed in the server; the DCPMMs themselves do
not support mirroring.
■ Only App Direct mode is supported. Memory mirroring cannot be enabled when DCPMMs are
in Memory Mode or Mixed Mode.
■ Memory sparing is not supported with DCPMMs installed.
https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/hw/blade-servers/B480M5/B480M5_chapter_011.html?bookSearch=true#concept_qtr_4xh_zgb
Storage Controller1,2,3
UCSB-MRAID12G1,4 Cisco FlexStorage 12G SAS RAID controller with drive bays
UCSB-MRAID12G-HE1,5 Cisco FlexStorage 12G SAS RAID controller with 2 GB flash-backed write cache and drive bays
UCSB-LSTOR-BK6 Cisco FlexStorage blanking panel w/o controller, w/o drive bays
Front GPU
Notes
1. To support a minimum of two hard disk drives (HDD) or solid-state drives (SSD), a Cisco FlexStorage 12G SAS
RAID controller or Cisco FlexStorage 12G SAS RAID controller with 2 GB flash-backed write cache with drive
bays is required. If support for four HDD or SSD drives is required, select a Cisco FlexStorage 12G SAS RAID
controller and a Cisco FlexStorage Passthrough module.
2. To support a minimum of two NVMe drives, a Cisco FlexStorage Passthrough module with drive bays is required.
To support four NVMe drives, select two Cisco FlexStorage Passthrough modules.
3. The two RAID cards and Passthrough are three different cards and cannot be upgraded or downgraded.
4. The Cisco FlexStorage 12G SAS RAID Controller is based on the LSI 3108 ROC and runs the LSI MegaRAID software
stack. It provides 12 Gbps RAID functionality for SAS/SATA SSD/HDD and has RAID 0, 1, and JBOD support.
5. The Cisco FlexStorage 12G SAS RAID controller with 2 GB Flash-backed write cache is based on the LSI 3108 ROC
and runs the LSI MegaRAID Software Stack. It provides 12 Gbps RAID Functionality for SAS/SATA HDD/SSD and has
RAID 0, 1, 5 and 6 support.
The flash-backed write cache provides RAID controller cache protection using NAND flash memory and a
supercapacitor. In the event of a power or server failure, cached data is automatically transferred from the
RAID controller DRAM write cache to flash. Once power is restored, the data in the NAND flash is copied
back into the DRAM write cache until it can be flushed to the disk drives.
6. For servers that do not need local storage, and where no storage controllers are included, storage blanking
panels are auto-populated as part of the configuration rules. To ensure proper airflow, drive blanking panels
must be installed when no drive bays are present and when GPUs are installed.
See Figure 4 on page 23 for a top view of the B480 M5 server with and without the Cisco
FlexStorage SAS RAID controller and drive bays. See Figure 5 on page 23 for a front view of
these configurations. When no drive bays are installed, blanking panels must be installed.
Figure 4 UCS B480 M5 With and Without Drive Bays (Top View)
RAID
Controller
(integrated Disk
in drive bay) Drive
Bays (4)
(a) Server with four drive bays (b) Server with no drive bays
Figure 5 UCS B480 M5 With and Without Drive Bays (Front View)
Blanking Panels
(a) Server with four drive bays (b) Server with no drive bays
Blanking Panel
Front Mezzanine Slot 1 | Front Mezzanine Slot 2 | Drives Supported | RAID Level
Cisco FlexStorage 12G SAS RAID controller | Cisco FlexStorage Passthrough Module | 4 x HDD/SSD, or 2 x HDD/SSD and 2 x NVMe | 0, 1, 10, 5, 6, 50, 60 (depending on number of drives in RAID set)
Notes
1. Any empty drive bays must be fitted with blanking panels to maintain proper cooling airflow.
NOTE: For supported rear mezzanine configurations, see STEP 6 CHOOSE REAR
MEZZANINE CARDS, page 30.
Choose Drives
The drives supported on the UCS B480 M5 are listed in Table 11.
Product ID (PID) | Description | Drive Type | Speed | Endurance | Size
HDD1
UCS-HD900G15K12G 900 GB 12G SAS 15K RPM SFF HDD SAS 15K RPM N/A 900 GB
UCS-HD600G15K12G 600 GB 12G SAS 15K RPM SFF HDD SAS 15K RPM N/A 600 GB
UCS-HD300G15K12G 300 GB 12G SAS 15K RPM SFF HDD SAS 15K RPM N/A 300 GB
UCS-HD24TB10KS4K 2.4 TB 12G SAS 10K RPM SFF HDD (4K) SAS 10K RPM N/A 2.4 TB
UCS-HD18TB10KS4K 1.8 TB 12G SAS 10K RPM SFF HDD (4K)2 SAS 10K RPM N/A 1.8 TB
UCS-HD12TB10K12G 1.2 TB 12G SAS 10K RPM SFF HDD SAS 10K RPM N/A 1.2 TB
UCS-HD600G10K12G 600 GB 12G SAS 10K RPM SFF HDD SAS 10K RPM N/A 600 GB
UCS-HD300G10K12G 300 GB 12G SAS 10K RPM SFF HDD SAS 10K RPM N/A 300 GB
SSD1
UCS-SD400G12S4-EP 400 GB 2.5 inch Ent Performance 12G SAS 12G Ent. Perf 400 GB
SAS SSD (10X endurance) (SanDisk 10X
Lightning II)
UCS-SD800G12S4-EP 800 GB 2.5 inch Ent. Performance 12G SAS 12G Ent. Perf 800 GB
SAS SSD (10X endurance) (Samsung 10X
1635)
UCS-SD16TB12S4-EP 1.6 TB 2.5 inch Ent. Performance 12G SAS 12G Ent. Perf 1.6 TB
SAS SSD (10X endurance) (SanDisk 10X
Lightning II)
UCS-SD800GSAS3-EP 800GB 2.5 inch Enterprise performance SAS 12G Ent. Perf 800 GB
12G SAS SSD(3X endurance) (Toshiba 3X
PX05)
UCS-SD16TSASS3-EP 1.6TB 2.5 inch Enterprise performance SAS 12G Ent. Perf 1.6 TB
12G SAS SSD(3X endurance) (Toshiba 3X
PX05)
UCS-SD400GH3-EP 400GB 2.5 inch Enterprise performance SAS 12G Ent. Perf 400 GB
12G SAS SSD (3X DWPD) 3X
UCS-SD800GH3-EP 800GB 2.5 inch Enterprise performance SAS 12G Ent. Perf 800 GB
12G SAS SSD (3X DWPD) 3X
UCS-SD16TH3-EP 1.6TB 2.5 inch Enterprise performance SAS 12G Ent. Perf 1.6 TB
12G SAS SSD (3X DWPD) 3X
UCS-SD32TH3-EP 3.2TB 2.5in Enterprise performance SAS 12G Ent. Perf 3.2 TB
12G SAS SSD (3X DWPD) 3X
UCS-SD16TB12S3-EP 1.6TB 2.5 inch Ent. Performance SATA 6G Ent. Value 1.6 TB
6GSATA SSD(3X endurance) 3X
UCS-SD800G12S3-EP 800GB 2.5 inch Enterprise performance SATA 6G Ent. Value 800 GB
6G SATA SSD(3 FWPD)
UCS-SD200G12S3-EP 200GB 2.5 inch Enterprise performance SATA 6G Ent. Value 200 GB
6G SATA SSD(3 FWPD)
UCS-SD480GSAS-EV 480 GB 2.5 inch Enterprise Value 12G SAS 12G Ent. Value 480 GB
SAS SSD (Toshiba PX05)
UCS-SD960GSAS-EV 960GB 2.5 inch Enterprise Value 12G SAS 12G Ent. Value 960 GB
SAS SSD (Toshiba PX05)
UCS-SD19TSAS-EV 1.9TB 2.5 inch Enterprise Value 12G SAS 12G Ent. Value 1.9 TB
SAS SSD
UCS-SD38TSAS-EV 3.8TB 2.5 inch Enterprise Value 12G SAS 12G Ent. Value 3.8 TB
SAS SSD (Toshiba PX05)
UCS-SD480GH1-EV 480GB 2.5 inch Enterprise Value 12G SAS 12G Ent. Value 480 GB
SAS SSD (1X endurance)
UCS-SD960GH1-EV 960GB 2.5 inch Enterprise Value 12G SAS 12G Ent. Value 960 GB
SAS SSD (1X endurance)
UCS-SD19TH1-EV 1.9TB 2.5 inch Enterprise Value 12G SAS 12G Ent. Value 1.9 TB
SAS SSD (1X endurance)
UCS-SD38TH1-EV 3.8TB 2.5 inch Enterprise Value 12G SAS 12G Ent. Value 3.8 TB
SAS SSD (1X endurance)
UCS-SD960GBKS4-EV 960GB 2.5 inch Enterprise Value 6G SATA 6G Ent. Value 960 GB
SATA SSD (Samsung PM863A)
UCS-SD38TBKS4-EV 3.8TB 2.5 inch Enterprise Value 6G SATA 6G Ent. Value 3.8 TB
SATA SSD (Samsung PM863A)
UCS-SD76TBMS4-EV 7.6TB 2.5 inch Enterprise Value 6G SATA 6G Ent. Value 7.6 TB
SATA SSD (Micron 5100)
UCS-SD38TBMS4-EV 3.8TB 2.5 inch Enterprise Value 6G SATA 6G Ent. Value 3.8 TB
SATA SSD (Micron 5100)
UCS-SD19TBMS4-EV 1.9TB 2.5 inch Enterprise Value 6G SATA 6G Ent. Value 1.9 TB
SATA SSD (Micron 5100)
UCS-SD16TBMS4-EV 1.6TB 2.5 inch Enterprise Value 6G SATA 6G Ent. Value 1.6 TB
SATA SSD (Micron 5100)
UCS-SD960GBMS4-EV 960GB 2.5 inch Enterprise Value 6G SATA 6G Ent. Value 960 GB
SATA SSD (Micron 5100)
UCS-SD480GBMS4-EV 480 GB 2.5 inch Enterprise Value 6G SATA 6G Ent. Value 480 GB
SATA SSD (Micron 5100)
UCS-SD240GBMS4-EV 240GB 2.5 inch Enterprise Value 6G SATA 6G Ent. Value 240 GB
SATA SSD (Micron 5100)
UCS-SD120GBMS4-EV 120 GB 2.5 inch Enterprise Value 6G SATA 6G Ent. Value 120 GB
SATA SSD (Micron 5100)
UCS-SD480GBIS6-EV 480GB 2.5 inch Enterprise Value 6G SATA 6G Ent. Value 480 GB
SATA SSD (Intel S4500)
UCS-SD960GBIS6-EV 960GB 2.5 inch Enterprise Value 6G SATA 6G Ent. Value 960 GB
SATA SSD (Intel S4500)
UCS-SD38TBIS6-EV 3.8TB 2.5 inch Enterprise Value 6G SATA 6G Ent. Value 3.8 TB
SATA SSD (Intel S4500)
UCS-SD480GBHBNK9 480GB Enterprise value SAS SSD (1X SAS Ent. Value 480 GB
FWPD, SED)
UCS-SD960GBHBNK9 960GB Enterprise value SAS SSD (1X SAS Ent. Value 960 GB
FWPD, SED)
UCS-SD38TBHBNK9 3.8TB Enterprise value SAS SSD (1X SAS Ent. Value 3.8 TB
FWPD, SED)
UCS-HD600G15K9 600GB 12G SAS 15K RPM SFF HDD (SED) SAS 15K RPM N/A 600 GB
UCS-HD12G10K9 1.2 TB 12G SAS 10K RPM SFF HDD (SED) SAS 10K RPM N/A 1.2 TB
UCS-HD18G10K9 1.8TB 12G SAS 10K RPM SFF HDD (4K SAS 10K RPM N/A 1.8 TB
format, SED)
UCS-HD24T10BNK9 2.4 TB 12G SAS 10K RPM SFF HDD (4K) SAS 10K RPM N/A 2.4 TB
SED
UCSB-NVMEHW-H7680 7.7TB 2.5in U.2 HGST SN200 NVMe High NVMe High High 7.7 TB
Perf. Value Endurance Perf Endurance
UCSB-NVMEHW-H6400 Cisco 2.5" U.2 6.4TB HGST SN200 NVMe NVMe High High 6.4 TB
High Perf. High Endurance Perf Endurance
UCSB-NVMEHW-H3200 Cisco 2.5" U.2 3.2TB HGST SN200 NVMe NVMe High High 3.2 TB
High Perf. High Endurance Perf Endurance
UCSB-NVMEHW-H1600 Cisco 2.5" U.2 1.6TB HGST SN200 NVMe NVMe High High 1.6 TB
High Perf. High Endurance Perf Endurance
UCSB-NVMEHW-H800 800GB 2.5in U.2 HGST SN200 NVMe NVMe High High 800 GB
High Perf. High Endurance Perf Endurance
UCSB-NVMEHW-I3200 Cisco 2.5" U.2 3.2TB Intel P4600 NVMe NVMe High Value 3.2 TB
High Perf. High Endurance Perf Endurance
UCSB-NVMEHW-I2000 Cisco 2.5" U.2 2TB Intel P4600 NVMe NVMe High Value 2.0 TB
High Perf. High Endurance Perf Endurance
UCSB-NVMEHW-I1600 Cisco 2.5" U.2 1.6TB Intel P4600 NVMe NVMe High Value 1.6 TB
High Perf. High Endurance Perf Endurance
UCSB-NVMEHW-I1000 Cisco 2.5" U.2 1TB Intel P4500 NVMe NVMe High Value 1.0 TB
High Perf. Value Endurance Perf Endurance
UCSB-NVMEHW-I2TBV Cisco 2.5" U.2 2TB Intel P4500 NVMe NVMe High Value 2.0 TB
High Perf. Value Endurance Perf Endurance
UCSB-NVMEHW-I4000 Cisco 2.5" U.2 4TB Intel P4500 NVMe NVMe High Value 4.0 TB
High Perf. Value Endurance Perf Endurance
UCSB-NVMEHW-I8000 Cisco 8TB 2.5in U.2 Intel P4500 NVMe NVMe High Value 8.0 TB
High Perf Value Endurance Perf Endurance
UCSB-NVMEXPB-I375 Cisco 2.5in U.2 375GB Intel P4800 NVMe Med Endurance 375 GB
NVMe Med. Perf Perf
UCSB-NVMEXP-I750 750GB 2.5in Intel Optane NVMe NVMe Extrm Endurance 750 GB
Extreme Perf Perf
UCSB-NVME2H-I1000 Cisco 2.5" U.2 1.0 TB Intel P4510 NVMe NVMe High Endurance 1.0 TB
High Perf. Value Endurance Perf
NOTE: Cisco uses solid state drives (SSDs) from a number of vendors. All solid state drives (SSDs) are subject to physical
write limits and have varying maximum usage limitation specifications set by the manufacturer. Cisco will not replace any
solid state drives (SSDs) that have exceeded any maximum usage specifications set by Cisco or the manufacturer, as
determined solely by Cisco.
Notes
1. HDDs and SSDs require either of the following storage controllers in the front mezzanine slot
UCSB-MRAID12G
UCSB-MRAID12G-HE
2. Drive format is 4K Native.
4K Native drives are not supported with VMware.
4K Native drives require UEFI Boot.
3. NVMe drives require the following storage controller in the front mezzanine slot: UCSB-LSTOR-PT.
4. If HDDs or SSDs are going to be in a RAID group, two identical drives are required.
5. If HDDs or SSDs are in JBOD mode, the drives can be different.
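The notes above amount to a small rule set. A minimal checker sketch follows; the PIDs and rules come from the notes, but the function name and structure are hypothetical:

```python
# Sketch of the drive-bay rules in the notes above (hypothetical helper).
SAS_SATA_CONTROLLERS = {"UCSB-MRAID12G", "UCSB-MRAID12G-HE"}  # note 1
NVME_CONTROLLER = "UCSB-LSTOR-PT"                             # note 3

def check_drive_config(drive_type, controller, is_4k_native=False,
                       boot_mode="UEFI", os_name="Linux"):
    """Return a list of rule violations for one drive selection."""
    problems = []
    if drive_type in ("HDD", "SSD") and controller not in SAS_SATA_CONTROLLERS:
        problems.append("HDDs/SSDs need UCSB-MRAID12G or UCSB-MRAID12G-HE")
    if drive_type == "NVMe" and controller != NVME_CONTROLLER:
        problems.append("NVMe drives need UCSB-LSTOR-PT")
    if is_4k_native:
        if boot_mode != "UEFI":
            problems.append("4K Native drives require UEFI boot")
        if os_name == "VMware":
            problems.append("4K Native drives are not supported with VMware")
    return problems

print(check_drive_config("NVMe", "UCSB-MRAID12G"))
# -> ['NVMe drives need UCSB-LSTOR-PT']
```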
Supported Configurations
■ Select up to 4 of the drives listed in Table 11. Selection is dependent on the number of drive
bays selected in CHOOSE OPTIONAL FRONT MEZZANINE: Storage Controller or GPU on page
22.
Notes
■ When creating a RAID volume, mixing different capacity drives causes the system to use the
lowest-capacity drive.
■ Mixing of drive types is supported, but performance may be impacted.
■ Multiple RAID volumes are supported. RAID volumes should use the same media type.
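The lowest-capacity note above can be made concrete with a quick sketch (hypothetical helper; real controllers also reserve some metadata space):

```python
# Illustration of the note above: when drives of different capacities share
# a RAID volume, each member is treated as the lowest-capacity drive.
def raid_usable_gb(capacities_gb, level):
    """Approximate usable capacity in GB, ignoring controller overhead."""
    n = len(capacities_gb)
    effective = min(capacities_gb)  # lowest-capacity drive governs
    if level == 0:
        return effective * n
    if level == 1:
        if n != 2:
            raise ValueError("RAID 1 here uses two drives")
        return effective
    raise ValueError("unsupported level in this sketch")

print(raid_usable_gb([1200, 600], 0))  # -> 1200, not 1200 + 600
```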
NOTE: The mezzanine cards listed in this section are installed in the three rear
mezzanine connectors.
■ Mezzanine slot 1 is the rear mLOM connector
■ Mezzanine slot 2 is rear mezzanine connector 1
■ Mezzanine slot 3 is rear mezzanine connector 2
See System Board on page 53 for the locations of these connectors.
Cisco has developed 1300 and 1400 series Virtual Interface Cards (VICs) to provide flexibility to
create multiple NIC and HBA devices. The VIC features are listed here:
— 1300 and 1400 Series VICs include enhancements such as network overlay offload
support for NVGRE and VXLAN, and RoCE services. 1300 Series VICs support PCIe Gen
3.0 for greater bandwidth.
— Two Converged Network Adapter (CNA) ports, supporting both Ethernet and FCoE.
— Delivers up to 80 Gbps total I/O throughput to the server.
— VICs support either a maximum of dual 4x10Gbps Unified I/O ports or 2x40 (native)
Gbps Unified I/O ports.
— Supports up to 256 fully functional unique and independent PCIe adapters and
interfaces.
— Provides virtual machine visibility from the physical network and a consistent
network operations model for physical and virtual servers.
— Supports customer requirements for a wide range of operating systems and
hypervisors.
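The throughput bullets above are easy to sanity-check: two Unified I/O ports, each either a 4x10 Gbps port channel or a native 40 Gbps link, yield the stated 80 Gbps total. A minimal sketch (illustrative names only):

```python
# Back-of-envelope check of the VIC throughput bullets above.
def vic_total_gbps(ports=2, lanes_per_port=4, lane_gbps=10):
    """Aggregate VIC throughput for the given Unified I/O port layout."""
    return ports * lanes_per_port * lane_gbps

assert vic_total_gbps() == 80                                # dual 4x10 Gbps
assert vic_total_gbps(lanes_per_port=1, lane_gbps=40) == 80  # dual native 40G
print(vic_total_gbps())  # -> 80
```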
For additional capabilities please refer to the following VIC data sheets link:
https://www.cisco.com/c/en/us/support/servers-unified-computing/ucs-virtual-
interface-card/tsd-products-support-series-home.html?dtid=osscdc000283
Cisco UCS Storage Accelerators are designed specifically for the Cisco UCS B-Series M5
blade servers and integrate seamlessly to improve performance and relieve I/O
bottlenecks.
■ GPUs
NVIDIA GPUs are designed specifically for the Cisco UCS B Series M5 blade servers and
provide outstanding processing power to offload the CPUs.
To help ensure that your operating system is compatible with the cards you have selected,
please check the Hardware Compatibility List at this link:
http://www.cisco.com/en/US/products/ps10477/prod_technical_reference_list.html
The supported rear mezzanine adapters in the UCS B480 M5 are listed in Table 12.
UCSB-MLOM-40G-031 UCS VIC 1340 modular LOM for blade servers mLOM
UCSB-VIC-M84-4P Cisco UCS VIC 1480 mezzanine adapter Rear
Mezzanine
UCSB-VIC-M83-8P Cisco UCS VIC 1380 mezzanine adapter Rear
Mezzanine
Cisco Storage Accelerators (rear mezzanine)
UCSB-F-H-5607 UCS Blade PCIe/NVMe Storage Mezz 560 GB High Endurance Rear
Mezzanine
UCSB-F-H-32003 UCS Blade PCIe/NVMe Storage Mezz 3200 GB Medium Endurance Rear
Mezzanine
Port Expander Card for VIC Option
UCSB-MLOM-PT-01 [2] Cisco UCS Port Expander Card for VIC. This is a hardware option
to enable an additional 4 ports of the VIC 1340 or 1440, bringing the total
capability of the VIC 1340 or 1440 to a dual native 40G interface or a dual
4x10GbE port-channeled interface. Rear Mezzanine
GPUs
UCSB-GPU-P6-R NVIDIA GRID P6 Rear Mezzanine Rear
Mezzanine
Notes
1. The VIC 1440/1480 are supported with the 6200/6300/6400 Series Fabric Interconnects.
2. The Port Expander is not supported with the 2408XP Fabric Extender
Supported Configurations
Table 13 and Table 14 on page 33 show the supported adapter combinations. The
configuration rules are summarized as follows:
NOTE: CPU1 controls mezzanine slot 1 (mLOM), CPU 3 controls mezzanine slot 2, and
CPU 4 controls mezzanine slot 3.
Each row lists the slot 2 and slot 3 contents for the given mezzanine slot 1 (mLOM) adapter,
followed by the aggregate bandwidth (Gb/s) with 2 x 2408XP, 2 x 2304[4], 2 x 2208XP, and
2 x 2204XP IO modules. Bracketed numbers are table footnotes.
Mezzanine Slot 2[2] | Mezzanine Slot 3[3] | 2 x 2408XP | 2 x 2304[4] | 2 x 2208XP | 2 x 2204XP
VIC 1340 in mezzanine slot 1[1] (Fabric Interconnects: 62xx, 63xx, or 6454[6]):
GPU or Not Populated | GPU or Not Populated | 40 | 40[5] | 40[2] | 20
Port Expander | GPU or Not Populated | Not Supported | 80[7] | 80[8] | 40[2]
GPU or Not Populated | 1380 | 40 | 120[9] | 120[10] | 60[11]
Port Expander | 1380 | Not Supported | 160[12] | 160[13] | 80[14]
Cisco UCS Storage Accelerator | GPU or Not Populated | 40 | 80[15] | 80[12] | 40[16]
GPU or Not Populated | Cisco UCS Storage Accelerator | 40 | 40[2] | 40[2] | 20
VIC 1440 in mezzanine slot 1[1] (Fabric Interconnects: 62xx, 6332, 6332-16UP, or 6454[3]):
GPU or Not Populated | GPU or Not Populated | 40 | 40[2] | 40[2] | 20
Port Expander | GPU or Not Populated | 40 | 80[4,17] | Not Supported | Not Supported
GPU or Not Populated | 1480[3] | 40 | 120[6] | 120[7] | 60[8]
Port Expander | 1480[3] | Not Supported | 160[9,18] | Not Supported | Not Supported
Cisco UCS Storage Accelerator | GPU or Not Populated | 40 | 80[19] | 40[2] | 20
GPU or Not Populated | Cisco UCS Storage Accelerator | 40 | 40[2] | 40[2] | 20
Note: for the configurations in the following rows, slots 2 and 3 must have identical types of storage cards.
VIC 1340 | Cisco UCS Storage Accelerator | Cisco UCS Storage Accelerator | 40 | 80[12] | 80[12] | 40 (FI: 62xx, 63xx, or 6454[3])
VIC 1440[3] | Cisco UCS Storage Accelerator | Cisco UCS Storage Accelerator | 40 | 80[16] | 40[2] | 20 (FI: 62xx, 6332, 6332-16UP, or 6454[3])
Notes
1. Rear mLOM connector (see System Board on page 53)
2. Rear mezzanine connector 1 (see System Board on page 53)
3. Rear mezzanine connector 2 (see System Board on page 53)
4. The 2304 is only supported with the FI-6300
5. This is two 2x10Gbps port-channeled
6. The FI-6454 and/or VIC 1400 requires UCSM 4.0(1) or greater
7. This is a dual native 40G interface
8. This is four 4x10Gbps port-channeled
9. Mezz slot 1 (mLOM) is two 2x10Gbps port-channeled, and VIC 1x80 is a dual native 40G interface
Notes
1. The SD modular adapter (PID UCS-MSTOR-SD) is auto-included in CCW and is not selectable.
Supported Configurations
(1) Select one or two Cisco Flexible Flash secure digital cards (do not mix SD cards).
(2) If you select SDHC cards, you cannot select M.2 SATA SSDs
UCS-M2-240GB 240 GB M.2 SATA SSD Supported via SW RAID on Intel PCH
UCS-M2-960GB 960 GB M.2 SATA SSD Supported via SW RAID on Intel PCH
Notes
1. The M.2 modular adapter (PID UCS-MSTOR-M2) is auto-included in CCW and is not selectable.
2. Select the UCS-M2-HWRAID boot-optimized RAID controller modular adapter if you want to implement
hardware RAID across the SATA SSD drives. The Boot-Optimized RAID controller supports VMware, Windows,
and Linux operating systems.
Caveats
(1) Select either one or two M.2 SATA SSD cards. Do not mix M.2 card types.
(2) If you select M.2 SATA SSDs, you cannot select SDHC cards.
(3) VMware does not support software RAID. In this case, an M.2 Drive can still be used as a boot
device without mirroring.
(4) Select the UCS-M2-HWRAID modular adapter to support SATA SSD hardware RAID. Hardware
RAID is supported only on the 240 GB M.2 SATA SSDs.
NOTE:
■ The controller supports RAID 1 and JBOD mode
■ The controller is available only with 240GB M.2 SSDs.
■ Cisco IMC/UCS Manager can be used to configure volumes and to monitor the controller
and the installed SATA M.2 drives
■ The minimum Cisco IMC and Cisco UCS Manager version that supports this controller
is 4.0(4). The name of the RAID controller in the software is MSTOR-RAID
■ The SATA M.2 drives can boot in UEFI mode only. Legacy boot mode is not supported.
■ Hot-plug replacement is not supported. The server must be powered off.
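Taken together, the SD/M.2 caveats and the NOTE above form a small set of mutually exclusive choices. A sketch of the selection rules follows (the function name and argument shapes are hypothetical):

```python
# Sketch of the boot-media rules above: SD cards and M.2 SATA SSDs are
# mutually exclusive, hardware RAID needs UCS-M2-HWRAID and 240 GB M.2
# drives, and VMware cannot use software RAID.
def check_boot_media(sd_cards=0, m2_drives=(), hw_raid=False, os_name="Linux"):
    """m2_drives is a tuple of capacities in GB; returns rule violations."""
    problems = []
    if sd_cards and m2_drives:
        problems.append("Do not mix SD cards and M.2 SATA SSDs")
    if len(set(m2_drives)) > 1:
        problems.append("Do not mix M.2 card types")
    if hw_raid and any(cap != 240 for cap in m2_drives):
        problems.append("Hardware RAID is supported only on 240 GB M.2 SSDs")
    if not hw_raid and os_name == "VMware" and len(m2_drives) == 2:
        problems.append("VMware has no software RAID; boot from a single M.2 "
                        "without mirroring or add UCS-M2-HWRAID")
    return problems

print(check_boot_media(m2_drives=(960, 960), hw_raid=True))
# -> ['Hardware RAID is supported only on 240 GB M.2 SSDs']
```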
NOTE: A clearance of 0.950 inches (24.1 mm) is required for the USB device to be
inserted and removed (see Figure 6)
Select
UCS Director
CUIC-PHY-SERV-BM-U Cisco UCS Director Resource Lic - 1 Phy Sevr node bare metal
CUIC-PHY-SERV-U Cisco UCS Director Resource Lic - One physical Server node
N1K-VSG-UCS-BUN Nexus 1000V Adv Edition for vSphere Paper License Qty 1
VMware vCenter
StorMagic
UCS-STORM-6TB-1S StorMagic SvSAN Software License - 6TB, 1 Node; 1Yr SnS Reqd
UCS-STORM-6TB-3S StorMagic SvSAN Software License - 6TB, 1 Node; 3Yr SnS Reqd
UCS-STORM-6TB-5S StorMagic SvSAN Software License - 6TB, 1 Node; 5Yr SnS Reqd
UCS-STORM-12TB-1S StorMagic SvSAN Software License - 12TB, 1Node; 1Yr SnS Reqd
UCS-STORM-12TB-3S StorMagic SvSAN Software License - 12TB, 1Node; 3Yr SnS Reqd
UCS-STORM-12TB-5S StorMagic SvSAN Software License - 12TB, 1Node; 5Yr SnS Reqd
UCS-STORM-2TA-1S StorMagic SvSAN Adv SW License - 2TB, 1 Node; 1Yr SnS Reqd
UCS-STORM-2TA-3S StorMagic SvSAN Adv SW License - 2TB, 1 Node; 3Yr SnS Reqd
UCS-STORM-2TA-5S StorMagic SvSAN Adv SW License - 2TB, 1 Node; 5Yr SnS Reqd
UCS-STORM-6TA-1S StorMagic SvSAN Adv SW License - 6TB, 1 Node; 1Yr SnS Reqd
UCS-STORM-6TA-3S StorMagic SvSAN Adv SW License - 6TB, 1 Node; 3Yr SnS Reqd
UCS-STORM-6TA-5S StorMagic SvSAN Adv SW License - 6TB, 1 Node; 5Yr SnS Reqd
UCS-STORM-12TA-1S StorMagic SvSAN Adv SW License - 12TB, 1 Node; 1Yr SnS Reqd
UCS-STORM-12TA-3S StorMagic SvSAN Adv SW License - 12TB, 1 Node; 3Yr SnS Reqd
UCS-STORM-12TA-5S StorMagic SvSAN Adv SW License - 12TB, 1 Node; 5Yr SnS Reqd
UCS-STORM-UTA-1S StorMagic SvSAN Adv SW Lic - Unlim TB, 1Node; 1Yr SnS Reqd
UCS-STORM-UTA-3S StorMagic SvSAN Adv SW Lic - Unlim TB, 1Node; 3Yr SnS Reqd
UCS-STORM-UTA-5S StorMagic SvSAN Adv SW Lic - Unlim TB, 1Node; 5Yr SnS Reqd
MSWS-16-ST16C-NS Windows Server 2016 Standard (16 Cores/2 VMs) - No Cisco SVC
MSWS-16-ST24C-NS Windows Server 2016 Standard (24 Cores/2 VMs) - No Cisco SVC
MSWS-19-ST16C-NS Windows Server 2019 Standard (16 Cores/2 VMs) - No Cisco SVC
Red Hat
RHEL-2S2V-1A Red Hat Enterprise Linux (1-2 CPU,1-2 VN); 1-Yr Support Req
RHEL-2S2V-3A Red Hat Enterprise Linux (1-2 CPU,1-2 VN); 3-Yr Support Req
RHEL-2S2V-5A Red Hat Enterprise Linux (1-2 CPU,1-2 VN); 5-Yr Support Req
RHEL-VDC-2SUV-1A RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 1 Yr Supp Req
RHEL-VDC-2SUV-3A RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 3 Yr Supp Req
RHEL-VDC-2SUV-5A RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 5 Yr Supp Req
RHEL-2S2V-1S Red Hat Enterprise Linux (1-2 CPU,1-2 VN); Prem 1-Yr SnS
RHEL-2S2V-3S Red Hat Enterprise Linux (1-2 CPU,1-2 VN); Prem 3-Yr SnS
RHEL-2S-SFS-1S RHEL Scalable File System (1-2 CPU); Premium 1-yr SnS
RHEL-2S-SFS-3S RHEL Scalable File System (1-2 CPU); Premium 3-yr SnS
RHEL-VDC-2SUV-1S RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 1 Yr SnS Reqd
RHEL-VDC-2SUV-3S RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 3 Yr SnS Reqd
RHEL-SAP-2S2V-1S RHEL for SAP Apps (1-2 CPU, 1-2 VN); Prem 1-Yr SnS
RHEL-SAP-2S2V-3S RHEL for SAP Apps (1-2 CPU, 1-2 VN); Prem 3-Yr SnS
RHEL-SAPH-2S2V-1S RHEL for SAP Hana (1-2 CPU, 1-2 VN); Prem 1-Yr SnS
RHEL-SAPH-2S2V-3S RHEL for SAP Hana (1-2 CPU, 1-2 VN); Prem 3-Yr SnS
RHEL-SAPHHAS2S-1S RHEL for SAP Hana,HA,SmartM (1-2 CPU & VN); Std 1Yr SnS Reqd
RHEL-SAPHHAS2S-3S RHEL for SAP Hana,HA,SmartM (1-2 CPU & VN); Std 3Yr SnS Reqd
RHEL-SAPHHAP2S-1S RHEL for SAP Hana,HA,SmartM (1-2 CPU &VN); Prem 1Yr SnS Reqd
RHEL-SAPHHAP2S-3S RHEL for SAP Hana,HA,SmartM (1-2 CPU &VN); Prem 3Yr SnS Reqd
VMware
SUSE
SLES-2S2V-1A SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); 1-Yr Support Req
SLES-2SUV-1A SUSE Linux Enterprise Svr (1-2 CPU,Unl VM); 1-Yr Support Req
SLES-2S2V-3A SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); 3-Yr Support Req
SLES-2SUV-3A SUSE Linux Enterprise Svr (1-2 CPU,Unl VM); 3-Yr Support Req
SLES-2S2V-5A SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); 5-Yr Support Req
SLES-2SUV-5A SUSE Linux Enterprise Svr (1-2 CPU,Unl VM); 5-Yr Support Req
SLES-2S2V-1S SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); Prio 1-Yr SnS
SLES-2SUV-1S SUSE Linux Enterprise Svr (1-2 CPU,Unl VM); Prio 1-Yr SnS
SLES-2S2V-3S SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); Prio 3-Yr SnS
SLES-2SUV-3S SUSE Linux Enterprise Svr (1-2 CPU,Unl VM); Prio 3-Yr SnS
SLES-2S2V-5S SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); Prio 5-Yr SnS
SLES-2SUV-5S SUSE Linux Enterprise Svr (1-2 CPU,Unl VM); Prio 5-Yr SnS
SLES-2S-HA-1S SUSE Linux High Availability Ext (1-2 CPU); 1yr SnS
SLES-2S-HA-3S SUSE Linux High Availability Ext (1-2 CPU); 3yr SnS
SLES-2S-HA-5S SUSE Linux High Availability Ext (1-2 CPU); 5yr SnS
SLES-2S-GC-1S SUSE Linux GEO Clustering for HA (1-2 CPU); 1yr Sns
SLES-2S-GC-3S SUSE Linux GEO Clustering for HA (1-2 CPU); 3yr SnS
SLES-2S-GC-5S SUSE Linux GEO Clustering for HA (1-2 CPU); 5yr SnS
SLES-2S-LP-1S SUSE Linux Live Patching Add-on (1-2 CPU); 1yr SnS Required
SLES-2S-LP-3S SUSE Linux Live Patching Add-on (1-2 CPU); 3yr SnS Required
SLES-2S-LP-1A SUSE Linux Live Patching Add-on (1-2 CPU); 1yr Support Req
SLES-2S-LP-3A SUSE Linux Live Patching Add-on (1-2 CPU); 3yr Support Req
SLES-SAP-2S2V-1A SLES for SAP Apps (1-2 CPU, 1-2 VM); 1-Yr Support Reqd
SLES-SAP-2SUV-1A SLES for SAP Apps (1-2 CPU, Unl VM); 1-Yr Support Reqd
SLES-SAP-2S2V-3A SLES for SAP Apps (1-2 CPU, 1-2 VM); 3-Yr Support Reqd
SLES-SAP-2SUV-3A SLES for SAP Apps (1-2 CPU, Unl VM); 3-Yr Support Reqd
SLES-SAP-2S2V-5A SLES for SAP Apps (1-2 CPU, 1-2 VM); 5-Yr Support Reqd
SLES-SAP-2SUV-5A SLES for SAP Apps (1-2 CPU, Unl VM); 5-Yr Support Reqd
SLES-SAP-2S2V-1S SLES for SAP Apps (1-2 CPU, 1-2 VM); Priority 1-Yr SnS
SLES-SAP-2SUV-1S SLES for SAP Apps (1-2 CPU, Unl VM); Priority 1-Yr SnS
SLES-SAP-2S2V-3S SLES for SAP Apps (1-2 CPU, 1-2 VM); Priority 3-Yr SnS
SLES-SAP-2SUV-3S SLES for SAP Apps (1-2 CPU, Unl VM); Priority 3-Yr SnS
SLES-SAP-2S2V-5S SLES for SAP Apps (1-2 CPU, 1-2 VM); Priority 5-Yr SnS
SLES-SAP-2SUV-5S SLES for SAP Apps (1-2 CPU, Unl VM); Priority 5-Yr SnS
Table 22 OS Media
MSWS-16-ST16C-RM Windows Server 2016 Stan (16 Cores/2 VMs) - Recovery Media
MSWS-16-ST24C-RM Windows Server 2016 Stan (24 Cores/2 VMs) - Recovery Media
MSWS-19-ST16C-RM Windows Server 2019 Stan (16 Cores/2 VMs) Rec Media DVD Only
MSWS-19-DC16C-RM Windows Server 2019 DC (16Cores/Unlim VM) Rec Media DVD Only
For systems that include Unified Computing System Manager, the support service includes
downloads of UCSM upgrades. The Cisco Smart Net Total Care for UCS Service includes flexible
hardware replacement options, including replacement in as little as two hours. There is also
access to Cisco's extensive online technical resources to help maintain optimal efficiency and
uptime of the unified computing environment. For more information please refer to the following
URL: http://www.cisco.com/c/en/us/services/technical/smart-net-total-care.html?stickynav=1
Note: For PID UCSB-B480-M5-U, select Service SKU with BB480M5U suffix (Example: CON-PREM- BB480M5U)
For PID UCSB-B480-M5-CH, select Service SKU with B480M5CH suffix (Example: CON-PREM- B480M5CH)
*Includes Drive Retention (see UCS Drive Retention Service on page 51)
**Includes Local Language Support (see Local Language Technical Support for UCS on page 51) – Only
available in China and Japan
***Includes Local Language Support and Drive Retention – Only available in China and Japan
Smart Net Total Care for Cisco UCS Onsite Troubleshooting Service
An enhanced offer over traditional Smart Net Total Care, this service provides onsite troubleshooting
expertise to aid in the diagnosis and isolation of hardware issues within our customers’ Cisco
Unified Computing System (UCS) environment. It is delivered by a Cisco Certified field engineer
(FE) in collaboration with a remote TAC engineer and a Virtual Internetworking Support Engineer
(VISE). You can choose a desired service listed in Table 24.
Table 24 SNTC for Cisco UCS Onsite Troubleshooting Service (PID UCSB-B480-M5)
Note: For PID UCSB-B480-M5-U, select Service SKU with BB480M5U suffix (Example: CON-PREM- BB480M5U)
For PID UCSB-B480-M5-CH, select Service SKU with B480M5CH suffix (Example: CON-PREM- B480M5CH)
*Includes Drive Retention (see UCS Drive Retention Service on page 51)
**Includes Local Language Support (see Local Language Technical Support for UCS on page 51) – Only available in China and Japan
***Includes Local Language Support and Drive Retention – Only available in China and Japan
This service centralizes support across your multivendor Cisco environment for both our
products and solution partner products you’ve deployed in your ecosystem. Whether there is an
issue with a Cisco or solution partner product, just call us. Our experts are the primary point of
contact and own the case from first call to resolution. For more information please refer to the
following URL: http://www.cisco.com/c/en/us/services/technical/solution-support.html?stickynav=1
Note: For PID UCSB-B480-M5-U, select Service SKU with BB480M5U suffix (Example: CON-PREM- BB480M5U)
For PID UCSB-B480-M5-CH, select Service SKU with B480M5CH suffix (Example: CON-PREM- B480M5CH)
*Includes Drive Retention (see UCS Drive Retention Service on page 51)
Note: For PID UCSB-B480-M5-U, select Service SKU with BB480M5U suffix (Example: CON-PREM-B480M5U)
For PID UCSB-B480-M5-CH, select Service SKU with B480M5CH suffix (Example: CON-PREM- B480M5CH)
*Includes Drive Retention (see UCS Drive Retention Service on page 51)
**Includes Local Language Support (see Local Language Technical Support for UCS on page 51) – Only available in China and Japan
***Includes Local Language Support and Drive Retention – Only available in China and Japan
customers. Cisco PSS provides partners with access to Cisco's support infrastructure and assets to
help them:
• Expand their service portfolios to support the most complex network environments
• Lower delivery costs
• Deliver services that increase customer loyalty
PSS options enable eligible Cisco partners to develop and consistently deliver high-value
technical support that capitalizes on Cisco intellectual assets. This helps partners to realize
higher margins and expand their practice.
PSS is available to all Cisco PSS partners.
PSS for UCS provides hardware and software support, including triage support for third party
software, backed by Cisco technical resources and level three support. You can choose a desired
service listed in Table 27.
Note: For PID UCSB-B480-M5-U, select Service SKU with BB480M5U suffix (Example: CON-PREM- BB480M5U)
For PID UCSB-B480-M5-CH, select Service SKU with B480M5CH suffix (Example: CON-PREM- B480M5CH)
*Includes Drive Retention (see UCS Drive Retention Service on page 51)
Note: For PID UCSB-B480-M5-U, select Service SKU with BB480M5U suffix (Example: CON-PREM- BB480M5U)
For PID UCSB-B480-M5-CH, select Service SKU with B480M5CH suffix (Example: CON-PREM- B480M5CH)
*Includes Drive Retention (see UCS Drive Retention Service on page 51)
Note: For PID UCSB-B480-M5-U, select Service SKU with BB480M5U suffix (Example: CON-PREM- BB480M5U)
For PID UCSB-B480-M5-CH, select Service SKU with B480M5CH suffix (Example: CON-PREM- B480M5CH)
Sophisticated data recovery techniques have made classified, proprietary, and confidential
information vulnerable, even on malfunctioning disk drives. The Drive Retention service enables
you to retain your drives and ensures that the sensitive data on those drives is not compromised,
which reduces the risk of any potential liabilities. This service also enables you to comply with
regulatory, local, and federal requirements.
If your company has a need to control confidential, classified, sensitive, or proprietary data, you
might want to consider one of the Drive Retention Services listed in the above tables (where
available).
NOTE: Cisco does not offer a certified drive destruction service as part of this
service.
For a complete listing of available services for Cisco Unified Computing System, see the following
URL: http://www.cisco.com/en/US/products/ps10312/serv_group_home.html
The local KVM I/O cable ordering information is listed in Table 30.
N20-BKVM= Local KVM I/O cable for UCS servers console port
1 Connector (to server front panel) 3 VGA connector (for a monitor)
NOTE: The blade chassis ships with the local KVM I/O cable.
SUPPLEMENTAL MATERIAL
System Board
A top view of the UCS B480 M5 system board is shown in Figure 8.
1 Front CPU (CPU1 and CPU2) sockets (populated)
2 Rear CPU (CPU3 and CPU4) sockets (not populated)
7 USB connector (populated)
8 mLOM connector (rear mezzanine slot 1)
Figure 10 shows how channels are physically laid out on the blade server. The DIMM slots are contiguous to
their associated CPU.
■ CPU 1: channels A–F (Bank 1 slots A1–F1, Bank 2 slots A2–F2)
■ CPU 2: channels G, H, J, K, L, M
■ CPU 3: channels N, P, Q, R, S, T
■ CPU 4: channels U, V, W, X, Y, Z
48 DIMMs total: 6 memory channels per CPU, 2 DIMMs per channel.
NOTE: All non-populated DIMM slots must have a DIMM blank (UCS-DIMM-BLK)
installed.
Each CPU controls six memory channels and 12 DIMM slots, as follows:
■ Each channel has two DIMM slots (for example, channel A = slots A1 and A2).
— A channel can operate with one or two DIMMs installed.
■ Populate the DIMM slots of each CPU identically.
■ Any DIMM installed in a DIMM socket for which the CPU is absent is not recognized.
■ Observe the DIMM mixing rules shown in Table 31
DIMM Capacity (RDIMM = 16, 32, or 64 GB; LRDIMM = 32, 64, or 128 GB; TSV-RDIMM = 64 or 128 GB)
— Same channel: DIMMs in the same channel (for example, A1 and A2) can have different capacities.
Do not mix TSV-RDIMMs with LRDIMMs or RDIMMs.
— Same slot[1]: For best performance, DIMMs in the same slot (for example, A1, B1, C1, D1, E1, F1)
should have the same capacity. Do not mix TSV-RDIMMs with LRDIMMs or RDIMMs.
DIMM Speed (2666 MHz or 2933 MHz)
— DIMMs will run at the lowest speed of the CPU installed (applies in the same channel and in the
same slot).
DIMM Type (TSV-RDIMMs, RDIMMs, or LRDIMMs)
— Do not mix DIMM types in a channel or in a slot.
DIMMs per Channel (DPC) (1 DPC or 2 DPC)
— See CHOOSE MEMORY on page 13 for valid LRDIMM and RDIMM 1 DPC and 2 DPC memory configurations.
Notes
1. Although different DIMM capacities can exist in the same slot, this will result in less than optimal performance.
For optimal performance, all DIMMs in the same slot should be identical.
For more details, see the document titled “Cisco UCS B480 M5 Memory Guide,” available at
https://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-b-series-blade-servers/datashee
t-listing.html
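The per-channel rules above can be sketched as a simple check (hypothetical names; only the type-mixing rule is enforced here, since capacities may legitimately differ within a channel):

```python
# Sketch of the Table 31 channel rules, using the channel/slot naming from
# the memory layout above (channel A = slots A1 and A2, and so on).
def check_channel(dimm_a1, dimm_a2):
    """Each DIMM is a (type, capacity_gb) tuple for one channel's two slots."""
    problems = []
    if dimm_a1[0] != dimm_a2[0]:
        problems.append("Do not mix DIMM types in a channel")
    # Capacities MAY differ within a channel; matching capacities across the
    # same slot of each channel (A1, B1, ...) is a performance recommendation.
    return problems

print(check_channel(("RDIMM", 32), ("RDIMM", 64)))   # -> [] (capacities may differ)
print(check_channel(("RDIMM", 32), ("LRDIMM", 32)))
# -> ['Do not mix DIMM types in a channel']
```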
NETWORK CONNECTIVITY
For network connectivity details, please review the Cisco UCS 5108 Blade Server Chassis Spec Sheet.
SPARE PARTS
This section lists the upgrade and service-related parts for the UCS B480 M5 server. Some of these parts are
configured with every server or with every UCS 5108 blade server chassis.
Rack
RACK-BLANK-001= Blanking panels (qty 12), 1U, plastic, toolless. Recommended to ensure proper
airflow. Fill all empty RU spaces in the front of the rack. Because each blanking
panel PID includes 12 panels, use the following calculation: 42RU - occupied RU =
available RU. Divide available RU by 12 to determine PID order quantity.
RACK-CBLMGT-001= Cable mgt D rings (qty 10), metal. Use the D rings to bundle system cables to ensure
proper airflow.
RACK-CBLMGT-003= Brush strip (qty 1), 1U. The brush strip promotes proper airflow while allowing
cables to be passed from the front to the rear of the rack.
RACK-CBLMGT-011= Cable mgt straps (qty 10), Velcro. Use the Velcro straps to bundle system cables to
ensure proper airflow.
RACK2-JOIN-001= Rack joining kit. Use the kit to connect adjacent racks within a row. Order 1 unit
less than the number of racks in the row.
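The two order-quantity rules in the Rack section above (blanking panels and joining kits) can be written out as a quick sketch (illustrative function names):

```python
import math

def blanking_panel_pids(total_ru=42, occupied_ru=0):
    """Each RACK-BLANK-001= PID contains 12 panels; cover all open RU."""
    available_ru = total_ru - occupied_ru
    return math.ceil(available_ru / 12)

def rack_joining_kits(racks_in_row):
    """Order one RACK2-JOIN-001= less than the number of racks in the row."""
    return max(racks_in_row - 1, 0)

print(blanking_panel_pids(occupied_ru=30))  # 12 RU open -> 1 PID
print(rack_joining_kits(4))                 # -> 3
```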
CPU Accessories
UCSB-HS-M5-F CPU Heat Sink for UCS B-Series M5 CPU socket (Front)
UCSB-HS-M5-R CPU Heat Sink for UCS B-Series M5 CPU socket (Rear)
UCS-CPU-TIM= Single CPU thermal interface material syringe for M5 server HS seal
UCSX-HSCK= UCS Processor Heat Sink Cleaning Kit (when replacing a CPU)
Memory
UCS-MR-X32G2RTHM= 32GB RDIMM DRx4 2933 (8Gb), Micron only (not on B480 Spec sheet)
UCS-MR-X16G1RTHM= 16GB RDIMM SRx4 2933 (8Gb), Micron only (not on B480 Spec sheet)
Storage Controller
UCSB-MRAID12G= Cisco FlexStorage 12G SAS RAID controller with drive bays
UCSB-MRAID12G-HE= Cisco FlexStorage 12G SAS RAID controller with 2 GB flash-backed write cache and
drive bays
UCSB-LSTOR-BK= Cisco FlexStorage blanking panel w/o controller, w/o drive bays
Drives
HDDs
Enterprise Performance SSDs (High endurance, supports up to 10X or 3X DWPD (drive writes per day))
SAS SSDs
UCS-SD800GSAS3-EP= 800GB 2.5 inch Enterprise performance 12G SAS SSD(3X endurance) (Toshiba PX05)
UCS-SD16TSASS3-EP= 1.6TB 2.5 inch Enterprise performance 12G SAS SSD(3X endurance) (Toshiba PX05)
UCS-SD400GH3-EP= 400GB 2.5 inch Enterprise performance 12G SAS SSD (3X DWPD)
UCS-SD800GH3-EP= 800GB 2.5 inch Enterprise performance 12G SAS SSD (3X DWPD)
UCS-SD16TH3-EP= 1.6TB 2.5 inch Enterprise performance 12G SAS SSD (3X DWPD)
UCS-SD32TH3-EP= 3.2TB 2.5in Enterprise performance 12G SAS SSD (3X DWPD)
SATA SSDs
UCS-SD480GIS3-EP= 480GB 2.5in Enterprise performance 6G SATA SSD(3X endurance) (Intel S4600)
UCS-SD960GIS3-EP= 960GB 2.5in Enterprise performance 6G SATA SSD(3X endurance) (Intel S4600)
UCS-SD19TIS3-EP= 1.9TB 2.5in Enterprise performance 6G SATA SSD(3X endurance) (Intel S4600)
UCS-SD76TBMS4-EV= 7.6TB 2.5 inch Enterprise Value 6G SATA SSD (Micron 5100)
UCS-SD480GBIS6-EV= 480GB 2.5 inch Enterprise Value 6G SATA SSD (Intel S4500)
UCS-SD960GBIS6-EV= 960GB 2.5 inch Enterprise Value 6G SATA SSD (Intel S4500)
UCS-SD38TBIS6-EV= 3.8TB 2.5 inch Enterprise Value 6G SATA SSD (Intel S4500)
UCSB-MLOM-40G-04= UCS VIC 1440 modular LOM for blade servers mLOM
UCSB-MLOM-40G-03= UCS VIC 1340 modular LOM for blade servers mLOM
GPUs
Power Cables
Software/Firmware
IMC Supervisor
CIMC-SUP-B01= IMC Supervisor-Branch Mgt SW for C-Series & E-Series up to 100 Svrs
CIMC-SUP-B02= IMC Supervisor- Branch Mgt SW for C & E-Series up to 250 Svrs
CIMC-SUP-A01= IMC Supervisor Adv-Branch Mgt SW for C & E-Series 100 Svrs
CIMC-SUP-A02= IMC Supervisor Adv-Branch Mgt SW for C & E-Series 250 Svrs
CIMC-SUP-A10= IMC Supervisor Adv-Branch Mgt SW for C & E-Series 1000 Svrs
CIMC-SUP-A25= IMC Supervisor Adv-Branch Mgt SW for C & E-Series 250 Svrs
NOTE: If you must order a quantity greater than 1 of UCS-MDMGR-1S, you need to reference the UCS Central Per
Server Data Sheet to order the standalone PIDs: UCS-MDMGR-LIC= or UCS-MDMGR-1DMN=
VMware vCenter
MSWS-16-ST16C-RM= Windows Server 2016 Stan (16 Cores/2 VMs) Rec Media DVD Only
MSWS-16-DC16C-RM= Windows Server 2016 DC (16Cores/Unlim VM) Rec Media DVD Only
MSWS-16-DC24C-RM= Windows Server 2016 DC (24Cores/Unlim VM) Rec Media DVD Only
MSWS-16-ST24C-RM= Windows Server 2016 Stan (24 Cores/2 VMs) - Recovery Media
Red Hat
VMware
VMW-VSP-EPL-1S= VMware vSphere 6 Ent Plus (1 CPU), 1-yr Vmware SnS Reqd
VMW-VSP-EPL-3S= VMware vSphere 6 Ent Plus (1 CPU), 3-yr Vmware SnS Reqd
SUSE
StorMagic
(1) Have the following tools and materials available for the procedure:
(3) Carefully remove and replace the CPU and heatsink in accordance with the instructions
found in “Cisco UCS B480 M5 Blade Server Installation and Service Note,” found at:
https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/hw/blade-servers/B480M5/B
480M5_chapter_011.html#task_xhs_hcz_tz.
(1) Have the following tools and materials available for the procedure:
(3) Order one heat sink for each new CPU. Order PID UCSB-HS-M5-F= for front CPUs and PID
UCSB-HS-M5-R= for rear CPUs.
(4) Carefully install the CPU and heatsink in accordance with the instructions found in “Cisco
UCS B480 M5 Blade Server Installation and Service Note,” found at:
https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/hw/blade-servers/B480M5/B
480M5_chapter_011.html#task_xhs_hcz_tz.
(1) Order new DIMMs or DCPMMs as needed from Table 4 on page 14.
(3) Open both connector latches and remove and replace the DIMM/DCPMM or blank as needed.
(4) Press evenly on both ends of the DIMM/DCPMM until it clicks into place in its slot.
NOTE: Ensure that the notch in the DIMM/DCPMM aligns with the slot. If the notch is
misaligned, it is possible to damage the DIMM/DCPMM, the slot, or both.
(5) Press the connector latches inward slightly to seat them fully.
(6) Populate all slots with a DIMM, DCPMM, or DIMM blank. A slot cannot be empty.
For additional details on replacing or upgrading DIMMs and DCPMMs, see the “Cisco UCS B480 M5
Blade Server Installation and Service Note,” found at these links:
https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/hw/blade-servers/B480M5/B
480M5_chapter_011.html#concept_a3p_vfl_kz
https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/hw/blade-servers/B480M5/B
480M5_chapter_011.html#concept_qtr_4xh_zgb
TECHNICAL SPECIFICATIONS
Dimensions and Weight
Power Specifications
For configuration-specific power specifications, use the Cisco UCS Power Calculator at:
http://ucspowercalc.cisco.com
NOTE: When using 256 GB DDR DIMMs (UCS-ML-256G8RT-H) in this server, the blade-level power capping
must be set to 1300 W. For information about blade-level power capping, see the Power Capping and Power
Management chapter in the Cisco UCS Manager Server Management Guide for your release: Cisco UCS
Manager Configuration Guides.
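The power-capping NOTE above reduces to a one-line check; a minimal sketch (hypothetical function name, PID from the note):

```python
# Sketch of the power-capping NOTE: configurations with 256 GB DDR DIMMs
# (UCS-ML-256G8RT-H) must run with a 1300 W blade-level power cap.
def required_power_cap_w(dimm_pids):
    """Return the mandated blade power cap in watts, or None if no cap applies."""
    if "UCS-ML-256G8RT-H" in dimm_pids:
        return 1300
    return None

print(required_power_cap_w(["UCS-ML-256G8RT-H"] * 24))  # -> 1300
```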