Cisco UCS Design - Deployment
CCIE#18612 Security
roxadiaz@cisco.com
© 2010 Cisco and/or its affiliates. All rights reserved.
Agenda
Unified Computing
UCS Overview
Summary
Evolution of the Mini-Rack Architecture
Traditional blade designs duplicate infrastructure: a rack for every 16 servers is divided into mini-racks (Blade Mini-Rack 1 and Blade Mini-Rack 2, each with 16 blade servers), and each mini-rack carries its own multiple management modules.
Cisco UCS—Reducing Complexity
Traditional designs duplicate MGMT, SAN, and LAN connections per enclosure: additional LAN and SAN connections, additional management connections, multiple Ethernet connections, and multiple SAN connections. Cisco UCS instead:
• Embeds management
• Removes unnecessary switches, adapters, and management modules
• Unifies the fabric: network, storage, and management
• Reduces power and cooling: 1/3rd less infrastructure, lower power
Unified Fabric
A traditional design stacks three switching layers: a rack switch, a blade switch, and a virtual switch, plus separate management, Fibre Channel, and Ethernet fabrics. The Cisco® Fabric Extender architecture collapses these into one network, one layer: Cisco Fabric Interconnects, Cisco Fabric Extenders, and Cisco Virtual Interface Cards.
[Timeline: Intel processor generations Merom, Penryn, Nehalem, Westmere, Sandy Bridge]
Agenda
Unified Computing
UCS Overview
Summary
How is Cisco UCS doing?
Cisco UCS Performance-63 World Records
A History of World Record Performance on Industry Standard Benchmarks
• Oracle E-Business Suite: Extra-Large Model Payroll Batch (B200 M2, B230 M2, B200 M3); Medium Model Order-to-Cash (B200 M2); Medium Model Payroll Batch (B200 M2); Large Model Order-to-Cash (B200 M3)
• TPC-C: Oracle DB 11g & OEL (C250 M2)
• TPC-H 1000GB: Microsoft SQL Server (C460 M2)
• TPC-H 300GB and 100GB: VectorWise (C250 M2)
• SPECjEnterprise2010: best overall (B440 M1); 2-node (B440 M2)
• SPECjAppServer2004: 1-node 2-socket (C250 M2); 2-node (B230 M1)
• SPECjbb2005: x86 2-socket (B200 M2, B230 M1, C220 M3); x86 4-socket (C460 M1)
Cisco UCS benchmarks that held world-record performance records as of date of publication.
Gartner Magic Quadrant
[Chart: customer count (0-600) by fiscal year, split into new and repeat customers, for FY10, FY11, FY12 (to date), and total]
Integrated Solutions
Power of the Ecosystem
Validated stacks span Enterprise Apps, Databases, Business Analytics (SAP HANA & BWA), Virtual Desktop, and RISC Migration workloads, layering applications, operating systems, virtualization, and management on top of Unified Computing. The infrastructure combines Cisco UCS B-Series, Cisco Nexus® family switches, and NetApp FAS over 10 GE & FCoE (the Complete Bundle with UCS Manager).
Key components:
• 6100 and 6200 Series Fabric Interconnects: high-performance scalability; low-latency, multi-purpose Ethernet-based fabric; data center network convergence
• Virtual Adapters: consolidate multiple NICs and HBAs; VM-FEX for VM-aware networking; pass-through switching and hypervisor bypass
• 2100 and 2200 Series Fabric Extenders: data center network convergence; simplified connectivity; exceptional bandwidth
[Diagram: redundant Fabric Interconnects uplinked to compute chassis through Fabric Extenders and adapters, all under UCS Manager]
• UCS Manager: embedded device manager for the family of UCS components
• Chassis: up to 8 half-width blades or 4 full-width blades
• Fabric Extender: up to 160 Gb/s with flexible bandwidth allocation
[Diagram: a Cisco UCS domain with Fabric Interconnects, Fabric Extenders, compute chassis, compute nodes, and IO adapters under a single UCS Manager]
Unified Management at Scale:
• Embedded device manager for the family of UCS components
• Enables stateless computing via Service Profiles
• Efficient scale: same effort for 1 to 160 blades
• APIs for integration with new and existing data center infrastructure
Programmatic Infrastructure
• Comprehensive XML API with standards-based interfaces
• Bi-directional access to physical and logical internals
The XML API exposes system status, physical inventory, and logical inventory to OS and software stacks: application stack management, third-party management, service orchestration, provisioning and configuration, and monitoring and analysis.
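As a concrete illustration of the XML API above, here is a minimal sketch of composing an aaaLogin request and extracting the session cookie from its response. The credentials and the canned response body are placeholders; a real client would POST the request body to the UCS Manager endpoint over HTTPS.

```python
# Sketch: compose a UCS Manager XML API login request and parse the
# response. Credentials and the sample response are placeholders.
import xml.etree.ElementTree as ET

def build_login_request(username: str, password: str) -> str:
    """Serialize an <aaaLogin> request element."""
    req = ET.Element("aaaLogin", inName=username, inPassword=password)
    return ET.tostring(req, encoding="unicode")

def parse_login_response(body: str) -> str:
    """Extract the session cookie from an <aaaLogin> response."""
    resp = ET.fromstring(body)
    if resp.get("errorCode"):
        raise RuntimeError(resp.get("errorDescr", "login failed"))
    return resp.attrib["outCookie"]

# Canned response for illustration only -- not captured from a live system.
sample = '<aaaLogin cookie="" response="yes" outCookie="1300000000/abcd" />'
cookie = parse_login_response(sample)
```

The returned cookie would then accompany every subsequent query or configuration call for the life of the session.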
[Diagram: UCS Manager cluster formed over the Fabric Interconnects' Mgmt 0 and L1/L2 clustering links, with an ISO/IMG repository and Organizations managed by UCS Manager]
UCS Orgs, Locales, and User Roles
1. Create Organizations
2. Create a Locale (a collection of one or more Organizations)
3. Create a User
4. Assign the User to Locale(s)
5. Assign User Role(s)
Locales are created from Orgs and establish organizational boundaries.
[Diagram: redundant Fabric Interconnects and Fabric Extenders connecting up to 160 UCS blade servers; each blade's CNA presents vHBAs and Ethernet interfaces (Eth0/Eth1) to the OS]
Backplane bandwidth per fabric scales from 10 to 80 Gb, giving 20G, 40G, 80G, or 160G per chassis depending on the number of fabric links.
10G CNA options
Storage protocol stacks over a single physical wire: native FC carries SCSI over FCP directly; iSCSI carries SCSI over TCP/IP; FCIP tunnels FC frames over TCP/IP; FCoE carries the FC frame directly over Ethernet, with less overhead than FCIP or iSCSI.
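The overhead argument above can be made concrete by laying out an FCoE frame: the native FC frame rides directly in an Ethernet frame (EtherType 0x8906) with no TCP/IP layers in between. In this sketch the MAC addresses and FC payload are placeholder bytes, and the FCoE encapsulation header is simplified.

```python
# Illustrative FCoE frame layout: Ethernet header + simplified FCoE
# encapsulation (version/reserved bytes, SOF, FC frame, EOF). Values
# are placeholders; the real FC-BB-5 header layout has more structure.
import struct

FCOE_ETHERTYPE = 0x8906

def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    sof, eof = b"\x2e", b"\x41"
    return eth_header + bytes(13) + sof + fc_frame + eof + bytes(3)

frame = fcoe_frame(b"\x0e\xfc\x00\x00\x00\x01",
                   b"\x00\x25\xb5\x00\x00\x0a",
                   b"\x00" * 36)
# Note: no 20-byte IP header and no 20-byte TCP header appear anywhere
# in the frame, unlike iSCSI or FCIP.
```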
FCoE Exceeds FC Performance by over 25%
Testing done with Nexus 5000 and 2nd Generation CNAs.
[Charts: throughput (MB/s, 0-1,200) vs. block size (2K-64K) for the QLE 8200 and the OneConnect UCNA; 10G FCoE reads and writes exceed 8G FC reads and writes]
FEX Integration—Management Plane
[Diagram: a single management plane spanning up to 32 FEXes attached to rack servers, blade servers, hypervisors, and virtual machines]
UCS-FI-E16UP
• 16 "Unified Ports"
• Ports can be configured as either Ethernet or native FC ports
• Ethernet operates at 1/10 Gigabit Ethernet
• Fibre Channel operates at 8/4/2/1G
• Uses existing Ethernet SFP+ and Cisco 8/4/2G and 4/2/1G FC optics
LAN Connectivity
Spanning Tree:
• Server vNICs are pinned to an uplink port
• No Spanning Tree Protocol: reduces CPU load on upstream switches and control-plane load on the 6100
• Simplified upstream connectivity: UCS connects to the LAN like a server, not like a switch
MAC Learning:
• Maintains a MAC table for servers only (e.g., vEth 1 and vEth 3 in VLAN 10), easing MAC table sizing in the access layer
• Allows multiple active uplinks per VLAN, doubling effective bandwidth vs. STP
• Toward the servers, the Fabric Interconnect behaves like a normal Layer 2 switch
Local Switching: One Consideration
For servers in the same VLAN whose vNICs are routed through different switches (one server goes to Switch A, the other goes to Switch B), switching involves the external infrastructure.
[Diagram: traffic between servers on 6100A and 6100B traverses the upstream LAN]
UCS Fabric Interconnects
• Cisco UCS 6140/6120: forward compatible with second-generation I/O modules
• UCS 6248 FI: 1 Tb switching throughput; 48 ports in 1RU; Unified Ports; investment protection
• UCS 6296 FI: 2 Tb switching throughput; 96 ports in 2RU; Unified Ports; investment protection
I/O Module Comparison (2104XP / 2208XP 40-port / 2204XP)
Host ports:           8 / 32 / 16
Network ports:        4 / 8 / 4
Classes of service:   4 (3 enabled) / 8 / 8
Resiliency:           EtherChannels HI > NI only, 4 ports / both directions, 8 ports / both directions, 8 ports
Adapter redundancy:   1 (mLOM only) / mLOM and mezzanine / mLOM and mezzanine
2104XP: 40 Gb per fabric. 2208XP: 80 Gb per fabric.
[Photos: chassis rear views showing fan modules, power supplies (N10-PAC1-550W), and blade slots 1-8]
§ HIFs are statically pinned by the system to individual fabric ports.
§ Only 1, 2, 4 and 8 links are supported; 3, 5, 6, 7 are not valid configurations.
Discrete Links
Static Pinning (IOM-FI)
• Static pinning is done by the system, dependent on the number of fabric ports
• Applicable to both 6100/6200 and 2104XP/2208XP
[Diagram: blades 1-8 each pinned via server ports to a specific fabric port on the Fabric Interconnect]
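The pinning behavior above can be sketched in a few lines: with 1, 2, 4, or 8 fabric ports, each blade's host interface is deterministically mapped to one fabric port. The modulo rule here is an illustrative approximation of the system's pinning, not Cisco's exact internal algorithm.

```python
# Sketch of static pinning: blade slot -> fabric port. Approximation only.
def pin_blade(blade: int, fabric_ports: int) -> int:
    """Return the fabric port (0-based) a blade slot (1-8) is pinned to."""
    if fabric_ports not in (1, 2, 4, 8):
        raise ValueError("only 1, 2, 4, or 8 fabric ports are valid")
    return (blade - 1) % fabric_ports

# With 4 fabric ports, blades 1 and 5 share port 0, blades 2 and 6 share
# port 1, and so on.
pins = {blade: pin_blade(blade, 4) for blade in range(1, 9)}
```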
Discrete Links
Fabric Port Failure
• When a fabric port fails, the HIFs pinned to it are brought down
• Other blades are unaffected
[Diagram: one failed fabric port on the IOM takes down only the blades pinned to it]
Port-Channel (IOM-FI)
• HIFs are pinned to the port-channel rather than to an individual link
• Ethernet flows are distributed by hashing L2 DA/SA, VLAN, L3 DA/SA, and L4 DP/SP
• FCoE flows are distributed by hashing L2 SA/DA, L2 VLAN, FC SID/DID, and FC-OXID
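The per-flow distribution described above can be sketched as a hash over the flow's header fields, so every packet of a flow lands on the same member link. The hash below is purely illustrative, not Cisco's hardware hash polynomial.

```python
# Sketch of per-flow load balancing across a port-channel. Illustrative hash.
def select_link(flow: tuple, num_links: int) -> int:
    """Map a flow tuple (src_mac, dst_mac, vlan, src_ip, dst_ip, sport,
    dport) to one member link of the port-channel."""
    return hash(flow) % num_links

flow = ("0025.b500.000a", "0025.b500.000b", 10,
        "10.0.0.1", "10.0.0.2", 49152, 3260)
link = select_link(flow, 8)
# The same flow always hashes to the same link, preserving packet order
# while spreading distinct flows across all members.
assert select_link(flow, 8) == link
```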
Port-Channel (IOM-FI)
Link Failure
§ Blades remain pinned to the port-channel on a link failure; traffic re-hashes across the surviving member links
[Diagram: 6200 Fabric Interconnect with one failed port-channel member; all blades stay up]
Fabric Ports: Discrete vs. Port Channel Mode
160 Gb (Discrete Mode):
§ Servers can only use a single 10GE IOM uplink
§ Bandwidth range per blade: 0 to 20 Gb
§ A blade is pinned to a discrete 10 Gb uplink
§ Fabric Failover if a single uplink goes down
§ Per-blade traffic distribution (same as Balboa)
§ Suitable for traffic-engineering use cases
160 Gb (Port Channel Mode):
§ Servers can utilize all 8x 10GE IOM uplinks
§ Bandwidth range per blade: 0 to 160 Gb
§ A blade is pinned to a logical interface of 80 Gbps
§ Fabric Failover if all uplinks on the same side go down
§ Per-flow traffic distribution within a port-channel
§ Recommended with VIC 1280; suitable for most environments
Chassis
§ 6 RU / 32" deep
§ Up to 8 half-slot blades or 4 full-slot blades
§ 8x fans
§ 2x chassis IO modules
§ All devices hot-pluggable
Power Supplies:
§ 4x 2,500W hot-plug power supplies
§ 90+% efficient
§ N+N redundancy
§ Single-phase 220V
[Diagram: 6U enclosure with redundant hot-swap fan modules and fabric extender/IOMs, fed by a 1U or 2U fabric switch; midplane 63% open; blade and PSU connectors shown]
[Diagram: UCS domain with Fabric Interconnects, Fabric Extenders, chassis, and adapters]
Adapter options per blade: an mLOM (Cota) presents vNIC1-vNIC4 as eth0-eth3 plus HBA 0/HBA 1 to the host, with or without a mezzanine adapter. LoM configurations range from 2 x 1Gb through 4 x 1Gb and 2 x 1Gb + 2 x 10Gb, up to 2 x 10Gb.
[Diagram: a generic CNA presents two vmnics and two vHBAs (vmnic0/vmnic1, vhba0/vhba1) per 10GE fabric; the Cisco VIC presents many vNICs (vmnic0-vmnic7) plus vhba0/vhba1 across FEX A and FEX B]
• Adapter Failover feature: on a failure of the primary path, the vNIC is mapped to the standby port transparently to the OS
• Security and scalability improvements: no need to trunk all VLANs to the server interface
• Support matrix: Nexus 5500 (NX-OS 5.1(3)N1(1)) and UCS C-Series servers

Nexus-5548(config)# int veth6
Nexus-5548(config-if)# shut
Nexus-5548(config-if)# no shut
UCS 1280 Virtual Interface Card
• 2nd-generation VIC
• Dual 4x10 Gbps connectivity into fabrics A and B across the UCS 5108 chassis backplane (up to 2 x 80 Gb toward FI-A/FI-B; up to 2 x 40 Gb to the host, OS dependent)
• PCIe x16 Gen2 host interface
• Capable of 256 PCIe devices
[Diagram: vNICs 5-116 mapped across physical ports 1-8 to the OS on the server blade]
UCS Topology Designs for Max Bandwidth
Choose the Fabric Interconnect + IO Module + VIC adapter combination for your needs:

1. UCS 6248UP or UCS 6100 + UCS 2104 IOM + 1280 VIC or M81KR
• Shared IOM uplink bandwidth of 10 Gbps; vNIC burst up to 10 Gbps
• Shared IOM uplink with 1, 3, or 7 other servers; host port pinned to a discrete IOM uplink

2. UCS 6248UP or UCS 6100 + UCS 2208 IOM + M81KR (discrete)
• Dedicated IOM uplink bandwidth of 10 Gbps; vNIC burst up to 10 Gb
• Dedicated IOM uplink; host port pinned to a discrete IOM uplink

3. UCS 6248UP or UCS 6100 + UCS 2208 IOM + M81KR (port channel)
• Shared IOM port-channel bandwidth of 20-80 Gbps; vNIC burst up to 10 Gb
• Shared IOM port-channel with 8 servers; host port pinned to the IOM port-channel

4. UCS 6248UP + UCS 2208 IOM + UCS 1280 VIC (discrete)
• Dedicated IOM uplink bandwidth of 10 Gbps; vNIC burst up to 10 Gbps (IOM uplink limitation)
• Dedicated IOM uplink; host port-channel pinned to a discrete IOM uplink

5. UCS 6248UP + UCS 2208 IOM + UCS 1280 VIC (port channel)
• Shared IOM port-channel bandwidth of 20-80 Gbps; vNIC burst up to 40 Gbps (PCIe Gen 2 limitation of 64)
• Shared IOM port-channel with 8 servers; host port-channel pinned to the IOM port-channel
UCS Fabric-Based NIC Teaming
Fabric Failover Enhances Multi-Hypervisor Clouds
A blade's CNA presents two vNICs (e.g., Local Area Connection 0 and 1) backed by fabrics A and B. After failover, the fabric:
• Transmits a gratuitous ARP (GARP) on behalf of the vNIC
• Re-registers multicast group memberships
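To make the GARP step above concrete, the sketch below builds a gratuitous ARP frame like the one the fabric would emit so upstream switches relearn the vNIC MAC on the new port. The frame fields follow the standard ARP layout; the MAC and IP values are placeholders.

```python
# Sketch of a gratuitous ARP frame: broadcast ARP request in which the
# sender and target IPs are identical. Placeholder MAC/IP values.
import struct
import socket

def gratuitous_arp(mac: bytes, ip: str) -> bytes:
    broadcast = b"\xff" * 6
    eth = broadcast + mac + struct.pack("!H", 0x0806)   # EtherType: ARP
    ip_bytes = socket.inet_aton(ip)
    # htype=Ethernet(1), ptype=IPv4, hlen=6, plen=4, op=1 (request);
    # identical sender and target IPs are what make it "gratuitous".
    arp = (struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)
           + mac + ip_bytes + broadcast + ip_bytes)
    return eth + arp

frame = gratuitous_arp(b"\x00\x25\xb5\x00\x00\x01", "1.1.1.1")
```

When switches on the new path receive this frame, they update their MAC tables immediately instead of waiting for the old entry to age out.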
SERVER IDENTITY
A server's identity is abstracted into the service profile: NIC MACs, HBA WWNs, server UUID, VLAN assignments and tagging, FC fabric assignments, FC boot parameters, quantity of NICs, boot order, PXE settings, IPMI settings, QoS, NIC-to-switch-port mappings, VLANs and VSANs, security policies, MACs and WWNs, and the server name.
Service Profile Workflow
1. Subject matter experts (storage, server, and network SMEs) define policies: server name, UUID, MAC, WWN, boot information, LAN/SAN config, firmware policy
2. Policies are used to create service profile templates (server, storage, network, virtualization, and application policies)
3. Templates create service profiles
4. Associating a service profile with hardware configures the server automatically
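The workflow above can be modeled as a simple data structure: a service profile captures identity and policy, and association stamps that identity onto a physical blade. The field names here are a small, hypothetical subset of the real profile contents.

```python
# Illustrative model of a service profile; field names are a hypothetical
# subset of the 120+ settings a real profile carries.
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class ServiceProfile:
    name: str
    uuid: str
    macs: tuple
    wwns: tuple
    boot_order: tuple
    firmware: str
    blade: Optional[str] = None   # None until associated with hardware

template = ServiceProfile("HR-App", "1122AB", ("A1", "B1"), ("C1", "D1"),
                          ("SAN", "PXE"), "2B")
# Step 4: association binds the profile (and thus the server identity) to
# a concrete blade; re-associating it elsewhere moves the identity intact.
associated = replace(template, blade="chassis-1/blade-3")
```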
UCS Service Profile Migration
[Animated slide: a service profile—its BIOS settings, firmware version, UUID, MACs, and WWNs—moves with a host OS or migrating VM from one physical server to another. The OS image boots from SAN across FC fabrics A & B; the UCS Fabric Interconnect carries FC fabrics A & B and VLANs 1-5 to the external network fabric.]
A UCS Service Profile:
120+ server settings in a single object, including:
• BIOS: quiet boot, post-error pause, resume A/C on power loss, front-panel lockout, Turbo Boost, ACPI10 support, Enhanced Intel SpeedStep, Hyper-Threading, core multiprocessing, Virtualization Technology (VT), execute-disable bit, direct cache access, processor C state (C1E, C3 report, C6 report), CPU performance, max variable MTRR setting, VT for Directed IO, interrupt remap, coherency support, ATS support, passthrough DMA support, memory RAS config, NUMA, low-voltage DDR mode, serial port A state, USB settings, PCI max memory below 4G, PCI memory-mapped IO above 4Gb, boot-option retry, UCSM boot-order rule control, Intel Entry SAS RAID and RAID module, assert NMI on SERR/PERR, OS boot watchdog timer, console redirection, flow control, BAUD rate, terminal type, legacy OS redirect
• NIC: VLAN tagging settings and VLAN assignment per NIC, transmit rate limiting, MAC address assignment, MTU, transmit/receive queue counts and ring sizes, completion queues, interrupts, interrupt mode/coalescing/timer, transmit and receive checksum offload, TCP segmentation offload, TCP large receive offload, receive side scaling (RSS), failback timeout, QoS settings and host-control option per NIC, action on switch uplink failure, distribution Ethernet switch uplink assignment per NIC (pin group), MAC security per NIC, enable/disable Cisco Discovery Protocol for VMware vSwitch, fabric failover (NIC teaming) settings, number of vNICs and dynamic vNICs (for VMware pass-through), settable vNIC/FlexNIC speed (reflected in OS), PXE boot setting
• HBA: number of vHBAs, WWPN and WWNN assignment, Fibre Channel SAN membership, Fibre Channel boot parameters, distribution FC switch uplink assignment per HBA (pin group) and persistent binding, maximum data field size (MTU), transmit and receive queue ring sizes, SCSI I/O queues and queue ring size, FCP error recovery, FLOGI retries and timeout, PLOGI retries and timeout, port-down timeout and IO retry, link-down timeout, IO throttle count, max LUNs per target, interrupt mode, QoS priority, burst size, rate limit, and host-control option
• Firmware: server BIOS, Ethernet adapter, Fibre Channel adapter, HBA option ROM, storage controller, remote management controller (e.g., HP iLO)
• Other: server UUID, virtual server serial number, server boot order (HDD, CD-ROM, SAN, USB, floppy, PXE), local HDD RAID configuration, number of iSCSI interfaces, disk scrub and BIOS scrub actions, thresholds for monitored statistics, server pool assignment, maintenance policy, IPMI usernames, passwords, and user roles, server management IP address, serial-over-LAN configuration, power control policy (capping and priority), PCIe bus device scan order and PCIe virtual device slot placement for NICs/HBAs
Server Availability
[Diagram: physical inventory and policy inventory backing service profiles (e.g., Default 1, HR-App1)]
Performance optimized: an integrated design for any type of workload.