Cisco UCS Design - Deployment PDF

Cisco Unified Computing and Virtualization:

Architecture, Design and Deployment


Roxana Diaz
Consulting Systems Engineer – Mexico

CCIE#18612 Security

roxadiaz@cisco.com
© 2010 Cisco and/or its affiliates. All rights reserved. 1
Agenda

Unified Computing

UCS Overview

Summary
Evolution of the Mini-Rack Architecture
Traditional Duplicate Infrastructure
Rack for Every 16 Servers
Divide into Mini-Racks

Blade Mini-Rack 1
(16 blade servers)

Blade Mini-Rack 2
(16 blade servers)



Legacy Blade Architecture
Over the Past 10 Years

§  An evolution of size, not thinking
§  More servers and switches than ever
§  More switches per server
§  Management applied, not integrated

An Accidental Architecture
§  Result: Complexity
§  More points of management
§  More difficult to maintain policy coherence
§  More difficult to secure
§  More difficult to scale

[Figure: legacy chassis stack with additional LAN and SAN connections, additional management connections, multiple Ethernet connections, multiple SAN connections, separate remote management per chassis, and multiple management modules]
Cisco UCS—Reducing Complexity

•  Embed management
•  Remove unnecessary: switches, adapters, management modules
•  Unify the fabric: network, storage, mgmt
•  Power and cooling: 1/3rd less infrastructure, lower power
•  Built for virtualization: processor density, VM/host ratio, I/O improvement, extended memory

[Figure: the legacy chassis stack (additional LAN/SAN and management connections, multiple Ethernet and SAN connections, separate remote management per chassis, multiple management modules) collapsed by UCS]
Cisco Unified Computing System Simplicity
Form Factor Independence and Cloud Scale

Mgmt: any IEEE-compliant LAN; SAN A and SAN B: any ANSI T11-compliant SAN

One Logical Chassis to Manage*
§  LAN connectivity
§  SAN networking
§  Blade chassis
§  Server blades
§  Rack servers
§  Server identity management
§  Monitoring, troubleshooting
§  Etc.

*architectural limit of 320 servers, with 160 servers supported as of release 2.0

Cloud Scale Increases Mobility, Utilisation and Availability
Cisco Unified Computing System
A Single System: Compute, Network, Virtualization, Storage Access

Single Unified System: pre-integrated infrastructure designed as a whole
Unified Management: self-integrating components and policy-based automation
Intelligent Infrastructure: bare-metal abstraction and API design for automation & orchestration through industry-standard tools
Unified Fabric: virtualization awareness and scalability without complexity
Server Innovations: industry-standard, x86-architecture servers with Cisco innovations


Fabric Computing

Definition of Fabric Computing


A set of compute, storage,
memory and I/O components
joined through a fabric
interconnect and the software to
configure and manage them.

•  The ability to reconfigure all system components in sync: server, network, storage, specialty engines
•  The flexibility to provide resources within the fabric to workloads as needed
•  The capability to manage systems from a more holistic standpoint


Unified Fabric in UCS
Radically Simplified Network Access for Blades and VMs

[Figure: the Cisco® Fabric Extender Architecture replaces rack switches, blade switches, and virtual switches with unified fabric management carrying Fibre Channel and Ethernet: one network, one layer]


Unified Fabric in UCS
Physical Servers and VMs Connect Directly to the Network

[Figure: Cisco Fabric Interconnects and Fabric Extenders, with Cisco Virtual Interface Cards, connect blade servers, rack-mount servers, and virtual machines through the Cisco® Fabric Extender Architecture: one network, one layer]


What’s New in Volume Servers?

Intel’s Tick-Tock Development Model: Sustained Microprocessor Leadership

Merom         65nm   TOCK (new microarchitecture)    Intel® Core™ Microarchitecture
Penryn        45nm   TICK (new process technology)
Nehalem       45nm   TOCK (new microarchitecture)    Intel® Microarchitecture codename Nehalem
Westmere      32nm   TICK (new process technology)
Sandy Bridge  32nm   TOCK (new microarchitecture)    Future Intel® Microarchitecture (forecast)

All dates, product descriptions, availability, and plans are forecasts and subject to change without notice.
Agenda

Unified Computing

UCS Overview

Summary
How is Cisco UCS doing?

Cisco UCS Performance: 63 World Records
A History of World Record Performance on Industry Standard Benchmarks

Best CPU Performance: SPECint_rate_base2006, SPECint_rate2006, SPECfp_rate_base2006 and SPECfp_base2006 records across x86 2-socket (B200 M1/M2, B230 M1, C220 M3, C260 M2) and x86 4-socket (C460 M1/M2) servers

Best Virtualization Performance: VMmark 1.x records for 2-socket (B200 M1, B250 M2, B230 M1), blade server (B440 M1), and overall (C460 M1)

Best Cloud Computing Performance: VMmark 2.0/2.1 records for 2-socket blade (B200 M2), 4-socket (C460 M2), two-node (C460 M2), and overall (B200 M2, C460 M2)

Best Enterprise Application Performance: Oracle E-Business Suite payroll batch and order-to-cash records (medium, large and extra-large models: B200 M2/M3, B230 M2); TPC-C with Oracle DB 11g & OEL (C250 M2); TPC-H 100 GB and 300 GB with VectorWise (C250 M2); TPC-H 1000 GB with Microsoft SQL Server (C460 M2); SPECjEnterprise2010 overall (B440 M1) and 2-node (B440 M2)

Best Enterprise Middleware Performance: SPECjbb2005 records (2-socket: B200 M2, B230 M1/M2, C220 M3, C260 M2; 4-socket: C460 M1, B440 M2) and SPECjAppServer2004 (1-node 2-socket C250 M2, 2-node B230 M1)

Best HPC Performance: SPECompMbase2001 and SPECompLbase2001 records (2-socket: B200 M2, B230 M2, C220 M3, C240 M3; 4-socket: C460 M1/M2), plus LinPack and LS-Dyna (2-socket B200 M2, 4-socket C460 M1)

Cisco UCS benchmarks that held world-record performance as of date of publication.
Gartner Magic Quadrant



Cisco UCS Latin America

Two years in the market.

•  UCS customers in Latin America (total / repeat):
   •  FY10: 81
   •  FY11: 339 / 90
   •  Q3FY12 to date: 282 (564 projected) / 132
   •  Total UCS customers: 478 / 224
•  166% y/y growth

[Chart: customer counts (new vs. repeat) for FY10, FY11, FY12 to date, and total]
Integrated Solutions
Power of the Ecosystem

Applications: Enterprise Apps, Databases, Business Analytics (HANA & BWA), Virtual Desktop, RISC Migration
Software stack: applications, information, management, operating systems / hypervisors, virtualization
Infrastructure: Cisco UCS B-Series Family compute, Cisco Nexus® Switches network, NetApp FAS 10 GE & FCoE storage, Cisco UCS Manager Complete Bundle

VBLOCK   FLEXPOD   STANDARD CONFIGURATIONS


Cisco Unified Computing System Product Portfolio

Cisco UCS Blade Servers
§  Best-of-breed innovations
§  Exceptional scalability
§  Hardware state abstraction (service profiles)

Cisco UCS Rack Servers
§  Industry-leading performance
§  Choice of UCS form factor

UCS Manager
§  Single management domain
§  Dynamic provisioning of server, storage and network
§  “Stateless” computing with service profiles

6100 and 6200 Series Fabric Interconnects
§  High-performance scalability
§  Low-latency, multi-purpose Ethernet-based fabric
§  Data center network convergence

Virtual Adapters
§  Consolidate multiple NICs and HBAs
§  VM-FEX: VM-aware networking
§  Pass-through switching & hypervisor bypass

2100 and 2200 Series Fabric Extenders
§  Data center network convergence
§  Simplified connectivity
§  Exceptional bandwidth


System Components

•  Fabric Interconnect: up to 96 unified ports; ports can be configured as either Ethernet or native FC ports
•  UCS Manager: embedded device manager for the family of UCS components
•  Chassis: up to 8 half-width blades or 4 full-width blades
•  Fabric Extender: up to 160 Gbps, flexible bandwidth allocation
•  I/O Adapter(s): virtualized adapter for single-OS and hypervisor systems
•  Compute: blade or rack-mount server

[Diagram: SAN, LAN, and MGMT uplinks into redundant Fabric Interconnects running UCS Manager; each compute chassis holds fabric extenders, adapters, and half- or full-slot x86 compute nodes]
Cisco UCS Components

•  Cisco UCS: Interconnect, UCS Manager, Chassis, Fabric Extender, Compute Node(s), IO Adapter(s)

[Diagram: the same system layout, labeling the interconnects, chassis, fabric extenders, compute nodes, and IO adapters]


Unified Computing System Manager
Single Pane of Glass for BMaaS

•  Embedded device
manager for family of
UCS components
•  Enables stateless
computing via Service
Profiles
•  Efficient scale: Same
effort for 1 to 160
blades
•  APIs for integration
with new and existing
data center
infrastructure



UCS Manager

•  Embedded device manager: discovery, inventory, monitoring, diagnostics, statistics collection, configuration
•  Unifies many UCS HW components into a single, cohesive system: adapters, blades, chassis, fabric extenders, fabric interconnects
•  APIs for integration with new and existing data center infrastructure: SMASH-CLP, IPMI, SNMP
•  XML SDK for commercial and custom implementations

Accessed via the GUI and CLI, and by user-developed portals, tools, utilities, and packaged systems software.


Unified Management for Multi-UCS Environments
“UCS Central”: coming in 2H 2012

Data Center 1, Data Center 2, Data Center 3: each domain runs its own UCS Manager

•  Unifies management of multiple UCS domains
•  Leverages UCS Manager technology
•  Simplifies global operations with centralized inventory, faults, logs and server consoles
•  Delivers global policies, service profiles, ID pools and templates
•  Foundation for high availability, disaster recovery and workload mobility
•  Model-based API for large-scale automation

Unified Management at Scale
Programmatic Infrastructure

•  Comprehensive XML API, standards-based interfaces
•  Bi-directional access to physical & logical internals

Self-serve portals, management tools, and auditing tools, along with the direct UCS CLI and GUI, 3rd-party and customer software, all talk to the same XML API, which exposes system status, physical inventory, and logical inventory.

•  Broad 3rd-party integration support
•  Faster custom integration for customer use cases
•  Consistent data and views across ALL interfaces

So open, it can now even be managed by your iPhone or iPad or Blackberry or … !!
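As a minimal sketch of what talking to that XML API looks like: the `aaaLogin` method and the `/nuova` endpoint follow the UCS Manager XML API convention, but the credentials and the canned response below are purely illustrative (no live system is contacted).

```python
# Sketch of a UCS Manager XML API login exchange (the "aaaLogin" method).
# A real deployment POSTs the request body to http(s)://<fabric-interconnect>/nuova
# and reuses the returned cookie on every subsequent call.
import xml.etree.ElementTree as ET

def build_login_request(username: str, password: str) -> str:
    """Build the XML body for the aaaLogin method."""
    el = ET.Element("aaaLogin", inName=username, inPassword=password)
    return ET.tostring(el, encoding="unicode")

def parse_login_response(xml_text: str) -> str:
    """Extract the session cookie from the aaaLogin response."""
    root = ET.fromstring(xml_text)
    return root.attrib["outCookie"]

# Canned response of the shape UCS Manager sends back (cookie value made up):
sample_response = '<aaaLogin response="yes" outCookie="1300000000/abcd-1234"/>'
print(build_login_request("admin", "secret"))
print(parse_login_response(sample_response))  # 1300000000/abcd-1234
```

The same request/parse pattern applies to every other method on the API (configuration, inventory, statistics), which is what makes the "consistent data across ALL interfaces" claim possible: GUI, CLI, and third-party tools are all clients of this one XML surface.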


UCS Management Ecosystem Overview
Manage UCS with Industry Standard Tools

OS and software, application stack management, and third-party management tools (service orchestration, provisioning and configuration, monitoring and analysis) gain visibility and control of UCS through the Cisco UCS Manager unified control API, service profiles, and pools.


Describe Port Functions on the UCS 6100 Series

[Figure: rear panel showing the Mgmt 0 and Mgmt 1 management ports, the L1 and L2 clustering ports, the console port, and one unused port]
UCS Manager High-availability

§  UCS Manager accessible via cluster IP address


–  Floating IP address for automatic failover
–  Management port on both Fabric Interconnects must be connected
§  UCS Manager runs as two instances (primary and subordinate)
–  Database and state information replicated over cluster links
–  Split-brain scenarios are prevented by the architecture itself
–  Automatic process restart upon failure

[Figure: CLI and custom portals or tools connect to the UCS cluster]
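The failover behaviour described above can be sketched as a toy model (class and attribute names here are hypothetical, not a UCS API): clients always address the floating cluster IP, and the primary role moves to the subordinate when the primary fails.

```python
# Illustrative model of UCS Manager HA: two instances (primary/subordinate)
# behind one floating cluster IP that always answers on the current primary.

class UcsCluster:
    def __init__(self, ip_a: str, ip_b: str, cluster_ip: str):
        self.nodes = {"A": ip_a, "B": ip_b}
        self.cluster_ip = cluster_ip
        self.primary = "A"            # instance currently serving UCS Manager

    def active_manager_ip(self) -> str:
        # Clients never track the primary directly; the cluster IP floats.
        return self.cluster_ip

    def fail(self, node: str) -> None:
        # Primary failure promotes the subordinate; the cluster IP follows.
        if node == self.primary:
            self.primary = "B" if node == "A" else "A"

cluster = UcsCluster("10.0.0.11", "10.0.0.12", "10.0.0.10")
cluster.fail("A")
print(cluster.primary)            # B (subordinate promoted)
print(cluster.active_manager_ip())  # 10.0.0.10 (unchanged for clients)
```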



UCS Manager Layout

Equipment: Physical ports between UCS


components and Northbound SAN and LAN

Servers: Manage Service Profiles


LAN: Manage VLANs
SAN: Manage SANs
VM: Manage Virtual Machine Service Profiles
Admin: Authentication, users, logs, SNMP



UCS KVM



KVM Console and Virtual Media

ISO/IMG



KVM: Virtual Media Manager



Discuss the Purpose of the UCS Dongles
for Blade and IOM Access



Multi-Tenancy Features of UCS Manager

Device Management
•  Server, SAN, and LAN rights scoped per organization
•  Service profiles and templates, server pools, VLANs, vSANs, WWNs, MACs, QoS, statistics thresholds
•  Policies: BIOS, scrub (BIOS/disk), disk (RAID), firmware (initial and updating), IPMI, serial over LAN, boot settings, adapter
•  KVM, blade HW, stats, orgs

User Management
•  LAN, SAN, and server admin users with customizable user roles and object rights
•  Organizations and locales scope what each user can see and manage
•  Local, TACACS+, RADIUS, LDAP: pod-wide and multiple authentication methods
•  Timezone and other global settings

Access via XML, HTTP, HTTPS, SNMP, Telnet, SSH to UCS Manager.
UCS Orgs, Locales, and User Roles…
Create orgs, create locales from orgs, create users and assign roles & locales

1.  Create organizations
2.  Create locale (collection of 1+ organizations)
3.  Create user
4.  Assign to locale(s)
5.  Assign user role(s)
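The five steps above can be sketched as a tiny data model (all names below are hypothetical, not UCS object names): a locale is a collection of one or more organizations, and a user's reach is the union of the organizations in its assigned locales.

```python
# Toy model of UCS orgs, locales, and user roles.
from dataclasses import dataclass, field

@dataclass
class Locale:
    name: str
    orgs: set = field(default_factory=set)      # step 2: 1+ organizations

@dataclass
class User:
    name: str
    locales: set = field(default_factory=set)   # step 4: assigned locales
    roles: set = field(default_factory=set)     # step 5: assigned roles

    def can_manage(self, org: str, locales: dict) -> bool:
        """True if any assigned locale contains the organization."""
        return any(org in locales[l].orgs for l in self.locales)

locales = {"mexico": Locale("mexico", {"org-root/org-finance"})}    # steps 1-2
admin = User("rdiaz", locales={"mexico"}, roles={"server-admin"})   # steps 3-5
print(admin.can_manage("org-root/org-finance", locales))  # True
print(admin.can_manage("org-root/org-hr", locales))       # False
```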

Cloud Multi Tenancy and Security in UCS
What do Organizations Define?

•  Fundamental multi-tenancy unit is an ‘organization’
•  Organizations are logical divisions of resources and policy: organizational boundaries
•  Orgs and sub-orgs can be 5 levels deep
•  Users are defined against roles; granular roles exist to define privileges (default or custom)
•  Remote “enterprise” authentication
•  Pinning, pruning, VLAN, VSAN, WWN, L3 uplinks, etc.
•  Encrypted management, PVLAN, DHCP snooping, etc.

Example: server admin attributes scoped per organization include server pools and server pool qualifications; service profile, vHBA and vNIC templates; WWNN, WWPN, MAC and UUID pools; QoS, network control, flow control and dynamic vNIC connection policies; Ethernet adapter and FC vHBA policies; boot, host firmware, management firmware, local disk RAID and scrub policies; BIOS settings and BIOS defaults policies.
Simplify Connectivity and Management of Rack Optimized Servers

[Figure: redundant UCS 6100 or 6200 fabric interconnects, each feeding a Nexus 2232; the rack server connects through a GE LOM (management traffic to the CIMC) and a PCIe adapter (data traffic to the OS or hypervisor)]

•  Reduce cost of deployment
•  Scale and performance
•  Choose from 5 servers
•  Choose from 4 adapters, including Cisco VIC
•  Scale to 160 C-Series per domain


What if I don’t have UCS blade servers?



UCS Components

•  Cisco UCS: Interconnect, UCS Manager, Chassis, Fabric Extender, Compute Node(s), IO Adapter(s)

[Diagram: the same system layout, labeling the interconnects, chassis, fabric extenders, compute nodes, and IO adapters]


I/O Consolidation with FCoE
Fewer Network Adapters per Server

Before: two 2/4G FC HBAs (storage traffic) plus three 1GB NICs (LAN1, LAN2, LAN3 traffic) per server
After: two 10GB CNAs carrying FC and Ethernet together


Unified Fabric

Traditional systems: more components and cables. Cisco UCS™ with unified I/O: fewer components and cables.

•  Single network technology supports all I/O in the system
•  Standards-based, high-bandwidth, low-latency, lossless Ethernet and Fibre Channel over Ethernet (FCoE) network
•  160 Gbps of bandwidth available per blade server chassis
•  System is wired once, and I/O configurations are managed through software
•  Simpler server I/O configuration, cabling, and upstream switching
•  Lower infrastructure costs
•  Greater agility


Example: Embedded FCoE at Cisco UCS

From ad hoc and inconsistent… to structured, but siloed, complicated and costly… to simple, optimized and automated.
UCS and FET options

§  Cost-effective transceiver for 10G FEX connections


§  Supported for connections between UCS 6200 and UCS 2200 ONLY
§  SFP+ form-factor
§  Multimode fiber (MMF)
§  Reach: 25m (OM2) 100m (OM3)
§  Approximately 1 watt (W) per transceiver
§  Cisco Proprietary; Incompatible with SR optics
§  FET Optics available as cost effective bundles

IO Module SKU       Description                                          List Price
UCS-IOM-2208XP      UCS 2208XP Fabric Extender / 8 external 10Gb ports   $10,000
UCS-IOM2208-16FET   UCS 2208 with 16 FET Optics                          $15,600
FET-10G=            FET Optics Spare (Non-Bundled)                       $1,495
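Using the list prices in the table above, a quick arithmetic check shows why the bundle is the cost-effective option compared with buying the IOM and sixteen spare optics separately:

```python
# List prices from the table above (USD).
iom_alone = 10_000      # UCS-IOM-2208XP
bundle    = 15_600      # UCS-IOM2208-16FET (IOM + 16 FET optics)
fet_spare = 1_495       # FET-10G= (non-bundled spare)

# Equivalent a-la-carte cost: one IOM plus 16 individually bought optics.
a_la_carte = iom_alone + 16 * fet_spare
savings = a_la_carte - bundle
print(a_la_carte, savings)   # 33920 18320
```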
UCS Fabric Topologies:
Chassis Bandwidth Options

20G per Chassis 40G per Chassis 80G per Chassis 160G per Chassis

1st Generation UCS Fabric


2nd Generation UCS Fabric

§  Wire once architecture


§  All links active
Cisco UCS FCoE Diagram
Physical View of Network and Storage Connectivity for 160 Cisco Blade Servers

[Figure: LAN, SAN A, and SAN B uplinks; UCS blade chassis 1 through 20 (backplane A: 10-80 Gb); each blade server (1 through 160) runs its OS over a two-port CNA presenting vHBAs and Eth0/Eth1 interfaces]
10G CNA options



Network Stack Comparison

FC:     SCSI / FCP / FC
FCoE:   SCSI / FCP / FCoE / Ethernet
iSCSI:  SCSI / iSCSI / TCP / IP / Ethernet
FCIP:   SCSI / FCP / FCIP / TCP / IP / Ethernet

All stacks terminate on the physical wire; FCoE has less overhead than FCIP or iSCSI.
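The comparison above can be expressed as layer lists, which makes the overhead claim concrete: FCoE carries FC frames directly over lossless Ethernet and therefore skips the TCP/IP layers that iSCSI and FCIP require. (Counting layers is a simplification that ignores per-layer header sizes.)

```python
# Each protocol as its encapsulation stack, top (SCSI) to bottom (wire).
stacks = {
    "FC":    ["SCSI", "FCP", "FC"],
    "FCoE":  ["SCSI", "FCP", "FCoE", "Ethernet"],
    "iSCSI": ["SCSI", "iSCSI", "TCP", "IP", "Ethernet"],
    "FCIP":  ["SCSI", "FCP", "FCIP", "TCP", "IP", "Ethernet"],
}

# FCoE needs no TCP/IP layers at all.
assert "TCP" not in stacks["FCoE"] and "IP" not in stacks["FCoE"]

for name, layers in sorted(stacks.items(), key=lambda kv: len(kv[1])):
    print(f"{name:5s} {len(layers)} layers: {' / '.join(layers)}")
```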
FCoE Exceeds FC Performance by over 25%
Testing done with Nexus 5000 and 2nd Generation CNAs

[Charts: throughput (MB/s) vs. block size (2K-64K) for the QLE 8200 and OneConnect UCNA, comparing 10G FCoE reads/writes against 8G FC reads/writes]

•  Demonstrated 400,000 IOPS
•  Better throughput for large block transfers
•  Superior for large file transfers and high transaction workloads
•  Database, back-up, data warehousing, video, graphics, animation


4 Deployments of Cisco’s FEX Technology

#1 Rack FEX: Nexus 5k + Nexus 2k. The switch fans out through FEXes to racks of servers.
#2 Chassis FEX: UCS FI + UCS IOM. The fabric interconnect fans out through chassis IOMs to blade servers, with UCS Manager management-plane integration.
#3 Adapter FEX: UCS FI or Nexus 5500 + VIC. The FEX function extends into the adapter of a bare-metal blade or rack server.
#4 VM-FEX: UCS FI or Nexus 5500 + VIC + VM Mgmt Link. The FEX function extends to individual virtual machines on a hypervisor host, integrated with the VM manager.
UCS 6200 Fabric Interconnect
Already here!

•  Performance for improved Workload Density


UCS-FI-6296UP o  High Density 96 Ports in 2RU
o  Increased 2Tbps Switching Performance
•  Flexibility to defer port usage type and number at
design time rather than purchase time
o  Flexibility to configure any port at Ethernet
(1/10 Gigabit with SFP+) or FCoE or Native
FC Ports (8/4/2/1G with FC Optics)
o  All Ports usable as uplinks/ downlinks
•  Latency Lowered to 2us within Switch
3x UCS-FI-E16UP •  Power Optimized with 80 PLUS Gold Efficiency
•  Investment Protection with Backward and
Forward Compatibility

FLEXIBILITY, UTILIZATION AND BETTER APP. PERFORMANCE


UCS 6200 Expansion Module

UCS-FI-E16UP
•  16 “Unified Ports”
•  Ports can be configured as either
Ethernet or Native FC Ports
•  Ethernet operations at 1/10 Gigabit
Ethernet
•  Fibre Channel operations at 8/4/2/1G
•  Uses existing Ethernet SFP+ and
Cisco 8/4/2G and 4/2/1G FC Optics



Unified Port Management



Connectivity options: End host mode

•  Server vNIC pinned to an uplink port
•  No Spanning Tree Protocol
   –  Reduces CPU load on upstream switches
   –  Reduces control plane load on the 6100
   –  Simplified upstream connectivity
•  UCS connects to the LAN like a server, not like a switch
•  Maintains MAC table for servers only
   –  Eases MAC table sizing in the access layer
•  Allows multiple active uplinks per VLAN
   –  Doubles effective bandwidth vs. STP
•  Prevents loops by preventing uplink-to-uplink switching
•  Upstream VSS/vPC optional
•  Completely transparent to upstream LAN
•  Traffic on same VLAN switched locally
•  Recommended method for most implementations


Connectivity options: Switch mode

•  Fabric Interconnect behaves like a normal Layer 2 switch
•  Server vNIC traffic follows VLAN forwarding
•  Spanning Tree Protocol is run on the uplink ports per VLAN: Rapid PVST+
•  Configuration of STP parameters (bridge priority, hello timers, etc.) not supported
•  VTP is not supported currently
•  MAC learning/aging happens on both the server and uplink ports, as in a typical Layer 2 switch
•  Upstream links are blocked per VLAN via Spanning Tree logic
Local Switching: One consideration

•  Traffic on the same VLAN is switched locally within a fabric interconnect. However, for servers in the same VLAN whose vNICs are pinned through different fabric interconnects (one server goes to 6100A, the other to 6100B), switching involves the external infrastructure.


Connectivity: SAN End host NPV mode

•  Fabric Interconnect operates in N_Port Proxy mode (not FC switch mode)
   –  Simplifies multi-vendor interoperation
   –  Simplifies management
•  SAN switch sees the 6100 as an FC end host with many N_Ports and many FC IDs assigned
•  Server-facing ports function as F-proxy ports
•  Server vHBA pinned to an FC uplink in the same VSAN; round-robin selection
•  Provides multiple FC end nodes to one F_Port off an FC switch
•  Eliminates need for an FC domain on the UCS Fabric Interconnect
•  All zoning on the NPIV switch, not UCS
•  One VSAN per F_Port (multi-vendor)
•  F_Port trunking and channeling with MDS and Nexus 5K


Connectivity: SAN FC Switch mode
Directly attach FC and FCoE targets

•  UCS behaves like an FC fabric switch
•  Storage ports can be FC or FCoE; the upstream MDS SAN is optional
•  Light subset of FC switching features
   –  Select storage ports
   –  Set VSAN on storage ports
   –  Default zoning per VSAN
•  No zoning configuration inputs in UCSM
•  If a connection to MDS is present: zoning is configured on and pushed to UCS from the MDS
•  If not connected to MDS: default zoning; access control via LUN masking on the storage array
•  Fabric Interconnect uses an FC domain ID
•  Recommended as a TEST mode


Cisco UCS Components

•  Cisco UCS: Interconnect, UCS Manager, Chassis, Fabric Extender, Compute Node(s), IO Adapter(s)

[Diagram: the same system layout, labeling the interconnects, chassis, fabric extenders, compute nodes, and IO adapters]


UCS Fabric Infrastructure Portfolio
Cisco UCS™ 6200 and 2200 with Unified Ports (new); Cisco UCS™ 6100 and 2100 at UCS launch

UCS Fabric Interconnects
•  Cisco UCS 6140/6120 (at UCS launch): forward compatible with second-generation I/O modules
•  UCS 6248 FI, 48-port fabric interconnect (typical deployments): 48 ports in 1RU, 1Tb switching throughput, unified ports, investment protection
•  UCS 6296 FI, 96-port fabric interconnect (high-end deployments, new): 96 ports in 2RU, 2Tb switching throughput, unified ports, investment protection

UCS I/O Modules
•  Cisco UCS 2104 I/O Module (at UCS launch): forward compatible with second-generation interconnects
•  UCS 2204 IOM, 20-port I/O module: 80G per chassis, 20G to the blade, entry-point pricing, port channel capable
•  UCS 2208 IOM, 40-port I/O module (new): 160G per chassis, 40G to the blade, lower latency, port channel capable


Cisco UCS I/O Modules
Cisco 2100 and 2200 Series Fabric Extenders: Generation Comparison

Feature                    Cisco UCS™ 2104XP        Cisco UCS 2208            Cisco UCS 2204
QoS                        Simple register          ACL based                 ACL based
Host ports                 8                        32                        16
Network ports              4                        8                         4
Classes of service         4 (3 enabled)            8                         8
Port speed                 1/10-GB fixed location   1/10-GB anywhere          1/10-GB anywhere
Resiliency: EtherChannels  HI > NI only, 4 ports    Both directions, 8 ports  Both directions, 8 ports
Policers                   None                     64 per 8 ports            64 per 8 ports
IEEE 1588 support          No                       Yes                       Yes
Latency                    ~800 nanoseconds         ~500 nanoseconds          ~500 nanoseconds
Adapter redundancy         1, mLOM only             mLOM and mezzanine        mLOM and mezzanine


Discuss Connectivity from Blade to UCS
6100 Series



Double Chassis Throughput

2104XP: 80 Gbps per chassis (40 Gig per fabric)
2208XP: 160 Gbps per chassis (80 Gig per fabric)
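The doubling is just uplink arithmetic: each IOM uplink is 10 GE and a chassis carries one IOM per fabric, so total chassis bandwidth is links per IOM times 10G times two fabrics.

```python
# Per-chassis bandwidth = uplinks per IOM x 10 GE x 2 fabrics (A and B).
def chassis_bandwidth_gbps(uplinks_per_iom: int, gig_per_link: int = 10,
                           fabrics: int = 2) -> int:
    return uplinks_per_iom * gig_per_link * fabrics

print(chassis_bandwidth_gbps(4))   # 2104XP: 4 links per fabric -> 80 Gbps
print(chassis_bandwidth_gbps(8))   # 2208XP: 8 links per fabric -> 160 Gbps
```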



IOM to Fabric Interconnect Port Pinning
Server-to-Fabric Port Pinning Configurations

[Figure: two UCS 5108 chassis (blade slots 1-8), each with 8 fabric links per IOM; left: 160 Gb discrete mode, right: 160 Gb port channel mode]

•  Discrete mode: 6100 to 2208, or 6200 to 2208
•  Port channel mode: 6200 to 2208


Discrete Links - Pinning

Number of Active Fabric Links      Blades pinned to fabric link
1-Link                             All HIF ports pinned to the active link
2-Link                             1,3,5,7 to link-1; 2,4,6,8 to link-2
4-Link                             1,5 to link-1; 2,6 to link-2; 3,7 to link-3; 4,8 to link-4
8-Link (applies only to 2208XP)    1 to link-1; 2 to link-2; … 8 to link-8

§  HIFs are statically pinned by the system to individual fabric ports.
§  Only 1, 2, 4 and 8 links are supported; 3, 5, 6, 7 are not valid configurations.
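The static pinning table above follows a simple modulo pattern; the sketch below reproduces the table (slot and link numbers are 1-based, and this is an illustration of the rule, not Cisco code).

```python
# Which fabric link a blade slot is statically pinned to in discrete mode.
def pinned_link(slot: int, active_links: int) -> int:
    if active_links not in (1, 2, 4, 8):         # 3, 5, 6, 7 are invalid
        raise ValueError("only 1, 2, 4 or 8 fabric links are supported")
    return (slot - 1) % active_links + 1

# 2-link case from the table: blades 1,3,5,7 -> link 1; 2,4,6,8 -> link 2
print([pinned_link(s, 2) for s in range(1, 9)])  # [1, 2, 1, 2, 1, 2, 1, 2]
# 4-link case: 1,5 -> 1; 2,6 -> 2; 3,7 -> 3; 4,8 -> 4
print([pinned_link(s, 4) for s in range(1, 9)])  # [1, 2, 3, 4, 1, 2, 3, 4]
```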
Discrete Links
Static Pinning (IOM-FI)

•  Static pinning done by the system, dependent on the number of fabric ports
•  1, 2, 4, 8 (2^x) are valid link counts for initial pinning
•  Applicable to both 6100/6200 and 2104XP/2208XP

[Figure: fabric interconnect fabric ports down to IOM server ports, with blades 1-8 each pinned to a fabric port]
Discrete Links
Fabric Port Failure

•  Pinned HIFs are brought down
•  Other blades unaffected

[Figure: one fabric port fails; only the blades pinned to it lose that path]
Port-Channel (IOM-FI)

•  Only possible between a 6200 Fabric Interconnect and a 2208XP IOM
•  HIFs are pinned to the port-channel, not to individual links
•  Port-channel hash:
   IP traffic: L2 DA/SA, VLAN, L3 DA/SA, L4 DP/SP
   FCoE traffic: L2 SA/DA, L2 VLAN, FC SID/DID, FC-OXID

[Figure: 6200 fabric ports bundled into a port-channel to a 2208XP IOM; blades 1-8 share the bundle through the IOM server ports]
© 2010 Cisco and/or its affiliates. All rights reserved. Cisco Confidential 66
Port-Channel (IOM-FI)
Link Failure

§  Blades remain pinned to the port-channel on a link failure
§  HIFs are not brought down until all member links fail

[Figure: 6200-to-2208XP port-channel with one member link down; blades 1-8 keep their paths through the surviving members]
© 2010 Cisco and/or its affiliates. All rights reserved. Cisco Confidential 67
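The per-flow hashing and "down only when all members fail" semantics can be sketched as follows (an illustrative model, not the IOM ASIC hash; `select_member` and the CRC-based hash are assumptions for the example):

```python
# Sketch of port-channel behavior: each flow is hashed onto a live
# member link, so traffic redistributes on a member failure, and host
# interfaces (HIFs) only go down when every member has failed.
import zlib

def select_member(flow_fields: tuple, member_links: dict) -> str:
    """Pick a member link for a flow; flow_fields stands in for the
    L2/L3/L4 (IP) or SID/DID/OXID (FCoE) inputs to the real hash."""
    live = sorted(name for name, up in member_links.items() if up)
    if not live:
        raise RuntimeError("all members failed: pinned HIFs brought down")
    return live[zlib.crc32(repr(flow_fields).encode()) % len(live)]
```

The same flow always hashes to the same member while the membership is stable, which preserves in-order delivery per flow.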
Fabric Ports: Discrete vs. Port Channel Mode

[Figure: two UCS 5108 chassis, each with 160 Gb of fabric bandwidth: one cabled in discrete mode, one in port-channel mode]

Discrete mode:
§  Servers can only use a single 10GE IOM uplink
§  Bandwidth range per blade: 0 to 20 Gb
§  A blade is pinned to a discrete 10 Gb uplink
§  Fabric failover if a single uplink goes down
§  Per-blade traffic distribution, same as Balboa
§  Suitable for traffic engineering use cases

Port-channel mode:
§  Servers can utilize all 8x 10GE IOM uplinks
§  Bandwidth range per blade: 0 to 160 Gb
§  A blade is pinned to a logical interface of 80 Gbps
§  Fabric failover only if all uplinks on the same side go down
§  Per-flow traffic distribution within a port-channel
§  Recommended with the VIC 1280; suitable for most environments

© 2010 Cisco and/or its affiliates. All rights reserved. 68
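The 20 Gb vs. 160 Gb per-blade figures above are simple products. A back-of-envelope sketch (function name and defaults are illustrative):

```python
# Per-blade bandwidth: in discrete mode a blade is pinned to one 10 Gb
# uplink per fabric; in port-channel mode it can use all eight uplinks
# of each 2208XP IOM.

def per_blade_max_gbps(mode: str, uplinks_per_iom: int = 8,
                       fabrics: int = 2, link_gbps: int = 10) -> int:
    if mode == "discrete":
        return fabrics * link_gbps                     # one pinned uplink per side
    if mode == "port-channel":
        return fabrics * uplinks_per_iom * link_gbps   # whole bundle per side
    raise ValueError("mode is 'discrete' or 'port-channel'")
```

With a 4-uplink 2204 IOM the same formula gives 80 Gb in port-channel mode.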


Cisco UCS Components

•  Cisco UCS: redundant Fabric Interconnects running UCS Manager, compute chassis with Fabric Extenders, IO adapters, and compute nodes (half-slot and full-slot x86 computers)

[Figure: SAN, LAN, and MGMT uplinks into two Fabric Interconnects; each compute chassis contains Fabric Extenders, adapters, and compute nodes]
© 2010 Cisco and/or its affiliates. All rights reserved. 69


Cisco UCS 5108 Blade Chassis

Chassis
§  6 RU / 32” deep
§  Up to 8 half slot blades
§  Up to 4 full slot blades
§  8x fans
§  2x Chassis IO Module
§  All devices hot-pluggable

Power Supplies
§  4x 2,500W hot-plug power
supplies
§  90+% efficient
§  N+N redundancy
§  Single Phase 220V

© 2010 Cisco and/or its affiliates. All rights reserved. 70
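The power figures above imply a usable budget you can compute directly. A quick arithmetic sketch (an illustrative calculation; `usable_watts` is a hypothetical helper):

```python
# With four 2,500 W hot-plug supplies in N+N redundancy, half the
# supplies carry the load and half are redundant.

def usable_watts(supplies: int, watts_each: int, scheme: str = "N+N") -> int:
    if scheme == "N+N":
        return (supplies // 2) * watts_each   # half the supplies are spares
    if scheme == "N+1":
        return (supplies - 1) * watts_each    # one supply is a spare
    return supplies * watts_each              # non-redundant
```

So the 5108's 4 x 2,500 W in N+N yields a 5,000 W chassis budget, versus 7,500 W in N+1.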


Cisco UCS Chassis (Front) with FI

Redundant, hot-swap power supplies and fans
1U or 2U Fabric Switch
Half-width server blade: up to eight per enclosure
Full-width server blade: up to four per enclosure
6U enclosure
Hot-swap power supplies: N+1, N+N, grid redundant
SAS drive (optional)

© 2010 Cisco and/or its affiliates. All rights reserved. 71
Cisco UCS Chassis (Rear) with FI

10GigE ports and expansion bay
1U or 2U Fabric Switch
Redundant, hot-swap fan modules
Redundant, hot-swap Fabric Extenders/IOMs
6U enclosure
Power expansion module

© 2010 Cisco and/or its affiliates. All rights reserved. 72
Chassis middleplane

[Figure: midplane with I/O module, blade, and PSU connectors; 63% open]

Redundant data and management paths

© 2010 Cisco and/or its affiliates. All rights reserved. 73
[Figure: Cisco UCS components recap: SAN, LAN, and MGMT uplinks into two Fabric Interconnects running UCS Manager; compute chassis with Fabric Extenders, adapters, and half-slot/full-slot compute nodes]

© 2010 Cisco and/or its affiliates. All rights reserved. 74


UCS new M3 Servers
Blade form factor

               B22 M3     B200 M3    B230 M2   B420 M3    B440 M2
Slots          1          1          1         2          2
Cores          16         16         20        32         40
DIMMs          12         24         32        48         32
Max GB         384GB      768GB      512GB     1.5TB      512GB
Disk           2 x 2.5    2 x 2.5    2 SSD     4 x 2.5    4 x 2.5
RAID           0/1        0/1        0/1       0/1/5/6    0/1/5/6
Integrated I/O Dual 10Gb  Dual 20Gb  No        Dual 20Gb  No
Mezz           1          1          1         2          2

© 2010 Cisco and/or its affiliates. All rights reserved. 75


2204 with no Mezz card (1240)
8 servers x 2 active/standby 10 Gbps interfaces per server (1x2 standby for redundant access)
What is the oversubscription ratio in each scenario (1, 2, or 4 cables per FEX)?

[Figure: 2204 IOM A and IOM B, each with 4 fabric ports and 16 downstream ports; one link to the Mezz slot (no Mezz adapter installed) and one to the mLOM; the host sees vNIC1-vNIC4 (eth0-eth3) and HBA 0/HBA 1]

© 2010 Cisco and/or its affiliates. All rights reserved. 76
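The question on the slide is a ratio of host-side to fabric-side bandwidth. A sketch of the answer, assuming the 2204's 16 x 10 Gb host-facing (HIF) ports (function name is illustrative):

```python
# Oversubscription = total downstream (HIF) bandwidth divided by total
# fabric-uplink bandwidth, for 1, 2 or 4 cables per FEX.

def oversubscription(hif_ports: int, fabric_links: int,
                     port_gbps: int = 10) -> float:
    return (hif_ports * port_gbps) / (fabric_links * port_gbps)

ratios = {links: oversubscription(16, links) for links in (1, 2, 4)}
# 1 cable -> 16:1, 2 cables -> 8:1, 4 cables -> 4:1
```

Note the ratio assumes all HIF ports are active; with 8 half-width blades using two interfaces each, the worst case matches these figures.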


UCS Rack Servers

                 C22 M3     C24 M3      C220 M3    C240 M3     C260 M2       C420 M3     C460 M2
RU               1          2           1          2           2             2           4
Cores            16         16          16         16          20            32          40
DIMMs            12         12          16         24          64            48          64
Max GB           192GB      192GB       512GB      768GB       1TB           1.5TB       512GB
Disk             8 x 2.5    24 x 2.5    8 x 2.5    24 x 2.5    16 x 2.5      16 x 2.5    16 x 2.5
                 or 4x3.5   or 12x3.5   or 4x3.5   or 12x3.5   or 32 x SSD
LoM              2 x 1Gb    2 x 1Gb     2 x 1Gb    4 x 1Gb     2x1Gb+2x10Gb  2 x 10Gb    2x1Gb+2x10Gb
PCIe Slots       2xPCIe 3.0 5xPCIe 3.0  2xPCIe 3.0 4xPCIe 3.0  6xPCIe 2.0    6xPCIe 3.0  10xPCIe 2.0
Internal Storage USB Port,  USB Port,   USB Port,  USB Port,   USB Port      USB Port    eUSB
                 FlexFlash  FlexFlash   FlexFlash  FlexFlash

© 2010 Cisco and/or its affiliates. All rights reserved. 77


Cisco UCS: Application-level performance

World-record performance across the new line:

#1 position on 11 results
"Best in class" single-node results
Infrastructure requirements lowered by 80%
30% greater application throughput
76% greater consolidation
65% better client performance

© 2010 Cisco and/or its affiliates. All rights reserved. 78


Cisco UCS Components

[Figure: Cisco UCS components recap: SAN, LAN, and MGMT uplinks into two Fabric Interconnects running UCS Manager; compute chassis with Fabric Extenders, adapters, and half-slot/full-slot compute nodes]

© 2010 Cisco and/or its affiliates. All rights reserved. 79


Cisco UCS Mezzanine adapter options

Virtualization (VM I/O virtualization and consolidation): M81KR CNA and VIC 1280 CNA; support for VN-Link
Compatibility (minimal disruption using existing driver stacks): M71KR (fabric failover) and M72KR (no fabric failover)
Cost (high-speed Ethernet connectivity): M51KR and M61KR; no fabric failover
© 2010 Cisco and/or its affiliates. All rights reserved. 80
Emulex/Qlogic CNA

[Figure: CNA with one 10GE port to each FEX (fabric A and fabric B); the host sees vhba0, vmnic0, vmnic1, vhba1, with the vmnics feeding a vSwitch / Nexus 1000V]

© 2010 Cisco and/or its affiliates. All rights reserved. 81
Emulex/Qlogic CNA logical view

[Figure: vfc1 and vEth1 on Fabric Interconnect A, vEth2 and vfc2 on Fabric Interconnect B; each maps through its FEX to vhba0/vmnic0 and vmnic1/vhba1 on the CNA, with the vmnics feeding a vSwitch / Nexus 1000V]

© 2010 Cisco and/or its affiliates. All rights reserved. 82
Cisco Virtual Interface Controller (VIC)

[Figure: Cisco VIC with one 10GE port to each FEX (fabric A and fabric B); the host sees vhba0, vmnic0 through vmnic7, and vhba1 (up to 58 virtual interfaces), with the vmnics feeding a vSwitch / Nexus 1000V]

© 2010 Cisco and/or its affiliates. All rights reserved. 83
Cisco VIC logical view

[Figure: vfc1 and vEth1-vEth4 on Fabric Interconnect A, vEth5-vEth8 and vfc2 on Fabric Interconnect B; each maps through FEX A/FEX B to vhba0, vmnic0-vmnic7, and vhba1 on the Cisco VIC, with the vmnics feeding a vSwitch / Nexus 1000V]

© 2010 Cisco and/or its affiliates. All rights reserved. 84
Cisco UCS C-Series Adapter-FEX
UCS P81E Virtual Interface Card

•  Supports NIC partitioning to the OS and 802.1BR to the switch
   In Adapter-FEX mode: up to 16 Ethernet vNICs and 2 FC vHBAs
   In VM-FEX mode: up to 96 vNICs
•  Adapter failover: on a failure of the primary path, the vNIC is mapped to the standby port, transparently to the OS
•  Security and scalability improvements: no need to trunk all VLANs to the server interface

© 2010 Cisco and/or its affiliates. All rights reserved. 85


Adapter-FEX at UCS C-Series Servers

The network admin can control the vEth configuration and, as a result, the server network adapter.

Support matrix: Nexus 5500 (NX-OS 5.1(3)N1(1)) and UCS C-Series servers

Nexus-5548(config)# int veth6
Nexus-5548(config-if)# shut
Nexus-5548(config-if)# no shut

© 2010 Cisco and/or its affiliates. All rights reserved. 86
UCS 1280 Virtual Interface Card
2nd Generation Mezzanine Adapter

•  2nd generation VIC
•  Dual 4x10 Gbps connectivity into fabric A and B
•  PCIe x16 Gen 2 host interface
•  Capable of 256 PCIe devices; OS dependent, current maximum of 116 virtual interfaces
•  Same host-side drivers as the VIC (M81KR)
•  Retains VIC features with enhancements: iSCSI boot, fabric failover
•  SR-IOV capable device

[Figure: 1280 VIC in a UCS 5108 connecting through the chassis backplane to IOMs A and B; up to 2 x 40 Gb to the IOMs and up to 2 x 80 Gb toward FI-A/FI-B]
© 2010 Cisco and/or its affiliates. All rights reserved. 87


Cisco 1280 VIC Adapter
Presents up to 116 interfaces to the OS: NICs or HBAs

[Figure: UCS 6248 Fabric Interconnects A and B, each feeding a 2208 IOM; eight physical ports connect the IOMs to the 1280 VIC mezzanine adapter on the server blade; the OS sees vHBA 1 and 2 plus vNIC 3 through vNIC 116]
© 2010 Cisco and/or its affiliates. All rights reserved. 88
UCS Topology Designs for Max Bandwidth
Choose the Fabric Interconnect + IO Module + VIC adapter combo for your needs

1. UCS 6248UP or UCS 6100 + 2104 IOM + 1280 VIC or M81KR: shared IOM uplink bandwidth of 10 Gbps; vNIC burst up to 10 Gbps; IOM uplink shared with 1, 3, or 7 other servers; host port pinned to a discrete IOM uplink
2. UCS 6248UP or UCS 6100 + 2208 IOM + M81KR: dedicated IOM uplink bandwidth of 10 Gbps; vNIC burst up to 10 Gb; host port pinned to a discrete IOM uplink
3. UCS 6248UP + 2208 IOM + M81KR: shared IOM port-channel bandwidth of 20-80 Gbps; vNIC burst up to 10 Gb; port-channel shared with 8 servers; host port pinned to a discrete IOM port-channel
4. UCS 6248UP or UCS 6100 + 2208 IOM + UCS 1280 VIC: dedicated IOM uplink bandwidth of 10 Gbps (IOM uplink limitation); vNIC burst up to 10 Gbps; host port-channel pinned to a discrete IOM uplink
5. UCS 6248UP + 2208 IOM + UCS 1280 VIC: shared IOM port-channel bandwidth of 20-80 Gbps; vNIC burst up to 40 Gbps (PCIe Gen 2 limitation of 64); port-channel shared with 8 servers; host port-channel pinned to the IOM port-channel

© 2010 Cisco and/or its affiliates. All rights reserved. 89
UCS Fabric-Based NIC Teaming
Fabric Failover Enhances Multi-Hypervisor Clouds

•  The chassis backplane (or fabric) provides a redundant path for each vNIC
•  Failures are detected on border ports or fabric ports
•  Transparent to the operating system
•  Unlike OS NIC teaming, redundancy is provided with a single interface
•  After failover: transmit GARP; multicast group re-registration
•  The VIC 1280 offers up to 256 vNICs

[Figure: Fabric Interconnects A and B, IOM-A/IOM-B, and CNA ports 1 and 2 backing vNIC 1 (1.1.1.1) and vNIC 2 (2.2.2.2), which the OS sees as Local Area Connections 0 and 1]

© 2010 Cisco and/or its affiliates. All rights reserved. 90
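The failover behavior above can be sketched as a small state machine (a toy model with illustrative names, not UCS firmware):

```python
# Fabric-based NIC teaming: a single vNIC has an active path on one
# fabric and a standby on the other; on failover the system itself
# sends a gratuitous ARP (GARP) so the LAN relearns the MAC - the OS
# never sees a second interface.

class FailoverVnic:
    def __init__(self, mac: str):
        self.mac = mac
        self.active_fabric = "A"
        self.announcements = []          # GARPs sent on the vNIC's behalf

    def fabric_failed(self, fabric: str) -> None:
        if fabric != self.active_fabric:
            return                       # standby path failed: no-op
        self.active_fabric = "B" if fabric == "A" else "A"
        self.announcements.append(("GARP", self.mac, self.active_fabric))
```

A standby-side failure is invisible to traffic; only an active-side failure triggers the move and the GARP.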


Stateless Computing: Service Profiles
They're like "Software Defined Computing"…

A UCS service profile carries the server identity: NIC MACs, HBA WWNs, server UUID, VLAN assignments, VLAN tagging, FC fabric assignments, FC boot parameters, quantity of NICs, boot order, PXE settings, IPMI settings, quantity of HBAs, QoS settings, Call Home, statistic thresholds, system firmware, adapter firmware, CIMC firmware, RAID settings, NIC teaming in hardware, BIOS settings, etc.

Adds:
•  Portability
•  More flexibility
•  Improved uptime

© 2010 Cisco and/or its affiliates. All rights reserved. 91
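The "identity as data" idea above can be made concrete with a small sketch (field names are illustrative, not the UCSM schema):

```python
# The server identity lives in a profile object, not in the blade, so
# it can be associated with any compatible hardware.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ServiceProfile:
    name: str
    uuid: str
    nic_macs: list
    hba_wwns: list
    boot_order: tuple = ("SAN",)
    bios_settings: dict = field(default_factory=dict)

@dataclass
class Blade:
    chassis: int
    slot: int
    profile: Optional[ServiceProfile] = None

def associate(profile: ServiceProfile, blade: Blade) -> Blade:
    blade.profile = profile   # the blade now boots with this identity
    return blade
```

Because the blade itself stores nothing, swapping hardware is just a re-association.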


Unifying Configuration Settings to Deliver a "Server as a Service"

A service profile unifies these settings "as a service": NIC-to-switch port mappings, QoS and security policies, VLANs and VSANs, MACs and WWNs, BIOS settings, firmware, storage settings, and NIC and HBA settings.

© 2010 Cisco and/or its affiliates. All rights reserved. 92


Traditional Element Configuration

Storage, server, and network subject-matter experts (SMEs) each configure their own elements:
Network SME: QoS settings, border port assignment per vNIC, NIC transmit/receive rate limiting, VLAN assignments for NICs, VLAN tagging config for NICs
Storage SME: FC fabric assignments for HBAs, number of vHBAs, HBA WWN assignments, FC boot parameters, HBA firmware
Server SME: number of vNICs, PXE settings, NIC firmware, advanced feature settings, remote KVM IP settings, Call Home behavior, remote KVM firmware, RAID settings, disk scrub actions, server UUID, serial over LAN settings, boot order, IPMI settings, BIOS scrub actions, BIOS firmware, BIOS settings

•  Subject matter experts consumed by manual configuration chores
•  Serial processes and multiple touches inhibit provisioning speed
•  Configuration drift and maintenance challenges

© 2010 Cisco and/or its affiliates. All rights reserved. 93


Unified, Embedded Management

1. Subject matter experts (storage, server, and network SMEs) define policies: server policy, storage policy, network policy, virtualization policy, application profiles
2. Policies are used to create service profile templates
3. Templates create service profiles, each carrying: server name, UUID, MAC, WWN, boot information, LAN and SAN config, firmware policy
4. Associating service profiles with hardware configures servers automatically

© 2010 Cisco and/or its affiliates. All rights reserved. 94
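Steps 2-4 above amount to stamping profiles out of a template with identities drawn from pools. A minimal sketch (all names, including `IdentityPool` and `instantiate`, are illustrative):

```python
# Identity pools feed a template; each service profile takes the next
# free MAC/UUID from the pools, ready to associate with hardware.

class IdentityPool:
    def __init__(self, prefix: str, size: int):
        self._free = [f"{prefix}:{i:02x}" for i in range(size)]
    def take(self) -> str:
        return self._free.pop(0)

def instantiate(template: dict, count: int,
                macs: IdentityPool, uuids: IdentityPool) -> list:
    return [
        {**template, "name": f"{template['name']}-{n}",
         "mac": macs.take(), "uuid": uuids.take()}
        for n in range(1, count + 1)
    ]
```

Every profile shares the template's policy settings but gets a unique, pool-managed identity.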


VM Migration vs. UCS Service Profile Migration
Live Migration of a VM vs. a Cold Migration of a Bare-Metal Server

•  VM migration moves guest servers live between hypervisor hosts (Windows, ESX, Linux, etc.); each physical server keeps its own BIOS settings, firmware version, UUID, MACs, and WWNs.
•  UCS service profile migration moves the whole server identity (BIOS settings, firmware policy, UUID, MACs, WWNs) from one UCS server to another as a cold migration; because the OS image boots from SAN, the same OS comes up on the new blade through the UCS Fabric Interconnect (FC fabrics A and B; VLANs 1-5).

[Figure: left, hypervisor hosts A and B migrating guest servers 1-3 across VLANs 1-5 and FC fabrics A and B; right, UCS servers A and B sharing one identity (firmware 2B, UUID 1122AB, MACs A1/B1, WWNs C1/D1) as the service profile migrates]

© 2010 Cisco and/or its affiliates. All rights reserved. 95
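The cold-migration flow can be sketched in a few lines (an illustrative model, not the UCSM API; `migrate_profile` is a hypothetical name):

```python
# Power off, disassociate the profile from blade A, associate it with
# blade B, power on - the boot-from-SAN image comes up with the same
# UUID, MACs and WWNs on new hardware.

def migrate_profile(profile: dict, src: dict, dst: dict) -> dict:
    src["power"] = "off"            # cold migration: server shut down first
    src["profile"] = None           # identity leaves the old blade
    dst["profile"] = profile        # identity lands on the new blade
    dst["power"] = "on"             # same SAN LUN boots with same WWNs
    return dst
```

Unlike a live VM migration, there is downtime, but no OS reinstall: the SAN zoning and LUN masking still match because the WWNs moved with the profile.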
A UCS Service Profile:
120+ server settings in a single object…

BIOS settings: Quiet Boot; Post Error Pause; Resume A/C on Power Loss; Front Panel Lockout; Turbo Boost; ACPI10 Support; Enhanced Intel Speedstep; Hyper Threading; Core Multi Processing; Virtualization Technology (VT); Execute Disabled Bit; Direct Cache Access; Processor C State; Processor C1E; Processor C3 Report; Processor C6 Report; CPU Performance; Max Variable MTRR Setting; VT for Directed IO; Interrupt Remap; Coherency Support; ATS Support; Passthrough DMA Support; Memory RAS Config; NUMA; Low Voltage DDR Mode; Serial Port A state; USB Make Device Non Bootable; USB System Idle Power Optimizing Setting; USB Front Panel Access Lock; PCI Max Memory Below 4G; PCI Memory Mapped IO Above 4Gb Config; Boot Option Retry; UCSM Boot Order Rule Control; Intel Entry SAS RAID; Intel Entry SAS RAID Module; Assert NMI on SERR; Assert NMI on PERR; OS Boot Watchdog Timer; Console Redirection; Console Flow Control; Console BAUD Rate; Console Terminal Type; Console Legacy OS Redirect

NIC settings: VLAN Tagging Settings per NIC; VLAN Assignment per NIC; NIC Transmit Rate Limiting; NIC MAC Address Assignment; NIC Maximum Transmission Unit (MTU); Number of NIC Transmit Queues; NIC Transmit Queue Ring Size; NIC Receive Queues; NIC Receive Queue Ring Size; NIC Completion Queues; NIC Interrupts; NIC Transmit Checksum Offload; NIC Receive Checksum Offload; NIC TCP Segmentation Offload; NIC TCP Large Receive Offload; NIC Receive Side Scaling (RSS); NIC Failback Timeout; NIC Interrupt Mode; NIC Interrupt Coalescing; NIC Interrupt Timer; NIC QoS Host Control Option; QoS Settings per NIC; NIC Action on Switch Uplink Failure; MAC Security per NIC; Distribution Enet Switch Uplink Assignment per NIC (Pin Group); Fabric Failover (NIC Teaming) Settings; Settable vNIC/FlexNIC Speed (reflected in OS); Define Number of vNICs on Server; Define Number of Dynamic vNICs (for VMware Pass-through); Enable/Disable Cisco Discovery Protocol for VMware vSwitch

HBA settings: Number of vHBAs on Server; HBA World Wide Port Name (WWPN) Assignment; HBA World Wide Node Name (WWNN) Assignment; HBA Fiber Channel SAN Membership; Fiber Channel Boot Parameters; Distribution FC Switch Uplink Assignment per HBA (Pin Group); HBA-to-Distribution FC Switch Uplink Persistent Binding; HBA Maximum Data Field Size (MTU); HBA Transmit Queue Ring Size; HBA Receive Queue Ring Size; HBA SCSI I/O Queues; HBA SCSI I/O Queue Ring Size; HBA FCP Error Recovery; HBA Flogi Retries; HBA Flogi Timeout; HBA Plogi Retries; HBA Plogi Timeout; HBA Port Down Timeout; HBA Port Down IO Retry; HBA Link Down Timeout; HBA IO Throttle Count; HBA Max LUNs per Target; HBA Interrupt Mode; HBA QoS Priority; HBA QoS Burst Size; HBA QoS Rate Limit; HBA QoS Host Control Option

Server, firmware, and management settings: Server Boot Order (HDD, CD-ROM, SAN, USB, Floppy, PXE); PXE Boot Setting; Server BIOS Firmware; Ethernet Adapter Firmware; Fiber Channel Adapter Firmware; HBA Option ROM Firmware; Storage Controller Firmware; Remote Management Controller (e.g. HP iLO) Firmware; Server UUID; Virtual Server Serial Number; Local HDD RAID Configuration; Define Number of iSCSI Interfaces on Server; Disk Scrub Actions; BIOS Scrub Actions; Thresholds for Monitored Statistics; IPMI Usernames & Passwords; IPMI User Roles; Server Management IP Address; Serial over LAN Configuration; Power Control Policy Capping and Priority; Server Pool Assignment; Maintenance Policy; PCIe Bus Device Scan Order for NICs/HBAs; PCIe Virtual Device Slot Placement for NIC/HBA

© 2010 Cisco and/or its affiliates. All rights reserved. 96
Server Availability

•  Today's deployment:
   Provisioned for peak capacity
   Spare node per workload (Oracle, Web, VMware silos)
   Burst capacity
   HA spare per silo

§  With service profiles:
   Resources provisioned as needed
   Same availability with fewer spares

[Figure: three blade silos (Oracle, Web, VMware), each with its own HA spare, versus a shared pool of blades and spares]
© 2010 Cisco and/or its affiliates. All rights reserved. 97
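The spare-count saving is simple arithmetic. A sketch, where the shared pool size is an assumption rather than a figure from the slide:

```python
# A dedicated HA spare per workload silo costs one blade per silo;
# stateless service profiles let one shared pool of spares cover every
# silo, since any blade can assume any identity.

def spares_required(silos: int, shared_pool: bool = False,
                    pool_size: int = 1) -> int:
    return pool_size if shared_pool else silos
```

With the three silos shown (Oracle, Web, VMware), dedicated spares cost three blades; a shared pool can cover the same single-failure risk with one.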


Zero touch integration

•  Increase capacity, not complexity
•  New equipment self-integrates

Physical Inventory
Name: UCS 12 / Class: System / ID: 77449-32
Chassis 1-5: IOM 1: UCS 2104, IOM 2: UCS 2104, blade slots occupied: 8

Newly discovered chassis:
Name: UCS 5108 / Class: Chassis / ID: 234222-33 / IOM 1: UCS 2104 (FEX) / IOM 2: UCS 2104 / Blade slots occupied: 8 / Fans: 8
© 2010 Cisco and/or its affiliates. All rights reserved. 98


Zero touch integration

•  Increase capacity, not complexity
•  New equipment self-integrates
•  Inventory & status updated

Physical Inventory
Name: UCS 12 / Class: System / ID: 77449-32
Chassis 1-5: IOM 1: UCS 2104, IOM 2: UCS 2104, blade slots occupied: 8

Policy Inventory
Service Profile: Default 1
Service Profile: HR-App1

© 2010 Cisco and/or its affiliates. All rights reserved. 99


Zero touch integration

•  Increase capacity, not complexity


•  New equipment self integrates
•  Inventory & status updated
•  Immediately apply existing policies

Policy Inventory
Service Profile: Default 1
Service Profile: HR-App1

© 2010 Cisco and/or its affiliates. All rights reserved. 100


Q&A

© 2010 Cisco and/or its affiliates. All rights reserved. 101


Summary

© 2010 Cisco and/or its affiliates. All rights reserved. 102


Unified Computing System Innovation

Integrated Design: performance optimized for any type of workload
Service Profiles: agility and reduced time to deploy and provision applications
UCS Manager: role-based management, automation, ease of integration
UCS Central: centralized multi-domain management, alerting, and visibility
Unified Fabric: simplified infrastructure
Virtualized I/O: security isolation per application, scale, improved performance
Form Factor Independence: supports both blades and rack-mount servers in a single domain
Extended Memory: cost-effective application performance and scale

© 2010 Cisco and/or its affiliates. All rights reserved. 103


© 2010 Cisco and/or its affiliates. All rights reserved. Cisco Confidential 104
