
Implementing HP BladeSystem Solutions

Student Guide

Volume 1

Rev. 12.31
Implementing HP BladeSystem Solutions

Student Guide

Use of this material to deliver training without prior written permission from HP is prohibited.

Rev. 12.31
 Copyright 2012 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice. The only warranties for HP
products and services are set forth in the express warranty statements accompanying such products
and services. Nothing herein should be construed as constituting an additional warranty. HP shall
not be liable for technical or editorial errors or omissions contained herein.

This is an HP copyrighted work that may not be reproduced without the written permission of HP.

You may not use these materials to deliver training to any person outside of your organization
without the written permission of HP.
Printed in USA
Implementing HP BladeSystem Solutions
Student Guide
July 2012
Contents

Volume 1
Module 1 — Portfolio Introduction
Objectives ...................................................................................................... 1

HP BladeSystem positioning .............................................................................. 2
BladeSystem evolution................................................................................ 3

Transitioning to the ProLiant Gen8 servers .................................................... 4
Key Gen8 technologies ....................................................................... 4
BladeSystem portfolio ....................................................................................... 6
BladeSystem enclosures.............................................................................. 7

BladeSystem c3000 enclosure .............................................................. 7
BladeSystem c7000 enclosure .............................................................. 7

BladeSystem server blades ......................................................................... 8
HP ProLiant Blade Workstation Solutions ...................................................... 9
HP ProLiant WS460c G6 Blade Workstation ........................................ 10
HP ProLiant xw2x220c Blade Workstation ........................................... 10
BladeSystem storage and expansion ........................................................... 11

HP storage blades ............................................................................. 11


Ultrium Tape Blades ...........................................................................12
PCI Expansion Blade ..........................................................................13
Ethernet interconnects ...............................................................................14

Ethernet mezzanine cards ...................................................................15


Storage interconnects ...............................................................................16
Storage mezzanine cards ...................................................................17
Integrity NonStop BladeSystem ..................................................................18

NonStop NB54000c and NB50000c BladeSystems .............................. 19


Integrity Superdome 2 ............................................................................. 20
Virtual Connect technology ............................................................................. 22
Virtual Connect FlexFabric ........................................................................ 23

Virtual Connect FlexFabric ................................................................. 24


Virtual Connect Flex-10 technology ............................................................ 24
How Flex-10 works ............................................................................ 25
Virtual Connect modules .......................................................................... 30
Virtual Connect environment with BladeSystem enclosure ........................31
Virtual Connect environment—Three key components .............................31
Virtual Connect Ethernet modules........................................................ 32
HP BladeSystem 10Gb KR Ethernet ..................................................... 33


Management and deployment tools ................................................................. 34


ProLiant Onboard Administrator ................................................................ 34
Onboard Administrator modules ............................................................... 35
Insight Display ........................................................................................ 37
Main Menu ...................................................................................... 37
Enclosure Settings Menu .................................................................... 38
iLO Management Engine ......................................................................... 39
HP Insight Control ................................................................................... 40
HP Systems Insight Manager..................................................................... 42
Advantages of HP SIM ...................................................................... 43

Learning check .............................................................................................. 44

Module 2 — BladeSystem Enclosures

Objectives ...................................................................................................... 1
BladeSystem enclosure family ............................................................................ 2
BladeSystem enclosure features ................................................................... 3

BladeSystem enclosure comparison ............................................................. 4

BladeSystem c7000 enclosure .................................................................... 6
BladeSystem c3000 enclosure .................................................................... 7
BladeSystem c3000 enclosure — Rear view ........................................... 8
BladeSystem enclosure management hardware and software ............................... 9
HP Onboard Administrator ......................................................................... 9
Onboard Administrator module components............................................... 10

Redundant Onboard Administrator modules .......................................... 11


Dual Onboard Administrator tray ........................................................12
Onboard Administrator link module .....................................................13
HP Insight Display ....................................................................................14

iLO Management Engine ..........................................................................16


Agentless Management ......................................................................17
Active Health System ..........................................................................18
HP Intelligent Provisioning .................................................................. 20

Communication between iLO and server blades ................................... 21


HP iLO Advanced for HP BladeSystem ................................................. 22


BladeSystem power and cooling ...................................................................... 26


BladeSystem enclosure design challenges ................................................... 27
PARSEC architecture ................................................................................ 28
BladeSystem c7000 enclosure airflow ........................................................ 30
Active Cool Fans ......................................................................................31
Fan location rules .................................................................................... 32
The c7000 enclosure ......................................................................... 32
The c3000 enclosure ........................................................................ 32
Fan population ....................................................................................... 33
The c7000 enclosure ......................................................................... 33

The c3000 enclosure ........................................................................ 34
Fan failure rules ................................................................................ 35

Fan quantity versus power.................................................................. 36
Self-sealing BladeSystem enclosure...................................................... 37
Cooling multiple enclosures................................................................ 38
Thermal Logic ......................................................................................... 39

Power Regulator technologies ................................................................... 40
Power Regulator for ProLiant ...............................................................41

Power Regulator for Integrity .............................................................. 42
iLO 4 power management ................................................................. 43
Dynamic Power Saver........................................................................ 47
Dynamic Power Capping ................................................................... 48
Power delivery modes .............................................................................. 49

Non-Redundant Power ....................................................................... 49


Power Supply Redundant ................................................................... 50
AC Redundant ...................................................................................51
HP Intelligent Power Discovery Services ...................................................... 52

HP Intelligent PDUs ........................................................................... 53


HP power distribution units ....................................................................... 54
PDU benefits .................................................................................... 55
HP 16A to 48A Modular PDUs ........................................................... 55

HP Monitored PDUs .......................................................................... 55


BladeSystem c7000 PDUs .................................................................. 56
BladeSystem c3000 PDUs .................................................................. 57
BladeSystem enclosure power supplies ....................................................... 58

HP Common Slot Power Supplies ........................................................ 58


BladeSystem c7000 enclosure power supplies .......................................61
Power modules and cords .................................................................. 63
Single-phase AC power supply placement ........................................... 64
DC power configuration rules ............................................................. 65
Total available power ........................................................................ 66
BladeSystem c3000 enclosure power supplies ...................................... 67
Power supply placement .................................................................... 68
Total available power ........................................................................ 69
BladeSystem DVD-ROM drive options ............................................................... 70
Learning check .............................................................................................. 71

Module 3 — HP BladeSystem Server Blades


Objectives ...................................................................................................... 1
ProLiant Gen8 server blade portfolio .................................................................. 2
ProLiant BL420c Gen8 server blade ............................................................. 2
ProLiant BL460c Gen8 server blade ............................................................. 3
ProLiant BL465c Gen8 server blade ............................................................. 4
Integrity i2 server blade portfolio ....................................................................... 5
Integrity BL860c i2 .................................................................................... 5
Integrity BL870c i2 .................................................................................... 7

Integrity BL890c i2 .................................................................................. 10
Learning check ...............................................................................................12

Module 4 — HP BladeSystem Storage and
Expansion Blades

Objectives ...................................................................................................... 1

HP BladeSystem storage and expansion blades ................................................... 2
HP storage blades ..................................................................................... 2
HP D2200sb Storage Blade ................................................................. 3
HP X1800sb G2 Network Storage Blade ............................................... 4
HP X3800sb G2 Network Storage Gateway Blade................................. 5
Direct Connect SAS Storage for HP BladeSystem..................................... 6

BladeSystem tape blade portfolio ................................................................ 7


HP Ultrium Tape Blades ....................................................................... 7
BladeSystem tape blades — Feature comparison .................................... 8
HP Storage Library and Tape Tools ....................................................... 9

Features and benefits of L&TT ............................................................. 10


PCI Expansion Blades ............................................................................... 11
HP PCI Expansion Blade — PCI card details .........................................12
HP IO Accelerator .............................................................................13

Smart Array controller portfolio ........................................................................15


Standard features of Smart Array controllers ................................................16
I/O bandwidths in Smart Array controllers ............................................17

Smart Array controller classification ............................................................18


HP Smart Array P822 controller .................................................................18
HP Smart Array P220 and HP Smart Array P222 controllers ......................... 19
HP Smart Array P420 and P420i controllers ............................................... 19
Learning check .............................................................................................. 20


Module 5 — Ethernet Connectivity Options


for HP BladeSystem
Objectives ...................................................................................................... 1
Available Ethernet interconnect modules ............................................................. 2
HP 6120XG Ethernet Blade Switch ............................................................. 3
HP 6120XG Ethernet Blade Switch — Front panel .................................. 4
HP 6120G/XG Blade Switch ..................................................................... 5
HP 6120G/XG Ethernet Blade Switch — Front panel ............................. 6

Managing HP blade switches ..................................................................... 7
Cisco Catalyst Blade Switch 3020 features .................................................. 8

Catalyst Blade Switch 3020 front bezel ................................................ 9
Cisco Catalyst Blade Switch 3120 features ................................................ 10
Catalyst Blade Switch 3120 front bezel ............................................... 11
HP GbE2c Layer 2/3 Ethernet Blade Switch................................................12

GbE2c Layer 2/3 Ethernet Blade Switch front bezel ..............................13
HP 1:10Gb Ethernet BL-c Switch ................................................................14

1:10Gb Ethernet BL-c Switch front bezel ..............................................15
HP 1Gb Ethernet Pass-Thru Module ............................................................16
HP 10GbE Pass-Thru Module.....................................................................17
HP 10GbE Pass-Thru Module components ............................................18
Learning check .............................................................................................. 19

Module 6 — Storage Connectivity Options


for HP BladeSystems

Objectives ...................................................................................................... 1
Fibre Channel interconnect options .................................................................... 2
Cisco MDS 9124e Fabric Switch for BladeSystem .......................................... 2
Cisco MDS 9124e Fabric Switch features and components ....................... 4

Standard and optional software ........................................................... 4


Cisco MDS 9124e Fabric Switch layout .................................................. 5
Dynamic Ports on Demand ................................................................... 6
Brocade SAN switches............................................................................... 7

Brocade SAN switch licensing .............................................................. 8


Brocade SAN switch software .............................................................. 9
SAS storage solutions for BladeSystem servers .................................................... 11
HP 3Gb SAS BL Switch ............................................................................. 11
HP Virtual SAS Manager...........................................................................12


4X InfiniBand Switch modules ..........................................................................13


Mezzanine cards and adapters ........................................................................15
Mezzanine card and slot options available for BladeSystem ..........................15
Type I mezzanine cards and slots ........................................................16
Type II mezzanine cards and slots........................................................16
HBAs available ........................................................................................17
QLogic QMH2562 8Gb Fibre Channel HBA ...............................................17
Emulex LPe1205-HP 8Gb/s Fibre Channel HBA .......................................... 19
Brocade 804 8Gb Fibre Channel Host Bus Adapter .................................... 21
HP 4X InfiniBand Mezzanine HCAs .......................................................... 22

HP IB QDR/EN 10 Gb 2P 544M Mezzanine Adaptor ................................ 23
Learning check .............................................................................................. 24

Module 7 — Configuring Ethernet Connectivity
Options

Objectives ...................................................................................................... 1
Configuring an HP GbE2c Layer 2/3 Ethernet Blade Switch ................................. 2

User, operator, and administrator access rights ............................................. 2
Access-level defaults ............................................................................ 3
Accessing the GbE2c switch ....................................................................... 4
Logging in through the Onboard Administrator ............................................. 5
Configuring redundant switches .................................................................. 6

Redundant crosslinks ........................................................................... 6


Redundant paths to server bays ............................................................ 6
Manually configuring a GbE2c switch ......................................................... 7
Configuring multiple GbE2c switches .................................................... 7

Configuring a Cisco Catalyst Blade Switch 3020 or 3120 ..................................... 8


Obtaining an IP address ............................................................................ 8
Obtaining an IP address for the fa0 interface through the Onboard
Administrator ..................................................................................... 8

Using a console session to assign a VLAN 1 IP address .......................... 9


Cisco Express Setup ............................................................................ 9
Assigning the VLAN 1 IP address .............................................................. 10
Obtaining an IP address for the fa0 interface through the Onboard

Administrator ...........................................................................................12


Configuring an HP 1:10Gb Ethernet BL-c Switch ..................................................13


Planning the 1:10Gb Ethernet BL-c switch configuration ..................................13
Switch port mapping ................................................................................13
Accessing the 1:10Gb Ethernet BL-c switch ...................................................14
User, operator, and administrator access rights ............................................15
Manually configuring a switch ...................................................................16
Configuring multiple switches ....................................................................16
Using scripted CLI commands through telnet .........................................16
Using a configuration file ...................................................................16
Configuring an HP 6120XG or 6120G/XG switch ...............................................17

Switch IP configuration ..............................................................................17
Using the CLI Manager-level prompt .....................................................17

Configuring the IP address by using a web browser interface ..................17
Accessing a blade switch from the Onboard Administrator .....................18
Accessing a blade switch through the mini-USB interface (out of band) ... 19
Accessing a blade switch from the Ethernet interface (in band) .............. 19

Assigning an IP address to a blade switch ........................................... 20
IP addressing with multiple VLANs ...................................................... 21

IP Preserve: Retaining VLAN-1 IP addressing across configuration file
downloads ....................................................................................... 22
Learning check .............................................................................................. 23

Module 8 — Configuring Storage Connectivity



Options
Objectives ...................................................................................................... 1
Configuring a Brocade 8Gb SAN switch ............................................................ 2

Setting the switch Ethernet IP address ........................................................... 2


Using EBIPA ....................................................................................... 2
Using external DHCP .......................................................................... 2
Setting the IP address manually ............................................................ 3

Configuring the 8Gb SAN switch ............................................................... 5


Items required for configuration............................................................ 5
Setting the date and time ..................................................................... 5
Verifying installed licenses ................................................................... 5

Modifying the Fibre Channel domain ID (optional) ................................. 6


Disabling and enabling a switch .......................................................... 6
Using DPOD ...................................................................................... 6
Backing up the configuration ............................................................... 6
Reset button ....................................................................................... 7
Management tools .................................................................................... 8


Configuring a Cisco MDS 9124e Fabric Switch.................................................... 9


Setting the IP address ................................................................................ 9
Configuring the fabric switch .................................................................... 10
Items required for configuration.......................................................... 10
Setting the date and time ................................................................... 10
Verifying installed licenses ................................................................. 10
Modifying the Fibre Channel domain ID (optional)................................. 11
Recovering the administrator password ................................................. 11
Fabric switch management tools .................................................................12
Configuring an HP 3Gb SAS BL Switch .............................................................13

Configuration rules for the 3Gb/s SAS Switch .............................................13
Configuring the 3Gb SAS BL Switch ...........................................................14

Accessing the 3Gb SAS BL Switch .............................................................15
Confirming the firmware version .................................................................15
Learning check ...............................................................................................16

Module 9 — Virtual Connect Installation and
Configuration
Objectives ...................................................................................................... 1
HP Virtual Connect portfolio.............................................................................. 2

HP 1/10Gb VC Ethernet ............................................................................ 2


HP 1/10Gb-F VC Ethernet .......................................................................... 2
HP Virtual Connect Flex-10 10Gb Ethernet .................................................... 3
HP Virtual Connect 4Gb Fibre Channel Module ............................................ 4

HP Virtual Connect 8Gb 20-port Fibre Channel Module ................................ 4


HP Virtual Connect 8Gb 24-port Fibre Channel Module ................................ 5
HP Virtual Connect FlexFabric modules ........................................................ 6
FlexFabric adapter — Physical functions ................................................ 7

Planning and implementing Virtual Connect ...................................................... 10


Building a Virtual Connect environment ....................................................... 11
Virtual Connect out-of-the-box steps ............................................................12

Virtual Connect Ethernet stacking ...............................................................13


Virtual Connect Ethernet module stacking .............................................14


Using VC-FC modules .....................................................................................15


Virtual Connect Fibre Channel WWNs .......................................................15
Virtual Connect Fibre Channel port types and logins ....................................16
Fibre Channel logins ..........................................................................16
Fibre Channel zoning and SSP ..................................................................17
N_Port_ID virtualization ............................................................................18
Fabric login using the HBA aggregator’s WWN .................................. 19
N_Port_ID virtualization ..................................................................... 20
Configuring Virtual Connect ............................................................................ 20
Virtual Connect logical flow...................................................................... 22

Create a VC domain ......................................................................... 22
Virtual Connect multi-enclosure VC domains ......................................... 23

Define Ethernet networks.................................................................... 30
Define Fibre Channel SAN connections ................................................31
Create server profiles ........................................................................ 32
Implementing the server profile ........................................................... 33

Manage data center changes ............................................................ 34
Virtual Connect – Server profile migration .................................................. 35

Server profile migration for a failed server ........................................... 36
Virtual Connect Manager ............................................................................... 37
Accessing the Virtual Connect Manager .................................................... 38
Virtual Connect Manager login page ........................................................ 39
Virtual Connect Manager home page........................................................ 40

Virtual Connect role-based privileges ..........................................................41


Virtual Connect Manager failover ............................................................. 42
Virtual Connect Enterprise Manager ................................................................ 43
VCEM compared with VC Manager .......................................................... 45

VCEM licensing ...................................................................................... 46


Installing VCEM ...................................................................................... 47
Typical environments for VCEM ................................................................. 47
VCEM user interfaces .............................................................................. 48
rT

VCEM profile failover ........................................................................ 49


Learning check .............................................................................................. 50
Fo

Rev. 12.31 ix
Implementing HP BladeSystem Solutions

Volume 2
Module 10 — Introduction to HP SAN Solutions
Objectives ...................................................................................................... 1
HP MSA2000/P2000 portfolio ......................................................................... 2
P2000 G3 MSA ....................................................................................... 2
Key features.............................................................................................. 5
HP 2000i MSA ......................................................................................... 6

Management tools .................................................................................... 7
EcoStore technology .................................................................................. 7
Active/active controllers ............................................................................. 8

Unified LUN presentation ........................................................................... 8
HP P4000 overview ......................................................................................... 9
P4000 product suite ................................................................................ 10
HP SAN/iQ software ........................................................................ 10

P4000 centralized management console ............................................. 10

Storage software ........................................................................................... 20
HP P4000 snapshots ............................................................................... 20
HP P4000 SAN/iQ SmartClone ............................................................... 21
HP P4000 SAN Remote Copy .................................................................. 23
Learning check .............................................................................................. 24

Module 11 — HP Virtualization Basics



Objectives ...................................................................................................... 1
How does virtualization work? .......................................................................... 2
What is a virtual machine? ........................................................................ 3

ProLiant virtualization with VMware ................................................................... 4


Host operating system-based virtualization.................................................... 4
VMware ESXi: Virtualization platform .......................................................... 5

VMware ESX/ESXi ............................................................................. 6


VMware ESXi features ......................................................................... 7
VMware ESXi architecture .................................................................... 8
Configuring ESXi ................................................................................ 9

VMware vSphere .................................................................................... 10


Using the vSphere client .................................................................... 10

Contents

ProLiant virtualization with Citrix Xen and XenServer ...........................................13


Comparing Xen platforms ...................................................................13
Identifying the XenServer product line...................................................14
Citrix Xen architecture overview ...........................................................15
XenCenter overview ..................................................................................16
ProLiant virtualization with Microsoft products .....................................................17
Windows Server 2008 R2 Hyper-V ............................................................17
Learning check .............................................................................................. 19

Module 12 — Configuring and Managing HP BladeSystem

Objectives ...................................................................................................... 1
Placement rules and installation guidelines.......................................................... 2
c7000 enclosure zoning ............................................................................ 2
c7000 enclosure placement rules—Half-height server blades .......................... 4

c7000 enclosure placement rules—Full-height server blades ........................... 5
c7000 interconnect bays ............................................................................ 6

c3000 enclosure zoning ............................................................................ 7
c3000 enclosure placement rules—Half-height server blades .......................... 8
c3000 enclosure placement rules—Full-height server blades ........................... 9
c3000 interconnect bays ......................................................................... 10
Installation rules for partner blades ............................................................. 11

HP PCI Express Mezzanine Pass-Thru card ............................................ 11


Using the Onboard Administrator .....................................................................12
Onboard Administrator user interfaces........................................................13
Local I/O cable connection ................................................................14

First Time Setup Wizard ............................................................................15


Rack and enclosure settings .......................................................................16
Enclosure bay IP addressing ......................................................................17
Using configuration scripts ....................................................................... 19

Active to standby transition ....................................................................... 20


Using the service port connection .............................................................. 21
Power Management settings ..................................................................... 23
Device Power Sequence device bays.......................................................... 24

Onboard Administrator authentication....................................................... 26


VLAN configuration ................................................................................. 28
VLAN configuration settings ............................................................... 29


Device Summary page............................................................................. 30


Rack firmware .......................................................................................... 31
Flashing the Onboard Administrator firmware ............................................. 32
Other firmware operations ................................................................. 34
Redundant flashing ........................................................................... 35
Recovering the administrator password ...................................................... 36
Resetting the Onboard Administrator to factory defaults ............................... 37
Preparing logs from the Onboard Administrator .......................................... 38
Using HP Insight Display ................................................................................ 39
Health Summary screen ........................................................................... 39

Enclosure settings .....................................................................................41
Enclosure information .............................................................................. 42

Verifying the firmware version ................................................................... 43
Rebooting the Onboard Administrator ....................................................... 44
Blade and port information ...................................................................... 45
Blade information ............................................................................. 46

Port Info view from Insight Display ....................................................... 47
USB Menu .............................................................................................. 48

iLO Management Engine ................................................................................ 49
Configuring iLO ...................................................................................... 49
iLO RBSU ......................................................................................... 49
Browser-based setup.......................................................................... 49
HPONCFG ...................................................................................... 50

HP Lights-Out Online Configuration Utility ............................................ 53


Important blade iLO settings ..................................................................... 54
General security recommendations ............................................................ 56
Attaching a DVD-ROM drive to BladeSystem enclosures ...................................... 57

Connecting to the enclosure DVD-ROM drive — Insight Display .................... 58


Connecting an ISO image as a CD/DVD ............................................ 59
Connecting to the enclosure DVD-ROM drive — Onboard Administrator ........ 60
Mounting an ISO image as a DVD ......................................................61

Enclosure-based DVD-ROM drive status – Insight Display .............................. 62


Enclosure-based DVD-ROM drive status – Onboard Administrator ................. 63
Learning check .............................................................................................. 64




Module 13 — Insight Control Management Software


Objectives ...................................................................................................... 1
Insight Control ................................................................................................. 2
Insight Control introduction ......................................................................... 2
Insight Control features .............................................................................. 3
Insight Control server deployment ......................................................... 4
Key server deployment features............................................................. 5
BladeSystem deployment optimizations .................................................. 5
Insight Control server migration ................................................................... 6

Insight Control virtual machine management .......................................... 8
Insight Control performance management ............................................ 10

Insight Control remote Management ..................................................... 11
Insight Control power Management .....................................................12
Hardware and software requirements .........................................................14
Insight Software server hardware requirements ......................................14

Database..........................................................................................16

Web browser ....................................................................................16
Virtualization platform ........................................................................16
HP Systems Insight Manager ............................................................................17
HP SIM overview ......................................................................................17
HP SIM architecture ..................................................................................18
Central Management Server ...............................................................18

Management console ........................................................................ 19


Managed systems ............................................................................. 19
HP SIM features ...................................................................................... 19
New features in HP SIM 7.0 ............................................................... 20

Easy and rapid installation ................................................................. 21


Two user interfaces ........................................................................... 22
Manage health proactively ................................................................ 23
Automatic system discovery and identification ...................................... 24

Fault management and event handling ...................................................... 26


Role-based security .................................................................................. 27
HP Version Control .................................................................................. 29

Version Control Repository Manager ......................................................... 30


Version Control Agent ........................................................................31
Learning check .............................................................................................. 32




Module 14 — Insight Control Server Deployment


Objectives ...................................................................................................... 1
Introducing Insight Control server deployment...................................................... 2
HP Insight Control Server Deployment software ............................................. 2
Benefits of Insight Control server deployment ................................................ 4
Insight Control server deployment architecture ............................................... 6
Server components .................................................................................... 7
Deployment Server .............................................................................. 8
Deployment Server Console ................................................................. 9

Deployment Server database.............................................................. 10
PXE server ......................................................................................... 11

Deployment Share .............................................................................12
DHCP server .....................................................................................12
Client components .............................................................................13
Scripted and imaged installation ......................................................................15

Jobs and tasks .........................................................................................15

Jobs .................................................................................................15
Tasks ................................................................................................15
Jobs and tasks working together ................................................................15
Building jobs .....................................................................................15
Scheduling jobs .......................................................................................16
Job categories .........................................................................................17

Firmware Flash ..................................................................................17


Hardware Configuration .....................................................................18
OS Installation.................................................................................. 19
OS Imaging ..................................................................................... 20

Software .......................................................................................... 21
Scripted deployment ................................................................................ 21
Windows configuration file ................................................................ 22
Configuration flow for scripting........................................................... 23

Imaging ................................................................................................. 24
Advantages and disadvantages.......................................................... 25
Imaging preparation ......................................................................... 25

Configuration flow for imaging ........................................................... 26


Advanced imaging options ...................................................................... 27
Media spanning ............................................................................... 27
Partition resizing ............................................................................... 28
Special functionality for HP BladeSystem........................................................... 29
Rip-and-Replace ...................................................................................... 29
Physical Devices view icons .......................................................................31
Creating virtual bays ......................................................................... 32
Learning check .............................................................................................. 34




Module 15 — Data Availability and Protection for an HP Server Blade
Objectives ...................................................................................................... 1
Increasing availability through power protection .................................................. 2
Uninterruptible power supplies .................................................................... 2
HP power protection and management portfolio ........................................... 3
Tower UPS models .............................................................................. 3
Rack-mountable UPS models ................................................................. 4

HP UPS features ........................................................................................ 5
UPS options .............................................................................................. 6

Enhanced battery management .................................................................. 6
HP rack and power management software ................................................... 8
HP Power Manager ............................................................................. 8
HP Power Protector UPS Management Software ...................................... 8

Rack and Power Manager ................................................................... 9
HP UPS Management Module .................................................................. 10

HP Modular Cooling System G2 ................................................................12
Data Protection software ..................................................................................14
liv
HP Data Protector.....................................................................................14
Key benefits ......................................................................................14
Key features ......................................................................................15
de

HP Data Protector Express .........................................................................16


Key features of Data Protector Express ..................................................16
Operating systems supported ..............................................................18
Learning check .............................................................................................. 19

Module 16 — HP BladeSystem Support


Objectives ...................................................................................................... 1
BladeSystem diagnostics ................................................................................... 2

Tools to collect data................................................................................... 2


HP Active Health System ............................................................................ 4
HP Insight Control performance management ............................................... 5

HP Insight Remote Support ......................................................................... 6


HP Insight Online ...................................................................................... 7
HP iLO Management Engine Event Log ........................................................ 9
Security audits .................................................................................... 9
Integrated Management Log ..................................................................... 10
Array Configuration Utility diagnostics ........................................................12
ACU diagnostic reports ......................................................................13
Automatic Server Recovery ........................................................................14


Firmware update tools and options ...................................................................16


Firmware overview ...................................................................................16
Firmware deployment methods ...................................................................17
Available tools for firmware updates ...........................................................17
HP Smart Update Manager ................................................................18
HP BladeSystem Firmware Deployment Tool ......................................... 20
Virtual Connect Support Utility ........................................................... 21
Service Pack for ProLiant .......................................................................... 22
Advantages ..................................................................................... 23
Obtaining firmware with Service Pack for ProLiant ................................. 24

Extended support duration ................................................................. 24
General best practices ............................................................................. 25

HP Services for BladeSystem ........................................................................... 26
Important safety information ..................................................................... 26
Safety symbols ................................................................................. 26
Server warnings and cautions ............................................................ 27

Preventing electrostatic discharge ........................................................ 28
Grounding methods to prevent electrostatic discharge ........................... 28

Troubleshooting flowcharts ....................................................................... 29
Example of troubleshooting power-on problems .......................................... 29
Implementing preventive measures ............................................................. 30
Learning check ...............................................................................................31



HP BladeSystem Portfolio Introduction
Module 1

Objectives
After completing this module, you should be able to:
 Describe the HP BladeSystem positioning

 Identify the components of the BladeSystem portfolio
 List the key HP BladeSystem Generation 8 (Gen8) server technologies

 Name the BladeSystem management and deployment tools


HP BladeSystem positioning

Three major features of HP BladeSystem

BladeSystem solutions provide complete infrastructures that include servers,
storage, networking, and power to facilitate data center integration and
transformation. They enable data center customers to respond more quickly and
effectively to changing business conditions, lighten the load on the IT staff, and
cut total ownership costs.
BladeSystem has kept pace with the changing needs of data center customers.
These business requirements include:
 Reduce connectivity complexity and costs
 Lower purchase and operations costs when adding or replacing
compute/storage capacity
 Lower application deployment and infrastructure operations costs by reducing
the number of IT architecture variants
 Allow easier, faster, and more economical changes to server and storage
setups without disrupting local area network (LAN) and storage area network
(SAN) domains
 Allow faster modification or addition of applications
 Support grid computing and service-oriented architecture (SOA)
 Support third-party component integration with well-defined interfaces, such as
Ethernet NICs/switches, Fibre Channel host bus adapters (HBAs)/switches, and
InfiniBand host channel adapters (HCAs)/switches

BladeSystem has met those challenges by enabling IT to:


 Consolidate — Single modular infrastructure integrates servers, storage,
networking, and management software that can be managed with a common,
consistent user experience.
 Virtualize — Pervasive virtualization enables you to run any workload, meet high
availability requirements, and support scale out and scale up. It also enables
you to create logical, abstracted connections to LAN/SAN.
 Automate — Freeing up IT resources for more important tasks enables you to
simplify routine tasks and processes, saving time while maintaining control.

BladeSystem evolution

Many changes have been made since BladeSystem was first introduced to the
market. The BladeSystem infrastructure was designed to reduce the number of cables,
centralize management, and reduce space occupied by servers. All these features
were enabled to reduce the operational and maintenance costs of the server
environment.

In 2007, HP introduced Virtual Connect, which simplified connection management
(both Ethernet and Fibre Channel). Using Virtual Connect, administrators can design
networks and SANs on a virtual level. This means that cabling is done only once,
and all other changes are made on the Virtual Connect level. Virtual Connect is able
to replace the physical MAC address and WWN number of a server blade with a
virtual one. The server is visible to the external world using these virtual addresses.
When a network card or Fibre Channel card has to be replaced, administrators have
nothing else to change in the configuration because new MAC and WWN numbers
will be overwritten with the virtual addresses previously assigned to that blade.
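The address abstraction described above can be sketched as a toy model. This is an illustrative Python sketch only — the profile structure, function names, and the example MAC/WWN values are assumptions invented for the example, not HP's implementation:

```python
# Illustrative model: a Virtual Connect server profile owns the virtual
# addresses, so replacing the physical hardware does not change what the
# external LAN/SAN sees.

def make_profile(name, virtual_mac, virtual_wwn):
    """A server profile carries the virtual addresses assigned by the administrator."""
    return {"name": name, "mac": virtual_mac, "wwn": virtual_wwn}

def apply_profile(profile, blade):
    """Overlay the profile's virtual addresses onto a blade's factory addresses."""
    blade["active_mac"] = profile["mac"]   # factory MAC is masked, not lost
    blade["active_wwn"] = profile["wwn"]
    return blade

profile = make_profile("web01", "00-17-A4-77-00-01", "50:06:0B:00:00:C2:62:00")

old_blade = {"factory_mac": "AA:AA:AA:00:00:01"}
new_blade = {"factory_mac": "BB:BB:BB:00:00:02"}  # replacement hardware

# The same profile applied to replacement hardware yields the same externally
# visible addresses, so the LAN/SAN configuration needs no change.
assert (apply_profile(profile, old_blade)["active_mac"]
        == apply_profile(profile, new_blade)["active_mac"])
```

Because the external fabric only ever sees the profile's virtual addresses, swapping the blade or its adapter is invisible to the LAN and SAN configuration — which is the point of the paragraph above.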

In 2008, HP announced Virtual Connect Flex-10, which has all the features of the
original Virtual Connect, but one 10Gb network port is seen as four independent
network ports. Administrators can assign bandwidth to a single port from 100Mb to
10Gb. ProLiant G6 servers are equipped with a dual-port Flex-10 network card. As a
result, customers using Virtual Connect Flex-10 have eight NICs integrated into a half-
height server blade with flexible speeds instead of two 1Gb ports.
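The Flex-10 rules mentioned above (one 10Gb port presented as four independent ports, each assignable from 100Mb to 10Gb) can be expressed as a small validation sketch. This is illustrative only; the helper name and the assumption that the four partitions share the port's 10Gb total are mine, not taken from the text:

```python
PORT_CAPACITY_GB = 10.0
MIN_GB, MAX_GB = 0.1, 10.0   # 100Mb to 10Gb per partition, per the text

def valid_flex10_split(flexnic_gb):
    """Check a proposed bandwidth split for one physical 10Gb port (sketch)."""
    return (len(flexnic_gb) <= 4                        # at most four partitions
            and all(MIN_GB <= bw <= MAX_GB for bw in flexnic_gb)
            and sum(flexnic_gb) <= PORT_CAPACITY_GB)    # assumed: shares the 10Gb port

assert valid_flex10_split([4.0, 3.0, 2.0, 1.0])   # fills the port exactly
assert not valid_flex10_split([4.0, 4.0, 4.0])    # 12Gb exceeds the 10Gb port
assert not valid_flex10_split([0.05])             # below the 100Mb minimum
```

With two such ports embedded on a half-height G6 blade, four partitions per port gives the eight NICs the paragraph above describes.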
In 2010, HP announced Virtual Connect FlexFabric. This technology was designed
for converging LAN and SAN connections into a single interconnect module.
In 2012, HP introduced a refresh to its ProLiant server blade line of products. Updates
to the ProLiant Gen8 server blades include a faster memory chipset, a lower voltage
memory option, and HP SmartMemory for enhanced support through HP Active
Health.


Transitioning to the ProLiant Gen8 servers


Current ProLiant BL490c customers can move to the ProLiant BL460c Gen8 server
because it combines the best of the two server blades. Also, current ProLiant BL460c
customers can move to Gen8 to benefit from the improved BL460c Gen8 server
performance, management features, and overall configuration flexibility.

Key Gen8 technologies


HP is continually upgrading its server portfolio with the latest technologies to meet
customer requirements.

Key Gen8 server technology includes:

 Multicore processors — Multi-core Intel Xeon, AMD Opteron, or Intel Itanium 2
processors enable greater system scalability. Customers benefit from software
applications that are developed to take advantage of multi-core processor
technology.

 HP SmartMemory — Lower voltage DIMMs allow faster operation speeds and
greater DIMM counts. SmartMemory enhances memory performance and can
be managed through the HP Active Health System. SmartMemory verifies that
the memory has been tested and performance-tuned specifically for HP
ProLiant servers. Types of HP SmartMemory include:
 Registered DIMMs (RDIMM)

 Unbuffered with ECC DIMMs (UDIMM)


 Load-reduced DIMMs (LRDIMM)
HP SmartMemory allows for greater performance and greater capacity. Some
Gen8 server blades can be equipped with up to 512 GB of memory.
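The 512 GB maximum quoted above is simple arithmetic over slot count and DIMM size. The specific values below (16 slots populated with 32 GB LRDIMMs) are hypothetical examples chosen to reproduce the figure, not the configuration of any particular blade model:

```python
# Hypothetical configuration: slot count and DIMM size are example values.
dimm_slots = 16
lrdimm_size_gb = 32

max_capacity_gb = dimm_slots * lrdimm_size_gb
assert max_capacity_gb == 512  # matches the maximum quoted in the text
```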


 iLO Management Engine — The HP Integrated Lights-Out (iLO) Management
Engine is a complete set of embedded management features that support the
complete lifecycle of the server, from initial deployment, through ongoing
management, to service alerting and remote support. The iLO Management
Engine ships standard on all ProLiant Gen8 servers. The iLO Management
Engine includes:
 HP iLO – Is the core foundation for the iLO Management Engine. iLO
management simplifies server setup, health monitoring, as well as power
and thermal control. iLO enables you to access, deploy, and manage
servers anytime from anywhere.
 HP Agentless Management – Begins to work as soon as the server has
power and data connections. The base hardware monitoring and alerting
capability is built into the iLO chipset.

HP BladeSystem Portfolio Introduction

 HP Active Health System – Is an essential part of the iLO management


engine. The Active Health System monitors and records changes in the
server hardware and system configuration. It assists in diagnosing problems
and delivering rapid resolution when system failures occur.
 HP Intelligent Provisioning – Enables out of the box single server deployment
and configuration without the need for media.
 HP Embedded Remote Support – Builds on the existing functions established
with HP Insight Remote Support that either runs in a stand-alone central
system or as a plug-in to the HP Systems Insight Manager (HP SIM).

• Multifunction network interface cards (NICs) — HP multifunction NICs provide a high-performance network interface with support for TCP/IP Offload Engine (TOE), iSCSI, and Remote Direct Memory Access (RDMA) over a single network connection. Previously, the typical server environment required separate connectivity products for networking, storage, interconnects, and infrastructure management. HP multifunction NICs present a single connection supporting multiple functions, enabling you to manage an entire infrastructure as a single, unified fabric. They provide high network performance with upgrade options to enhance memory and storage utilization. Multifunction NICs support multiple fabric protocols, including Ethernet, iSCSI, and Fibre Channel.

Note
The NICs in Integrity BL860c/BL870c server blades are not multifunction.
• Flex-10 support — Gen8 servers have an embedded, dual-port Flex-10 network card. These two ports can function as eight independent network ports with adjustable bandwidth (VC Flex-10 modules are required to use this functionality).
• HP Smart Array P700m Controller — This Smart Array controller in a mezzanine card format allows you to connect external storage to the server blades.
• Internal USB and SD card ports, plus a Trusted Platform Module (TPM) — Internal card ports and the TPM provide expansion and security options in Gen8 server blades.
• Power Regulator for ProLiant and Dynamic Power Capping — Power Regulator and Dynamic Power Capping double the capacity of servers in the data center through dynamic control of power consumption.

Implementing HP BladeSystem Solutions

BladeSystem portfolio

The BladeSystem portfolio offers multiple server options, different enclosures for server blades, and a wide choice of interconnect options including Fibre Channel, Ethernet, SAS, and InfiniBand.
The BladeSystem portfolio consists of server blades, blade workstations, interconnects, and multiple storage options such as tape drives and storage blades. The two BladeSystem enclosures can accommodate any type of server blade that is available on the market. Any of the server blades can be enhanced with a variety of mezzanine cards including Ethernet, SAS, Fibre Channel, and InfiniBand options. For each type of connection, HP offers appropriate interconnect modules, including revolutionary Virtual Connect modules. The whole infrastructure can be managed from a central location using HP Systems Insight Manager (HP SIM) and other HP Insight software components.

BladeSystem enclosures

BladeSystem c3000 enclosure

The BladeSystem c3000 enclosure can scale from a single enclosure holding up to eight blades, to a rack containing seven enclosures holding up to 56 blades.

BladeSystem c7000 enclosure

The BladeSystem c7000 enclosure holds up to 16 server and storage blades plus redundant network and storage switches. It includes a shared, multi-terabit high-speed midplane for wire-once connectivity of server blades to network and shared storage. Power is delivered through a pooled power backplane that ensures the full capacity of the redundant hot-plug power supplies is available to all blades.

BladeSystem server blades

BladeSystem server blades portfolio

BladeSystem server blades are delivered in two form factors: half-height and full-height. Server blades can be installed (and mixed with other server blades) in c3000 and c7000 enclosures. Different series are designed for different usage models.
All models can be categorized into four groups:
• 2xx series – High-density, low-cost servers optimized for high-performance computing (HPC) clusters
• 4xx series – Dual-socket machines for most typical use
• 6xx series – Quad-socket servers for virtualization and demanding applications
• 8xx series – Integrity servers supporting HP-UX and OpenVMS with true 64-bit processing
HP has server blades that meet customer needs, from a small business to the largest enterprise firm. ProLiant server blades support the latest AMD Opteron and Intel Xeon processors and a wide variety of I/O options. Integrity server blades feature Intel Itanium processors. HP server blades also feature:
• Virtual Connect technology
• A variety of network interconnect alternatives
• Integrated Lights-Out (iLO) 4 (Gen8 servers)
• Multiple redundant features
• Embedded RAID controllers
HP ProLiant Blade Workstation Solutions

With an HP Blade Workstation Solution, the computing power, in the form of blade workstations, is moved to the data center where the workstations can be more easily, securely, and inexpensively managed.
The HP Blade Workstation Solution consists of three primary components:
• ProLiant xw460c Blade Workstation or ProLiant xw2x220c Blade Workstation (based on ProLiant server blade architecture)
• The client computer (the HP Compaq t5730 Thin Client is shown in the graphic; an HP dc73 Blade Workstation Client is also supported)
• HP Remote Graphics Software (HP RGS)
Blade workstations can be installed in c3000 or c7000 enclosures. Other positioning rules and configurations are the same as for server blades, including management procedures.
HP ProLiant WS460c G6 Blade Workstation

The HP ProLiant WS460c G6 Blade Workstation is ideal for desktop power users with computing environments that require the use of high-performance graphics applications from remote locations. The small form factor of the HP ProLiant xw460c Blade Workstation allows installation of up to 64 blade workstations in a single 42U rack.
ProLiant WS460c G6 Blade Workstations support the following operating systems:
• Microsoft Windows
• Red Hat Enterprise Linux (RHEL)
The optional HP Graphics Expansion Blade module is an expansion blade that attaches to the top of the ProLiant xw460c blade and enables use of a full-size standard PCIe graphics card such as the NVIDIA Quadro FX 5600. Without the expansion blade, small form-factor graphics adapters are installed internally in the blade workstation.

HP ProLiant xw2x220c Blade Workstation

The HP ProLiant xw2x220c Blade Workstation is a high-density mid-range workstation with two independent workstation nodes in a single half-height blade package. Each workstation node has its own processor, memory, disk drive, and mezzanine slot which can be fitted with a graphics subsystem. This allows up to 32 workstations in a c7000 enclosure and 128 workstations in a standard 42U rack.
A single HP xw2x220c is essentially two workstations in terms of software licensing. If you purchase a Windows operating system with the blade workstation, you will be purchasing two licenses and receive two certificates of authenticity stickers. All software, both HP and third-party, treats one HP xw2x220c Blade Workstation as two systems.
BladeSystem storage and expansion

BladeSystem is built not only on servers, but also on storage and expansion modules. BladeSystem can also consolidate other network equipment including storage and backup options.

HP storage blades

D2200sb Storage Blade

HP offers storage solutions designed to fit inside the BladeSystem enclosure, as well as external expansion to virtually unlimited storage capacity. HP storage blades offer flexible expansion and work side by side with ProLiant and Integrity server blades.
The HP portfolio of storage blades includes:
• HP Storage D2200sb Storage Blade
• HP Storage X3800sb G2 Network Storage Gateway Blade
• HP Storage X1800sb G2 Network Storage Blade
• HP Storage IO Accelerator
• Direct Connect SAS Storage for HP BladeSystem
Ultrium Tape Blades

Ultrium SB3000c Tape Blade

The HP Storage Ultrium Tape Blades offer a complete data protection, disaster recovery, and archiving solution for BladeSystem customers who need an integrated data protection solution. These half-height tape blades provide direct attach data protection for the adjacent server and network backup protection for all data residing within the enclosure.
Each HP Storage Ultrium Tape Blade solution ships standard with HP Data Protector Express Software Single Server Edition software. In addition, each tape blade supports HP One-Button Disaster Recovery (OBDR), which allows quick recovery of the operating system, applications, and data from the latest full backup set. HP Ultrium Tape Blades are the industry's first tape blades and are developed exclusively for HP BladeSystem enclosures.
The following models are available:
• HP Storage SB3000c Tape Blade
• HP Storage SB1760c Tape Blade
PCI Expansion Blade

The HP BladeSystem PCI Expansion Blade provides PCI card expansion slots to an adjacent server blade. This blade expansion unit uses the midplane to pass standard PCI signals between adjacent enclosure bays, allowing a server blade to add off-the-shelf PCI-X or PCI-E cards. Customers need one PCI Expansion Blade for each server blade needing PCI card expansion. Any PCI card from third-party manufacturers that works in HP ProLiant ML and HP ProLiant DL servers should work in this PCI Expansion Blade.

Note
HP does not offer any warranty or support for third-party manufactured PCI products.
Ethernet interconnects

HP 10GbE Pass-Thru Module

To connect embedded and added network cards to the production network, HP provides a number of Ethernet interconnects for BladeSystem.
Ethernet interconnects allow administrators to connect server blades in a variety of different methods. Most interconnects reduce cabling, with internal downlinks to individual server blades and consolidated uplinks.
The HP portfolio of interconnects includes:
• HP 10Gb Pass-Thru module
• HP GbE2c switch and HP GbE2c Layer 2/3
• Cisco Catalyst 3020 Blade Switch and Cisco Catalyst Blade Switch 3120G/X
• HP 1:10Gb Ethernet switch
• HP ProCurve 6120XG
• HP ProCurve 6120G/XG
Ethernet mezzanine cards

HP NC360m Dual Port 1GbE BL-c Adapter

Mezzanine cards are used to add more network connections to a server blade. The current portfolio of HP Ethernet mezzanine cards includes:
• HP NC325m PCI Express Quad Port Gigabit Server Adapter
• HP NC326m PCI Express Dual Port 1Gb Server Adapter
• HP NC360m Dual Port 1GbE BL-c Adapter
• HP NC364m Quad Port 1GbE BL-c Adapter
• HP NC382m Dual Port 1GbE Multifunction BL-c Adapter
• HP NC522m Dual Port Flex-10 10GbE Multifunction BL-c Adapter
• HP 530m Dual Port Flex-10 10GbE Ethernet Adapter
• HP NC532m Dual Port Flex-10 10GbE Multifunction BL-c Adapter
• HP NC542m Dual Port Flex-10 10GbE Multifunction BL-c Adapter
• HP NC550m Dual Port Flex-10 10GbE Multifunction BL-c Adapter
• HP NC552m Dual Port Flex-10 10GbE Multifunction BL-c Adapter
• HP 554FLB Dual Port FlexFabric 10GbE Adapter
• HP 554m Dual Port FlexFabric 10Gb Adapter
• HP 10GbE Dual Port Mezzanine Adapter

Important
You must install an appropriate interconnect for a mezzanine card.
Storage interconnects

HP Brocade 8Gb SAN switch

To connect server blades to external SAN or other storage solutions, specific storage interconnects must be used. HP offers a full portfolio of such devices, including Cisco and Brocade Fibre Channel switches.
The HP storage interconnects include:
• Brocade 8Gb SAN switch
• Cisco MDS 9124e
• HP InfiniBand switch
• 3Gb SAS switch
Storage mezzanine cards

HP Smart Array P700m Controller

Storage mezzanine cards are used to connect server blades to external SANs. HP offers hardware iSCSI controllers in a mezzanine form factor and the P700m Smart Array Controller to connect an MDS600 to the enclosure. 3Gb SAS switches are required to use the P700m and external storage devices.
The current portfolio includes:
• Brocade 804 8Gb FC HBA for HP c-Class BladeSystem
• Emulex LPe1105-HP 4Gb FC HBA for HP c-Class BladeSystem
• Emulex LPe1205-HP 8Gb FC HBA for HP c-Class BladeSystem
• QLogic QMH2462 4Gb FC HBA for HP c-Class BladeSystem
• QLogic QMH2562 8Gb FC HBA for HP c-Class BladeSystem
• QLogic QMH4062 1GbE iSCSI Adapter for HP BladeSystem c-Class
• HP Smart Array P700m Controller
Integrity NonStop BladeSystem

The Integrity NonStop BladeSystem offers double the performance and improved response time using multicore and storage subsystem technology (when compared to the NonStop NS16000 in HP labs). The cost per transaction is cut in half, and response time and throughput are improved with standards-based IP communications and a NonStop I/O infrastructure with the latest storage technology.
Manageability has also been improved with the HP SIM Blade Plug-in, NonStop Cluster Essentials with HP SIM, iLO technology, and Onboard Administrator. Improved middleware and the NonStop operating system enhance multiple-failure fault tolerance, increase online manageability, and ease upgrades. The Integrity NonStop BladeSystem:
• Provides the industry's best end-to-end transaction integrity for the most reliable data
• Leverages Intel improvements in chip-level data integrity and also prevents data corruption end-to-end (with Fletcher Check Sum)
Better performance, lower cost per transaction, and improved scalability make the NonStop BladeSystem ideal for increasing transaction volumes in finance, healthcare, telecommunications, and other applications.
NonStop NB54000c and NB5000c BladeSystems

As is typical with other NonStop systems, the NonStop NB54000c and NB5000c BladeSystems scale out through built-in clustering of logical processors—up to 4,080 logical processors in the maximum number of clustered systems (8,160 cores). Both BladeSystems feature 2–16 processors per node, with 192 TB maximum memory per cluster.
Multi-core processing capabilities allow the Integrity NonStop BladeSystems to scale up, providing nearly twice as much processing power per logical processor at a lower per-transaction cost. To support these multi-core processors, the NonStop BladeSystem uses the NonStop Multi-core Architecture (NSMA)—a performance-oriented architecture that runs relational database and transaction processing software. In addition, the NonStop operating system named the J-series has been integrated with and customized for use in a multi-core architecture environment.
Together, the NSMA and NonStop Operating System J-series help you achieve double the performance of other Integrity NonStop NS-series systems. To achieve such high levels of performance, both cores in a dual-core Integrity logical processor are deployed, resulting in improved performance.
The Integrity NonStop BladeSystem uses a novel I/O infrastructure with a standard SAS storage adapter called Cluster I/O Module (or Storage CLIM) and a standard Ethernet controller called IP Cluster I/O Module (or IP CLIM). The Storage CLIM supports more storage capacity at a lower cost, provides fault tolerance, and delivers improved performance.
• Integrity NonStop BladeSystem NB5000c — The NB5000c features Itanium 9100 series dual-core 1.66 GHz processors with 18 MB L3 cache.
• Integrity NonStop BladeSystem NB54000c — The NB54000c features Itanium 9300 series quad-core 1.66 GHz processors with 20 MB L3 cache. Compared to the NB5000c, the NB54000c provides nearly twice as much performance capacity per logical processor at a lower per-transaction cost. The NB54000c system provides near-linear scalability up to 16,320 cores, with support for up to 192,000 program processes per node and 48,960,000 program processes in an Expand network. Built on the Integrity BL860c i2 server blade, the NB54000c system ships with expanded availability, reliability, scalability, and latency features. It also includes an improved I/O offload engine (incorporating dual CLIM OS disks) with a SAS 2.0 storage subsystem that is aligned with current industry advancements in disk technology.
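The core counts quoted above follow directly from the logical-processor limit; a quick sanity check in plain arithmetic (no HP tooling assumed):

```python
# Each NonStop logical processor is one Itanium package: dual-core on the
# NB5000c, quad-core on the NB54000c. The cluster-wide core counts quoted
# above are the 4,080 logical-processor limit times the cores per package.
logical_processors = 4080

nb5000c_cores = logical_processors * 2    # dual-core Itanium 9100
nb54000c_cores = logical_processors * 4   # quad-core Itanium 9300

print(nb5000c_cores, nb54000c_cores)      # 8160 16320
```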

For more information, visit: http://www.hp.com/go/nonstopblade



Integrity Superdome 2

The Integrity Superdome 2 represents a category of modular, mission-critical systems that scales up, out, and within to consolidate all tiers of critical applications on a common platform. Designed around the BladeSystem architecture for the Converged Infrastructure, the Superdome 2 uses modular building blocks that enable customers to "pay as they grow" from mid-range to high-end. The modular design, supporting up to 1,500 nodes, leverages standard eight-socket and 16-socket building blocks, and is managed from a single console.
The Superdome 2 uses a 19-inch standard rack and features a bladed design, with the basic building block being the Superdome 2-16s enclosure. The enclosure is specific to the Superdome 2 but is based on the technology of the BladeSystem c7000 enclosure. It shares a common midplane, in addition to common fans and power supplies, to give customers common, easy-to-service spares.
The Superdome 2 is mission-critical by design, with innovations that provide a 450% boost to infrastructure reliability compared to its predecessor. These innovations include:
• Online, tool-free serviceability, supported by self-diagnosis and self-healing capabilities
• A power-once backplane that is 100% passive, with no single points of failure
• The internal high-performance crossbar network that connects processors and memory, which can be replaced online
Some of the business needs that the Superdome 2 addresses include:
• Meets researchers' needs for a high-performance computing environment
• Can accommodate peak workload requirements
• Provides an always-on infrastructure without going to redundant and failover configurations
• Reduces the number of software licenses required to do business
• Reduces the cost and complexity of the infrastructure
• Positions the company to rapidly accommodate and leverage dynamic market conditions
• Provides the agility that the existing mainframe environment cannot offer
• Frees up space in the data center
• Meets a company's current needs and can scale to meet the future demands of their data warehouse
• Establishes a relationship with a service-oriented partner
• Supports large workloads in a large symmetric multiprocessing (SMP) system
• Moves large volumes of data quickly in and out of an Oracle database


Virtual Connect technology

Virtual Connect mapping concept

Virtual Connect is an industry-standard-based implementation of server-edge I/O virtualization. It puts an abstraction layer between the servers and the external networks so that the LAN and storage area network (SAN) see a pool of servers rather than individual servers.
After the LAN and SAN connections are made to the pool of servers, the server administrator uses a VC Manager user interface to create an I/O connection profile for each server. Instead of using the default Media Access Control (MAC) addresses for all NICs and default World Wide Names (WWNs) for all host bus adapters (HBAs), the VC Manager creates bay-specific I/O profiles, assigns unique MAC addresses and WWNs to these profiles, and administers them locally.
Local administration of network addresses is a common industry technique that Virtual Connect applies to a new purpose. Network and storage administrators can establish all LAN and SAN connections once during deployment and need not make connection changes later if servers are changed. When servers are deployed, added, or changed, Virtual Connect keeps the I/O profile for that LAN and SAN connection constant.
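The bay-specific profile idea above can be illustrated with a short sketch: locally administered MAC addresses (first-octet bit 0x02 set, a standard Ethernet convention) are handed out from a pool to bay profiles, so the LAN-visible address stays with the bay, not the blade. The pool prefix and names here are invented for illustration, not HP's actual address blocks.

```python
# Sketch of the Virtual Connect idea: the MAC address belongs to the bay's
# I/O profile, not to the physical blade, so replacing a server in a bay
# leaves the LAN-visible address unchanged. Illustrative only.

def mac_pool(prefix="02:16:0a:00:00"):
    """Yield locally administered MACs; 0x02 in the first octet marks the
    address as locally administered rather than factory-assigned."""
    n = 0
    while n < 256:
        yield f"{prefix}:{n:02x}"
        n += 1

pool = mac_pool()
# One profile per device bay of a c7000; the address stays with the profile.
profiles = {f"bay{bay}": {"mac": next(pool)} for bay in range(1, 17)}
print(profiles["bay1"]["mac"])   # 02:16:0a:00:00:00
```

Swapping the physical blade in bay 1 would leave `profiles["bay1"]` untouched, which is exactly why the LAN and SAN never notice the change.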
Virtual Connect FlexFabric

FlexFabric portfolio

Fibre Channel over Ethernet (FCoE) maps Fibre Channel natively over Ethernet while being independent of the Ethernet forwarding scheme. The FCoE protocol specification replaces the FC0 and FC1 layers of the Fibre Channel stack with Ethernet. By retaining the native Fibre Channel constructs, FCoE allows a seamless integration with existing Fibre Channel networks and management software.
Computers connect to FCoE with Converged Network Adapters (CNAs), which contain both Fibre Channel HBA and Ethernet NIC functionality on the same adapter card. CNAs have one or more physical Ethernet ports. FCoE encapsulation can be done in software with a conventional Ethernet network interface card; however, FCoE CNAs offload (from the CPU) the low-level frame processing and SCSI protocol functions traditionally performed by Fibre Channel host bus adapters.
Classical Ethernet has no flow control, so FCoE requires enhancements to the Ethernet standard to support a flow control mechanism (this prevents congestion and ensuing frame loss).
VC FlexFabric:
• Connects data, Fibre Channel, and iSCSI
• Works with existing LAN and SAN
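The encapsulation described above — a complete Fibre Channel frame carried unchanged as the payload of an Ethernet frame — can be sketched as follows. The layout is deliberately simplified (the real FCoE header's version field, SOF/EOF delimiters, padding, and FCS are omitted); only the registered FCoE EtherType 0x8906 is taken as given.

```python
# Minimal illustration of FCoE layering: the Fibre Channel frame becomes the
# payload of an Ethernet frame whose EtherType identifies it as FCoE.
# This is a teaching sketch, not a wire-accurate encoder.

FCOE_ETHERTYPE = 0x8906  # EtherType registered for FCoE

def encapsulate(fc_frame: bytes, dst_mac: bytes, src_mac: bytes) -> bytes:
    """Wrap an FC frame in a simplified Ethernet envelope."""
    header = dst_mac + src_mac + FCOE_ETHERTYPE.to_bytes(2, "big")
    return header + fc_frame

eth = encapsulate(b"<FC frame>", b"\x02\x00\x00\x00\x00\x02", b"\x02\x00\x00\x00\x00\x01")
# Bytes 12-13 of an Ethernet frame are the EtherType:
print(eth[12:14].hex())   # 8906
```

Because the FC frame rides inside the Ethernet payload untouched, existing Fibre Channel management software sees the same frame it would on a native FC link — the point made in the text above.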
Virtual Connect FlexFabric features

Virtual Connect FlexFabric Module

Virtual Connect FlexFabric features include:
• Embedded dual-port 10Gb Converged Network Adapter (CNA) with iSCSI/FCoE on ProLiant G7 server blades
• Eight connections on the system board
• Emulex-based CNA
• Flex-10 LAN/Accelerated iSCSI/FCoE

Virtual Connect Flex-10 technology

Flex-10 technology comprises two components:
• HP VC Flex-10 10Gb Ethernet module
• 10Gb Flex-10 server NICs
HP VC Flex-10 10Gb Ethernet module:
• Manages the server FlexNIC connections to the data center network. Each FlexNIC is part of a server profile
• Includes:
  • Single-wide form factor
  • Full-duplex 240Gb/s bridging fabric, with nonblocking architecture
  • Sixteen internal-facing 10GBASE-KR Ethernet ports connect to the system board NIC in each device bay, providing support for up to eight Flex-10 Ethernet ports per server or up to 32 Ethernet ports per server

Note
Because of a hardware limitation, the Broadcom 10Gb devices do not support 9Kb jumbo frames—4Kb is the largest jumbo frame size.
How Flex-10 works

VC Flex-10 device discovery

The operating system discovers up to four PCI functions per Flex-10 port:
• Individual send/receive queue
• Individual driver image
If a Flex-10 network card is used with a non-Flex-10 interconnect, the operating system only sees two 1Gb interfaces.
Flex-10 configuration before boot

For each FlexNIC, the VC profile configures the:
• Bandwidth (from 0.1 to 10Gb/s)
• Link state
• MAC address
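A server profile entry for one FlexNIC can be pictured as a small record holding exactly those three settings. A minimal sketch — the class and field names are invented for illustration; only the 0.1–10 Gb/s bounds come from the list above:

```python
from dataclasses import dataclass

@dataclass
class FlexNicConfig:
    """One FlexNIC entry in a (hypothetical) VC server profile."""
    bandwidth_gbps: float   # adjustable from 0.1 to 10 Gb/s
    link_up: bool           # link state
    mac: str                # profile-assigned MAC address

    def __post_init__(self):
        # The profile cannot carry a FlexNIC without a valid bandwidth.
        if not 0.1 <= self.bandwidth_gbps <= 10.0:
            raise ValueError("Flex-10 bandwidth must be between 0.1 and 10 Gb/s")

nic = FlexNicConfig(bandwidth_gbps=2.5, link_up=True, mac="02:16:0a:00:00:10")
print(nic.bandwidth_gbps)   # 2.5
```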
Flex-10 NICs mapping

Each Flex-10 network card can be mapped to any Ethernet network defined on the Virtual Connect. They function as completely independent devices.
NIC configuration

Eight FlexNICs share two 10Gb pipes, and you can individually assign bandwidth per FlexNIC from 0.1Gb to 10Gb. The minimum bandwidth is 100Mb/s. You cannot have a FlexNIC without a bandwidth assigned.
Bandwidth and network allocation screen

• Bandwidth is programmed through the VC Manager.
• Every connection gets a minimum of 100Mb.
• Custom and Preferred bandwidth selections are allocated first.
• Connections set to Auto evenly split all remaining bandwidth.
• If Custom selections add up to more bandwidth than the interface supports, those connections get a proportional piece of the pipe.
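The allocation rules above can be sketched as a small function: Custom/Preferred requests are honored first, scaled proportionally when they oversubscribe the pipe, and Auto connections evenly split whatever remains. This is a simplified model of the behavior described, not HP's actual VC Manager algorithm; the function name and interface are invented for illustration.

```python
# Simplified model of Flex-10 bandwidth allocation on one 10Gb pipe.
PIPE_GBPS = 10.0
MIN_GBPS = 0.1   # every connection gets at least 100Mb

def allocate(requests):
    """requests maps connection name -> requested Gb/s, or "auto"."""
    fixed = {n: r for n, r in requests.items() if r != "auto"}
    auto = [n for n in requests if requests[n] == "auto"]

    total = sum(fixed.values())
    if total > PIPE_GBPS:
        # Oversubscribed: fixed selections get a proportional piece of the pipe.
        alloc = {n: r * PIPE_GBPS / total for n, r in fixed.items()}
    else:
        alloc = dict(fixed)

    if auto:
        # Auto connections evenly split whatever bandwidth remains.
        share = max((PIPE_GBPS - sum(alloc.values())) / len(auto), MIN_GBPS)
        for n in auto:
            alloc[n] = share
    return alloc

print(allocate({"nic1": 4.0, "nic2": "auto", "nic3": "auto"}))
# {'nic1': 4.0, 'nic2': 3.0, 'nic3': 3.0}
```

With two Custom selections of 8Gb each, the same function scales both down to 5Gb — the proportional behavior the last bullet describes.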
Virtual Connect modules

HP 1/10Gb Virtual Connect Ethernet Module

The Virtual Connect Ethernet Module is a blade interconnect that:
• Simplifies server connections by cleanly separating the server enclosure from the LAN
• Streamlines networks by reducing cables without adding switches to manage
• Allows technicians to change servers in just minutes, not days
HP Virtual Connect offers converged LAN and storage connectivity. Flex-10 networking simplifies data connections and consumes the least amount of power. HP continues to expand this technology and its capabilities across the ProLiant, Integrity, and Storage product lines. Virtual Connect can simplify and converge your server edge connections, integrate into any standards-based networking infrastructure, and reduce complexity while cutting costs.
The HP Virtual Connect modules include:
• HP 1/10Gb VC Ethernet
• HP 1/10Gb-F VC Ethernet
• HP Virtual Connect FlexFabric
• HP Virtual Connect Flex-10 10Gb Ethernet
• HP Virtual Connect 8Gb 20-port Fibre Channel Module
• HP Virtual Connect 8Gb 24-port Fibre Channel Module
Virtual Connect environment with BladeSystem enclosure

The Virtual Connect modules plug directly into the interconnect bays of the enclosure. The modules can be placed side by side for redundancy. Initial implementations include the VC-Enet module and the VC-FC module.

Important
To install Fibre Channel in a Virtual Connect environment, the enclosure must have at least one Virtual Connect Ethernet module, because the VC Manager software runs on a processor resident on the Ethernet module.

Virtual Connect environment — Three key components

Three key components of VC environment

HP Virtual Connect technology provides unique capabilities and tangible interconnect value for BladeSystem c-Class customers. It simplifies network infrastructures by reducing physical cabling, saves time and costs associated with systems deployment and operations, provides server workload mobility, and helps IT organizations work smarter. In addition to enabling Flex-10 technology, Virtual Connect also provides the infrastructure foundation for other Enterprise-class management offerings from HP, such as HP Virtual Connect Enterprise Manager and HP Insight Dynamics-VSE.
Virtual Connect Ethernet modules:
• Connect selected server Ethernet ports to specific data center networks
• Support aggregation/tagging of uplinks to the data center
• Are "LAN-safe" for connection to any data center switch environment (such as Cisco, Nortel, or HP)
Virtual Connect Fibre Channel modules:
• Selectively aggregate multiple server Fibre Channel HBA ports (QLogic/Emulex) on a Fibre Channel uplink using N_Port ID virtualization (NPIV)
• Connect the enclosure to Brocade, Cisco, McDATA, or QLogic data center Fibre Channel switches
• Display as a set of HBA ports to external Fibre Channel switches
Virtual Connect Manager (embedded):
• Manages server connections to the data center without impacting the LAN or SAN
• Moves/upgrades/changes servers without impacting the LAN or SAN
Virtual Connect FlexFabric does not require VC-FC.
HP BladeSystem 10Gb KR Ethernet

KR Ethernet connection

• KR is the current IEEE 10Gb standard
• One-lane technology
  • One transmit pair
  • One receive pair
• Available as LAN on motherboard (LOM) and a dual-port mezzanine
• Auto-sensing 1Gb/10Gb
Compatibility notes:
• Not compatible with the XAUI-based c-Class 10Gb switch
• Embedded Dual Port NC532i 10Gb Ethernet Multifunction Server Adapter
  • Broadcom 57711 chipset
• When running at 1Gb speed, the adapter is compatible with existing 1GbE interconnects:
  • HP 1Gb Ethernet switches and pass-thru
  • Cisco 1Gb Ethernet switches
  • 1Gb Virtual Connect modules
Management and deployment tools

One of the advantages of HP BladeSystem over other vendors' solutions is great manageability and quick deployment. HP offers multiple management and deployment tools designed especially for HP BladeSystem.

ProLiant Onboard Administrator

With Gen8 server blades, HP announced a new Integrated Lights-Out (iLO) 4 management card. The features of this card include:
• HP Advanced Error Detection Technology
• Early Video Progress Indicators
• Early Fault Detection Messaging
• Improved Error Messaging
• Enhanced DIMM SPD Failure Logging
• HP SmartMemory (Gen8 DIMMs) includes a special identifier
• System ROM can detect third-party DIMMs
• Active Health System Log Support
• Error Fault Logging without Health Driver


HPP BladeSystem PPortfolio Introduction

Onbo
oard Adm
ministrato
or modules

c7000 Onboard
O Adminnistrator with KV
VM

ly
on
y
er
liv
c3000 enclosure
e with OA tray markeed

Unique to the BladeSystem, the Onboard Administrator is the enclosure management processor, subsystem, and firmware base used to support the BladeSystem enclosures and all the managed devices contained within the enclosure. It provides a secure single point of contact for users performing basic management tasks on server blades or switches within the enclosure. It is fully integrated into all HP system management applications.
The Onboard Administrator module offers web-based and command line interface (CLI) manageability. It has two major functions:
 Driving all management features through the two Inter-Integrated Circuit (I2C) and the Intelligent Chassis Management Bus (ICMB) interfaces
 Aggregating up to 16 iLO ports in a c7000 enclosure and up to eight iLO ports in a c3000 enclosure — simplifying cable management and providing a graphical interface to launch individual server iLO management interfaces


With the Onboard Administrator with KVM for the c7000, you can directly access the Onboard Administrator and server video from the VGA connections on the rear Onboard Administrator module.
The rear of each module has an LED (blue UID) that can be enabled (locally and
remotely) and used to identify the enclosure from the back of the rack.
The Onboard Administrator features enclosure-resident management capability and
is required for electronic keying configuration. It performs initial configuration steps
for the enclosure, enables run-time management and configuration of the enclosure
components, and informs users of problems within the enclosure through email,

ly
SNMP, or the Insight Display.
The Onboard Administrator monitors and manages elements of the enclosure such as

on
shared power, shared cooling, I/O fabric, and iLO.
The Onboard Administrator can be managed locally, remotely, and through HP SIM
tools. The Onboard Administrator also provides local and remote management
capability through the Insight Display and browser access.




Insight Display

Insight Display view

The BladeSystem Insight Display panel is designed for configuring and troubleshooting while standing next to the enclosure in a rack. It provides a quick visual view of enclosure settings and at-a-glance health status. Green indicates that everything in the enclosure is properly configured and running within specification.

Main Menu
de

From the Insight Display Main Menu you can navigate to the main submenus. For example, if you want to look at the enclosure settings, press the Down button to move to the next menu item. The Main Menu items include:
 Health Summary
 Enclosure Settings
 Enclosure Info
 Blade or Port Info
 Turn Enclosure UID on
 View User Note
 Chat Mode
 USB Menu


Enclosure Settings Menu


From the Enclosure Settings Menu, you can configure the enclosure, update settings,
and make changes directly from the rack. Enclosure settings available from the
Insight Display panel include:
 Power settings
 Onboard Administrator IP address
 Enclosure Name

 Rack Name
 Insight Display Lockout PIN#




iLO Management Engine


With Gen8 servers, HP announced a new iLO management processor. Renamed
from “Integrated Lights-Out” to “Insight Lifecycle Onboard,” iLO simplifies server
setup, engages health monitoring, manages power and thermal control, and
promotes remote administration for ProLiant servers. Features include:
 HP Advanced Error Detection Technology
 Early Video Progress Indicators
 Early Fault Detection Messaging
 Improved Error Messaging
 Enhanced DIMM SPD Failure Logging
 HP SmartMemory (Gen8 DIMMs) will include a special identifier
 System ROM can detect non-HP DIMMs
 Active Health System Log Support
 Error Fault Logging without Health Driver

The hardware monitoring and alerting capability is built into the system. It starts
working as soon as a power cord and an Ethernet cable are connected to the server.
The iLO management processor is embedded on the system board and ships
standard in every ProLiant Gen8 server, including the ProLiant BL, DL, ML, and SL
Series. It is the core foundation of the iLO Management Engine, which is a set of

embedded management features that support the complete lifecycle of the individual
server, from initial deployment through ongoing management to service alerting and
remote support. The iLO Management Engine enables you to access, deploy, and
manage a server anytime from anywhere with a Smartphone device.

The iLO Management Engine supports a complete separation of system management


and data processing, not just on the LAN connections, but also within the system
itself. The HP Active Health System captures critical server diagnostics completely within the iLO Management Engine.




HP Insight Control

Delivered on DVD media, Insight Control uses an integrated installer to deploy and configure HP Systems Insight Manager (HP SIM) and essential infrastructure management software rapidly and consistently, reducing manual installation procedures and speeding time to production. These solutions deliver complete lifecycle management for HP ProLiant and BladeSystem infrastructure. HP Insight Control brings a single, consistent management environment for rapid deployment of the operating system and hardware configuration.
HP Insight Control also includes full capabilities to migrate complete servers (both physical and virtual) to new servers (both virtual and physical), supporting conversion from physical to virtual and vice versa and conversion between different virtualization environments.
In addition, Insight Control provides proactive health and performance monitoring, power management, performance analysis, lights-out remote management, and virtual machine management for HP ProLiant ML/DL 300-700 series servers and BladeSystem infrastructure.
Insight Control also extends the functionality of Microsoft System Center and VMware vCenter Server by providing seamless integration of the unique ProLiant and BladeSystem manageability features into the Microsoft System Center and VMware vCenter Server management consoles.
HP Insight Control is based on HP SIM as the primary management console.


For customers who have chosen Microsoft System Center as their primary console, we
offer HP Insight Control for Microsoft System Center, which is based on HP Insight
Control, but adds several extensions to make the ProLiant management information
available through the System Center consoles. It also adds monitoring, alerting,
proactive virtual machine management, and ProLiant operating system deployment
and update capabilities to the System Center consoles.
For customers who have chosen VMware vCenter Server as their primary console, we
offer HP Insight Control for VMware vCenter Server, which is based on HP Insight
Control, but adds several extensions to make the ProLiant management information

ly
available through the VMware vCenter Server console, enabling comprehensive
monitoring, remote control, and power optimization directly from the vCenter
console.

on
Features of Insight Control 7.x include:
 Support for the latest ProLiant Gen8 servers
 Data Center Power Control (DCPC) support for Superdome 2
 Power Management for BL Series 800 Integrity server blades
 System Insight Control enhancements
 PolyServe SQL database
 Improved field tools
 Federated central management server (CMS)
 ProLiant Agentless Management Pack
 ProLiant Linux Management Pack
 ProLiant VMware Management Pack
 Server Updates Catalog 2



HP Systems Insight Manager

HP Systems Insight Manager (HP SIM) is the foundation for the HP unified server-storage management strategy. HP SIM is a hardware-level management product that supports multiple operating systems on HP ProLiant, Integrity, and HP 9000 servers; HP Storage MSA, EVA, and XP arrays; and third-party arrays. Through a single management view of Microsoft Windows, HP-UX 11iv1, HP-UX 11iv2, HP-UX 11iv3, Red Hat, and SuSE Linux, HP SIM provides the basic management features of:
 System discovery and identification
 Single-event view
 Inventory data collection
 Reporting

The core HP SIM software uses Web-Based Enterprise Management (WBEM) to deliver the essential capabilities required to manage all HP server platforms. HP SIM can provide systems management with plug-ins for HP client, storage, power, and printer products.

Using HP Integrity Essentials you can choose plug-in applications that deliver
complete lifecycle management for your hardware assets:
 Workload management
 Capacity management
 Virtual machine management
 Partition management
HP Systems Insight Manager can be installed on three different operating systems:
Windows, Linux, and HP-UX. Basic functionality is the same for all versions, but the

Windows version has the greatest scalability and expansion possibilities. HP SIM
can also be easily integrated with other Insight Software components like HP Insight

Server Migration software, HP Insight Control Server Deployment software and
others.

HP SIM updates

y
HP SIM 7.0 and ProLiant Gen8 server blades introduce new features to the management software.

Updates to the HP SIM management software include:
 Shifting Host Health and Alerting to iLO – Shifting the health and alerting tasks to iLO provides more processing resources for applications.
 Agentless Monitoring and Alerting – Hardware health and inventory are available even when the host is off.


 iLO Host Health Polling – HP SIM merges the iLO host health polling, inventory,
and alerts to give proper status rollups.

 Licensing Reports – Generate two reports: by system or by product, including


additional information such as IP addresses, and total number of licenses and
seats.

 Service Pack for ProLiant – SPP is a combination of the ProLiant Support Pack
and the firmware maintenance DVD and is available as an ISO for both
Windows and Linux.



Learning check
1. Which operating systems are supported on ProLiant server blades? (Select two.)
a. Microsoft Windows
b. OpenVMS
c. Linux
d. HP-UX
2. A customer with a requirement for InfiniBand and more than 2Gb Fibre Channel

would be a good candidate for which platform?

a. ProLiant rack-mount servers
b. HP VDI systems
c. HP BladeSystem

d. HP Superdome 2

3. The HP Storage tape blades have a full-height form factor.
 True
 False
4. Name the tools that you can use to manage a BladeSystem.

.................................................................................................................
.................................................................................................................
.................................................................................................................

.................................................................................................................
.................................................................................................................
5. List three key Gen8 technologies.

.................................................................................................................
.................................................................................................................

.................................................................................................................



HP BladeSystem Enclosures
Module 2

Objectives

After completing this module, you should be able to:

 Identify and describe the HP BladeSystem enclosures
 Explain how HP Onboard Administrator modules are used
 Describe the power architecture used in HP BladeSystem systems, including:

 Power modes

 Power supplies
 Power modules
 Power distribution units (PDUs)
 Describe Thermal Logic technology

 Describe the cooling technologies designed for the BladeSystem systems


 Describe the use of a DVD-ROM drive in the enclosures

BladeSystem enclosure family

HP offers two BladeSystem enclosures:
 BladeSystem c3000 enclosure — A lower-cost, smaller version targeted for remote sites and small and medium businesses (SMBs)
 BladeSystem c7000 enclosure — An enterprise version designed for data center applications

The BladeSystem c3000 enclosure has a smaller rack footprint, spanning 6U compared to the 10U of the c7000 enclosure. Seven c3000 enclosures per 42U rack is the maximum number of c3000 enclosures in a fully populated rack.
The c3000 enclosure is designed for a small to mid-size company, branch office, or remote sites that have little or no rack space. The c3000 enclosure is the right choice if:
 Two to eight server or storage blades per enclosure are needed
 Fewer than 100 servers exist in the company or organization
 Server blade purchases are spread out over time
 Simple power connections, such as connecting to a UPS or wall outlets, are required
Choose the HP BladeSystem c7000 for larger and dynamic data center environments. The c7000 enclosure is the right choice if:
 Eight server or storage blades are needed per enclosure
 The server environment is growing rapidly, with frequent server purchases
 Power requirements include rack-level PDUs or data center UPSs
 The highest levels of availability and redundancy are required
 The server blades need multiple rack-based shared storage arrays

BladeSystem enclosure features


The cost advantage of the BladeSystem is driven by reductions in interconnect
components, which is especially important when considering deploying servers in
LAN and storage area network (SAN) environments.
BladeSystem enclosures feature:
 Cable-less server installation
 BladeSystem Insight Display and wizards for first-time setup

 Onboard Administrator for remote management
 Multiple enclosure setup functions

 Choice of power input
 -48VDC, 110VAC, or 220VAC
 Ability to handle higher ambient temperatures

 Enclosure-based CD/DVD drive and 3-inch LCD Insight Display

 Interconnect fabrics of up to 8Gb/s
 Choice of redundant and non-redundant fabrics
 RoHS compliance for devices changes in a single unit

BladeSystem enclosure comparison

ly
on
Both Blad
deSystem enclosures cann hold comm on critical coomponents ssuch as serve ers,
interconn
nects, mezza
anine cards, storage blad des, and fans. Key differrences in the

y
BladeSysstem enclosures include rack
r size, red
dundancy op ptions, and sscalability.

er
Following
g is a feature
e comparison of BladeSyystem c7000
0 and c3000
0 enclosures::
 Heig
ght
liv
 c3000 — 6U
6 height
 c7000 — 10U height
de

 Form
m factor, whe
en fully popu
ulated
 c3000 — Eight half-heig
ght blades, ffour full-height blades, orr six half-heig
ght
and one full-height
TT

 c7000 — Sixteen half-height bladess or eight full-height blad


des
 Orie
entation
 c3000 — Horizontal
H ade orientat ion
bla
rT

 c7000 — Vertical
V blade
e orientation
 Power supplies
Fo

 c3000 — Six power sup


pplies provid
ding 1200W
W each
 c7000 — Six power sup
pplies provid
ding 2400W
W each

2 -4 Rev. 12
2.31
HP BladeSystem Enclosures

 System fans
 c3000 — Six HP Active Cool 100 fans
 c7000 — Ten Active Cool 200 fans
 Interconnect bays
 c3000 — Four interconnect bays
 c7000 — Eight interconnect bays
 Onboard Administrator

ly
 c3000 — Dual Onboard Administrator option

on
 c7000 — Single or dual Onboard Administrator capability with KVM
 Midplane
 c3000 — Tested up to 6 Gbit

y
 c7000 — Tested up to 10 Gbit

er
 Connection
 c3000 — Onboard Administrator serial/USB connections in front
liv
 c7000 — Onboard Administrator serial/USB connections in rear
 KVM support
de

 c3000 — Enclosure KVM


 c7000 — Onboard Administrator with KVM
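The capacity figures in this comparison can be captured in a small lookup table. The sketch below is illustrative only (the dictionary and function names are not part of any HP tool); the numbers are taken from the comparison above:

```python
# Illustrative lookup of enclosure capacities, using the figures from
# the c3000/c7000 comparison above. Names here are hypothetical.
ENCLOSURES = {
    "c3000": {"height_u": 6, "half_height_bays": 8, "full_height_bays": 4,
              "fans": 6, "psu_watts": 1200, "interconnect_bays": 4},
    "c7000": {"height_u": 10, "half_height_bays": 16, "full_height_bays": 8,
              "fans": 10, "psu_watts": 2400, "interconnect_bays": 8},
}

def enclosures_per_42u_rack(model: str) -> int:
    """How many enclosures of this model fit in a standard 42U rack."""
    return 42 // ENCLOSURES[model]["height_u"]

print(enclosures_per_42u_rack("c3000"))  # 7, matching the seven-per-rack maximum
print(enclosures_per_42u_rack("c7000"))  # 4
```

Note how the 42U rack arithmetic reproduces the seven-enclosure c3000 maximum stated earlier in this module.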

BladeSystem c7000 enclosure
Following are the features of the c7000 enclosure:
 2400W power supplies
 Increased power output — 2400W; supports more blades with fewer power supplies
 High efficiency to save energy; provides 90% efficiency from as low as a 10% load
 Low standby power that facilitates reduced power consumption when servers are idle
 Onboard Administrator 2.30 or later
 200 – 240V high line operation only
 Does not interoperate with existing 2250W supplies
A 16-license Insight Control suite SKU ships standard with the c7000 enclosure, preconfigured with:
 Ten fans
 Six 2400W high efficiency power supplies

2 -6 Rev. 12
2.31
HP Blad
deSystem Enclossures

BladeSystem c3000 enclosure

c3000 Onboard Administrator tray

The c3000 enclosure includes four full-height device bays or eight half-height device bays, accommodating the full array of BladeSystem server, storage, tape, and PCI Expansion blades.
An integrated Insight Display is linked to the Onboard Administrator for local enclosure management.
The c3000 enclosure ships with two enclosure dividers to support half-height devices. To install a full-height device, remove the divider and the corresponding blanks.
evice, removee the dividerr and the corrresponding
blanks.
TT

Note
If you are using full-height se
erver blades in the enclosure, any empty full-height device
bays should beb filled with blade blanks. To o make a full-heeight blank, join
n two half-heigh
ht
blanks togethher.
rT
Fo

Rev. 12.3
31 2 -7
Implemen
nting HP BladeS
System Solutions

BladeSystem c3000 enclosure — Rear view

The rear of the c3000 enclosure offers four interconnect bays. The available bays can support a variety of pass-thru modules and switch technologies, including Ethernet, Fibre Channel, and InfiniBand. The enclosure supports up to three independent I/O fabrics with the ability to combine interconnect bays 3 and 4 for a fully redundant fabric.
The HP InfiniBand switch module is double-wide; two neighboring bays are combined into one bay to support these 20Gb switches.
The enclosure link module links enclosures in a rack. Enclosure links are designed to support only BladeSystem enclosures in the same rack.
The available enclosure KVM module enables local administrators to manage individual servers without accessing the Onboard Administrator or iLO management processors.

Note
The KVM module is an optional component that must be ordered separately.

Power is delivered by single-phase power supplies installed in the BladeSystem c3000 enclosure. Base c3000 enclosures ship with two power supplies. However, up to six power supplies may be installed depending on the AC redundancy level required and the number of devices installed in the enclosure. AC power supplies are auto-switching between 100VAC and 240VAC, providing customers with diverse deployment options.


BladeSystem enclosure management hardware and software

HP Onboard Administrator

Onboard Administrator device view
de

The Onbo oard Adminiistrator provides a singlee point from w


which to view
w the entire
ment and perform basic m
BladeSystem environm managemen nt tasks on BladeSystem
devices.
The Onbo oard Adminiistrator can also
a be used he HP Virtual SAS Manager
d to access th
TT

(VSM) appplication on
n switches insstalled in thee BladeSystem
m enclosure. After selectiing
a switch, you can use
e the Onboa ard Administrrator to:
 View switch status information
 View other switch information
 Click virtual buttons to:
 Power off the switch
 Reset the switch
 Toggle the Unit Identification (UID) light on or off
 Open the Management Console (VSM)
 Open the Port Mapping window to view detailed port mapping information


Onboard Administrator module components

c7000 Onboard Administrator module

The Onboard Administrator module provides a single point of control for intelligent management of the entire enclosure. It has been designed for both local and remote administration of a BladeSystem enclosure.
Each Onboard Administrator module has a network, USB, and serial port, and some models also have a VGA connector.
Each Onboard Administrator mod
dule has a nnetwork, USB
B, and serial port and som
me
models also
a have a VGA
V connecttor.

 Network port — Ethernet 1000BaseT RJ45 connector, which provides Ethernet access to the Onboard Administrator and the iLO processor on each server blade. It also supports interconnect modules with management processors configured to use the enclosure management network. It auto-negotiates 1000/100/10 or can be configured to force 100Mb or 10Mb full duplex.
 USB port — USB 2.0 Type A connector used for connecting supported USB devices such as DVD drives, USB key drives, or a keyboard or mouse for enclosure KVM use. To connect multiple devices, a USB hub (not included) is required.
 Serial port — Serial RS232 DB-9 connector with PC standard pin-out. It connects a computer with a null-modem serial cable to the Onboard Administrator command line interface (CLI).
 VGA connector — VGA DB-15 connector with PC standard pin-out. To access the KVM menu or Onboard Administrator CLI, connect a VGA monitor or rack KVM monitor for enclosure KVM. This port is available only in the newest Onboard Administrator release.
ministrator release.
Fo


The uppermost enclosure uplink port functions as a service port that provides access
to all the BladeSystem enclosures in a rack. If no enclosures are linked together, the
service port is the top enclosure uplink port on the enclosure link module. Linking the
enclosures enables the rack technician to access all the enclosures through the open
uplink port.
If you add more BladeSystem enclosures to the rack, you can use the open enclosure
up port on the top enclosure or the down port on the bottom enclosure to link to the
new enclosure.
The Onboard Administrator module for the c7000 enclosure is available with or

without KVM support. Firmware for both versions is the same, but the part numbers
are different.

Redundant Onboard Administrator modules
When two Onboard Administrator modules are present in an enclosure, they work in
an active-standby mode, ensuring fully redundant integrated management. Either

y
module can be the active module. The other becomes the standby module.

er
If you install two Onboard Administrator modules of the same firmware revision, the
one on the left of the enclosure will be the active one. If two Onboard Administrator
modules installed into the same enclosure have different firmware versions, the
automatic configuration sync is disabled. Both Onboard Administrator modules will
put a clear entry into syslog stating exactly which version is on which Onboard
Administrator and how to upgrade them. However, the different firmware versions do

not affect which module is active or standby. The same rules apply.
Configuration data is constantly replicated from the active Onboard Administrator
module to the standby Onboard Administrator module, regardless of the bay in
which the active module currently resides.

When the active Onboard Administrator module fails, the standby Onboard
Administrator module automatically becomes active. This happens regardless of the
position of the active Onboard Administrator module. This automatic failover occurs

only when the currently active module comes completely offline and the standby
module can no longer communicate with it. In all other cases, the administrator must
initiate the failover by logging into the standby module and promoting it to active.

After the failed Onboard Administrator module is replaced, it automatically becomes


the standby module and receives the configuration information from the active
module. It remains standby until the administrator manually promotes it to the active
module or the active module fails.

Note
You can hot plug (add without powering down the system) Onboard Administrator
modules but they are not hot-swappable (replaceable without powering down the system).
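The failover rules described above can be summarized in a toy model. This is purely an illustration of the documented behavior — automatic failover only when the active module goes completely offline, manual promotion otherwise — and not HP firmware code:

```python
# Toy model of the Onboard Administrator active/standby failover rule
# described above. Function and message strings are illustrative.
def next_active(active_online: bool, admin_promotes_standby: bool) -> str:
    if not active_online:
        # Automatic failover: standby can no longer reach the active module.
        return "standby becomes active automatically"
    if admin_promotes_standby:
        # All other cases require a manual promotion by the administrator.
        return "standby promoted to active by administrator"
    return "active module keeps the role"

print(next_active(active_online=False, admin_promotes_standby=False))
# standby becomes active automatically
```

The same rule applies regardless of which bay the active module occupies, matching the bay-independence noted above.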




Dual Onboard Administrator tray

c3000 tray for Onboard Administrator modules

An enclosure ships with one Onboard Administrator module and supports up to two Onboard Administrator modules.
The standard Onboard Administrator module is preinstalled in a front-loading tray, which houses the module and the BladeSystem Insight Display. The Onboard Administrator tray:
 Supports dual Onboard Administrator modules for an enclosure
 Fits into an Onboard Administrator module slot with a slot for a second Onboard Administrator
 Supports either single mode or dual/redundant mode Onboard Administrator modules
 Requires a blank module if there is no redundant Onboard Administrator module

Onboard Administrator link module

The Onboard Administrator link module is separate from the Onboard Administrator module. It is contained within the Onboard Administrator module sleeve. The rear-loading Onboard Administrator link module contains RJ-45 ports for enclosure up/down links and Onboard Administrator network access.

up/down n links and Onboard


O Adm ministrator neetwork accesss.
Compone
ents of the Onboard
O Adm as shown in tthe graphic,
ministrator linnk module, a
are:
TT

1. osure down-link port — Connects to the enclosurre uplink porrt on the


Enclo
enclo
osure below with a CAT5
5 patch cabl e
2. Enclo
osure up-link
k port
rT

 Connects to the enclosurre downlink port on the eenclosure ab


bove with a
CAT5 patch cable
 On a stand-alone enclossure or top eenclosure in a series of lin
nked enclosu
ures,
Fo

osure uplink port functionns as a service port and temporarily


the top enclo
connects to a PC with a CAT5 patchh cable
3. OA1 Ethernet connection — Connects to the manageement networrk using a
CAT5 patch cable
4. 2 Ethernet connection — Reserved forr future enhancements
OA2


HP Insight Display

ly
on
Inssight Display m
main screen

Insight Display, powered by the Onboard Administrator, provides local management through an LCD display conveniently sited on the front of the system.
Insight Display is a standard component of c3000 and c7000 enclosures. It provides an interface that can be used for initial enclosure configuration, and it is a valuable tool during the troubleshooting process. If a problem occurs, the display changes color and starts to blink to get the attention of an administrator. The Insight Display can even be used to upgrade the Onboard Administrator firmware.
ng the trouble eshooting prrocess. If a p
problem occuurs, the display changes
liv
color andd starts to bliink to get the
e attention off an adminisstrator. The In
nsight Display
can even be used to upgrade the Onboard A Administrator firmware.
de

us available to an admin
The menu nistrator stand
ding in front of the blade
e enclosure a
are:
 y — Displayss the current condition of the enclosurre.
Heallth Summary
 Enclo gs — Enabless configuration of the enclosure, inclu
osure Setting uding Power
Mod de, Power Lim
mit, Dynamic Power, IP ad ddresses for Onboard Ad dministrator
TT

modules, enclosu ure name, annd rack nam e. It is also uused for conn
necting a DVVD
e to the blades and settin
drive ng the lockouut PIN.
 osure Info — Displays the
Enclo e current encclosure config
guration.
rT

 Blad o — Presentss basic inform


de or Port Info mation abouut the server blade
configuration andd port mapp
ping.
U on — Illuminates the eenclosure ideentification LLED. When th
Turn Enclosure UID his
Fo

optio
on is selectedd, the display
y backgrounnd color chan
nges to blue,, and a blue
LED is visible at the
t rear of th he enclosure..

2 -14 Rev. 12
2.31

 View User Note — Displays six lines of text, each containing a maximum of
16 characters. This screen can be used to display contact information or other
important information for users working on-site with the enclosure.
 Chat Mode — Enables communication between the person in front of the
enclosure and the administrator managing the enclosure through the Onboard
Administrator.
 USB Menu — Can be used to update Onboard Administrator firmware or to
save or restore the Onboard Administrator configuration when using a USB stick
plugged into the USB port on an Onboard Administrator module.




iLO Management Engine

The HP iLO Management Engine is a set of embedded management features that support the complete lifecycle of the individual server, from initial deployment through ongoing management to service alerting and remote support. The iLO Management Engine enables you to access, deploy, and manage a server anytime from anywhere with a smartphone device. It supports a complete separation of system management and data processing, not just on the LAN connections, but also within the system itself.
Through use of key iLO technologies such as remote console with DVR, virtual media, virtual power, and virtual serial port, you can remotely control iLO managed servers as efficiently as if you were actually at the remote site. The iLO firmware innovations enable you to scale management of iLO devices easily through directory services and to provide enhanced remote console performance through Terminal Services.
iLO Management Engine ships standard on all ProLiant Gen8 servers.
Components of the iLO Management Engine include:


m t of iLO devicces easily thrrough directo
ory services and
to providee enhanced remote conssole performa ance through h Terminal Seervices.
iLO Mana
agement Eng
gine ships sta
andard on a
all ProLiant G
Gen8 servers..
Compone
ents of the iLO
O Managem
ment Engine include:
TT

 iLO managemen
m t processor — Is the coree foundation of the iLO M
Managementt
Engine. It is emb
bedded on th he system boaard and shipps standard iin every
ProLiiant Generattion 8 (Gen8 8) server blad
de. HP iLO simplifies servver setup,
rT

engaages health monitoring,


m manages
m po
ower and thermal control,, and promo otes
remoote administrration. Furthe
ermore, iLO eenables you to access, deploy, and
manage a serverr anytime fro om anywheree with a smartphone device.
Fo

 Agen
ntless Manag gement — Iss the base haardware mon nitoring and alerting
capa
ability built in
nto the system
m (running o
on the iLO ch
hipset) and sttarts working
g as
soon
n as a powerr cord and an Ethernet ca able are connnected to the server.

2 -16 Rev. 12
2.31
HP BladeSystem Enclosures

 Active Health System — Provides diagnostics tools and scanners in one bundle.
 Intelligent Provisioning (previously known as SmartStart) — Offers out-of-the box
single-server deployment and configuration without the need for media.
 Embedded Remote Support — Builds on Insight Remote Support, which runs on
a stand-alone system or as a plug-in to HP Systems Insight Manager (HP SIM). It
provides phone-home capabilities that can either interface directly with the
backend (which is ideal for smaller customers, or for remote sites without a
permanent connection to the main site), or can use an HP Insight Remote
Support host server as an aggregator.

ly
For more information about the iLO Management Engine go to:

on
http://www.hp.com/go/ilo

You also can enable Power Regulator on supported server models from the iLO
Standard browser, CLP, and script interfaces. On supported server models, iLO
displays the present power consumption in Watts. The present power is a five-minute

y
average that is calculated and displayed through all iLO interfaces.
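The five-minute present-power figure described above can be pictured as a rolling mean over periodic power samples. The sketch below is illustrative only — the sampling interval, class name, and averaging method are our assumptions, not published iLO internals:

```python
from collections import deque

class RollingPowerAverage:
    """Rolling mean of power samples over a fixed time window.

    Illustrative sketch of a five-minute average; iLO's actual
    sampling interval and averaging method are assumptions here.
    """

    def __init__(self, window_seconds=300, sample_interval=10):
        # Keep only as many samples as fit in the window.
        self.samples = deque(maxlen=window_seconds // sample_interval)

    def add_sample(self, watts):
        self.samples.append(watts)

    def present_power(self):
        """Return the windowed average in Watts (0.0 if no samples yet)."""
        if not self.samples:
            return 0.0
        return sum(self.samples) / len(self.samples)

meter = RollingPowerAverage()
for w in [220, 240, 260, 250]:
    meter.add_sample(w)
print(meter.present_power())  # 242.5
```

Older samples fall out of the deque automatically once the window is full, so the reported value always reflects only the most recent five minutes.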

Implementing HP BladeSystem Solutions

Agentless Management

HP iLO 4 Agentless Management Console

For customers who want to enrich hardware management with operating system information and alerting, the iLO Management Engine features Agentless Management, an optional application loaded into the operating system that routes the operating system management information and alerts over the management network. Operating system agents are not required; all SNMP traps and alerting take place from the iLO architecture. Agentless Management provides:
 Increased security and stability, even when systems are not yet powered on
 Detailed information that speeds time to issue diagnosis and resolution
Active Health System

The Active Health System is an essential component of the HP iLO Management Engine. It monitors and records changes in the server hardware and system configuration. It assists in diagnosing problems and delivering rapid resolution when system failures occur. This technology monitors and securely logs more than 1,600 system parameters and 100% of configuration changes for accurate problem resolution. Because Active Health is agentless, it does not impact application performance.

Previously, whenever you had a system issue without an obvious root cause, you would rely on running diagnostic tools to try to isolate the cause. Although these tools often do a good job of providing the necessary information, they can only be used after the fact and often just look at subsystems individually. Circumstances occur where these tools cannot provide the information needed to isolate the root cause.

Active Health System technology:
 Monitors and securely logs more than 1,600 system parameters and 100% of configuration changes for more accurate problem resolution
 Enables you to deploy updates three times faster with 93% less downtime using HP Smart Update Manager (SUM)
 Runs as an agentless system and does not impact application performance
In minutes, customers can securely export an Active Health file to an HP Support professional to help resolve issues faster and more accurately. When Insight Remote Support is enabled, HP Support receives this data automatically. With this log, HP Support can solve even the most elusive, intermittent issues in a minimum amount of time and with little effort on the customer's end.

Customers with very tight security requirements can switch off Active Health System logging.

Benefits include:
 Faster root-cause analysis and problem resolution
 Always-on proactive diagnostics rather than reactive
 Continuous monitoring for increased stability and shorter downtimes
 Rich configuration history
 Health and service alerts
 Integrated diagnostic tools and scanners
 Easy export and upload to HP Service and Support

For more information on the HP Active Health System, go to:
http://h18013.www1.hp.com/products/servers/management/activehealthsystem/index.html


HP Intelligent Provisioning

HP Intelligent Provisioning enables single-server deployment and configuration without the need for additional media. Previous generation server provisioning and maintenance capability is now embedded in the iLO Management Engine across all ProLiant Gen8 servers.

Intelligent Provisioning is targeted for provisioning and deploying single servers and provides these operating system installation options:
 Recommended/Express Installation
 Assisted/Guided Installation
 Manual Installation

With Intelligent Provisioning, you can choose from numerous options:
 Boot into HP Intelligent Provisioning on the server by pressing F10 at server POST so that you can begin server configuration and maintenance
 Update drivers and systems software by connecting directly to HP.com, and perform firmware updates and install an operating system in the same step
 Roll back firmware from within the HP Intelligent Provisioning maintenance menu
 Install Windows, Linux, and VMware quickly
 Provision a server remotely using iLO
 Remote Support registration
 Full system integration and operating system configuration eliminates 45% of steps, allowing you to deploy a server three times faster.
Communication between iLO and server blades

In the BladeSystem architecture, a single enclosure houses multiple servers. A separate power subsystem provides power to all server blades in that enclosure. ProLiant server blades use the iLO management processor to send alerts and management information throughout the server blade infrastructure. However, there is a strict communication hierarchy among ProLiant server components.

The Onboard Administrator management module communicates with the iLO processor on each server blade. The Onboard Administrator module provides independent IP addresses for each server blade. The iLO firmware exclusively controls any communication from iLO to the Onboard Administrator module. There is no path from an iLO processor on one server blade to the iLO processor on another blade. The iLO processor has information only about the presence of other server blades in the infrastructure and whether there is enough amperage available from the power subsystem to boot the iLO host server blade.

Within BladeSystem enclosures, the server blade iLO network connections are accessed through a single physical port on the rear of the enclosure. This greatly simplifies and reduces cabling.

Note
The iLO on a server blade maintains an independent IP address.


HP iLO Advanced for HP BladeSystem

HP iLO features comparison

iLO functionality can be enhanced by using iLO Advanced for BladeSystem. This is a simple license key that unlocks new capabilities.

iLO Advanced for BladeSystem features include:
 Shared remote console — Up to four Onboard Administrator/iLO users with remote console privileges in different locations can collaborate using the shared remote console. It is used to troubleshoot, maintain, and administer remote servers. The session leader can allow either view-only or full console control by individual participants. Shared remote console mode is supported from the integrated remote console on clients using Microsoft Internet Explorer browsers.
 Microsoft Terminal Services Pass-Through — Microsoft Terminal Services works as long as the operating system is functioning. With iLO Advanced, a Terminal Services session is routed through the iLO network interface to improve production network security. It automatically switches to Terminal Services when the operating system is loaded and available. When not available, iLO Advanced provides its own graphical console.
 Directory Services integration — Onboard Administrator/iLO integrates with enterprise-class directory services to provide secure, scalable, and cost-effective user management. You can integrate Microsoft Active Directory with iLO devices to maintain iLO user accounts. Integrating with a directory services application such as Active Directory allows you to use the Lightweight Directory Access Protocol (LDAP) directory to authenticate and authorize user privileges to multiple iLO devices. With Active Directory, you have the flexibility to integrate with or without a schema extension.
 A simple installation program is available to install a management console snap-in and extend an existing directory schema to enable directory support for iLO.
 A directory migration tool is available to automate setup for both methods of integration.
 Integration also supports LDAP nested groups.
 You can configure a redundant domain controller when using Active Directory and iLO.
iLO can use a backup domain controller if the primary domain controller is unavailable. In an Active Directory configuration, there is no need to configure the actual iLO device to allow a backup domain controller. The Microsoft Domain Name System (DNS) server will automatically update the DNS name to reflect domain controller availability.
You should configure iLO to reference the DNS name of the domain, not the specific IP address of the domain controller. If the primary DC is unavailable, the DNS lookup of the domain will not return that server's IP, so iLO can connect to the next available domain controller. Alternatively, in the iLO configuration, you can use a comma or a semicolon between the IP addresses for iLO when trying to contact the Active Directory.
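The comma-or-semicolon convention described above amounts to an ordered failover list of directory server addresses. A minimal sketch of that idea — the function name and parsing details are ours for illustration, not iLO's actual configuration parser:

```python
import re

def parse_directory_servers(setting: str) -> list[str]:
    """Split a directory-server setting on commas or semicolons.

    Illustrates the 'comma or a semicolon between the IP addresses'
    convention from the text; entries keep their configured order,
    which a client could try first-to-last as a failover sequence.
    """
    # Split on either separator, trim whitespace, drop empty entries.
    return [s.strip() for s in re.split(r"[,;]", setting) if s.strip()]

servers = parse_directory_servers("192.0.2.10, 192.0.2.11;192.0.2.12")
print(servers)  # ['192.0.2.10', '192.0.2.11', '192.0.2.12']
```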
 Automatic and on-demand video footage — Onboard Administrator/iLO Console Replay captures and stores for replay the console video during a server's last major fault or boot sequence. Server faults include an ASR, server boot sequence, Linux panic, or Windows blue screen. Additionally, users can manually record and save any console video sequence to their client hard drive for replay from the ProLiant Onboard Administrator/iLO integrated remote console.
 iLO Text Console — Onboard Administrator/iLO text consoles provide server access via a text console, similar to a graphical remote console.
 iLO Video Player — Onboard Administrator/iLO allows you to view automatically captured server video footage or on-demand captured footage within an iLO session or separately through the iLO video player.


 Multi-factor authentication — Onboard Administrator/iLO provides strong user authentication with two-factor authentication using digital certificates embedded on smartcards or USB flash drives. Using this form of strong authentication, iLO access can be restricted to only those IT individuals possessing a certificate-bearing smartcard or flash drive and a PIN.
 Power Regulator reporting — Both iLO Advanced and iLO Advanced for BladeSystem enable access to power-related data from any of the three iLO interfaces (browser, script, or command line) on supported server models. Available information includes time spent in Power Regulator Dynamic Savings mode and average, peak, and minimum power consumption over 24-hour intervals. Check the server QuickSpecs to verify specific system support for Power Regulator and power monitoring.
 Virtual folders — This feature allows you to mount a local folder on a remote server.


BladeSystem power and cooling

In the past, better data center performance was the goal, and power and cooling costs were the price paid for performance. As energy costs skyrocket, processor and memory technologies make performance the abundant resource, and power and cooling are at a premium. As server density rises, so do power requirements. As power increases, so does heat output. The inability to power and cool data centers effectively is preventing many companies from achieving their IT goals.

Power and cooling are issues regardless of form factor. However, increased server and processor density has accelerated the demands.

To achieve a controllable balance between power and cooling while boosting data center energy efficiency, significant tradeoffs must be made:
 Larger fans move more air but take more power.
 Smaller fans need higher rpm to move the same amount of air.
 Higher rpm means more noise for a given size fan.
 Physical limits dictate how fast a fan can go.
 More fans require more power and result in more cost.
BladeSystem enclosure design challenges

Apertures in backplanes/signal midplanes of BladeSystem enclosures

Challenges faced by the BladeSystem design engineers included:
 Small apertures in the backplane assembly meant that getting sufficient air from the server blades required high pressure.
 The Xeon processor E5 series in the ProLiant BL460c Gen8 server blade requires up to 30 cubic feet per minute (CFM) of air to cool and therefore can require high airflow.
 Up to 16 half-height blades per chassis require large air volumes to be moved.

HP Active Cool Fans and Thermal Logic are the solutions to these challenges.
PARSEC architecture

The BladeSystem c7000 enclosure uses parallel, redundant, scalable, enclosure-based cooling (PARSEC) architecture:
 Parallel — Fresh, cool air flows over all the blades (in the front of the enclosure) and all the interconnect modules (in the back of the enclosure).
 Redundant — Fans located in each of four cooling zones supply direct cooling for server blades in their respective zones and redundant cooling for adjacent zones. Each zone can contain four server blades.
 Scalable — To operate, server blades require a minimum of four fans installed at the rear of the c7000 enclosure. The enclosure supports up to 10 fans so that cooling capacity can scale as needs change.
 Enclosure-based — By managing cooling throughout the entire enclosure, zone cooling minimizes the power consumption of the fan subsystem and increases fan efficiency in a single zone if one of the server blades requires more cooling. This saves operating costs and minimizes fan noise. HP recommends using at least eight fans. Using 10 fans optimizes power and cooling.
PARSEC architecture optimizes thermal design to support all customer configurations from 1 to 16 servers, with one to 10 fans. The BladeSystem enclosure features a relatively air-tight manifold. The servers seal into the front section when in use; doors seal off when servers are not in use. The rear section has back-flow preventers that seal when a fan does not rotate or is not installed.

The middle section wraps around the complex power and signal distribution midplanes to ensure that air is properly metered from the 10 parallel fans to the 16 parallel servers. These are three large snap-together plastic, metal, and gasket subassemblies.

Cooling is managed by the Thermal Logic technology, which features Active Cool Fans. These fans provide adaptive airflow for maximum power efficiency, air movement, and acoustics.

The PARSEC architecture is designed to draw air through the interconnect bays. This allows the interconnect modules to be smaller and less complex.

The power supplies are designed to be highly efficient and self-cooling. Single- or three-phase enclosures and N+N or N+1 redundancy yield the best performance per watt.
BladeSystem c7000 enclosure airflow

Schema of airflow inside a c7000 enclosure

Thermal Logic uses a control algorithm to optimize for any configuration based on the following customer parameters:
 Airflow
 Acoustics
 Power
 Performance

Airflow through the enclosure is managed to ensure that every device gets cool air and does not sit in the hot exhaust air of another device, and to ensure that air only goes where it is needed for cooling. Fresh air is pulled into the interconnect bays through a side slot in the front of the enclosure. Ducts move the air from the front to the rear of the enclosure, where it is then pulled into the interconnect modules and the central plenum, and then exhausted out the rear of the system.
Active Cool Fans

HP Active Cool Fans are an innovative design that can cool 16 blades using as little as 100W of power. The design is based on aircraft technology that generates fan-tip speeds of up to 136 mph with high pressure and high airflow while using less power than traditional fan designs.

With 20 patents pending, Active Cool Fans meet a number of data center requirements:
 The most energy-efficient airflow
 Moving enough air to cool just the components that need it
 Enough power to pull cool air through the blades and enclosure
 Half the noise output of equivalent rack-mount servers
 Lower power consumption by using only the number of fans needed to maintain preset cooling thresholds
 Easy scalability to meet even the most stringent future roadmap requirements

Fan location rules


The c7000 enclosure
The c7000 enclosure ships with four Active Cool 200 Fans and supports up to 10
fans. Install fans in even-numbered groups, based on the total number of server
blades installed in the enclosure:
 Four server blades — Install fans in bays 4, 5, 9, and 10.
 Six server blades — Install fans in bays 3, 4, 5, 8, 9, and 10.

 Eight server blades — Install fans in bays 1, 2, 4, 5, 6, 7, 9, and 10.
 Ten server blades — Install fans in all bays.


Important
If the fans are not in these exact locations, the thermal subsystem will be degraded and no newly inserted server will be allowed to power up.

The c3000 enclosure

The c3000 enclosure ships with a minimum of four fans and supports up to six. The c3000 supports Active Cool 100 Fans. To ensure proper cooling, HP recommends that you distribute fans based on these fan location rules:
 Four-fan configuration — Fans in bays 2, 4, 5, and 6 support a maximum of
four half-height blades or two full-height blades.
de

 Six-fan configuration — Fans in all six bays support population of all server
bays.
TT
rT
Fo

2 -32 Rev. 12.31


HP Blad
deSystem Enclossures

Fan populatio
p on
The c7
7000 encllosure

ly
on
y
er
liv
de

Fan location placem


ment (c7000)

 In a four-fan con
nfiguration, fan
f bays 4, 5 5, 9, and 10
0 are used to support a
TT

maximum of two o devices locaated in devicces bays 1, 2


2, 9, or 10. O
Only two devvice
bayss can be usedd with four fa
ans.
 In a six-fan confiiguration, fan bays 3, 4,, 5, 8, 9, and
d 10 are use
ed to supportt
rT

devicces in device
e bays 1, 2, 3,
3 4, 9, 10, 11, or 12.
 n eight-fan configuration, fan bays 1,, 2, 4, 5, 6, 7, 9, and 10
In an 0 are used to
o
suppport devices in
i all device bays.
Fo

 are used to suupport devices in all device


In a ten-fan conffiguration, alll fan bays a
bayss.

Important
! Install fan bla
anks in any unu
used fan bays.

Rev. 12.3
31 2 -33
Implemen
nting HP BladeS
System Solutions

The c3
3000 enclosure

ly
on
y
er
Fan
F population (c3000)

Base c30 000 enclosurres ship with four Active C Cool 100 Fa ans installed, supporting up
liv
to four ha
alf-height devvices or two full-height seerver blades.. Adding twoo additional
fans to th
he enclosure allows popu ulation of eigght half-heigh
ht devices or four full-heig
ght
de

server bla
ades.
 Four--fan configurration require
es populationn of fan bayys 2, 4, 5, an
nd 6.
 Six-fa
an configura
ation enabless population of all fan ba
ays.
TT

In a four-fan configura
ation, the Onnboard Adm ministrator preevents bladee devices in
bays 3, 4 7, and 8 frrom powering g on and ideentifies the fa
an subsystem
m as degrade ed.
To popula ate blade de
evices in thesse bays, pop
pulate c3000 0 enclosures with six fanss.
rT
Fo

2 -34 Rev. 12
2.31

Fan failure rules


In the event of a fan failure, the Onboard Administrator indicates on the Insight
Display and web GUI whether the fan failure resulted in loss of redundancy. The
health LED of the failed fan illuminates solid amber.

Important
Remove and replace this fan to correct the failure condition. Replacing the failed fans will result in automatically returning the fan subsystem health to OK.

If the fan subsystem is marked degraded, another fan failure will result in marking the fan subsystem as failed. In this circumstance, the Onboard Administrator probably cannot prevent a server from overheating.

Caution
Failure to replace the affected fans could result in loss of data or damage to hardware.

In all cases of fan failure, the Onboard Administrator continues to monitor server temperatures and provides adequate cooling. In extreme cases, such as fan failure or elevated enclosure or server ambient temperatures, the system resorts to maximum enclosure fan rpm. When the failed fan is replaced, fan subsystem redundancy is restored and the fan rpm returns to a controlled rpm.
Fan redundancy rules control system behavior in the event of the loss of fan:
 If the 10-fan rule (c7000) is in place, the failed fan is in bay 1, 2, 6, or 7, and no blades are powered on in the right half of the enclosure (bays 5 through 8 and 13 through 16):
 The fan subsystem is still redundant.
 The failed fan is marked failed.
 Place the remaining fans to ensure compliance with the six-fan rule.
In the c3000 enclosure, if you have six fans installed, they are automatically 5+1 redundant. If one fan fails, the Onboard Administrator will not prompt you to step down to the four-fan configuration, because some of the server blades would have to be powered down. Instead, the Onboard Administrator allows the server to run with five fans, provided that adequate cooling continues.
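The OK → degraded → failed progression described in this section can be sketched as a simple classification over fan counts. This is a deliberately simplified model — the real Onboard Administrator logic also weighs bay positions, blade power state, and measured temperatures:

```python
def fan_subsystem_health(installed: int, failed: int, required: int) -> str:
    """Classify fan subsystem health from fan counts.

    Simplified illustration: 'OK' while spare capacity (redundancy)
    remains, 'degraded' when only the bare minimum of working fans
    is left, 'failed' once cooling can no longer be guaranteed.
    """
    working = installed - failed
    if working > required:
        return "OK"          # still redundant
    if working == required:
        return "degraded"    # one more failure loses the cooling margin
    return "failed"          # OA may no longer prevent overheating

# c3000 with six fans is 5+1 redundant: one failure leaves it workable.
print(fan_subsystem_health(installed=6, failed=0, required=5))  # OK
print(fan_subsystem_health(installed=6, failed=1, required=5))  # degraded
print(fan_subsystem_health(installed=6, failed=2, required=5))  # failed
```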


Fan quantity versus power

Fan quantity versus power

The preceding graph represents the number of fans versus power draw. The circled area indicates the point at which 10 fans are more efficient than eight fans for the same airflow delivered.

According to the laws of airflow dynamics, 10 fans will move more CFM of air with less power than six or eight fans. In addition, although it might seem to be a contradiction, they will be quieter. Six high-powered fans actually are 3.7dB louder than eight lower-powered fans.

For sounds with similar frequency content, most people consider a 3dB change in sound pressure a 2x difference in sound. Similarly, people perceive a 10dB increase as being nine times as loud.
Fo

2 -36 Rev. 12
2.31
Self-sealing BladeSystem enclosure

The c7000 enclosure and the components within it optimize the cooling capacity through unique mechanical designs. Airflow through the enclosure is managed to ensure that every device gets cool air, that devices do not sit in the hot exhaust air of another device, and that air only goes where it is needed for cooling. Fresh air is pulled into the interconnect bays through a slot in the front of the enclosure. Ducts move the air from the front to the rear of the enclosure, where it is then pulled into the interconnects and the central plenum, and then exhausted out the rear of the system.

Fan louvers automatically open when a fan is installed and automatically close when the fan is removed. When a fan is installed into the enclosure, the server blade in the enclosure activates a lever that opens a door on the fan assembly to allow air to flow through the server blade.
Cooling multiple enclosures

Multiple c7000 enclosures cooling requirements

The c7000 enclosure can operate with four enclosures in a rack if the data center is equipped to deliver sufficient airflow at the front of the rack and no air recirculation occurs over the top or around the sides of the racks.

HP recommends that you run the Power Sizer before installing enclosures to determine the load that the proposed system would place on the cooling and power systems.
Thermal Logic

Thermal Logic is the portfolio of technologies embedded throughout HP servers to produce an energy-efficient data center. Thermal Logic reduces energy consumption, reclaims capacity, and extends the life of the data center.

Thermal Logic innovations include:
 Dynamic Power Capping – Reclaim trapped power and cooling capacity by safely "capping" server power consumption. The result: triple server capacity.
 Sea of Sensors – Up to 32 sensors adjust fan speeds and power only the slots that are in use. The result: 2.5x more efficient than ProLiant G5 servers and much quieter.
 Common Slot Power Supplies – Reduce spares with standardized form factors and "right-size" to match capacity. The result: up to 92% efficiency.
 Power Management Tools – Insight Control suite management software delivers deep insight, precise control, and ongoing optimization to unlock the potential of the infrastructure.
 Intelligent Power Discovery – The industry's first automated, energy-aware network to bring together facilities and IT by combining HP Intelligent PDUs, Platinum common slot power supplies, and Insight Control software.


Power Regulator technologies

Schema of ProLiant Power Regulator operation

HP Power Regulator technologies improve server energy efficiency by giving CPUs full power for applications when they need it and power savings without performance degradation when application activity is reduced. It enables you to reduce power consumption and generate less data center heat, resulting in compounded cost savings. You save first by using less power in racks and second by producing less work for air cooling systems. These factors can save on operational expenses and enable greater density in the data center environment, and do not necessarily result in loss of system performance.

Power Regulator Static Low Power and Dynamic Power Savings modes, as well as operating system-based modes (AMD PowerNow or Intel Demand Based Switching), can be enabled to save on server power and cooling costs. On supported ProLiant servers, Power Regulator allows CPUs to operate at lower frequency and voltage during periods of reduced application activity.

This power management technology enables dynamic or static changes in CPU performance and power states. In dynamic mode, Power Regulator automatically adjusts the server's processor power usage and performance to match CPU application activity. Power Regulator effectively executes automated policy-based power management at the individual server level. In addition, a unique static low power mode allows servers to run continuously in a system's lowest power state.

Power Regulator is an operating-system-independent power management feature of ProLiant servers. It is included on all ProLiant servers (200 series and greater).

Note
For additional information about Power Regulator visit:
http://h18004.www1.hp.com/products/servers/management/ilo/power-regulator.html
Power Regulator for ProLiant

Power Regulator for ProLiant enables ProLiant servers with policy-based power management to control CPU power state (CPU frequency and voltage) based on a static setting or automatically based on application demand.

HP Power Regulator uses processor P-states to regulate server power consumption in various workload environments.

The Power Regulator feature provides iLO-controlled speed stepping for Intel x86 and AMD processors. It improves server energy efficiency by giving processors full power when they need it and reducing power when they do not. This power management feature allows ProLiant servers with policy-based power management to control processor power states.

Important
Dynamic Power Savings mode is not available on all processor models. To determine which processors are supported, consult the Power Regulator website at:
http://www.hp.com/servers/power-regulator

Because Power Regulator resides in the BIOS, it is independent of the operating system and can be deployed on any supported ProLiant server without waiting for an operating system upgrade. HP has also made deployment easy by supporting Power Regulator settings in the HP iLO scripting interface.

The Power Regulator for ProLiant feature enables iLO 4 to dynamically modify processor frequency and voltage levels based on operating conditions to provide power savings with minimal effect on performance.

Note
In addition to Power Regulator, ProLiant servers also support operating system based power management using Intel Demand Based Switching and AMD Opteron PowerNow.
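The dynamic mode described above lowers the processor p-state as demand drops and restores full performance as it rises. The sketch below shows the general shape of such a utilization-driven policy; the thresholds, function, and band mapping are made up for illustration and are not HP's actual Power Regulator algorithm:

```python
def select_pstate(utilization: float, num_pstates: int = 4) -> int:
    """Map CPU utilization (0.0-1.0) to a p-state index.

    P0 is the highest-performance state, P(n-1) the lowest-power one.
    The linear banding here is illustrative, not Power Regulator's
    real control policy.
    """
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be between 0.0 and 1.0")
    # Higher demand -> lower p-state index (more frequency/voltage).
    band = int((1.0 - utilization) * num_pstates)
    return min(band, num_pstates - 1)

print(select_pstate(0.95))  # 0  (busy: full frequency and voltage)
print(select_pstate(0.10))  # 3  (idle: lowest-power p-state)
```

Static Low Power mode corresponds to pinning the result at the highest index, and Static High Performance mode to pinning it at 0, regardless of utilization.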



Power Regulator for Integrity


Although power monitoring operates independently of the operating system, Power
Regulator for Integrity requires a compliant operating system version. Power
regulation also requires power-performance state (p-state) capable hardware.

Note
Consult operating system documentation for details on power management support for a
given system.

Power Regulator for Integrity operates in four modes:

ly
 Static Low Power Mode — Power Regulator for Integrity sets the processors to the
p-state with the lowest power consumption and forces them to stay in that state.

on
This mode saves the maximum amount of resources, but it might affect the system
performance if processor utilization stays at 75% utilization or more.
 Static High Performance Mode — Power Regulator for Integrity sets the

y
processors to the p-state with the highest performance and forces them to stay in
that state. This mode ensures maximum performance, but it does not save any

er
resources. This mode is useful for creating a baseline of power consumption
data without Power Regulator for Integrity.
liv
• Dynamic Power Savings Mode — Allows the system to dynamically change processor p-states as needed, based on current operating conditions. The implementation of this mode is operating system specific, so consult your operating system documentation for details.
• Operating System Control Mode — Power Regulator for Integrity configures the server to enable the operating system to control the processor p-states. Use this setting to put the operating system (including operating system-hosted applications) in charge of power management. Moving to or from this state does not require a reboot of Integrity servers.
The HP Power Regulator for Integrity modes are available on supported platforms equipped with 1.6 GHz Dual-Core Intel Itanium 9100 series processors.
Note
The user must have the Configure iLO 4 Settings privilege to change these settings.
HP BladeSystem Enclosures

iLO 4 power management
iLO 4 power management enables you to view and control the power state of the server, monitor power usage, monitor the processor, and modify power settings. The Power Management page in the iLO 4 interface has three menu options:
• Server Power — The following options are available:
  • Momentary Press — This button provides behavior identical to pressing the physical power button.
  • Press and Hold — This button is identical to pressing the physical power button for five seconds and then releasing it. This option provides the Advanced Configuration and Power Interface (ACPI)-compatible functionality that is implemented by some operating systems. These operating systems behave differently depending on a short press or long press. The behavior of this option might circumvent any graceful shutdown features of the operating system.
  • Reset — This button initiates a system reset. This option is not available when the server is powered down. The behavior of this option might circumvent any graceful shutdown features of the operating system.
  • Cold Boot — This function immediately removes power from the system, circumventing graceful operating system shutdown features. The system will restart after approximately six seconds. This option is not available when the server is powered down.
Note
Some of the power control options do not gracefully shut down the operating system.
• Power Meter — The Power Meter page displays server power utilization as a graph. This page has two sections:
  • Power Meter Readings
  • Power History
• Power Settings — The Power Settings page enables you to view and control the Power Regulator mode of the server. Power Regulator for ProLiant settings are:
  • Static Low Power mode — Sets the processor to minimum power, reducing processor speed and power usage. Guarantees a lower maximum power usage for the system.
  • Static High Performance mode — Processors will run in their maximum power/performance state at all times, regardless of the operating system power management policy.
Note
Selecting Static High Performance mode usually causes the system to use more power, especially when it is lightly loaded. Most applications benefit from the power savings offered by Dynamic Power Savings mode with little or no impact on performance. Therefore, if choosing Static High Performance mode does not increase performance, HP recommends that you re-enable Dynamic Power Savings mode to reduce power use.
  • Dynamic Power Savings mode — Automatically varies processor speed and power usage based on processor utilization. Enables you to reduce overall power consumption with little or no impact on performance. Does not require operating system support; the server uses only the power it needs. This mode can, however, cause monitoring applications to overstate overall server utilization, because the measurements include data from throttled-down processors.
  • OS Control mode — Processors will run in their maximum power/performance state at all times unless the operating system enables a power management policy.
Note
With the exception of OS Control mode, Power Regulator modes configured through iLO do not require a reboot and are effective immediately. OS Control mode changes become effective on the next reboot.
The Power Capping Settings section displays measured power values and enables you to set a power cap or disable power capping. Measured power values include the server power supply maximum, the server maximum power, and the server idle power. The power supply maximum refers to the maximum amount of power that the server power supply can provide. The server maximum and idle power values are determined by two power tests run by the ROM during POST.
Note
The iLO command-line interface (CLI) gives you command-line access to the same functions available through the iLO browser-based interface.
Power efficiency
HP iLO 4 enables you to implement improved power usage using a High Efficiency
Mode (HEM). HEM improves the power efficiency of the system by placing the
secondary power supplies into step-down mode. When the secondary supplies are in
step-down mode, the primary supplies provide all the DC power to the system. The
power supplies are more efficient (more DC output Watts for each Watt of AC input)
at higher power output levels, and the overall power efficiency improves.
When the system begins to draw more than 70% of the maximum power output of the primary supplies, the secondary supplies return to normal operation (out of step-down mode). When power use drops below 60% of the capacity of the primary supplies, the secondary supplies return to step-down mode.
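The 70% and 60% thresholds form a hysteresis band that keeps the secondary supplies from oscillating in and out of step-down mode when the load hovers near a single threshold. A minimal model of that behavior (the threshold values come from the text above; the code structure itself is illustrative, not HP firmware logic):

```python
# Illustrative model of HEM step-down hysteresis: secondaries leave
# step-down mode above 70% of primary capacity and re-enter it below 60%.
STEP_UP_THRESHOLD = 0.70
STEP_DOWN_THRESHOLD = 0.60

def next_state(secondaries_stepped_down, load_watts, primary_capacity_watts):
    """Return True if the secondary supplies should be in step-down mode."""
    utilization = load_watts / primary_capacity_watts
    if secondaries_stepped_down and utilization > STEP_UP_THRESHOLD:
        return False  # load too high: secondaries resume normal operation
    if not secondaries_stepped_down and utilization < STEP_DOWN_THRESHOLD:
        return True   # load low again: secondaries step down
    return secondaries_stepped_down  # inside the 60-70% band: no change
```

Because the two thresholds differ, a load sitting at, say, 65% of primary capacity never toggles the state, which is exactly the point of the band.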
HEM enables systems to draw power up to the combined maximum output of the primary and secondary supplies, while maintaining improved efficiency at lower power usage levels. HEM does not affect power redundancy. If the primary supplies fail, the secondary supplies immediately begin supplying DC power to the system, preventing any downtime.
HEM can be configured only through the ROM-Based Setup Utility (RBSU); these settings cannot be modified through iLO. The settings for HEM are Enabled or Disabled (also called Balanced Mode), and Odd or Even supplies as primary. These settings are visible in the High Efficiency Mode & Standby Power Save Mode section of the System Information Power tab, which displays the following information:
• Whether HEM is enabled or disabled
• Which power supplies are primary (if HEM is enabled)
• Which power supplies do not support HEM
Dynamic Power Saver
The Dynamic Power Saver feature takes advantage of the fact that most power supplies operate inefficiently when lightly loaded and more efficiently when heavily loaded. A typical power supply running at 20% load could have efficiency as low as 60%. However, at 50% load, it could be 90% efficient.
In the graphic, the top example shows the power demand spread inefficiently across six power supplies. The second example demonstrates that with Dynamic Power Saver, the power load is shifted to two power supplies for more efficient operation.
When the Dynamic Power Saver feature is enabled, the total enclosure power consumption is monitored in real time, and automatic adjustments are tied to changes in demand. Power supplies are placed in a standby condition when the power demand from the server enclosure is low. When power demand increases, the standby power supplies instantaneously deliver the required power. This enables the enclosure to operate at optimum efficiency, with no impact on redundancy.
Dynamic Power Saver is supported on the HP 1U power supply and BladeSystem enclosures. It is enabled by an interconnect on the management board. When the power supplies are placed in standby mode, their LEDs flash.
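One way to picture the consolidation is to compute how few supplies can carry the load while keeping each active supply near an efficient operating point. This is an illustrative sketch, not HP's actual algorithm; the 50% target load per supply is an assumption based on the efficiency figures quoted above:

```python
# Illustrative sketch (not HP firmware logic): choose how many supplies to
# keep active so each active supply runs near its efficient load band, as
# Dynamic Power Saver does by idling lightly loaded supplies.
import math

def active_supplies(load_watts, supply_watts, total_supplies,
                    target_load_fraction=0.5):
    """Fewest supplies that keep per-supply load near the efficient band."""
    if load_watts <= 0:
        return 1  # at least one supply always stays active
    needed = math.ceil(load_watts / (supply_watts * target_load_fraction))
    return min(max(needed, 1), total_supplies)
```

For a 1200W enclosure load against 2400W supplies, one active supply runs at 50% load instead of six supplies idling at around 8% each, which is where the efficiency gain comes from.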
Dynamic Power Capping
Dynamic Power Capping enables retrieval of stranded power and optimizes power and cooling capacity in data centers. It safely limits power usage without performance degradation and without risk to the electrical infrastructure. For enclosures of blades, users set an enclosure-level power cap, and the Onboard Administrator dynamically adjusts individual server power caps based on their specific power requirements. By capping power usage at historical peak power usage instead of significantly higher face-plate, ROM burn, or power calculator default values, IT organizations can fit up to 36% more servers in their existing rack infrastructure.
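The Onboard Administrator's exact cap-adjustment algorithm is not described here, but a simple proportional scheme illustrates the idea of dividing an enclosure-level cap among blades according to their measured peaks. The function and its allocation policy are hypothetical:

```python
# Hypothetical illustration (not the Onboard Administrator's actual
# algorithm): divide an enclosure-level power cap among blades in
# proportion to each blade's measured peak demand.
def allocate_caps(enclosure_cap_watts, blade_peaks_watts):
    """Return a per-blade cap list that sums to at most the enclosure cap."""
    total_peak = sum(blade_peaks_watts)
    if total_peak <= enclosure_cap_watts:
        return list(blade_peaks_watts)  # every blade can have its full peak
    scale = enclosure_cap_watts / total_peak
    return [peak * scale for peak in blade_peaks_watts]
```

Capping against measured peaks rather than face-plate ratings is what reclaims the "stranded" power described above: the enclosure cap can be set far below the sum of the nameplate figures.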
Benefits of Dynamic Power Capping include:
• Reduces costly power and cooling overhead by efficiently using the power and cooling resources budgeted to each rack
• Maximizes utilization of data center floor space by fitting more servers or enclosures in each rack
• Postpones the need for costly data center expansions or facilities upgrades
Before using Dynamic Power Capping, ensure the enclosure contains redundant Onboard Administrator modules and is in an N+N Redundant power mode.
For more information on Power Capping, refer to:
http://h18013.www1.hp.com/products/servers/management/dynamic-power-capping/support.html?jumpid=reg_R1002_USEN

Power delivery modes
BladeSystem enclosures can be configured in one of three power delivery modes:
• Non-Redundant Power
• Power Supply Redundant
• AC Redundant
Non-Redundant Power
The Non-Redundant Power mode provides no power redundancy; any power supply or AC line failure will cause the system to power off. Total power is the power available from all power supplies installed:
• Six power supplies installed in a BladeSystem c7000 enclosure = 14400W
• Six power supplies installed in a BladeSystem c3000 enclosure = 7200W
This scenario is used to demonstrate simple enclosure setups or in classrooms for training purposes. It is not recommended for a production environment.
Power Supply Redundant

BladeSystem enclosures with DC redundant configuration
The most basic power configuration has two power supplies. Based on the power supply placement rules, these power supplies would populate bays 1 and 4. To reach power supply redundancy, you would add another power supply in bay 2. As long as there are not more devices in the enclosure than two power supplies can support, the system is power supply redundant.
With the Power Supply Redundant configuration, a minimum of two power supplies is required. Up to six power supplies can be installed in an enclosure. One power supply is always reserved to provide redundancy. In the event of a single power supply failure, the redundant power supply takes over the load.
This N+1 Power Mode configuration is cost-sensitive but provides minimal redundancy. It is most often selected by small and medium-sized businesses that purchase three or four power supplies and one power distribution unit (PDU), or that have the capability to connect only a single line cord. It could also be selected by customers with high-performance computing applications where redundancy is less important than low cost.

Note
The graphic shows two circuits (circuit A and B) being used. This is possible but not necessary for the power supply redundant mode. Total power for the c7000 enclosure is total power available, less one power supply. A 5+1 configuration = 12000W. The c3000 enclosures can provide up to 6000W in a 5+1 configuration.
AC Redundant

BladeSystem enclosures with AC redundant configuration
In the N+N AC Redundant power mode, a minimum of two power supplies is required. N power supplies provide power and N provide redundancy, where N can equal 1, 2, or 3.
The Onboard Administrator reserves sufficient power so that any number of power supplies from 1 to 3 can fail, and the enclosure will continue to operate at full performance on the remaining line feed. When correctly wired with redundant AC line feeds, AC Redundant mode ensures that an AC line feed failure will not cause the enclosure to power off.
AC Redundant mode provides full redundancy and is the configuration recommended for large enterprise customers because it ensures full performance with one power line feed.
Total available power is determined by half of the total number of power supplies installed in the enclosure. For example, for a c7000 enclosure with six power supplies installed, a 3+3 configuration yields 7200W of total power. Similarly, a c3000 enclosure with six power supplies installed yields 3600W of total available power in an AC Redundant configuration.
When properly connected to two separate circuits, the Onboard Administrator ensures that powered enclosure devices do not exceed half of the total available power, ensuring up-time as the remaining power supplies sustain the enclosure load.
HP Intelligent Power Discovery Services
Intelligent Power Discovery Services combine an HP Intelligent Power Distribution Unit
(iPDU) and HP Common Slot (CS) Platinum/Platinum Plus power supplies with HP
Insight Control software to create an automated, energy-aware network between IT
systems and facilities. Intelligent Power Discovery Services with Intelligent PDUs
automatically track power usage and document configurations to increase system
uptime and reduce the risk of outages.
Intelligent Power Discovery provides automated server discovery on a network through power line communication technology that is embedded in CS Platinum Power Supplies. Power line communication is a feature that allows the power supply to communicate with the iPDU. The communication between the power supply and the iPDU helps:
• Automatically discover the server when it is plugged into a power source
• Map the server to the individual outlet on the iPDU
When combined with the HP line of Platinum-level high-efficiency power supplies, the Intelligent PDU communicates with the attached servers to collect asset information for the automatic mapping of the power topology inside a rack. This capability greatly reduces the risk of human errors that can cause power outages.
HP Thermal Discovery Services help you reduce energy usage and increase compute capacity. This feature helps you squeeze the most IT out of every bit of data center power and cooling capacity and reduce energy consumption by 10% compared to a ProLiant G6 server.
The automated energy optimization capabilities in the ProLiant Gen8 family are enabled by HP 3D Sea of Sensors technology. Embedded intelligence across a sense of location, power utilization, and thermal demand provides a high level of visibility and control over the energy efficiency of the data center.
For more information about HP Intelligent Power Discovery, go to:
http://www.hp.com/go/ipd
HP Intelligent PDUs

Rear view of 12 Outlet iPDU
The key element of HP Power Discovery Services is the iPDU, which is a power distribution unit with full remote outlet control, outlet-by-outlet power tracking, and automated documentation of power configuration. HP iPDUs track outlet power usage at 99% accuracy, showing system-by-system power usage and available power. The iPDU records server ID information by outlet and forwards this information to HP Insight Control, saving hours of manual spreadsheet data-entry time and eliminating human wiring and documentation errors.

y
When co ombined with h the HP line of Platinum--level high-effficiency pow
wer supplies, the

er
Intelligent PDU actuallly communiccates with thee attached servers to collect asset
informatioon for the au
utomatic map pping of the power topollogy inside a rack. This
capabilityy greatly red
duces the riskk of human eerrors that ca
an cause pow wer outages.
liv
HP iPDUss provide pow ple objects frrom a single source. In a rack, the iPD
wer to multip DU
distribute
es power to th
he servers, sttorage units,, and other p
peripherals.
de

Using the
e popular core-and-stick architecture
a o
of the HP mo odular PDU lline, the iPDU U
monitors power consu umption at th he core, load d segment, sttick, and outtlet level, with
h
unmatcheed precision and accuraccy. Remote m management is built in. TThis iPDU offe ers
power cyycle ability off individual outlets
o on thee Intelligent E
Extension Ba
ars.
TT

Functions of iPDUs include:
• Helps you track and control power that other PDUs cannot monitor, with 99% accuracy at greater than 1 watt
• Gathers information from all monitoring points at 1/2-second intervals to ensure the highest precision
• Measures current draw of less than 100mW; the iPDU can detect a new server even before it is powered on
• Discovers and maps servers to specific outlets, ensuring correlation between equipment and the power data collected, as a function of Intelligent Power Discovery

HP power distribution units

Monitored PDU
HP PDUs provide power to multiple objects from a single source. In a rack, the PDU distributes power to the servers, storage units, and other peripherals.
PDU systems:
• Address issues of power distribution to components within the computer cabinet
• Reduce the number of power cables coming into the cabinet
• Provide a level of power protection through a series of circuit breakers

For more information about the HP power distribution unit portfolio, go to:
http://h18004.www1.hp.com/products/servers/proliantstorage/power-protection/pdu.html
PDU benefits
Benefits of the modular PDUs from HP include:
• Increased number of outlet receptacles
• Modular design
• Superior cable management
• Flexible 1U/0U rack mounting options
• Easy accessibility to outlets
• Limited three-year warranty
HP 16A to 48A Modular PDUs
HP Modular PDUs have a unique modular architecture designed specifically for data
center customers who want to maximize power distribution and space efficiencies in
the rack.
Modular PDUs consist of two building blocks—the Control Unit (core) and the Extension Bars (sticks). The Control Unit is 1U/0U, and the Extension Bars mount directly to the frame of the rack in multiple locations.
Available models range from 16A to 48A current ratings, with output connections ranging from four outlets to 28 outlets.
HP Monitored PDUs

The monitored vertical rack-mount power distribution units provide both single- and
three-phase monitored power, as well as full-rack power utility ranging from 4.9 kVA
to 22 kVA. Available monitored PDUs include:
TT

 Full-rack models with 39 or 78 receptacles and half-rack versions


 Three-phase models with 12 C-19 receptacles
Single-phase models with 24 C-13 and 3 C-19 receptacles
rT


Fo

Rev. 12.31 2 -55


Implemen
nting HP BladeS
System Solutions

BladeSystem c7000 PDUs
Available power distribution units for a c7000 enclosure

The PDUs available for the c7000 enclosure are detailed in the preceding table.
Note
A pair of PDUs must be ordered for AC feed redundancy. If AC redundancy is not required, a single PDU may be acceptable.
BladeSystem c3000 PDUs
Available power distribution units for a c3000 enclosure

The PDUs available for the c3000 enclosure are detailed in the preceding table.
Note
A pair of PDUs must be ordered for AC feed redundancy. If AC redundancy is not required, a single PDU may be acceptable.
BladeSystem enclosure power supplies

HP Common Slot Power Supplies
HP Common Slot (CS) Power Supplies share a common electrical and physical design that allows for hot-swap, tool-less installation into HP server and storage solutions. CS power supplies are available in multiple high-efficiency input and output options, allowing users to "right-size" a power supply for specific server/storage configurations and environments. This flexibility helps to minimize power waste, lower overall energy costs, and avoid "trapped" power capacity in the data center.
CS Power Supplies support Intelligent Power Discovery and are available in the following models:
g models:
following
 Com
mmon Slot Pla
atinum Plus Power
P Suppliees
 Are compatiible with ProLiant Gen8 sservers only
rT

 Provide up to
o 94% powe
er efficiency a
at 50% serveer utilization level
 Com
mmon Slot Pla
atinum Powerr Supplies
Fo

 Are compatiible with ProLiant G6 and


d 7 servers o
only
 Provide up to
o 94% powe at 50% serveer utilization level
er efficiency a
 Com
mmon Slot Go
old Power Su
upplies and C
Common Slo
ot Silver Powe
er Supplies
 Are compatiible with ProLiant G6, G7
7, and Gen8
8 servers
 Provide up to
o 92% powe
er efficiency at 50% servver utilization level
 Are a cost-effective optio
on for entry-leevel servers

2 -58 Rev. 12
2.31
Common Slot Platinum Plus Power Supplies
The CS Platinum Plus Power Supply family is ideal for ProLiant Gen8 customers operating mid-to-large data center environments with a focus on reducing power, downtime, and human resource expenses. The CS Platinum Plus Power Supply:
• Enables HP Intelligent Power Discovery — Creates an energy-aware network that helps to reduce data center outages, shrink deployment times from hours to minutes, reclaim stranded power, and maximize IT compute density.
• Provides certified best power efficiency (94%) in the industry — Reduces data center power requirements by up to 60W/server (as compared to ProLiant G6 power estimates). This can save up to $80 annually per server.
• Supports redundant High Efficiency and Load-Balancing modes — Maximizes the power efficiency capabilities of power supplies.
• Provides compatibility — Is compatible with a wide range of ProLiant and Integrity servers, as well as HP Storage solutions. Easily accessible, hot-plug power supplies minimize server downtime as well as the costs associated with maintaining multiple sets of spares. One power supply suits all customer environments, both Class A and B.
• Features multiple output options (460W, 750W, and 1200W) — Enables you to choose the power supply sized appropriately for each server configuration. This flexibility helps to minimize power waste, lower overall energy costs, and avoid trapped power capacity in the data center.
CS Platinum/Platinum Plus power supplies also enable HP Power Discovery Services, which focus on increasing compute density while reducing data center outages.

Note
One CS Platinum Hot Plug Power Supply Kit is required for each server.
CS 1200W -48VDC Power Supply
The CS 1200W -48VDC model is supported on ProLiant ML350p Gen8, DL360 G7,
DL380 G7, and DL385 G7 servers. This model supplies the highest power output
option available in CS design for 48VDC input and provides 90% efficiency at 50%
utilization. It is primarily used for server solutions with shared power architectures.
CS 750W - 48VDC Power Supply
The HP CS 750W - 48VDC Power Supply provides an option for the following
ProLiant servers:
• DL360p Gen8
• DL380p Gen8
• DL385p Gen8
• ML350p Gen8
• SL6500 Gen8
It is the lowest-cost power solution available in CS design for 48VDC input. The CS 750W - 48VDC Power Supply offers a higher-efficiency DC power solution with improved power input cabling options.
• Improved power efficiency to 94% at 50% utilization — Reduces power waste and consumption when compared to the previous generation 1200W -48VDC (90%) power supply option.
• Improved power input connector design — Simpler terminal block design provides users with greater flexibility in cable selection, design, and management.
• Compatible with a wider range of HP ProLiant server solutions — More options and greater flexibility for DC power usage within ProLiant Gen8 servers.
• Uses the HP Common Slot power supply design — Easily accessible hot-plug power supplies that minimize server downtime.
For more information about the HP power supply portfolio, go to:
http://www.hp.com/go/proliant/powersupply
BladeSystem c7000 enclosure power supplies

Power supplies for a c7000 enclosure
The power supplies convert single-phase AC to 12V DC current and feed the power backplane. Moving the power supplies into the enclosure allowed HP to reduce the transmission distance for DC power distribution and use an industry-standard 12V infrastructure for the BladeSystem. Using a 12V infrastructure allowed HP to eliminate several power-related components and improve power efficiency on the server blades and in the infrastructure. The control circuitry was stripped out and put on the management board and fans.
The c7000 enclosure supports up to six power supplies depending on whether it is equipped with a three-phase or single-phase power configuration. Additionally, the c7000 enclosure bundled with the HP Insight Management suite provides six HP 2400W high-efficiency hot-plug power supplies.
Key features of the 2400W power supplies include:
• Increased power output — 2400W; supports more blades with fewer power supplies
• High efficiency to save energy; provides 90% efficiency from as low as 10% load
• Low standby power that facilitates reduced power consumption when servers are idle
• Requires Onboard Administrator 2.30 or later
Important
! The 2400W power supplies do not operate with 2250W power supplies. Therefore, to use the 2400W power supplies with a c7000 enclosure that uses 2250W power supplies, you need to replace all the 2250W power supplies with the 2400W power supplies.
Power modules and cords

Different input power modules for a c7000 enclosure
The c7000 enclosure can be installed in both AC and DC environments:
• Three-phase (3Ø) AC power
• Single-phase (1Ø) AC power
• -48V DC power
Each type of power environment requires a specific power module.
BladeSystem power cords
The BladeSystem is designed to match what the customer already has in the data center. It uses standard power cords:

 IEC-C
C19 – 16A 208V
2 = 3328
8VA
 MA L15-30p 24A 3Ø 208V = 8646V
NEM VA
Fo

 IEC 309
3 5-pin 16
6A 3Ø 230V
V = 11040VA
A
In the Bla o L15-30p line cord ca n power onee enclosure p
adeSystem, one populated wiith
16 half-he
eight blades.
nt of comparrison, if it had been desig
As a poin gned for racck-based pow
wer,
BladeSystem enclosurres would req quire 60A to
o 100A threee-phase powe er.

Rev. 12.3
31 2 -63
Implemen
nting HP BladeS
System Solutions

Single-phase AC power supply placement

Power supply placement for a c7000 enclosure
Install the power supplies based on the total number of supplies needed:
• Two power supplies — Power supplies in bays 1 and 4
• Three power supplies — Power supplies in bays 1, 2, and 4
• Four power supplies — Power supplies in bays 1, 2, 4, and 5
• Five power supplies — Power supplies in bays 1, 2, 3, 4, and 5
• Six power supplies — Power supplies in all bays
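The placement list above implies a fixed population order for the bays: 1 and 4 first, then 2, 5, 3, and 6. A small helper (hypothetical, not an HP tool) makes the rule testable:

```python
# Bay population order implied by the placement list above: bays 1 and 4
# first, then 2, 5, 3, 6. Hypothetical helper, not an HP utility.
BAY_ORDER = [1, 4, 2, 5, 3, 6]

def supply_bays(count):
    """Return the sorted c7000 bay numbers for `count` power supplies."""
    if not 2 <= count <= 6:
        raise ValueError("placement rules cover two to six supplies")
    return sorted(BAY_ORDER[:count])
```

For example, three supplies land in bays 1, 2, and 4, exactly as the list states.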
Note
The Insight Display panel slides left or right to allow access to power supply bays 3 and 4.
The preceding graphic further defines the power supply placement based on the power redundancy mode.
Note
In single-phase configurations, you can use fewer than six power supplies.
The placement rules are enforced by the Onboard Administrator. When the power supplies are placed incorrectly, the Insight Display shows an error.
Important
! Three-phase AC power requires that all six power supplies be installed.
DC power configuration rules
WARNING
! Never attempt to install an AC power supply into a DC power module. Doing so could
cause damage to both the power module and the power supply.

The product configuration rules are:
 -48V DC power module can only accept DC power supplies.
 -48V DC hot-plug power supplies are only supported with the -48V DC power

ly
module.
Mixing of AC and DC components within the same system is prohibited.

on

 Keying on the DC power module and DC power supply prevents incorrect


insertions.

Caution
! To prevent damage to components in the enclosure, never mix AC and DC power in the same enclosure.
Total available power

Power supplies in a c7000 enclosure
Total power available to the enclosure, assuming 2400W are available from each power supply, depends on the power mode configured for the enclosure:
• If no power redundancy is configured, the total power available is defined as the power available from all supplies installed. Therefore, if six power supplies are installed in an enclosure, 14400W of power will be available to the enclosure.
• If the N+1 power mode is configured, the total power available is defined as the total power available, less one power supply. Therefore, an enclosure with a 5+1 configuration will receive 12000W of power.
• If the N+N AC Redundant mode is configured, the total power available is the amount from the A or B side with the lesser number of supplies. Therefore, an enclosure with a 3+3 configuration will receive 7200W of power.
liv
Important
! HP strongly recommends that you run the HP BladeSystem Power Sizer to determine the power and cooling requirements of your configuration. Refer to http://www.hp.com/go/bladesystem/powercalculator to download the Power Sizer.

mple
Example
Single-phase power runs on 30A circuits in North America. When you apply the 80% rule (in an NA/JPN environment, you can only pull 80% of the total power available on a circuit), this translates to 24A available. Therefore, you would use a 24A modular PDU, which can only support 4992VA, or two power supplies. With redundant AC feeds, you can support four power supplies per enclosure. Four power supplies can provide 9600W of power to the components.
A full enclosure of 16 blades requires up to 3700W. Four power supplies enable N+N AC redundancy as long as you have redundant AC feeds.
Fo

Note
3700W averrages 231.25W W per blade. If thhe 3700W figuure does not incclude the
Onboard Administrator, fans, and interconnnects, you still have overhead of 800W per AC
feed to coverr the additional need and rema ain N+N redunndant.

2 -66 Rev. 12
2.31
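The per-mode arithmetic described above can be captured in a few lines. The following Python sketch is illustrative only: the function names are my own, it assumes a flat 2400W per supply as stated in the text, and HP's Power Sizer accounts for many more variables.

```python
def c7000_available_power(mode, supplies_a, supplies_b=0, watts_per_supply=2400):
    """Rough total power available to a c7000, per redundancy mode.

    mode: "none" (no redundancy), "n+1" (Power Supply Redundant), or
    "n+n" (AC Redundant; pass the A-side and B-side supply counts).
    """
    total = supplies_a + supplies_b
    if mode == "none":
        return total * watts_per_supply                 # every supply usable
    if mode == "n+1":
        return (total - 1) * watts_per_supply           # one supply held in reserve
    if mode == "n+n":
        return min(supplies_a, supplies_b) * watts_per_supply  # lesser side only
    raise ValueError("unknown mode: " + mode)

def usable_amps(circuit_amps, derate=0.8):
    """NA/JPN 80% rule: only 80% of a circuit's rating may be drawn."""
    return circuit_amps * derate

print(c7000_available_power("none", 6))    # 14400
print(c7000_available_power("n+1", 6))     # 12000
print(c7000_available_power("n+n", 3, 3))  # 7200
print(usable_amps(30) * 208)               # 4992.0 VA on a 30A, 208V circuit
```

The last line reproduces the 24A/4992VA figure from the example above.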
HP BladeSystem Enclosures

BladeSystem c3000 enclosure power supplies

The c3000 enclosure power supplies are single-phase power supplies that support
both low-line and high-line environments. Wattage output per power supply depends
on the rated AC input voltage.
 200VAC to 240VAC input = 1200W DC output
 120VAC input = 900W DC output
 100VAC input = 800W DC output
AC power supplies are auto-switching between 100VAC and 240VAC, providing
customers with diverse deployment options.
Each AC power supply ships with a standard PDU power cord (C13 to C20). Power
supplies may be connected to standard wall outlets; however, proper wall outlet
cords must be purchased.

Important
! Wall outlet power cords should only be used with low-line (100V to 110V) power sources.
If high-line power outlets are required, safety regulations require the use of a PDU or a
UPS between the c3000 enclosure's power supplies and wall outlets.

Optional DC power supplies are available. Each 48V DC Common Slot Power
Supply can provide 1200W. Up to six total DC power supplies can be used in a
c3000 enclosure. DC and AC power supplies cannot be mixed inside one c3000
enclosure.

Caution
Without proper surge protection, connecting directly to a standard wall outlet may cause
loss of data or damage to the BladeSystem enclosure.

Power supply placement

c3000 enclosure power supply numbering and placement

The quantity of power supplies is a function of the power redundancy mode versus
the quantity, type, and configuration of the devices installed in the enclosure. The
tables display the proper location for power supplies in the Power Supply Redundant
and AC Redundant power modes. For proper functionality, the AC Redundant power
mode requires two AC circuits; one connected to power supplies 1, 2, and 3 and the
second connected to power supplies 4, 5, and 6.

Note
There is no Onboard Administrator-enforced rule that dictates the power supply placement
based on the number of server blades; however, there is one for the fans. Power supply
population is dependent on the power supply redundancy level and the quantity and
configuration of server blades and interconnects.
Total available power


Total power available to the enclosure may vary depending on the input AC voltage,
the power redundancy mode, and the quantity of power supplies installed. For
enclosures connected to 208VAC–240VAC, the maximum power available from six
installed power supplies is as follows:
 In Non Power Redundant mode, total power available from six power supplies is
7200W DC.
 In Power Supply Redundant mode, six power supplies provide a total of 6000W
DC.
 In AC Redundant mode, six power supplies provide a total of 3600W DC.

Important
! HP strongly recommends that you run the Power Sizer (available from
http://www.hp.com/go/bladesystem/powercalculator) or HP Power Advisor to determine
the power and cooling requirements of your configuration.


BladeSystem DVD-ROM drive options

Attaching a DVD-ROM drive to the HP BladeSystem enclosure enables local media
access to the server blades. Insight Display, iLO, and the Onboard Administrator
allow system administrators to connect the media device to, and disconnect it from,
one or multiple server blades at a time.
This feature enables administrators to:
 Perform operating system installations such as SmartStart installations or
imaging tasks
 Install additional software
 Perform critical operating system updates and patches
 Update server platform firmware
The DVD-ROM drive can be attached using the:
 DVD-ROM drive bay in the front of the c3000 enclosure
 USB port on the c7000 enclosure Onboard Administrator module
 Local I/O cable connection to the individual server blades
 ISO images on a locally attached USB key
The DVD-ROM drive offers local drive access to server blades by using the virtual
media scripting capability of iLO. The DVD-ROM drive is connected directly to the
server blade's USB port and provides significantly improved data throughput,
compared to iLO virtual media using physical disks or ISO files, especially over
long distances.


Learning check
1. List three factors that distinguish an ideal deployment for a c3000 enclosure.
.................................................................................................................
.................................................................................................................
.................................................................................................................
2. An HP c7000 enclosure can use standard wall-outlet power.
a. True
b. False
3. What is the difference between the Onboard Administrator in the c3000 and
the c7000 enclosures?
a. The c3000 Onboard Administrator is not a DDR2 module.
b. The c3000 Onboard Administrator does not have USB ports.
c. The c3000 Onboard Administrator has the same components, but they are
in different locations.
d. The c3000 does not support a redundant Onboard Administrator.
4. The Onboard Administrator module for the c7000 enclosure is available with
KVM support and without KVM, and these two versions require different
firmware.
 True
 False
5. With the Power Supply Redundant configuration in a BladeSystem, a minimum
of four power supplies is required.
 True
 False
6. What are the benefits of using an industry-standard 12V infrastructure for the
BladeSystem?
.................................................................................................................
.................................................................................................................
7. What BladeSystem challenges are met by Thermal Logic and Active Fans
technology?
.................................................................................................................
.................................................................................................................
.................................................................................................................

HP BladeSystem Server Blades
Module 3

Objectives
After completing this module, you should be able to describe the HP ProLiant
Generation 8 (Gen 8) and Integrity server blades that constitute the HP BladeSystem
portfolio.

ProLiant Gen8 server blade portfolio
ProLiant BL420c Gen8 server blade

The HP ProLiant BL420c Gen8 server blade is an entry-level blade. The BL420c
workload spans from single applications for mid-market solutions to large enterprise
requirements.
This server blade features two eight-core Xeon processors with the Intel C600 series
chipset. Additional features of the ProLiant BL420c Gen8 server include:
 SAS/SATA/SSD hot-plug drives
 Maximum 2 TB storage configuration
 Twelve memory DIMMs, six per processor
 1x8 and 1x16 PCIe Gen3 mezzanine slots
 iLO Management Engine
ProLiant BL460c Gen8 server blade

The HP ProLiant BL460c Gen8 server blade offers a balance of performance,
scalability, and expandability, making it a standard for data center computing. This
server blade features two eight-core Xeon processors with the Intel C600 series
chipset. Additional features include:
 Up to 512GB of DDR3 LRDIMMs — With LRDIMMs, a ProLiant BL460c Gen8
server can be configured with up to 512 GB of memory.
 I/O expansion slots — The BL460c Gen8 server supports two I/O expansion
mezzanine slots:
 X16 PCI Express Type A – Supports dual-port mezzanine cards. One port is
routed to interconnect module bay 3 and the other to bay 4.
 X16 PCI Express Type B – Supports dual-port and quad-port mezzanine
cards. For dual-port cards, one port is routed to interconnect bay 5 and the
other to bay 6. For quad-port cards, the ports are routed to interconnect
bays 5, 6, 7, and 8.
 Internal storage — The BL460c Gen8 supports a variety of internal storage
options, including solid state drives, allowing up to 2 TB of internal storage to
be configured. The configuration options are shown in the following table.

                          Total capacity   Drive configuration
Hot plug SFF SAS          2.0TB            2x – 1.0TB
Hot plug SFF SATA         2.0TB            2x – 1.0TB
Hot plug SFF SAS SSD      1.6TB            2x – 800GB
Hot plug SFF SATA SSD     800GB            2x – 400GB

ProLiant BL465c Gen8 server blade

The HP ProLiant BL465c Gen8 server blade is an ideal server for virtualization and
consolidation. The BL465c Gen8 is the first server blade to achieve more than 2,000
cores per rack by using AMD Opteron 6200 series processors with up to 16 cores
each.
Features of the BL465c Gen8 include:
 Smart Array controller with 512 MB flash-backed write cache
 SmartMemory
 SAS drives and SAS solid-state drives
 iLO Management Engine

Integrity i2 server blade portfolio
Integrity BL860c i2

The Integrity BL860c i2 is a full-height server blade with Itanium 9300 series
processors and the Intel 7500 chipset. This server supports up to two processors with
two or four processor cores.

Note
The Integrity BL860c i2 only supports identical processors in a two-processor
configuration.

The Integrity BL860c i2 supports up to 384 GB of memory using 24 PC3-10600
Registered CAS9 memory modules. These memory modules support error correcting
code (ECC), as well as double chip sparing technology. Double chip sparing can
detect and correct an error in DRAM bits, practically eliminating the downtime
needed to replace failed DIMMs.

Note
The Integrity BL860c i2 requires a minimum of 8 GB of RAM to operate. Double
chip technology is not enabled with 2 GB memory modules.
The server features two SFF SAS hot-plug hard drive bays. Hardware RAID is
provided by an embedded HP P410i RAID controller, which supports RAID 1 for
HP-UX and Linux. Because the SAS controller does not support Microsoft Windows,
Windows internal disk mirroring requires a Smart Array controller and cannot use the
internal hard drives.

Important
! RAID 1 configuration requires two identical hard drives.

The server also features four autosensing 1Gb/10Gb NIC ports through two
dual-port NC532 Flex-10 adapters, plus an additional 100Mb NIC dedicated to
Integrity iLO management.

Integrity BL870c i2

The Integrity BL870c i2 server blade is a full-height, double-wide form factor server
blade that occupies two device bay slots in a BladeSystem enclosure.
The BL870c i2 server blade features the Intel 7500 chipset and:
 Processors — The Integrity BL870c i2 may contain up to four Itanium 9300
quad-core processors, with up to 24 MB of L3 cache. Processor kits include:
 Quad-core processors
 Itanium 9320 (1.33GHz/4-core/16MB/155W; up to 1.46 GHz with
Turbo) processor
 Itanium 9340 (1.6GHz/4-core/20MB/185W; up to 1.73 GHz with
Turbo) processor
 Itanium 9350 (1.73GHz/4-core/24MB/185W; up to 1.86 GHz with
Turbo) processor
 Dual-core processor
 Itanium 9310 (1.6GHz/2-core/10MB/130W) processor

Note
The Integrity BL870c i2 supports two, three, or four-processor configurations.
Processors must be identical.
 Memory — The Integrity BL870c i2 server blade supports up to 768 GB of
memory.
 Forty-eight PC3-10600 16GB DIMMs
 High-speed memory bus bandwidth of 4.8GT/s

Note
Memory for the BL870c i2 must be installed in groups of four DIMMs.

 Integrity iLO 3 — Integrity iLO management processors make it simpler, faster,
and less costly to remotely manage Integrity servers. Integrity iLO 3 ships with a
built-in Advanced Pack License. iLO Advanced features include Virtual Media,
LDAP directory services, iLO power measurement, and integration with Insight
Power Manager. No additional iLO licensing is needed.

Note
The iLO Management Engine with iLO 4 is not supported on Integrity server
blades.

 Storage
 Up to four SFF SAS hot-plug hard drive bays, providing up to 3.6 TB of
internal storage using four 900 GB SAS drives.

Note
Mixed disk configurations are supported, although not for RAID configurations.
RAID 1 configuration requires two identical hard drives.

 Two P410i 3Gb SAS controllers provide support for RAID 0, RAID 1, and
HBA mode options.

 NICs — Eight autosensing 1Gb/10Gb NICs through four embedded NC532i
dual-port Flex-10 adapters.

Important
! Flex-10 capability requires operating system drivers and the use of an HP Virtual
Connect Flex-10 10GbE Ethernet module.
 Mezzanine card options — Six additional I/O expansion slots by using
mezzanine cards. Supported mezzanine cards include:
 HP NC553m Dual Port 10Gb FlexFabric Adapter
 HP NC551m Dual Port 10Gb FlexFabric Adapter
 HP NC552m 10Gb 2-port Flex-10 Ethernet Adapter
 HP NC532m Dual Port Flex-10 10GbE BL-c Adapter
 HP Emulex LPe1205 8Gb FC BL-c HBA (2-port 8Gb Emulex FC HBA)
 HP QMH 2562 8Gb FC BL-c HBA (2-port 8Gb QLogic FC HBA)
 HP Smart Array P711m/1G FBWC Controller
 HP Smart Array P700m/512 Controller
 HP 4X QDR IB CX-2 Dual Port Mezz HCA for HP BladeSystem
 HP NC364m 4-port 1GbE BLc Adapter
 HP NC360m 2-port 1GbE BLc Adapter

Integrity BL890c i2

The Integrity BL890c i2 is a full-height, quadruple-wide form factor server blade that
occupies four device bay slots in the BladeSystem enclosure. It features the Intel
7500 chipset and supports Integrity iLO 3. It also features:
 Processors — The Integrity BL890c i2 may contain up to eight Itanium 9300
quad-core processors, with up to 24MB of L3 cache. Processor kits include:
 Quad-core processors
 Itanium 9320 (1.33GHz/4-core/16MB/155W; up to 1.46 GHz with
Turbo) processor
 Itanium 9340 (1.6GHz/4-core/20MB/185W; up to 1.73 GHz with
Turbo) processor
 Itanium 9350 (1.73GHz/4-core/24MB/185W; up to 1.86 GHz with
Turbo) processor
 Dual-core processor
 Itanium 9310 (1.6GHz/2-core/10MB/130W) processor

Note
The Integrity BL890c i2 supports up to eight processors. Processors must be
identical.
 Memory — The BL890c i2 server blade supports up to 1.5TB of memory:
 Ninety-six PC3-10600 16GB DIMMs
 High-speed memory bus bandwidth of 4.8GT/s

Note
Memory for the BL890c i2 must be installed in groups of four DIMMs.

 Storage — Up to eight SFF SAS hot-plug hard drive bays, providing up to 7.2
TB of internal storage using eight 900GB SAS drives. Four P410i 3Gb SAS
controllers provide support for RAID 0, RAID 1, and HBA mode options.

Note
Mixed disk configurations are supported, although not for RAID configurations.
RAID 1 configuration requires two identical hard drives.

 NICs — The Integrity BL890c i2 ships with 16 autosensing 1Gb/10Gb NICs via
eight embedded NC532i dual-port Flex-10 adapters.

Important
! Flex-10 capability requires operating system drivers and the use of a Virtual
Connect Flex-10 10GbE Ethernet module.

 Mezzanines — The Integrity BL890c i2 supports 12 additional I/O expansion
slots by using mezzanine cards. Supported mezzanine cards include:
 HP NC552m 10Gb 2-port Flex-10 Ethernet Adapter
 HP Emulex LPe1205 8Gb FC BL-c HBA (2-port 8Gb Emulex FC HBA)
 HP QMH 2562 8Gb FC BL-c HBA (2-port 8Gb QLogic FC HBA)
 HP Smart Array P711m/1G FBWC 6G SAS Controller
 HP Smart Array P700m/512 Controller
 HP 4X QDR IB CX-2 Dual Port HCA for HP BladeSystem
 HP NC364m 4-port 1GbE BL-c Adapter
 HP NC360m 2-port 1GbE BL-c Adapter
 HP NC532m Dual Port Flex-10 10GbE BL-c Adapter
 HP NC551m Dual Port 10Gb FlexFabric Adapter
 HP NC553m Dual Port 10Gb FlexFabric Adapter

Important
! A maximum of eight of these additional adapters are supported with the BL890c
i2 server blade.
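The Integrity i2 memory rules (16GB DIMMs installed in groups of four, up to 48 slots on the BL870c i2 and 96 on the BL890c i2) lend themselves to a quick sanity check. The following Python sketch is my own illustration, not an HP configuration utility:

```python
def integrity_memory_gb(dimm_count, slots, dimm_gb=16):
    """Capacity check for a BL870c i2 (48 slots) or BL890c i2 (96 slots)
    DIMM population: DIMMs go in groups of four, up to the slot count."""
    if dimm_count % 4 != 0:
        raise ValueError("DIMMs must be installed in groups of four")
    if not 0 < dimm_count <= slots:
        raise ValueError("invalid DIMM count for this blade")
    return dimm_count * dimm_gb

print(integrity_memory_gb(48, slots=48))  # 768  -> BL870c i2 maximum (768 GB)
print(integrity_memory_gb(96, slots=96))  # 1536 -> BL890c i2 maximum (1.5 TB)
```

The two printed values reproduce the 768 GB and 1.5 TB maximums stated in the text.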


Learning check
1. How many processors can be installed in an Integrity BL860c i2 server blade?
a. 1
b. 2
c. 4
d. 8
2. What is the maximum memory supported in a ProLiant BL460c Gen8 server
blade?
a. 96 GB
b. 128 GB
c. 256 GB
d. 512 GB
3. The ProLiant BL460c Gen8 supports up to two Intel Xeon processors.
 True
 False
HP BladeSystem Storage and Expansion Blades
Module 4

Objectives
After completing this module, you should be able to:
 Describe the features and functions of HP:
 Storage blades
 Tape blades
 Expansion blades
 Describe the features and functions of HP Smart Array controllers
HP BladeSystem storage and expansion blades

HP BladeSystem is built not only on servers, but also on storage and expansion
modules. HP offers many storage solutions that increase either storage capacity or
storage performance for server blades. A BladeSystem can also consolidate other
network equipment, including storage and backup options.

HP storage blades

D2200sb Storage Blade

HP offers storage solutions designed to fit inside the BladeSystem enclosure, as well
as external expansion for virtually unlimited storage capacity. HP storage blades
offer flexible expansion and work side-by-side with ProLiant and Integrity server
blades.
The HP storage portfolio for BladeSystems includes:
 D2200sb Storage Blade
 X3800sb G2 Network Storage Gateway Blade
 X1800sb G2 Network Storage Blade
 IO Accelerator

HP D2200sb Storage Blade

The HP D2200sb Storage Blade delivers direct-attached storage (DAS) for server
blades. The enclosure backplane provides a PCIe connection to the adjacent server
blade and enables high-performance storage access without additional cables.
The D2200sb storage blade features an onboard Smart Array P410i controller with
1GB flash-backed write cache (FBWC) for increased performance and data
protection. Other features include:
 Support for up to 12 hot plug SFF SAS or SAS/SATA/Solid State or SATA
Midline hard disk drives in a half-height blade, including support for enterprise
300 GB 15K SAS hard drives
 Internal Smart Array P410i controller with 1 GB FBWC
 Simple configuration and setup with the HP Array Configuration Utility (ACU)
 Easy maintenance and troubleshooting with industry-standard management tools
including HP System Insight Manager (HP SIM)
 Compatibility with HP Virtual SAN Appliance software to create a shared
storage environment inside a BladeSystem enclosure
 Ability to configure the storage blade for RAID levels 0, 1, 1+0, 5, and 6
(RAID ADG) by using the internal Smart Array P410i controller with 1 GB
flash-backed write cache

Note
RAID 6 and RAID 60 require purchase of a Smart Array Advanced Pack (SAAP) license.

HP X1800sb G2 Network Storage Blade

HP X1800sb G2 Network Storage Blade is a flexible storage server solution for
BladeSystem environments. File serving inside the BladeSystem enclosure is
available when the X1800sb G2 is paired with the D2200sb storage blade. The
X1800sb G2 can also be used as an affordable SAN gateway to provide
consolidated file-service access to Fibre Channel, SAS, or iSCSI SANs.
Features include:
 6 GB (3 x 2 GB) PC3-10600R RDIMMs
 HP Smart Array P410i Controller (RAID 0/1)
 2 x 146GB SFF SAS 15k hot plug hard drives with Microsoft Windows Storage
Server 2008 R2, Standard x64 Edition pre-installed (in a RAID 1 configuration)
 Integrated NC553i Dual Port FlexFabric 10GbE Converged Network Adapter
 One additional 10/100 NIC dedicated to iLO 3 management
 Two I/O expansion mezzanine slots
 Support for up to two mezzanine cards
Functionality of the X1800sb G2 can be enhanced with optional software such as
HP Mirroring Software or Data Protector Express.

HP X3800sb G2 Network Storage Gateway Blade

The X3800sb G2 Network Storage Gateway Blade is used to access Fibre Channel,
SAS, or iSCSI SAN storage, translating file data from the server into blocks for
storage to provide consolidated file, print, and management hosting services in a
cluster-able package.
Built on the ProLiant BL460c server blade, the X3800sb G2 Network Storage
Gateway Blade is a ready-to-deploy SAN gateway solution, with Windows Storage
Server 2008 R2, Enterprise x64 Edition pre-installed. The X3800sb G2 also
includes a Microsoft Cluster Server (MSCS) license and Microsoft iSCSI Software
Target.
Key features include:
 One quad-core Intel Xeon Processor E5640 (2.66 GHz, 80W)
 6 GB (3 x 2 GB) PC3-10600R RDIMMs
 Smart Array P410i controller (RAID 0/1)
 Two 146GB SFF SAS 15k hot plug hard drives with Windows Storage Server
2008 R2, Enterprise x64 Edition pre-installed (in a RAID 1 configuration)
 Integrated NC553i Dual Port FlexFabric 10GbE Converged Network Adapter
 One additional 10/100 NIC dedicated to iLO 3
 Two I/O expansion mezzanine slots
 Support for up to two mezzanine cards

Direct Connect SAS Storage for HP BladeSystem

Direct Connect SAS Storage for BladeSystem allows customers to build local server
storage quickly with zoned storage or low-cost shared storage within the rack. The
high-performance 3Gb/s SAS architecture consists of a Smart Array P700m
controller in each server and 3Gb SAS BL switches connected to an HP Modular
Disk System (MDS) 600.
By combining the simplicity and cost efficiency of direct-attached storage with the
flexibility and resource utilization of a SAN, server administrators can have a simple
in-rack zoned direct attach SAS storage solution that is ideal for growing capacity
requirements.

BladeSystem tape blade portfolio
HP Ultrium Tape Blades

The HP Ultrium Tape Blades are ideal for BladeSystem customers who need an
integrated data protection solution. These half-height tape blades provide
direct-attach data protection for the adjacent server and network backup protection
for all data residing within the enclosure. Ultrium Tape Blades offer a complete data
protection, disaster recovery, and archiving solution for BladeSystem customers.
Each Ultrium Tape Blade solution ships standard with Data Protector Express Basic
backup and recovery software. In addition, each tape blade supports HP One-Button
Disaster Recovery (OBDR), which allows quick recovery of the operating system,
applications, and data from the latest full backup set. Ultrium Tape Blades are the
industry's first tape blades and are developed exclusively for BladeSystem
enclosures.
The current BladeSystem tape blade portfolio consists of:
 HP Ultrium 448c Tape Blade — Includes LTO-2 Ultrium tape technology with
400 GB of capacity on a single data cartridge (2:1 compression) and
performance up to 173 GB/hr (2:1 compression)
 HP SB1760c Tape Blade — Includes LTO-4 Ultrium tape technology with 1.6 TB
of capacity on a single data cartridge (2:1 compression) and performance up to
576 GB/hr (2:1 compression)
 HP SB3000c Tape Blade — Includes LTO-5 Ultrium tape technology with 3 TB of
capacity on a single data cartridge (2:1 compression) and performance up to
1 TB/hr (2:1 compression)
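The capacity figures above are the native LTO cartridge capacities doubled by the assumed 2:1 compression ratio. A small Python sketch of that arithmetic (the native capacities are standard LTO-generation figures; the helper name is mine):

```python
# Native (uncompressed) single-cartridge capacity in GB for each LTO
# generation; doubling at the assumed 2:1 ratio gives the figures above.
NATIVE_GB = {"LTO-2": 200, "LTO-4": 800, "LTO-5": 1500}

def compressed_gb(generation, ratio=2.0):
    """Cartridge capacity at an assumed compression ratio (data dependent)."""
    return NATIVE_GB[generation] * ratio

print(compressed_gb("LTO-2"))  # 400.0  -> Ultrium 448c
print(compressed_gb("LTO-4"))  # 1600.0 -> SB1760c (1.6 TB)
print(compressed_gb("LTO-5"))  # 3000.0 -> SB3000c (3 TB)
```

The effective ratio achieved in practice depends entirely on how compressible the data is; 2:1 is the conventional marketing assumption.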

BladeSystem tape blades — Feature comparison

Comparing HP BladeSystem tape blades

Ultrium 448c, SB1760c, and SB3000c tape blade features are listed in the
preceding table. The main differences are in the recording technology (LTO-2, LTO-4,
or LTO-5), compressed capacity on a single data cartridge (400GB, 1.6TB, or
3.0TB, at a 2:1 data compression ratio), and sustained transfer rate (173GB/hr,
576GB/hr, or 1TB/hr, at the 2:1 data compression ratio).
The maximum configuration per enclosure takes into account the tape blades
connected to half-height server blades.
All tape blades provide integrated data protection for the enclosures—direct-attach
data protection for the adjacent server blade and network backup protection for all
data within the enclosure.
The HP tape blades are electrically connected to the adjacent server blades through
a signal midplane that functions as a PCIe bus to link adjacent slots of the enclosure.
Therefore, the tape blades will be seen exactly the same as if they were directly
connected (by way of SCSI, for example) to that server blade.

For information about the compatibility of these tape blades with BladeSystem server
blades, refer to the BladeSystem Compatibility section of the QuickSpecs for the
respective tape blade, or visit: http://www.hp.com/go/connect

HP Storage Library and Tape Tools

HP Library and Tape Tools (L&TT) is a free, robust diagnostic tool for all HP tape
storage and magneto-optical storage products. Targeted for a wide range of users, it
is ideal for customers who want to verify their installation, ensure product reliability,
perform their own diagnostics, and achieve faster resolution of tape device issues.
L&TT performs firmware upgrades, verification of device operation, failure analysis,
and a range of utility functions. Performance tools assist in troubleshooting
bottlenecks, and system configuration checks warn of common host issues. It also
provides seamless integration with HP support by generating and emailing test
results and support tickets.
HP Support requires the use of L&TT to troubleshoot most device issues, so it is
recommended that a support ticket is pulled and the device assessment test is run
before calling.
Operating systems currently supported include HP-UX, Windows, Linux, OpenVMS,
Solaris, and Mac OS X.

L&TT is available from a link on the CD that ships with the product or as a free
download from the HP website: http://www.hp.com/support/tapetools

Features and benefits of L&TT


 Free, easy-to-install, and easy-to-use diagnostic tool
 Downloaded and installed from HP.com
(http://www.hp.com/support/tapetools) in less than five minutes
 Intuitive user interface that requires no customer training
 Choice between local installation or running from a remote installation, CD or
memory stick

ly
 Reduced product downtime through preventative maintenance, fast issue
diagnosis with corrective actions

on
 Automated, smart firmware downloads, updates and notifications
 Comprehensive device analysis and troubleshooting tests
 First-level failure analysis of both the device and system without HP involvement

y
 Troubleshoot system performance issues through the use of analysis tools

er
 A direct link to the ITRC web-based troubleshooting content
 Seamless integration with HP hardware support organization
liv
 Ability to generate and email support tickets to the support center for faster
service and support
An all-inclusive source of device information for HP support center
de

 Drive health, life, usage, utilization, performance


 Media health, life, usage (Ultrium only)
TT

 Backup quality (Ultrium only)


 Integration with HP TapeAssure service (http://www.hp.com/go/tapeassure )
rT
Fo

4 –10 Rev. 12.31


HP BladeSysteem Storage and
d Expansion Blades

PCI Expansion Blades

HP offers an expansion blade to support cards that are not offered in a mezzanine
form factor.
The BladeSystem PCI Expansion Blade provides PCI card expansion slots to an
adjacent server blade. This blade expansion unit uses the mid-plane to pass standard
PCI signals between adjacent enclosure bays, so you can add off-the-shelf PCI-X or
PCIe cards.
The PCI Expansion Blade fits into a half-height device bay and is managed by the
partner server blade — by its operating system and drivers.
Customers need one PCI Expansion Blade for each server blade that requires PCI
card expansion. Any PCI card from third-party manufacturers that works in ProLiant
ML and ProLiant DL servers should work in this PCI Expansion Blade.

Note
HP does not offer any warranty or support for third-party PCI products.

HP PC
CI Expansion Blade — PCI ca
ard detailss

ly
on
y
er
liv
Each PCI expansion blade
b can hoold one or tw wo PCI-X card
ds (3.3V or u
universal) or
one or twwo PCIe cardds (x1, x4, orr x8). It cannnot hold one of each type
e of PCI card
d;
de

that is, on
ne PCI-X and
d one PCIe card at the sa ame time.
Installed PCI-X cards must use lesss than 25W per card. Insstalled PCIe cards must u
use
less than 75W per PC CIe slot, or a single PCIee card can usse up to 150W
W with a
special power
p connecctor enabledd on the PCI eexpansion b
blade.
TT

Customerrs typically in
nstall SSL or XML accelerrator cards, vvoice over IPP (VoIP) cardss,
special purpose
p telecommunicatio on cards, andd graphic accceleration ccards in the PPCI
expansioon blade.
rT
Fo

4 –12 Rev. 12
2.31
HP IO Accelerator

The HP IO Accelerator is part of a comprehensive solid state storage portfolio. This storage device is targeted for markets and applications requiring high transaction rates and real-time data access that will benefit from application performance enhancement.
Three models are available:
 HP 80GB IO Accelerator for BladeSystem c-Class
 HP 160GB IO Accelerator for BladeSystem c-Class
 HP 320GB IO Accelerator for BladeSystem c-Class
With the IO Accelerator, the amount of free RAM required by the driver depends on the size of the blocks used when writing to the drive. Smaller blocks require more RAM. Guidelines for 80 GB of storage are listed in the following table.

Average Block Size (bytes)    RAM usage (Megabytes)
8,192                         400
4,096                         800
2,048                         1,500
1,024                         2,900
512                           5,600
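The guideline figures shrink roughly in proportion to block size and, for a fixed block size, can be projected to the larger models. As an illustrative sketch only — the linear capacity scaling and the `estimate_ram_mb` helper are assumptions for illustration, not an HP-published formula:

```python
# Illustrative projection of IO Accelerator driver RAM needs from the
# 80 GB guideline table above. The linear capacity scaling is an
# assumption for illustration, not an HP-published formula.

# average block size (bytes) -> RAM usage (MB) for 80 GB of storage
RAM_MB_AT_80GB = {8192: 400, 4096: 800, 2048: 1500, 1024: 2900, 512: 5600}

def estimate_ram_mb(capacity_gb: int, avg_block_bytes: int) -> float:
    """Scale the 80 GB guideline linearly with drive capacity."""
    return RAM_MB_AT_80GB[avg_block_bytes] * capacity_gb / 80

# Example: a 320GB model written with 4,096-byte blocks
print(estimate_ram_mb(320, 4096))  # 3200.0
```

Under this assumption, halving the average block size roughly doubles the RAM the driver needs, which is why the table recommends larger blocks where the workload allows.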

Typical use cases include:
 Databases that historically were run in memory or across many disk spindles for performance reasons
 Seismic data processing
 Business intelligence and data mining
 Real-time financial data processing and verification
 Content caching for near-static data for file/web servers
 3D animation/rendering
 CAD/CAM
 Virtual Desktop Infrastructure (VDI) solutions
 Hypervisor running multiple virtual machines

Solid state technology can be implemented in various ways within a server. The two
most common implementations are as a solid state drive (SSD) (in a SATA or SAS
form factor) or as an I/O card attached to the PCI Express bus.
As an I/O card, the IO Accelerator is not a typical SSD; rather it is attached directly
to the server's PCI Express fabric to offer extremely low latency and high bandwidth.
The card is also designed to offer high I/O operations per second (IOPS) and nearly
symmetric read/write performance. The IO Accelerator uses a dedicated PCI Express
x4 link with nearly 800MB/s of usable bandwidth. Each mezzanine slot in an
enclosure offers at least that amount of bandwidth, so by combining cards, you can

ly
easily scale the storage to match an application's bandwidth needs.
The IO Accelerator's driver and firmware provide a block-storage interface to the

on
operating system that can easily be used in the place of legacy disk storage. The
storage can be used as a raw disk device, or it can be partitioned and formatted
with standard file systems. You can also combine multiple cards using RAID (up to
three cards with a full-height server blade) for increased reliability, capacity, or

y
performance in a single server blade.



Smart Array controller portfolio

The HP array controller portfolio consists of several models with differing SAS channels, memory sizes, and performance. All Smart Array products share a common set of configuration, management, and diagnostic tools, including Array Configuration Utility (ACU), Array Diagnostic Utility (ADU), and HP SIM. These software tools reduce the cost of training for each successive generation of product and take much of the guesswork out of troubleshooting field problems. These tools lower the total cost of ownership by reducing the training and technical expertise necessary to install and maintain HP server storage.
The graphic outlines the enhancements of the Smart Array controllers shipping in the ProLiant Gen8 servers.

More information about Smart Array controllers is available at:
http://h18006.www1.hp.com/products/servers/proliantstorage/arraycontrollers/index.html
Standard features of Smart Array controllers

Several features that are common to all Smart Array controllers give them their reputation for reliability:
 Data compatibility — Complete data compatibility with previous-generation Smart Array controllers allows for easy data migration from server to server. The controller can be upgraded any time better performance, greater capacity, or increased availability is needed. Every successive generation of Smart Array controllers understands the data format of other Smart Array controllers.
 Consistent configuration and management tools — Smart Array products use a standard set of configuration and management tools and utility software that minimize training requirements and simplify maintenance tasks.
 Universal hard drive standards — Form-factor compatibility across many enterprise platforms enables easy upgrades, data migration between systems, and management of spare drives.
 Online spares — You can configure spare drives before a drive failure occurs. If a drive fails, recovery begins with an online spare and data is reconstructed automatically.
 Recovery ROM — Recovery ROM provides a unique redundancy feature that protects from ROM image corruption. A new version of firmware can be flashed to the ROM while the controller maintains the last known working version of the firmware. If the firmware becomes corrupt, the controller reverts back to the previous version of firmware and continues operating. This reduces the risk of flashing firmware to the controller.

Note
Although common in most new controllers, Recovery ROM is not a standard feature of all Smart Array controllers.

 Pre-failure alerts and a pre-failure warranty — Failing components can be detected and replaced before a fault occurs.
In addition, there is software consistency among all Smart Array family products:
 Array Configuration Utility (ACU)
 Option ROM Configuration for Arrays (ORCA)
 Array Diagnostic Utility (ADU)
 HP SIM
 HP Intelligent Provisioning


I/O bandwidths in Smart Array controllers

The Smart Storage family of controllers and drives for ProLiant Gen8 servers provides higher I/O bandwidths with PCIe 3.0. This provides maximum compute and I/O performance for dense high-performance computing environments.
Some examples of improved I/O bandwidth:
 The ProLiant SL230 server introduces the higher-performance Intel Socket-R while offering the same density as the SL140 (8 nodes per 4U chassis). Flexibility in options includes single GPU and I/O accelerator support.
 The ProLiant ML350p server increases I/O expansion by 50% and I/O capacity by 200% with PCIe Gen3, providing more I/O bandwidth to the processor and resulting in lower latency (Gen8 = 40 lanes/processor, G7 = 24 lanes/processor).
 The ProLiant DL380p server has 200% of the I/O capacity of the previous generation with PCIe Gen3, providing more I/O bandwidth to the processor and resulting in lower latency (Gen8 = 40 lanes/processor, G7 = 24 lanes/processor).
 HP 331FLR and 331T adapters feature the next generation of Ethernet integration, which reduces power requirements for four ports of 1Gb Ethernet and optimizes I/O slot utilization.
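The lane counts quoted for Gen8 and G7 translate directly into raw bandwidth. As a rough sketch — the per-lane throughput values below are approximate figures from the public PCIe specifications (after encoding overhead), not HP-published numbers:

```python
# Back-of-the-envelope PCIe bandwidth per processor for the lane counts
# quoted above. Per-lane throughput values are approximate public PCIe
# figures (after encoding overhead), not HP-published numbers.

PCIE2_MB_PER_LANE = 500   # 5 GT/s with 8b/10b encoding
PCIE3_MB_PER_LANE = 985   # 8 GT/s with 128b/130b encoding

g7_mb_s = 24 * PCIE2_MB_PER_LANE    # G7: 24 lanes per processor
gen8_mb_s = 40 * PCIE3_MB_PER_LANE  # Gen8: 40 lanes per processor

print(g7_mb_s, gen8_mb_s)                  # 12000 39400
print(f"{gen8_mb_s / g7_mb_s:.2f}x gain")  # 3.28x gain
```

The gain comes from two factors compounding: more lanes per processor and a faster, more efficiently encoded signaling rate per lane.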
liv
Other features include:
 Choice of FlexLOM adapter tailored to meet the system workload
 Easily update firmware using HP Service Pack for ProLiant (SPP)
 Optional single GPU or 160GB SLC PCIe IO Accelerator configurations
 Optional dual front hot-plug hard drive configuration
 I/O virtualization support for VMware NetQueue and Microsoft VMQ on 331FLR, 331T, 530FLR, and 530M adapters

Note
This is important because it meets the performance demands of consolidated virtual
workloads.

Smart Array controller classification

To simplify the Smart Array controller product line, HP divides it into three general categories:
 Integrated controllers — Integrated Smart Array controllers are intelligent array controllers for entry-level, hardware-based fault tolerance. These low-cost controllers provide an economical alternative to software-based RAID.
 Entry-level controllers — Entry-level controllers are usually less expensive than high-performance controllers and have smaller memory sizes. If write cache is available, it is provided as an upgrade as opposed to shipping standard with the controller.
 High-performance controllers — These Smart Array controllers generally have write cache as a standard feature, and it is often upgradeable in this category of controllers. This group also supports RAID 60 and RAID 6 with the optional SAAP2.

HP Smart Array P822 controller

The HP Smart Array P822 controller supported on ProLiant Gen8 servers can support two times more total drives internally and externally than previous generations, for up to 227 drives (108 drives are supported with the Smart Array P812 controller).
Additional features include:
 PCI bus — Full-height, half-length card, PCIe 3.0 x8
 Memory bus speed — DDR3-1333 MHz, 72-bit with 2 GB FBWC
 SAS/SATA connectivity — Two x4 ports mini-SAS internal with expander support; four x4 ports external
 Maximum drives — Up to 227
 Management software support — ACU, HP System Management Homepage (SMH), HP SIM, ORCA, SPP Storage
 RAID support — RAID 0, 1, 10, 5, 6, 50, and 60
 SAAP — SAAP 2.0 is included standard


HP Smart Array P220 and HP Smart Array P222 controllers

The Smart Array P220 and P222 controllers are entry-level 6 Gb/s array controllers that provide improved performance, a greater attach rate, and lower maintenance. The P222 controller is ideal for RAID 0/1, 10, 5, 50, 6, and 60. Additional advanced features are upgradable by using SAAP2. The P222 controller delivers increased server uptime by providing advanced storage functionality, including:
 Online RAID level migration
 FBWC
 Global online spare
 Pre-failure warning

HP Smart Array P420 and P420i controllers

The HP Smart Array P420 and P420i controllers are enterprise-class 6 Gb/s controllers that provide improved performance, internal scalability, and lower maintenance. The P420 controller is ideal for RAID 0/1, 1+0, 5, 50, 6, and 60. Additional advanced features are upgradable by SAAP2. The P420 delivers increased server uptime by providing advanced storage functionality, including online RAID level migration with FBWC, global online spare, and pre-failure warning.
Smart Array P420 and P420i controllers:
 Support up to 27 drives depending on the server implementation
 Upgrade seamlessly from past generations and upgrade to next-generation HP high-performance and high-capacity SAS Smart Array controllers
 Deliver high performance and data bandwidth with 6Gb/s SAS technology; retain full compatibility with 3Gb/s SATA technology
 Feature x8 PCI Express Gen 3 host interface technology for high performance and data bandwidth up to 8.5 GB/s maximum bandwidth
 Can be upgraded from 40-bit 512MB cache to 72-bit 1GB FBWC or 72-bit 2GB FBWC
 Enable array expansion, logical drive extension, RAID migration, and stripe size migration with the addition of the flash-backed cache upgrade

Note
A minimum of 512 MB cache is required to enable RAID 5 and 5+0 support with the Smart Array P420i controller.


Learning check
1. What enables the server blades to partner with storage and expansion blades within the HP BladeSystem enclosures?
.................................................................................................................
.................................................................................................................
.................................................................................................................
.................................................................................................................
2. The HP PCI Expansion Blade can partner with one full-height server blade and two half-height server blades.
 True
 False
3. You can connect a storage blade and a tape blade to a single, full-height server blade.
 True
 False
4. The SB40c storage blade requires a dedicated Smart Array controller. This controller is:
a. Embedded on the system board of the partner server blade
b. Embedded in the SB40c storage blade
c. Embedded on the mezzanine card installed in the partner server blade
d. Embedded in the signal midplane
5. Which combination of PCI cards is not allowed in a PCI Expansion Blade?
a. Two PCI-X cards
b. One PCI-X card
c. One or two PCIe cards
d. One PCI-X card and one PCIe card


Ethernet Connectivity Options
for HP BladeSystem
Module 5

Objectives
After completing this module, you should be able to describe the following HP BladeSystem Ethernet interconnect modules for HP ProLiant server blades:
 HP 6120XG Ethernet Blade Switch
 HP 6120G/XG Blade Switch
 Cisco Catalyst Blade Switch 3020
 Cisco Catalyst Blade Switch 3120
 HP GbE2c Layer 2/3 Ethernet Blade Switch
 HP 1:10Gb Ethernet BL-c Switch
 HP 1Gb Ethernet Pass-Thru Module
 HP 10GbE Pass-Thru Module

Available Ethernet interconnect modules

The Ethernet interconnect modules available for the BladeSystem enclosures are:
 HP 6120XG Ethernet Blade Switch (516733-B21) — Designed for the BladeSystem enclosure, the HP 6120XG Blade Switch provides sixteen 10Gb downlinks and eight 10G enhanced small form-factor pluggable transceiver (SFP+) uplinks (including a dual-personality CX4 and SFP+ 10G uplink and two 10Gb cross-connects).
 HP 6120G/XG Blade Switch (498358-B21) — Designed for the BladeSystem enclosure, the HP 6120G/XG Blade Switch provides sixteen 1Gb downlinks, four 1Gb copper uplinks, and two 1Gb SFP uplinks, along with three 10Gb uplinks and a single 10Gb cross-connect.
 Cisco Catalyst Blade Switch 3020 (410916-B21) — Flexible enough to fit the needs of a variety of customers, the Cisco Catalyst Blade Switch 3020 for BladeSystem provides an integrated switching platform with Cisco resiliency, advanced security, enhanced manageability, and reduced cabling requirements.
 Cisco Catalyst Blade Switch 3120 (451438-B21/451439-B21) — As the next generation in switching technology, the Cisco Catalyst Blade Switch 3120 Series introduces a switch stacking technology that treats individual physical switches within a rack as one logical switch. This innovation simplifies switch operations and management. The Cisco Catalyst Blade Switch 3120 Series is supported by both HP ProLiant and HP Integrity server blades.
 HP GbE2c Layer 2/3 Ethernet Blade Switch (438030-B21) — This HP switch provides Layer 2 switching and Layer 3 routing features. It has 16 internal downlinks, 5 uplinks, and 2 internal cross-connects. Four of the five uplinks can be either copper or fiber using optional SX SFP fiber modules. This switch is supported by ProLiant server blades only.
 HP 1:10Gb Ethernet BL-c Switch (438031-B21) — This easy-to-manage interconnect provides sixteen 1Gb downlinks and four 1Gb uplinks, along with three 10Gb uplinks and a single 10Gb cross-connect. This switch is supported by ProLiant server blades only.
 HP 1Gb Ethernet Pass-Thru Module (406740-B21) — This 16-port Ethernet interconnect provides 1:1 connectivity between the server and the network. A pair of pass-thru modules offers a redundant connection from the servers to the external switches. It is supported by both ProLiant and Integrity server blades.
 HP 10GbE Pass-Thru Module (538113-B21) — The HP 10GbE Pass-Thru Module is designed for BladeSystem customers requiring a nonblocking, one-to-one connection between each server and the network. This pass-thru module provides 16 uplink ports that accept both SFP and SFP+ connectors.
Ethernet Connectivity Options for HP BladeSystem

HP 6120XG Ethernet Blade Switch

Designed for the BladeSystem enclosure, the HP 6120XG Blade Switch provides sixteen 10Gb downlinks and eight 10G SFP+ uplinks (including a dual-personality CX4 and SFP+ 10G uplink, and two 10Gb cross-connects). A robust set of industry-standard Layer 2 switching functions, quality of service (QoS) metering, security, and high-availability features round out this extremely capable switch offering.
The 6120XG switch is suited for data centers migrating to next-generation 10Gb high-performance architectures. With the support of dual speeds (1Gb and 10Gb) on the uplinks and Converged Enhanced Ethernet (CEE) hardware capability, the 6120XG provides true future-proofing and investment protection.
The 6120XG blade switch brings consistency and interoperability across existing network investments to help reduce the complexity of network management through resilient core-to-edge connectivity and automated provisioning technologies. With a variety of connection interfaces, the 6120XG switch offers excellent investment protection, flexibility, and scalability, as well as ease of deployment and reduced operational expense.
The 6120XG uses a nonblocking architecture, and it has wire speed performance on all downlinks and all uplinks.

HP 6120XG Ethernet Blade Switch — Front panel

The following table identifies the front panel components of the HP 6120XG Blade Switch.

     Description
1    Port 17 (10GBASE-CX4)*
2    Console port (USB 2.0 mini-AB connector)
3    Clear button
4    Port 17 SFP+ (10GbE) slot*†
5    Port 18 SFP+ (10GbE) slot†
6    Port 19 SFP+ (10GbE) slot†
7    Port 20 SFP+ (10GbE) slot†
8    Port 21 SFP+ (10GbE) slot†
9    Port 22 SFP+ (10GbE) slot†
10   Port 23 SFP+ (10GbE) slot*†
11   Port 24 SFP+ (10GbE) slot*†
12   Reset button (recessed)
* Dual-personality port
† Supports 10GBASE-SR SFP+, 10GBASE-LR SFP+, 10GBASE-LRM SFP+, 1000BASE-T SFP, 1000BASE-SX SFP, and 1000BASE-LX SFP optical transceiver modules

Port 17 consists of a CX4 port multiplexed with an SFP+ port. Only one port can be active. The SFP+ port takes precedence—if it contains a module, it is the active port and the CX4 port is inactive.
Ports 23 and 24 are each multiplexed with interswitch link ports on the blade switch backplane. Either the SFP+ port on the front panel or the backplane port can be active, but both cannot be active at the same time. The SFP+ port on the front panel takes precedence—if it contains a module, it is the active port and its corresponding backplane port is inactive.
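The precedence behavior of these multiplexed ports reduces to a small decision rule. The sketch below is an illustrative model only (the `active_side` helper is mine, not switch firmware):

```python
# Toy model of the 6120XG dual-personality port rule described above:
# the SFP+ slot takes precedence, so inserting a module makes it the
# active port and deactivates the paired CX4 (port 17) or backplane
# (ports 23/24) port. Illustrative only, not HP firmware logic.

def active_side(sfp_module_present: bool) -> str:
    """Return which side of a dual-personality port pair is active."""
    return "SFP+" if sfp_module_present else "CX4/backplane"

print(active_side(True))   # module inserted -> SFP+ side is active
print(active_side(False))  # empty slot -> CX4/backplane side is active
```

The practical consequence is that simply inserting an SFP+ module changes the active path; there is no configuration command to override the precedence.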

HP 6120G/XG Blade Switch

Designed for the BladeSystem enclosure, the HP 6120G/XG Blade Switch provides sixteen 1Gb downlinks, four 1Gb copper uplinks, and two 1Gb SFP uplinks, along with three 10Gb uplinks and a single 10Gb cross-connect. It also includes a robust set of industry-standard Layer 2 switching functions, QoS metering, security, and high-availability features.
The 6120G/XG blade switch is ideal for data centers in transition, where a mix of 1Gb and 10Gb network connections is required.
The 6120G/XG blade switch provides consistency and interoperability across existing network investments to help reduce the complexity of network management through resilient core-to-edge connectivity and automated provisioning technologies. With a variety of connection interfaces, the 6120G/XG blade switch offers excellent investment protection, flexibility, and scalability, as well as ease of deployment and reduced operational expense.
The 6120G/XG blade switch uses a nonblocking architecture, and it has wire speed performance on all downlinks and all uplinks.

HP 6120G/XG Ethernet Blade Switch — Front panel

The following table identifies the front panel components of the HP 6120G/XG Blade Switch.

    Description
1   Port C1 (10GBASE-CX4)
2   Port X1 XFP (10GbE) slot*
3   Port X2 XFP (10GbE) slot*
4   Port S1 SFP (1GbE) slot**
5   Port S2 SFP (1GbE) slot**
6   Console port (USB 2.0 mini-AB connector)
7   Clear button
8   Ports 1–4 (10/100/1000BASE-T)
9   Reset button (recessed)
* Supports 10GBASE-SR XFP and 10GBASE-LR XFP pluggable optical transceiver modules
** Supports 1000BASE-T SFP, 1000BASE-SX SFP, and 1000BASE-LX SFP optical transceiver modules

Managing HP blade switches

Menu interface view

Management interfaces enable you to reconfigure a blade switch and to monitor switch status and performance. HP offers the following interfaces for its blade switches:
 Menu interface — A menu-driven interface offering a subset of switch commands through the built-in VT100/ANSI console.
 Command line interface (CLI) — An interface offering the full set of switch commands through the VT100/ANSI console built into the switch.
 Web browser interface — A switch interface offering status information and a subset of switch commands through a standard web browser such as Netscape Navigator or Microsoft Internet Explorer.
 ProCurve Manager (PCM) — A Windows-based network management solution included in-box with all manageable ProCurve devices. Features include automatic device discovery, network status summary, topology and mapping, and device management.
 ProCurve Manager Plus (PCM+) — A complete Windows-based network management solution that provides both the basic features offered with PCM as well as more advanced management features such as in-depth traffic analysis, group and policy management, configuration management, device software updates, and advanced virtual LAN (VLAN) management.

Cisco Catalyst Blade Switch 3020 features

The Cisco Catalyst Blade Switch 3020 is an integrated Layer 2+ switch that uses existing network investments to help reduce operational expenses. The key features of the Cisco Catalyst Blade Switch 3020 are:
 Enhanced performance — Wire speed switching on 16 internal 1Gb ports and on 8 external 10/100/1000BASE-T ports
 Four external 10/100/1000 SFP-based ports, which can be configured instead of the 10/100/1000BASE-T ports, to support Fiber SX SFP modules from Cisco Systems
 One external console port
 One Fast Ethernet connection to the BladeSystem Onboard Administrator
 The Fa0 (port 0) is dedicated to OA management. No data is routed to the Fa0 port.

Note
Ports 23 and 24 are configured by default as external-facing ports, but they can be configured to provide an internal crossover connection to an associated Cisco Catalyst Blade Switch 3020. If the cross-connects are enabled, the external ports 23 and 24 are automatically disabled.

 Improved manageability — Support for CiscoWorks software, which provides multilayer feature configurations such as routing protocols, Access Control Lists (ACLs), and QoS parameters
 Support for the Internetwork Operating System (IOS) CLI, which is a common user interface and command set included with all Cisco routers and Cisco Catalyst desktop switches
 Support for an embedded Remote Monitoring (RMON) software agent that provides enhanced traffic management, monitoring, and analysis
 Enhanced security — Compatible with Cisco Secure Access Control Server (ACS), which enables users to access their security profiles regardless of where they connect on the network
 Support for VLANs
 Support for Cisco Identity-Based Networking Services (IBNS), which prevent unauthorized network access
 ACLs, which provide protection against denial-of-service and other attacks

Catalyst Blade Switch 3020 front bezel

The switch module has 18 LEDs. You can use the switch module LEDs to monitor switch module activity and performance. Graphical representations of the LEDs are visible in the device manager.
The 18 LEDs include:
 Twelve LEDs for uplink port status
 Four switch system status LEDs
 Two HP specific LEDs to indicate health and UID status
System status LED indicators are as follows:
 Off — The system is not powered on.
 Blinking green — The power-on self-test (POST) is in progress.
 Solid green — The system is operating normally.
 Amber — The system is receiving power but is not functioning properly.
The green status (STAT), duplex (DLX), and speed (SPD) LEDs are used with the Mode button to select the display mode for the port LEDs. You can press the Mode button to cycle through the three display modes. After 30 seconds passes without the Mode button being pressed, status information displays.

Cisco Catalyst Blade Switch 3120 features

The Cisco Catalyst Blade Switch 3120 series includes the Cisco Catalyst Blade Switch 3120G and Cisco Catalyst Blade Switch 3120X models. The Catalyst Blade Switch 3120 series introduces the Cisco stacking technology that eliminates the need to manage multiple switches per rack. Key features are:
 Cisco stacking technology
 Combine up to nine switches into a single logical switch
 Use a single IP address and routing domain
 Enable 64Gb stack bandwidth
 Mix and match any combination of 3120 series switches
 Enhanced performance
 Enable wire speed switching on all sixteen 1Gb downlinks
 Enable wire speed switching on all 1Gb uplinks
 Enable wire speed switching on both 10Gb uplinks (3120X only)
 Use the same IOS interface, Management Information Bases (MIBs), and management tools as the rest of the Cisco Catalyst series
 Improved manageability
 Manage multiple switches as a single logical switch with a single IP address and a single Spanning Tree Protocol (STP) node
 Support CiscoWorks software, which provides multilayer feature configurations such as routing protocols, ACLs, and QoS parameters
 Support the Embedded Events Manager (EEM) and Generic On-line Diagnostics (GOLD)
 Support the Cisco Network Assistant

Catalyst Blade Switch 3120 front bezel

The diagram shows the front bezel components of the Cisco Catalyst Blade Switch 3120. The switch module has 18 LEDs on the face plate:
 Twelve LEDs for uplink port status
 Four switch status LEDs
 Two HP specific LEDs to indicate health and UID status

Note
The preceding diagram displays 13 LEDs. The rest of the LEDs are visible in the device manager.

You can use the switch module LEDs to monitor switch module activity and performance. Graphical representations of the LEDs are visible in the device manager.

HP GbE2c Layer 2/3 Ethernet Blade Switch

The HP GbE2c Layer 2/3 Ethernet Blade Switch provides Layer 2 switching plus the additional capabilities of Layer 3 routing.
Using Layer 3 routing, inter-VLAN routing becomes more scalable and more efficient than equivalent Layer 2 networks that rely on STP alone. IP forwarding enables traffic to be forwarded between VLANs without an external router or Layer 3 switch. This reduces traffic in the core network by making Layer 3 routing decisions within the BladeSystem enclosure. Layer 3 routing also reduces the number of broadcast domains, increasing network performance and efficiency.
The Virtual Router Redundancy Protocol (VRRP) maximizes availability in complex network environments by allowing multiple switches to process traffic in an active-active configuration. All switches in a VRRP group can process traffic simultaneously, ensuring maximum performance and fast, seamless failover.
Additional features of Layer 3 routing include:
 128 IP interfaces
 4096 Address Resolution Protocol (ARP) entries
 Global default route
 Static routing support with 128 routing table entries
 Dynamic routing support with up to 4,000 entries in a routing table
 Routing Information Protocol (RIP) and Open Shortest Path First (OSPF)
The GbE2c Layer 2/3 switch provides 16 internal downlinks and two internal cross-connects in a single low-cost blade switch. It features five uplinks, four of which can be copper or fiber using optional SFP fiber modules.

Note
The HP GbE2c Layer 2/3 Fiber SFP Option Kit (440627-B21) contains two SX SFP fiber modules. Only SFP modules with this part number operate in the Layer 2/3 switch.

GbE2c Layer 2/3 Ethernet Blade Switch front bezel

The front bezel of the GbE2c Layer 2/3 Ethernet Blade Switch features two LEDs (health and UID), one serial port, and five Ethernet ports.
The health LED in the GbE2c Layer 2/3 Ethernet Blade Switch can be in one of three states:
 Off — Not powered up
 Green — Powered up and all ports match
 Amber — A problem has occurred, such as a port mismatch
The five front panel Ethernet ports have two LEDs (speed and link/activity) per port.

HP 1:10Gb Ethernet BL-c Switch

The HP 1:10Gb Ethernet BL-c Switch is designed specifically for the data center transitioning from 1Gb to 10Gb. It enables customers to use an existing 10/100/1000Mb infrastructure to move to 10Gb as the need develops. Designed for the BladeSystem enclosure, the 1:10Gb Ethernet BL-c Switch provides more than 34Gb of uplink bandwidth to handle the most demanding applications. It delivers sixteen 1Gb downlinks, four 1Gb uplinks along with three 10Gb uplinks (CX4, XFP), and a 10Gb cross-connect in a single-bay form factor. Performance features include low latency, wire-speed performance for Layer 2 and Layer 3 packets, and low power consumption.
	The XFP (10Gb SFP) Multi-Source Agreement (MSA) is a specification for a pluggable, hot-swappable optical interface for 10Gb SONET/SDH, Fibre Channel, Gigabit Ethernet, and other applications.
	10GBASE-CX4, also known by its working group name of 802.3ak, transmits over four lanes in each direction, over copper cabling similar to the variety used in InfiniBand technology. It is designed to work up to a distance of 15 m (49 ft.). This technology has the lowest cost per port of all 10Gb interconnects, but at the expense of range. Each device capable of supporting a 10GbE module uses some MSA to provide the actual module connectivity within the device to the outside connector.
Additional features include:
	Industry-standard Ethernet Layer 2 switching and Layer 3 routing functions
	QoS
	Security
	High-availability features
The 1:10Gb Ethernet BL-c Switch reduces cabling and power and cooling requirements compared to stand-alone switches.
It is compatible with all server blades in a BladeSystem c7000 enclosure.


1:10Gb Ethernet BL-c Switch front bezel

The front bezel of the 1:10Gb Ethernet BL-c Switch provides the following two LEDs per port for the front panel Ethernet ports:
	RJ-45 port speed LED
	RJ-45 and 10Gb link/activity LED



HP 1Gb Ethernet Pass-Thru Module

The HP 1Gb Ethernet Pass-Thru Module for BladeSystem is a 16-port Ethernet interconnect that provides a 1:1 nonswitched, nonblocking path between the server and the network. This connectivity is especially useful when nine or more ports are used in an enclosure; however, the actual performance depends on end-to-end connectivity.
The 1Gb Ethernet Pass-Thru Module delivers 16 internal 1Gb downlinks and 16 external 1Gb RJ-45 copper uplinks. Designed to fit into a single I/O bay of the BladeSystem enclosure, the 1Gb Ethernet Pass-Thru Module should be installed in pairs to provide redundant uplink paths.

Note
The 1Gb Ethernet Pass-Thru Module (PN: 406740-B21) ships as a single unit and should be ordered in quantities of two. Cables are not included.

This Ethernet pass-thru module is designed for customers who want an unmanaged direct connection between each server blade within the enclosure and an external network device such as a switch, router, or hub. There is no need for extra LAN management in the enclosure, and there is a full gigabit pipe between the server and the upstream LAN port. However, the ports do not auto-negotiate speed; the speed on each port is fixed at 1Gb, and all of them must be connected to a 1Gb switch.
Because of the additional cost of cabling and extra ports on director-class switches, the 1Gb Ethernet Pass-Thru Module is an expensive way for a customer to connect to networks. It is targeted toward customers limited to direct 1:1 connections between the server and networks. Pass-thru modules also offer direct pass-through for customers who do not want embedded switching or an extra layer of LAN managed switches; however, HP Virtual Connect is a more cost-effective alternative.

Important
Pass-thru approaches are simple, but adding many cables could lead to reliability problems and risks of human error. No Virtual Connect support is available for pass-thru modules.
modules.


HP 10GbE Pass-Thru Module

The HP 10GbE Pass-Thru Module is designed for BladeSystem and HP Integrity Superdome 2 customers requiring a nonblocking, one-to-one connection between each server and the network. This pass-thru module provides 16 uplink ports that accept both SFP and SFP+ connectors.
The HP 10GbE Pass-Thru Module can support 1Gb and 10Gb connections on a port-by-port basis. Optical as well as Direct Attach Copper (DAC) cables are also supported. Both standard Ethernet as well as Converged Enhanced Ethernet (CEE) traffic to an FCoE-capable switch is possible when using the appropriate NIC or adapter module. This module supports all NICs and mezzanine adapters, including FlexNICs.
This module contains the following ports:
	Sixteen internal 1Gb/10Gb downlinks
	Sixteen 1Gb/10Gb uplinks supporting SFP and SFP+
	Mini-USB configuration and management port



HP 10GbE Pass-Thru Module components

Front view of an HP 10GbE Pass-Thru Module

UID LED
	Blue light on—The pass-thru module is activated.
	Blue light off—The pass-thru module is deactivated.
Health LED
	Off—The pass-thru module is powered off.
	Green—The pass-thru module is powered up, and all ports match.
	Amber—An issue exists, such as a port mismatch. For more information, see the HP BladeSystem Enclosure Setup and Installation Guide.
Ethernet port
	Green—Link is 10G.
	Flashing green—10G link activity is detected.
	Amber—Link is 1G.
	Flashing amber—1G link activity is detected.
	Flashing alternately green and amber—A link mismatch condition exists. For more information, see the HP BladeSystem Enclosure Setup and Installation Guide.
Reset button
Mini-USB RS232 management serial port
Ports 1 through 16
	SFP+ ports to support SFP and SFP+ transceiver modules and Direct Attach Cables (DACs)

Learning check
1. List the available interconnect modules supported for ProLiant server blades in a c7000 enclosure.
.................................................................................................................
.................................................................................................................
.................................................................................................................
2. Match each interconnect with its description.
a. Cisco Catalyst Blade Switch 3020	............ A 16-port Ethernet interconnect that provides 1:1 connectivity between the server and an external switch port
b. GbE2c Layer 2/3 Ethernet Blade Switch	............ An integrated Layer 2+ switch that features 16 internally facing ports
c. 1:10Gb Ethernet Switch	............ A high-performance, affordably priced, low-latency switch with 20 ports (16 downlinks and 4 uplinks)
d. 1Gb Ethernet Pass-Thru Module	............ A switch with a full set of Layer 3 routing that uses optional SX SFP fiber modules
e. Cisco Catalyst Blade Switch 3120	............ A switch that provides switch stacking technology, which combines up to nine switches into a single logical switch

3. Name the key differences between the Cisco Catalyst 3120G and 3120X.
.................................................................................................................
.................................................................................................................



4. How many VLAN IDs does the Cisco Catalyst Blade Switch 3120 support?
a. 1,024
b. 1,005
c. 1,000
d. 1,010
5. Which Ethernet module features five uplinks, four of which can be copper or fiber using optional SFP fiber modules?
a. Virtual Connect Flex-10 10Gb module
b. 1/10Gb VC-Enet module
c. GbE2c Layer 2/3 Ethernet Blade Switch
d. 10GbE Pass-Thru Module



Storage Connectivity Options for HP BladeSystems
Module 6

Objectives
After completing this module, you should be able to:
	Describe the HP BladeSystem Fibre Channel interconnect modules available for HP BladeSystems
	Cisco MDS 9124e Fabric Switch
	Brocade 8Gb SAN Switch
	Describe the Serial-Attached SCSI (SAS) switches available for BladeSystems
	Identify 4X InfiniBand Switch Modules available for BladeSystems
	Differentiate the Fibre Channel mezzanine card options available for BladeSystems

Fibre Channel interconnect options
The BladeSystem architecture offers several choices for connecting server blades to Fibre Channel networks. These Fibre Channel interconnect modules are currently available for the BladeSystem:
	Cisco MDS 9124e Fabric Switch — A Fibre Channel switch that supports link speeds up to 4Gb/s. The Cisco MDS 9124e Fabric Switch can operate in a fabric containing multiple switches or as the only switch in a fabric.
	Brocade 8Gb SAN Switch — An easy-to-manage embedded Fibre Channel switch with 8Gb/s performance. The Brocade 8Gb SAN Switch hot-plugs into the back of the BladeSystem enclosure. The integrated design frees up rack space, enables shared power and cooling, and reduces cabling and the number of small form factor pluggable (SFP) transceivers. The Brocade 8Gb SAN Switch provides enhanced trunking support and new features in the Power Pack+ option.

Cisco MDS 9124e Fabric Switch for BladeSystem

The Cisco MDS 9124e Fabric Switch for BladeSystem features 16 logical internal ports (numbered 1 through 16) that connect sequentially to server bays 1 through 16 through the enclosure midplane. Server bay 1 is connected to switch port 1, server bay 2 is connected to switch port 2, and so forth. The external ports are labeled EXT1 through EXT4 (left bank) and EXT5 through EXT8 (right bank).
Up to six zero-footprint switches are supported per enclosure. The hot-swappable switch supports redundant, dual-port Fibre Channel mezzanine cards.



The Cisco MDS 9124e Fabric Switch is available in two port-count options as well as with the option of an upgrade license for lower cost of entry:

Important
! These are the same physical switch; available port options are dependent on the license purchased.

	Cisco MDS 9124e 12-port Fabric Switch (PN: AG641A)
	Eight internal 4Gb ports
	Four external 4Gb ports
	Two preinstalled short wavelength small form-factor pluggable (SFP) modules
	Licensing for port activation in eight-port increments (the first eight ports are licensed by default)
	Cisco MDS 9124e 24-port Fabric Switch (PN: AG642A)
	Sixteen internal 4Gb ports
	Eight external 4Gb ports
	Four preinstalled short wavelength SFPs
	Licensing available for port activation in eight-port increments (the first eight ports are licensed by default)
	Cisco MDS 9124e Fabric Switch 12-port Upgrade License to Use (LTU) (PN: T5169A)
	Enables eight additional internal ports and four fabric-facing ports on the 12-port model, for a total of 16 internal ports and eight fabric-facing ports
	Does not include SFPs

Important
! The Cisco MDS 9124e Fabric Switch for HP c-Class BladeSystem is compatible with the BladeSystem c7000 enclosure only.



Cisco MDS 9124e Fabric Switch features and components

Features of the Cisco MDS 9124e Fabric Switch include:
	Auto-sensing link speeds (Gb/s) — 4/2/1
	Fabric support — Full fabric
	Aggregate bandwidth (Gb/s, end-to-end) — 192
	PortChannel (Gb/s) — 32Gb/bundle

Note
PortChannel includes up to eight ports in one logical bundle.

	Universal ports with self discovery
	Nondisruptive software upgrades
	SAN-OS level 3.1(2) or later
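The bandwidth figures in this list follow from simple per-port arithmetic. As a quick sanity check (an illustrative sketch; the constants come from the feature list above, and the aggregate figure assumes full-duplex counting across all 24 ports):

```python
# Sanity-check the Cisco MDS 9124e bandwidth figures from the feature list.
PORT_SPEED_GBPS = 4    # maximum auto-sensed link speed
INTERNAL_PORTS = 16    # server-facing ports
EXTERNAL_PORTS = 8     # fabric-facing ports

# PortChannel: up to eight ports in one logical bundle.
portchannel_gbps = 8 * PORT_SPEED_GBPS

# Aggregate end-to-end bandwidth: all 24 ports at 4Gb/s, counted in
# both directions (full duplex), which yields the quoted 192Gb/s.
aggregate_gbps = (INTERNAL_PORTS + EXTERNAL_PORTS) * PORT_SPEED_GBPS * 2

print(portchannel_gbps)  # 32
print(aggregate_gbps)    # 192
```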

Standard and optional software

The standard software components are:
	SAN-OS — Delivers advanced storage networking capabilities.
	Cisco Fabric Manager — Provides integrated, comprehensive management of larger storage area network (SAN) environments, enabling you to perform vital tasks such as topology discovery, fabric configuration and verification, provisioning, monitoring, and fault resolution.
Optional software components are:
	Cisco Fabric Manager Server (FMS) Package — Provides historical performance monitoring for network traffic hot-spot analysis, centralized management services, and advanced application integration.
	Cisco Enterprise Package — Contains a set of advanced traffic engineering and advanced security features recommended for all enterprise SANs. The following additional features are bundled together in the Cisco MDS 9000 Enterprise package:
	Quality of Service (QoS) levels
	Switch/switch and host/switch authentication
	Host to logical unit number (LUN) zoning
	Read-only zoning
	Individual port security
	Virtual SAN (VSAN)-based access control


Cisco MDS 9124e Fabric Switch layout

The preceding graphic shows the Cisco MDS 9124e Fabric Switch layout and components.

Dynamic Ports on Demand

Static mapping configuration

Static mapping describes the relationship between the device bays and the internal switch ports. Specific device bays must be populated to match a corresponding active switch port. This configuration significantly enhances usability for low-touch server customers.
With Dynamic Ports on Demand (DPOD), you can map any device bay to an active port. Ports are allocated on a first-come, first-served basis to any location, including external ports. The number of pre-reserved ports decreases the number of ports from the pool of ports. Removing a server or external port (except a pre-reserved port) expands the available DPOD pool.
For example, if you are licensed for eight internal-facing ports, you can put any combination of eight full-height and half-height server blades with Fibre Channel mezzanine cards in any device bay. If you remove one of the blades or mezzanine cards, you have freed one port on the switch and made it available to another server or Fibre Channel mezzanine card. The switch senses a device trying to communicate to an internal port and, if ports are available in the license pool, a port will be activated.
On the Cisco Fabric Switch for BladeSystem, any of the eight internal ports and external ports ext1 through ext4 are licensed by default. A single on-demand port activation license is required to use the remaining eight internal and four external ports.
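The DPOD behavior described above can be modeled as a simple first-come, first-served license pool. The sketch below is illustrative only (the class and method names are invented for the example), not actual switch firmware logic:

```python
class DpodPool:
    """Toy model of Dynamic Ports on Demand license allocation."""

    def __init__(self, licensed_ports):
        self.free = licensed_ports   # licenses not yet assigned
        self.active = set()          # ports currently holding a license

    def connect(self, port):
        """A device appears on a port; activate it if a license is free."""
        if port in self.active:
            return True
        if self.free > 0:
            self.free -= 1
            self.active.add(port)
            return True
        return False                 # pool exhausted: port stays inactive

    def disconnect(self, port):
        """Removing a server or external link returns its license to the pool."""
        if port in self.active:
            self.active.remove(port)
            self.free += 1

# Eight licensed internal ports: the first eight blades get ports;
# a ninth is refused until one of the others is removed.
pool = DpodPool(licensed_ports=8)
for bay in range(1, 9):
    assert pool.connect(f"int{bay}")
assert not pool.connect("int9")
pool.disconnect("int3")
assert pool.connect("int9")
```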


Brocade SAN switches

Brocade 8Gb SAN switch

Currently, the HP BladeSystem SAN switch portfolio includes the Brocade 8Gb SAN Switch.
The switch hot-plugs into the enclosures, uses power and cooling provided by the enclosure, and features 24 auto-sensing ports (16 internal and 8 external). The switches can be managed locally and remotely using the HP BladeSystem Onboard Administrator and Brocade Fabric OS configuration and management tools.
This switch also supports DPOD, a feature that automatically discovers online ports and assigns an available license to them. This feature enables you to connect server blades to switch ports without regard for the server slot populated; the associated switch ports automatically activate as the server ports are deployed. Ports are activated on a first-come, first-served basis for any combination of locations, including external ports.
The Brocade 8Gb SAN Switch features:
	8Gb performance
	A single trunk group of eight SAN-facing ports for up to 64Gb/s of balanced throughput
	Additional buffer credits and 8Gb long-wave B-series 10km Fibre Channel SFP+
	Management features in the Power Pack+ bundle

Brocade SAN switch licensing

The Brocade 8Gb SAN Switch integrates the following license options that complement existing HP product lines:
	HP B-Series 8/12c SAN Switch (PN: AJ820A)
	8Gb SAN Switch with 12 ports enabled for any combination (internal and external)
	Two short-wave 8Gb SFPs
	Full fabric connectivity
	HP B-Series 8/24c SAN Switch (PN: AJ821A)
	8Gb SAN Switch with 24 ports enabled (16 internal and 8 external ports)
	Four short-wave 8Gb SFPs
	Full fabric connectivity
	HP B-Series 8/24c SAN Switch Power Pack+ (PN: AJ822A)
	8Gb SAN Switch with 24 ports enabled (16 internal and 8 external ports)
	Four short-wave 8Gb SFPs
	Full fabric connectivity
	Power Pack+ bundle
	HP Brocade 8/12c SAN Switch 12-port Upgrade LTU (PN: T5517A)



Brocade SAN switch software

The standard and optional software for Brocade SAN switches includes:
	Access Gateway — Enables seamless connectivity for Brocade-embedded SAN switches to other supported SAN fabrics and enhances scalability and simplifies manageability
	Frame filtering — Enables the switch to "view" the first 64 bytes of the Fibre Channel frame and also provides advanced capabilities such as the optional software components Advanced Zoning and Advanced Performance Monitoring (APM)
	Advanced zoning — Enables administrators to organize a physical fabric into logical groups and prevent unauthorized access by devices outside the zone
	Web tools — Enable organizations to monitor and manage single Fibre Channel switches and small SAN fabrics
	Dynamic Path Selection — Improves performance by routing data traffic dynamically across multiple links and trunk groups using the most efficient path in the fabric
	Secure Fabric OS — Provides policy-based security protection for more predictable change management, assured configuration integrity, and reduced risk of downtime
	Security methods include digital certificates and digital signatures, multiple levels of password protection, strong password encryption, Public Key Infrastructure (PKI)-based authentication, and 128-bit encryption of the private key used for digital signatures
	Power Pack+ Software Bundle — Includes Adaptive Networking, inter-switch link (ISL) trunking, Advanced Performance Monitoring, Extended Fabrics, and Fabric Watch
	Adaptive Networking Services — Optimizes fabric behavior and ensures ample bandwidth for mission-critical applications; tools include QoS, Ingress Rate Limiting, Traffic Isolation, and Top Talkers
	ISL trunking — Logically groups up to eight E-ports (switch mode) or F-ports (Access Gateway mode) to provide a high-bandwidth trunk between two Brocade or HP B-Series switches


	Advanced Performance Monitoring — Enables administrators to monitor application data traffic from a SID (Source ID) to a DID (Destination ID), so they can fine-tune and scale the fabric more efficiently
	Extended Fabrics — Increases the scalability, reliability, and performance benefits of Fibre Channel SANs beyond the native 10 km distance specified by the Fibre Channel standard
	Fabric Watch — Enables each switch to monitor the health of the SAN for potential faults and automatically alert network managers to problems before they become failures
	Fabric Manager — Manages up to 80 switches across multiple fabrics in real time, helping SAN administrators with SAN configuration, monitoring, dynamic provisioning, and daily management—all from a single seat


SAS storage solutions for BladeSystem servers

HP 3Gb SAS BL Switch

HP 3Gb SAS BL Switch

The HP 3Gb SAS BL Switch for HP BladeSystem enclosures is an integral part of HP direct-connect SAS storage, enabling a straightforward, external zoned SAS or shared SAS storage solution. The SAS architecture combines an HP P700m Smart Array controller in each server with 3Gb SAS BL switches connected to either an HP 600 Modular Disk System (MDS600) enclosure for zoned SAS or an HP 2000sa Modular Smart Array (MSA2000sa) for shared SAS storage.
The 3Gb SAS BL Switch enables two external architectures for BladeSystem servers:
	Zoned SAS — Use the HP Virtual SAS Manager (VSM) software of the 3Gb SAS BL Switch to zone groups of physical drives in the MDS600 and assign them to individual server blades. The drives in the zone will appear as local drives to that individual server. A P700m Smart Array controller installed in the server provides RAID functionality for the group of physical drives that have been zoned to that server.
	Shared SAS — The 3Gb SAS BL Switch can also be used to access shared SAS storage provided by the MSA2000sa. A P700m Smart Array Controller installed in each server acts as a pass-through, with RAID functionality for shared SAS storage provided by the MSA2000sa. The MSA2000sa creates a shared storage environment where more than one server blade can access a storage logical unit.
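Conceptually, a zoned SAS configuration is just a mapping from drive bays in the MDS600 to the server blade that owns them; each server sees only its own zone group as local disks. A minimal sketch of that idea (the dictionary layout and function name are invented for illustration, not VSM data structures):

```python
# Hypothetical zone-group table for an MDS600 behind a 3Gb SAS BL Switch:
# each zone group assigns a set of drive bays to one server blade, whose
# P700m controller then builds RAID volumes from those drives.
zone_groups = {
    "zg1": {"server_bay": 1, "drive_bays": [1, 2, 3, 4]},
    "zg2": {"server_bay": 2, "drive_bays": [5, 6, 7, 8]},
}

def drives_visible_to(server_bay):
    """Return the drive bays a given server blade sees as local drives."""
    return sorted(
        bay
        for zg in zone_groups.values()
        if zg["server_bay"] == server_bay
        for bay in zg["drive_bays"]
    )

print(drives_visible_to(1))  # [1, 2, 3, 4]
```

In the shared SAS case, by contrast, the MSA2000sa presents logical units that several server bays may access at once, so a mapping like this would allow one drive group to appear under multiple servers.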

HP Virtual SAS Manager

Example of the VSM Maintain tab

HP Virtual SAS Manager (VSM) is embedded in the 3Gb SAS BL Switch firmware and is the software application used to create hardware-based zone groups to control access to external SAS storage enclosures and tape devices.
VSM enables you to perform the following tasks:
	Enter switch parameters
	Create zone groups
	Assign zone groups to servers
	Reset the switch
	Update firmware

Note
Storage is configured, formatted, and partitioned using software utilities such as the HP Array Configuration Utility (ACU), the HP Storage Management Utility (SMU), and Microsoft Disk Manager. Configuration tools differ for each storage enclosure and operating system environment. For more information, see the QuickSpecs for the storage enclosure.

Note
For more information about HP Virtual SAS Manager, consult the HP Virtual SAS Manager 2.2.4.x User Guide available from the HP website.


4X InfiniBand switch modules

QLogic BLc 4X QDR IB Switch for HP BladeSystem

The 4X InfiniBand Switch modules for BladeSystem are double-wide switch modules based on the Mellanox technology. The 4X InfiniBand Switch module has 16 downlink ports to connect up to 16 server blades in the enclosure.
A subnet manager is required to manage and control an InfiniBand fabric. The subnet manager functionality can be provided by either a rack-mount InfiniBand switch with an embedded fabric manager (also known as an internally managed switch) or by host-based subnet manager software on a server connected to the fabric.
The 4X InfiniBand switch modules available for a BladeSystem environment are:
	HP 4X QDR InfiniBand Switch Module
	Compatible with only the BladeSystem c7000 enclosure
	Includes 16 internal 4X QDR downlink ports
	Based on the Mellanox InfiniScale IV technology
	Supports 16 quad small form-factor pluggable (QSFP) uplink ports for inter-switch links or to connect to external servers
	Supports 40Gb/s (QDR) bandwidth
	HP 4X DDR InfiniBand Gen2 Switch Module
	Compatible with BladeSystem c7000 and c3000 enclosures
	Includes 16 internal 4X DDR downlink ports
	Based on the Mellanox InfiniScale IV technology
	Supports 16 QSFP uplink ports for inter-switch links or to connect to external servers
	Supports 20Gb/s (DDR) bandwidth

	HP 4X DDR InfiniBand Switch Module
	Compatible with BladeSystem c7000 and c3000 enclosures
	Based on the Mellanox InfiniScale III technology
	Supports 8 CX4 uplink ports for inter-switch links or to connect to external servers
	Supports 20Gb/s (DDR) bandwidth
	QLogic BLc 4X QDR IB Switch
	Includes 16 internal 4X QDR downlink ports
	Includes 16 external 4X QDR QSFP uplink ports
	Uses the QLogic TrueScale ASIC architecture
	Designed to cost-effectively link workgroup resources into a cluster or provide an edge switch option for a larger fabric
	Supports an optional management module that includes an embedded subnet manager
	Supports optional InfiniBand Fabric Suite software
	Enables up to a 288-node fabric using only the management capability of the unit
Depending on the mezzanine connectors used for the InfiniBand host channel adapter (HCA), the switch module must be inserted into interconnect bays 3 and 4, 5 and 6, or 7 and 8.
5 and 6, or 7 and 8.
TT
rT
Fo

6 –14 Rev. 12.31



Mezzanine cards and adapters

Similar to the PCI slots and cards used in the ProLiant servers, mezzanine slots and cards in the BladeSystem provide a connection from the server blades to Ethernet, Fibre Channel, and InfiniBand switches.

Mezzanine card and slot options available for BladeSystem

HP NC550m 10Gb 2-port PCIe x8 Flex-10 Ethernet Adapter

Mezzanine cards and slots on server blades are either Type I or Type II. Both types of slots have the same physical size but have different keying, and provide different amounts of power to the mezzanine cards. Type I mezzanine cards draw less power than Type II mezzanine cards.
The type of the mezzanine card determines where it can be installed in the server blade. Type I mezzanine cards can be installed in any Type I or Type II mezzanine slots; Type II mezzanine cards must be installed in Type II mezzanine slots only.
In turn, where you install the mezzanine card determines where you need to install the interconnect modules. The server blade mezzanine positions (mezzanine 1, 2, or 3) connect directly through the signal midplane to the appropriate interconnect bays. The interconnect bays are designed to accept single-wide or double-wide modules.

Note
Both Type I and Type II mezzanine cards use the same 450-pin connector (200 signal/250 gnd) to connect the power and PCIe signals and the connections from the server blades to the interconnect bays.

Multifunction adapters include iSCSI network boot (iSCSI boot), which allows a server to boot from a remote operating system image located on a SAN.


Type I mezzanine cards and slots

Type I cards and slots can be either four-lane (x4) or eight-lane (x8). Type I mezzanine slots:
	Are the lower-positioned slots on the server blade system board
	Accept Type I mezzanine cards only
The Type I mezzanine card is supported by all server blades and typically is used for Gigabit Ethernet and Fibre Channel applications. It can be physically positioned in either a Type I or a Type II mezzanine slot. Electronic keying by the Onboard Administrator detects any mismatch between the mezzanine card and the switch ports and will not allow the connection if it is misconfigured.
The basic architecture of the Type I mezzanine card has the following specifications:
	PCIe x4 or x8 bus width
	Maximum power is 15W
	3.97" (100.84mm) x 4.46" (113.28mm)

Type II mezzanine cards and slots

Type II cards and slots are eight-lane (x8) only. Type II mezzanine slots:
	Are the higher-positioned slots on the server blade system board
	Accept either Type I or Type II mezzanine cards
The Type II mezzanine card operates only in Type II mezzanine slots and is typically used for high-powered Gigabit applications such as 10Gb Ethernet.
All Type II mezzanine cards support eight lanes of connections to:
	Four 2-lane connections to four single-wide switches
	Two 4-lane connections to a double-wide switch for redundant connections
The basic architecture of the Type II mezzanine card has the following features:
	25W maximum power
	PCIe x8 bus width
	5.32 inches (135.13mm) x 4.46 inches (113.28mm)
As with Type I mezzanine cards, electronic keying by the Onboard Administrator detects a mismatch between the mezzanine card and switch module.
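The card/slot compatibility rules above reduce to a single check: a card fits if the slot type is at least the card type. A small sketch of that rule (the function name is invented for illustration; in the real system, electronic keying by the Onboard Administrator enforces compatibility):

```python
def card_fits_slot(card_type, slot_type):
    """Type I cards fit Type I or Type II slots; Type II cards need Type II."""
    assert card_type in (1, 2) and slot_type in (1, 2)
    return slot_type >= card_type

assert card_fits_slot(card_type=1, slot_type=1)      # Type I in Type I slot
assert card_fits_slot(card_type=1, slot_type=2)      # Type I in Type II slot
assert card_fits_slot(card_type=2, slot_type=2)      # Type II in Type II slot
assert not card_fits_slot(card_type=2, slot_type=1)  # Type II refused
```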




HBAs available

The host bus adapter (HBA) mezzanine cards available for BladeSystems are:
	QLogic QMH2562 8Gb Fibre Channel HBA (PN: 451871-B21)
	Emulex LPe1205-HP 8Gb/s Fibre Channel HBA (PN: 456972-B21)
	Brocade 804 8Gb FC HBA for HP BladeSystem (PN: 590647-B21)

QLogic QMH2562 8Gb Fibre Channel HBA

QLogic QMH2562 8Gb Fibre Channel HBA

The QLogic QMH2562 8Gb Fibre Channel HBA is a dual-channel PCIe mezzanine form factor card designed for BladeSystem solutions. It delivers twice the data throughput of the previous generation 4Gb mezzanine card. It is optimized for virtualization, low power usage, management, security, reliability, availability, and serviceability. It is also backward compatible with 4Gb and 2Gb Fibre Channel speeds and is compatible with all BladeSystem server blades. It is optimized for HP storage devices and is supported by third-party SAN vendors.

This HBA features:
	Advanced embedded support for virtualized environments
	Supports virtualized servers for overall effective server utilization
	Enables multiple logical (virtual) connections to share the same physical ports
	Supports 256 queue pairs for intensive virtualization
	Prevents conflicts between multiple queues through prioritization of queues
	Reduced power consumption
	Saves power with the latest-generation technology
	Reduces overall power consumption by reducing the number of components on each Fibre Channel HBA
	Requires lower airflow, which lowers power consumption
	Multipath support for redundant HBAs and paths, including Linux driver failover
	Optimized reliability, availability, and serviceability (RAS), security, and manageability



Emulex LPe1205-HP 8Gb/s Fibre Channel HBA

Emulex LPe1205-HP 8Gb/s Fibre Channel HBA

The Emulex LPe1205-HP dual-port Fibre Channel HBA provides reliable, high-performance 8Gb/s connectivity. In addition to providing greater bandwidth, the LPe1205-HP HBA also provides features such as data integrity, security, and virtualization, which are all complementary to initiatives important to the enterprise data center.
This HBA combines higher transfer rates, enhanced I/O processing, and extended interrupt management with Emulex Virtual HBA Technology. The dual-channel design is ideal for mission-critical applications that rely on high-availability connectivity. Fibre Channel Security Protocol (FC-SP) compliance enables protection of proprietary data from unauthorized access.
Features of the Emulex LPe1205-HP HBA include:
	Comprehensive virtualization capabilities with support for N_Port ID Virtualization (NPIV) and Virtual Fabric — Provides support for up to 255 VPorts, which improves server consolidation capabilities and asset utilization
	Superior performance capable of sustaining up to 200,000 I/Os per second per channel — Delivers the performance needed for high-transaction database environments
envirronments

Implementing HP BladeSystem Solutions

 Host to Fabric FC-SP authentication — Provides advanced security, protecting the SAN from potential threats such as WorldWide Name (WWN) spoofing, compromised servers, and so on
 PCI Express Bus Gen I (x8), Gen II (x4)
 Uses PCIe 2.0, which provides 5Gb/s lanes (double the 2.5Gb/s data rate of PCIe 1.0)
 Supports the faster bit rate as well as retaining backward compatibility with existing PCIe 1.0 server blades, which enables greater flexibility and reliability for subsequent generations of servers
 Message Signaled Interrupts eXtended (MSI-X) Support for Greater Host CPU Utilization — Streamlines interrupt routing to improve overall server efficiency
Brocade 804 8Gb Fibre Channel Host Bus Adapter

Brocade 804 8Gb Fibre Channel Host Bus Adapter

The Brocade 804 8Gb Fibre Channel HBA offers high-performance connectivity, extends fabric features to the server and applications, and integrates seamlessly with management software such as HP Data Center Fabric Manager to provide a complete end-to-end data center solution.
This dual-port Fibre Channel HBA supports 8Gb/s and 4Gb/s connectivity. Features of this HBA include:
 Support for up to 255 VPorts
 Up to 500,000 I/Os per second per channel
 Up to 1600MB/s throughput per port
 Fabric-based boot LUN discovery, which enables simplified deployment of boot-over-SAN environments
 Wide operating system support
 Software tools
 Host Connectivity Manager GUI and command line interface (CLI)
 Multipathing
 Management APIs
HP 4X InfiniBand Mezzanine HCAs

QLogic 4X QDR IB Dual-Port Mezzanine HCA

The 4X InfiniBand Mezzanine HCAs for BladeSystem enclosures include:
 HP 4X QDR IB Dual-Port Mezzanine HCA
 Based on the ConnectX-2 technology from Mellanox or on the TrueScale technology from QLogic
 Designed as a dual-port 4X QDR InfiniBand PCI Express G2 Mezzanine card
 Designed for PCI Express 2.0 x8 connectors on BladeSystem G6 server blades
 Supported on ProLiant BL280c G6, BL2x220c G6, BL460c G6, and BL490c G6 server blades
 Supported with the Voltaire OFED Linux driver stack and WinOF 2.0 on Microsoft Windows HPC Server 2008
 HP 4X DDR IB Dual-port Mezzanine HCA
 Designed as a dual-port 4X DDR InfiniBand PCI Express Mezzanine card
 Based on the Mellanox ConnectX technology
 Supported on HP Integrity BL860c server blades and most ProLiant server blades
 Supported with the Voltaire OFED Linux driver stack and WinOF 2.0 on Windows HPC Server 2008 (ProLiant blades only)
 HP 4X DDR IB Mezzanine HCA
 Designed as a single-port 4X DDR InfiniBand PCI Express Mezzanine card
 Supported on ProLiant and Integrity server blades

HP IB QDR/EN 10Gb 2P 544M Mezzanine Adaptor
The HP IB QDR/EN 10Gb 2P 544M Mezzanine Adaptor delivers low-latency and up to 40Gbps (QDR) bandwidth (dual port) for performance-driven server and storage clustering applications in High-Performance Computing (HPC) and enterprise data centers. Key features include:
 Based on the Mellanox ConnectX-3 IB technology
 Capable of dual 10 Gb Ethernet ports when connected to a supported Ethernet switch in a c7000 enclosure
 Designed for PCI Express 3.0 x8 connectors on BladeSystem Gen8 server blades
 Can be used in either mezzanine slot of the server blade
Learning check
1. Advanced zoning and frame filtering are standard software with the Brocade
8Gb SAN Switch.
 True
 False
2. Which switch supports Dynamic Ports on Demand?
a. Cisco MDS 9124e Fabric Switch
b. Brocade 8Gb SAN Switch
c. HP 4Gb VC-FC Module
d. HP 4Gb FC Pass-Thru Module
3. The QLogic QMH2462 4Gb Fibre Channel HBA fits all ProLiant server blades.
 True
 False
4. Which feature enables the Brocade 8Gb SAN Switch to facilitate interoperability
with other SAN fabrics and eliminate domain considerations while improving
SAN scalability?
a. ISL Trunking
b. Frame filtering
c. Access Gateway mode
d. Fabric QoS
5. Which HBA provides support for up to 255 VPorts, which improves server
consolidation capabilities and asset utilization?
a. QLogic QMH2562 8Gb Fibre Channel HBA
b. QLogic QMH2462 4Gb Fibre Channel HBA
c. Emulex LPe1205-HP 8Gb/s Fibre Channel HBA
d. Emulex LPe1105-HP 4Gb Fibre Channel HBA
6. List the tasks that HP Virtual SAS Manager enables.
…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………

Configuring Ethernet Connectivity Options
Module 7

Objectives
After completing this module, you should be able to explain how to configure:
 An HP GbE2c Layer 2/3 Ethernet Blade Switch
 A Cisco Catalyst Blade Switch 3020 or 3120
 An HP 1:10Gb Ethernet BL-c Switch
 An HP 6120XG or 6120G/XG switch

Configuring an HP GbE2c Layer 2/3 Ethernet Blade Switch
When planning the switch configuration, secure access to the management interface by:
 Creating users with various access levels
 Enabling or disabling access to management interfaces to fit the security policy
 Changing default Simple Network Management Protocol (SNMP) community strings for read-only and read-write access
User, operator, and administrator access rights
The user interface provides multilevel password-protected user accounts. To enable better switch management and user accountability, three levels or classes of user access have been implemented on the switch. Levels of access to the command line interface (CLI), web management functions, and screens increase as needed to perform various switch management tasks. Access classes are:
 User
 Operators
 Administrators
Access to switch functions is controlled through the use of unique user names and passwords. After you connect to the switch through the local console, telnet, or Secure Shell (SSH), you are prompted to enter a password.
Note
HP recommends that you change default switch passwords after the initial configuration and as regularly as required under your network security policies. For more information, see "Setting Passwords" in the GbE2c Ethernet Blade Switch for HP c-Class Command Reference Guide available from:
http://www.docs.hp.com
Access-level defaults
The default user names and passwords for each access level are:
 User
 User interaction with the switch is completely passive. The user has no direct responsibility for switch management. He or she can view all switch status information and statistics, but cannot make any configuration changes to the switch.
 The password is user.
 Operator
 Operators can only make temporary changes on the switch. These changes will be lost when the switch is rebooted or reset. Operators have access to the switch management features used for daily switch operations. Because any changes an operator makes are undone by a reset of the switch, operators cannot severely impact switch operation.
 By default, the operator account is disabled and has no password.
 Admin
 Only administrators can make permanent changes to the switch configuration; these changes are persistent across a reboot or reset of the switch. The administrator has complete access to all menus, information, and configuration commands on the switch, including the ability to change both the user and administrator passwords. Because administrators can also make temporary (operator-level) changes, they must be aware of the distinctions between temporary and permanent changes.
 The password is admin.


Accessing the GbE2c Switch


You can access the GbE2c switch remotely through the Ethernet ports or locally through the DB-9 management serial port.
To access the GbE2c switch locally:
1. Connect the switch DB-9 serial connector by using a null modem serial cable to a local client device (such as a laptop computer) with VT100 terminal emulation software.
2. Open a terminal emulation session with the following settings:
 9600 baud rate, 8 data bits
 No parity, 1 stop bit
 No flow control
To access the GbE2c switch remotely:
1. By default, the switch is set up to obtain its IP address from a Bootstrap Protocol (BOOTP) server existing on the attached network. From the BOOTP server, use the interconnect media access control (MAC) address to obtain the switch IP address.

Important
! By default, BOOTP is enabled at the factory. To establish a static IP address, you must disable BOOTP.

Note
The GbE2c switch can obtain its IP address from either BOOTP or DHCP.
2. From a computer connected to the same network, use the IP address to access the switch by using a web browser or telnet application, which enables you to access the switch Browser-Based Interface (BBI) or CLI.
To access the switch remotely, you must set an IP address in one of the following ways:
 Management port access — This is the most direct way to access the switch.
 Using a Dynamic Host Configuration Protocol (DHCP) server — When DHCP is enabled, the management interface (interface 256) requests its IP address from a DHCP server. The default value for the /cfg/sys/dhcp command is enabled.
 Manual configuration — If the network does not support DHCP, you must configure the management interface (interface 256) with an IP address.
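As a sketch only, a manual assignment might look like the following console session on the switch's menu-style CLI. The menu paths and prompts vary by firmware release, and the address values are placeholders, so treat this as illustrative rather than exact syntax:

```
>> Main# /cfg/sys/dhcp disabled       (stop requesting an address automatically)
>> Main# /cfg/l3/if 256               (select the management interface)
>> IP Interface 256# addr 10.10.1.50  (placeholder static address)
>> IP Interface 256# mask 255.255.255.0
>> IP Interface 256# ena              (enable the interface)
>> IP Interface 256# apply            (activate the pending changes)
>> IP Interface 256# save             (keep the changes across a reboot)
```

The apply/save pair matters on this CLI family: apply activates a change for the running switch, while save writes it to the configuration that survives a reset.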

Logging in through the Onboard Administrator


The HP Onboard Administrator provides a single point of contact for performing basic management tasks on server blades and switches within the enclosure. It enables you to perform initial configuration steps for the enclosure as well as run-time management and enclosure component configuration.
To log in through the Onboard Administrator, follow these steps:
1. Locate the Ethernet port on the Onboard Administrator module.
2. Connect the Ethernet cable to the Onboard Administrator module and the workstation/server or to the network containing the workstation.

Important
! Verify that the interconnect is not being modified from any other connections during the remaining steps.

3. Open a telnet connection by using the IP address set earlier. When the login prompt displays, the connection locates the switch in the network.
4. Enter the password. The default password is admin. If passwords have not been changed from the default value, you are prompted to change them. You can do one of the following:
 Enter new system passwords.
 Press Ctrl+C to bypass the password prompts.

Note
You can create up to two simultaneous admin sessions and four user sessions.

5. Verify that the login was successful. A successful login displays the switch name and user ID to which you are connected.
Configuring redundant switches


Each GbE2c switch has five external Ethernet ports and 16 internal Gigabit Ethernet ports providing connectivity to the server blades within the enclosure.
In a dual-switch configuration, switches in shared interconnect bays provide switch redundancy by using dedicated crosslinks (ports 17 and 18). In addition, the signal midplane has redundant paths to the network ports on the server blades.
Each pair of switches consolidates up to thirty-two 10/100/1000 Ethernet signals into one to eight gigabit ports (on the back of the system). This design eliminates up to 31 network cables from the back of the server blade enclosure.

Note
On a heavily used system, using a single uplink port for 32 Ethernet signals can cause a traffic bottleneck. For optimum performance, HP recommends that at least one uplink port per switch be used.

Redundant crosslinks
The two switches are connected through redundant 10/100/1000 crosslinks. These two crosslinks provide an aggregate throughput of 2Gb/s for traffic between the switches.

Redundant paths to server bays
Redundant Ethernet signals from each server blade are routed through the enclosure backplane to separate switches within the enclosure. Two Ethernet signals are routed to Switch 1 and two are routed to Switch 2. This configuration provides redundant paths to each server bay; however, specific switch port to server mapping varies, depending on which type of server blade is installed.


Manually configuring a GbE2c Switch


You can configure a GbE2c switch manually using a CLI, a BBI, or an SNMP interface. For more information on how to use these management interfaces to configure the switch, see the GbE2c Ethernet Blade Switch for HP c-Class Command Reference Guide available from:
http://www.hp.com/go/bladesystem/documentation

After a switch is configured, you can back up the configuration to a TFTP server as a text file. You can then download the backup configuration file from the TFTP server to restore the switch back to the original configuration. This restoration could be necessary if:
 The switch configuration becomes corrupted during operation.
 The switch must be replaced because of a hardware failure.
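Both directions of this backup-and-restore cycle are driven from the switch CLI. A hypothetical session is sketched below; the TFTP server address and file name are placeholders, and the exact command names can differ between firmware releases, so verify them against the command reference guide:

```
>> Main# /cfg/ptcfg 192.0.2.10 gbe2c-bay1.cfg   (put: back up the configuration to the TFTP server)

... later, on the repaired or replacement switch ...

>> Main# /cfg/gtcfg 192.0.2.10 gbe2c-bay1.cfg   (get: pull the saved configuration back down)
>> Main# apply                                  (activate the restored configuration)
>> Main# save                                   (persist it across a reboot)
```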

Configuring multiple GbE2c Switches
You can configure multiple switches by using scripted CLI commands through telnet or by downloading a configuration file using a TFTP server.
 Using scripted CLI commands through telnet — The switch CLI enables you to execute customized configuration scripts on multiple switches. You can tailor a configuration script for one of the multiple switches and then deploy that configuration to other switches from a central deployment server.
 Using a configuration file — If you want the base configuration of multiple switches in your network to be the same, you can manually configure one switch, upload the configuration to a TFTP server, and use that configuration as a base configuration template file.
Switch IP addresses are acquired by using BOOTP or DHCP; therefore, each switch has a unique IP address. You can access each switch remotely from a central deployment server and download an individual switch configuration to meet specific network requirements.
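For example, once the tuned base configuration has been uploaded to the TFTP server, each additional switch can pull down the shared template and then receive only its switch-specific values. The session below is an illustration with placeholder names, addresses, and menu paths, not exact syntax for any particular firmware:

```
>> Main# /cfg/gtcfg 192.0.2.10 base-template.cfg   (load the shared template from TFTP)
>> Main# /cfg/sys/ssnmp/name GbE2c-bay2            (example switch-specific setting)
>> Main# apply
>> Main# save
```

Because the switch-specific part is reduced to a few commands, the same deployment script can be replayed against every switch from the central deployment server with only the per-switch values substituted.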
Configuring a Cisco Catalyst Blade Switch 3020 or 3120
Cisco Catalyst Blade Switches 3020 and 3120 share the same installation procedures.

Obtaining an IP address
IP addresses can be assigned to two of the switch interfaces:
 The fa0 Ethernet interface — This Layer 3 Ethernet interface is connected to the Onboard Administrator. It is used only for switch management traffic, not for data traffic.
 The VLAN 1 interface — You can manage the switch module from any of its external ports through virtual LAN (VLAN) 1.

Obtaining an IP address for the fa0 interface through the Onboard Administrator
For the switch module to obtain an IP address for the fa0 interface through the Onboard Administrator, these conditions must be met:
 The c7000 enclosure must be powered on and the Onboard Administrator must be connected to the network.
 The basic configuration of the Onboard Administrator must be completed, and you must have the user name and password for the Onboard Administrator.
 A DHCP server must be configured on the network segment on which the enclosure resides or the Enclosure-Based IP Addressing (EBIPA) feature must be enabled for the appropriate interconnect bay.

Note
If the switch receives an IP address through the Onboard Administrator, the VLAN 1 IP address is not assigned.

After you install the switch, it powers on and begins the power-on self-test (POST). You can verify that the POST has completed by confirming that the system and status LEDs remain green.

Important
! If the switch module fails the POST, the system LED turns amber. POST errors are usually fatal. Call Cisco Systems immediately if your switch module fails POST.

After you install the switch module in the interconnect bay, the switch automatically obtains an IP address for its fa0 interface through the Onboard Administrator.

Using a console session to assign a VLAN 1 IP address


You must assign the IP address before you can manage the switch. To assign an IP address to the switch, you need the following information:
 IP address
 Subnet mask (IP netmask)
 Default gateway IP address
 Names of the SNMP read and write community strings
 Host name, system contact, and system location
After completing the initial setup, you can configure these optional parameters through the Cisco Express Setup program:
 Local access password
 Telnet access password
 SNMP read and write community strings (if you plan to use a network-management program such as CiscoWorks)
When you first set up the switch module, you can use Express Setup to enter the initial IP information. Doing this enables the switch to connect to local routers and the Internet. You can then access the switch through the IP address for further configuration.

Cisco Express Setup
Express Setup enables you to set basic configuration parameters such as the IP address, default gateway, host name, and the system, enable mode (configuration), and telnet passwords. Cisco recommends using TCP/IP to manage your switch. You can use Express Setup to configure your switch to be managed through TCP/IP.
To run Express Setup, you need a PC and an Ethernet (Cat 5) straight-through cable.
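If Express Setup is not used, the same basic values can also be entered later from a console session using standard Cisco IOS commands. The addresses below are examples only; substitute values appropriate for your network:

```
Switch> enable
Switch# configure terminal
Switch(config)# interface vlan 1
Switch(config-if)# ip address 10.1.1.20 255.255.255.0
Switch(config-if)# no shutdown
Switch(config-if)# exit
Switch(config)# ip default-gateway 10.1.1.1
Switch(config)# end
Switch# copy running-config startup-config
```

The final copy command writes the running configuration to startup so the VLAN 1 addressing survives a reload.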
Assigning the VLAN 1 IP address

Mode button location on Cisco switch

To assign the VLAN 1 IP address:
1. Verify that no devices are connected to the switch, because during Express Setup, the switch listens for a DHCP server.
2. If your laptop has a static IP address, before you begin, change your laptop settings to temporarily use DHCP.

Important
! You must initiate this process immediately after installing the switch module in the server blade. If you miss the opportunity to assign the IP address this way, you will need to remove and then reinstall the switch module.

3. When the switch module powers on, it begins the POST. You can verify that POST has completed by confirming that the system and status LEDs remain green.
4. Press and hold the Mode button until the four LEDs next to the Mode button turn green. This takes approximately three seconds. Release the Mode button.

Note
If you have held the Mode button for more than two minutes and the LEDs have not turned green, obtaining the VLAN 1 IP address through Express Setup is no longer possible and you must remove and then reinstall the switch module. If the LEDs next to the Mode button begin to blink after you press the button, release it. Blinking LEDs mean that the switch module has already been configured and cannot go into Express Setup mode.

5. Connect a CAT-5 Ethernet cable to any Ethernet port on the switch module front panel. Connect the other end to the Ethernet port on the laptop or workstation.

Caution
Do not connect the switch module to any device other than the laptop or workstation being used to configure it.

6. Verify that the port status LEDs on both connected Ethernet ports are green.
7. After the port LEDs turn green, wait at least 30 seconds and launch a web browser on your laptop or workstation.
8. Enter the IP address 10.0.0.1 (or 10.0.1.3 or 10.0.2.3, depending on the firmware version).
9. Continue the configuration by completing the Express Setup fields.

Obtaining an IP address for the fa0 interface through the Onboard Administrator
For the switch to obtain an IP address for the fa0 interface through the Onboard Administrator, the BladeSystem enclosure must be powered on and connected to the network. Then follow these steps:
1. Complete the basic configuration of the Onboard Administrator and have the user name and password for the Onboard Administrator.
2. A DHCP server must be configured on the network segment on which the server blade resides. The Onboard Administrator must be configured to run as a DHCP server, or the EBIPA feature must be enabled for the appropriate interconnect bay.
3. Install the switch in the interconnect bay. After approximately two minutes, the switch automatically obtains an IP address for its fa0 interface through the Onboard Administrator.
4. After you have installed the switch, it powers on. When it powers on, the switch begins the POST, which might take several minutes. Verify that the POST has completed by confirming that the system and status LEDs remain green. If the switch fails the POST, the system LED turns amber. POST errors are usually fatal. Call Cisco Systems immediately if the switch fails the POST.
5. Wait approximately two minutes for the switch to get the software image from its flash memory and begin the autoinstallation.
6. Using a PC, access the Onboard Administrator through a browser window.
7. Open the Interconnect Bay Summary window, where you can find the assigned IP address of the switch fa0 interface in the Management URL column.
8. Click the IP address hyperlink for the switch from the Management URL column to open a new browser window. The Device Manager window for the switch displays.
9. On the left side of the Device Manager GUI, click Configuration  Express Setup. The Express Setup home page displays.

Configuring an HP 1:10Gb Ethernet BL-c Switch
HP 1:10Gb Ethernet BL-c Switch is a single-wide switch with 10Gb uplinks.

Planning the 1:10Gb Ethernet BL-c switch configuration

1:10Gb Ethernet BL-c Switch

HP recommends that you plan the configuration before you actually configure the switch. When you develop your plan, consider your default settings and assess the particular server environment to determine any requirements.
Default settings are:
 All downlink and uplink ports enabled
 A default VLAN assigned to each port
 The VLAN ID (VID) set to 1
This default configuration enables you to connect the server blade enclosure to the network by using a single uplink cable from any external Ethernet connector.

Switch port mapping
The 1:10Gb Ethernet BL-c Switch does not determine NIC enumeration or the mapping of NIC interfaces to switch ports. NIC numbering is determined by:
 Server type
 Operating system
 NICs enabled on the server

Note
Port 18 is reserved for the connection to the Onboard Administrator module. The Onboard Administrator performs the following functions:
 Enables you to perform future firmware upgrades
 Controls all port enabling by matching ports between the server and the interconnect bay
 Verifies that the server NIC option matches the switch bay that is selected and enables all ports for the NICs installed before power up

For detailed port mapping information, see the HP BladeSystem enclosure installation poster or the HP BladeSystem enclosure setup and installation guide on the HP website: http://www.hp.com/go/bladesystem/documentation

Accessing the 1:10Gb Ethernet BL-c switch


After installing the switch, you can access it remotely or locally. The switch is accessed remotely using the Ethernet ports or locally using the DB-9 management serial port.
To access the switch remotely:
1. Assign an IP address. By default, the switch obtains its IP address from a BOOTP server on the attached network.
2. From the BOOTP server, use the switch MAC address to obtain the switch IP address.
3. Use the IP address to access the switch BBI or CLI, from a computer connected to the same network, using a web browser or telnet application.
To access the switch locally:
1. Connect the switch DB-9 serial connector, using a null modem serial cable to a local client device with VT100 terminal emulation software.
2. Open a VT100 terminal emulation session with these settings: 9600 baud rate, eight data bits, no parity, one stop bit, and no flow control.


User, operator, and administrator access rights

(Table: default user name and password for each access level)

To enable better switch management and user accountability, three levels or classes of user access have been implemented on the switch. Levels of access to CLI, web management functions, and screens increase as needed to perform various switch management tasks. Conceptually, access classes are defined as:
 User interaction with the switch is completely passive. Nothing can be changed on the switch. Users can display information that has no security or privacy implications, such as switch statistics and current operational state information.
 Operators can only effect temporary changes on the switch. These changes will be lost when the switch is rebooted or reset. Operators have access to the switch management features used for daily switch operations. Because any changes an operator makes are undone by a reset of the switch, operators cannot severely impact switch operation.
 Administrators are the only ones that can make permanent changes to the switch configuration, which are changes that are persistent across a reboot or reset of the switch. Administrators can access switch functions to configure and troubleshoot problems on the switch. Because administrators can also make temporary (operator-level) changes, they must be aware of the interactions between temporary and permanent changes.
Access to switch functions is controlled through the use of unique user names and passwords. After you connect to the switch through the local console, telnet, or SSH, a password prompt appears. The default user name and password for each access level are listed in the preceding table.
Manually configuring a switch


The switch is configured manually by using a CLI, BBI, or an SNMP interface. After a switch is configured, you have to back up the configuration as a text file to a TFTP server. The backup configuration file is then downloaded from the TFTP server to restore the switch back to the original configuration. This restoration is necessary if one of these conditions applies:
 The switch configuration becomes corrupted during operation.
 The switch must be replaced because of a hardware failure.

Note
See the HP 1:10Gb Ethernet BL-c Switch Command Reference Guide available on the HP website for more information on using these management interfaces to configure the switch.

Configuring multiple switches
Configure multiple switches by using scripted CLI commands through telnet or by downloading a configuration file by using a TFTP server.

Using scripted CLI commands through telnet
The CLI, provided with the switch, executes customized configuration scripts on multiple switches. A configuration script is tailored to one of the multiple switches, and then that configuration can be deployed to other switches from a central deployment server.

Using a configuration file
If you are planning for the base configuration of multiple switches in a network to be the same, manually configure one switch, upload the configuration to a TFTP server, and use that configuration as a base configuration template file. Switch IP addresses are acquired by default using BOOTP; therefore, each switch has a unique IP address. Each switch is remotely accessed from a central deployment server and an individual switch configuration is downloaded to meet specific network requirements.

Note
See the HP 1:10Gb Ethernet BL-c Switch Command Reference Guide on the HP website for additional information on using a TFTP server to upload and download configuration files.

Configuring an HP 6120XG or 6120G/XG Switch


HP 6120XG and 6120G/XG switches are interconnect modules designed for the BladeSystem infrastructure.

Switch IP configuration
Configuring the switch with an IP address expands your ability to manage the switch and use its features. By default, the switch is configured to automatically receive IP addressing on the default VLAN from a DHCP/BOOTP server that has been configured correctly with information to support the switch. However, if you are not using a DHCP/BOOTP server to configure IP addressing, use the menu interface or the CLI to manually configure the initial IP values. After you have network access to a device, you can use the web browser interface to modify the initial IP configuration if needed.
The switch IP address can be assigned by:
 Using the CLI Manager-level prompt
 Using a web browser interface

Using the CLI Manager-level prompt
If you just want to give the switch an IP address so that it can communicate on your network, or if you are not using VLANs, use the Switch Setup screen to quickly configure IP addressing. To do so, either:
 Enter setup at the CLI Manager-level prompt:
ProCurve# setup
 Select 8. Run Setup in the Main Menu of the menu interface.
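Beyond the Setup screen, the same initial values can be entered directly at the CLI on ProCurve-based switches such as these. The session below is an illustrative sketch with placeholder addresses; confirm the exact command set against the switch's command reference:

```
ProCurve# configure
ProCurve(config)# vlan 1
ProCurve(vlan-1)# ip address 10.1.1.30 255.255.255.0
ProCurve(vlan-1)# exit
ProCurve(config)# ip default-gateway 10.1.1.1
ProCurve(config)# write memory
```

Here write memory saves the configuration so the address assignment survives a reboot.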

Configuring the IP address by using a web browser interface
You can only use the web browser interface to access IP addressing if the switch already has an IP address that is reachable through your network.
1. Click the Configuration tab.
2. Click IP Configuration.
3. If you need further information on using the web browser interface, click [?] to access the web-based help available for the switch.

Accessing a blade switch from the Onboard Administrator


These instructions assume that you have already set up the BladeSystem Onboard
Administrator by using the First Time Setup Wizard.

See the HP BladeSystem Onboard Administrator User Guide for details on OA


setup. For information on OA command line interface (CLI) commands, see the
HP BladeSystem Onboard Administrator Command Line Interface User Guide.
Both guides are available at:
http://www.hp.com/go/bladesystem/documentation

To connect to the CLI interface through the Onboard Administrator:
1.	Connect a workstation or laptop computer to the serial port on the
	HP BladeSystem c3000 or c7000 OA module using a null-modem serial cable
	(RS-232).
2.	Using a terminal program such as HyperTerminal or TeraTerm, open a
	connection to the serial port using connection parameters of 9600, 8, N, 1.
3.	Press Enter. OA prompts you for administrator login credentials.
4.	Enter a valid user name and password. The OA system prompt displays.
5.	Enter the command:
	connect interconnect <bay_number>
	where <bay_number> is the number of the bay containing the blade switch. OA
	connects you to the initial screen of the blade switch CLI.
6.	Press Enter. The blade switch CLI prompt displays. You can now enter blade
	switch CLI commands.
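Putting these steps together, a session through the OA might look like the following sketch (the bay number and the switch prompt are illustrative):

```
OA> connect interconnect 1
ProCurve# setup
```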



Configuring Ethernet Connectivity Options

Accessing a blade switch through the mini-USB interface (out of band)


The blade switch console supports out-of-band access through direct connection to
the mini-USB console port of a Windows computer. To communicate with the blade
switch:
1.	Download the USB driver to the PC. To find the driver:
	a.	Go to: http://www.hp.com/#Support
	b.	Click the Download drivers and software radio button.
	c.	In the text box, enter 6120XG and then click Go.
	d.	Click the link for the correct operating system.
	e.	Download the Utilities package.
2.	Install the driver by double-clicking the HPProCurve_USBConsole.msi file.
3.	Connect the small end of the supplied USB console cable to the mini-USB port.
4.	Connect the standard end of the supplied USB console cable to a workstation or
	laptop computer. The computer will recognize the presence of a new USB device
	and will load the driver for it.
5.	Using a terminal program such as HyperTerminal or TeraTerm, open a
	connection to the USB port. (By default, this port will appear as COM4.)
6.	Press Enter twice. The blade switch CLI prompt displays. You can now enter
	blade switch commands.

Accessing a blade switch from the Ethernet interface (in band)


The blade switch console supports in-band access through the data ports using telnet
from a PC or UNIX computer on the network, and a VT100 terminal emulator. This
method requires the blade switch to have an IP address, subnet mask, and default
gateway. The IP address, subnet mask, and default gateway can be supplied by a
Dynamic Host Configuration Protocol (DHCP) or Bootp server, or you can manually
configure them using the CLI. By default, the blade switch gets its IP address through
DHCP or Bootp; see the next section for instructions on manually configuring a static
IP address.

To communicate with a blade switch that has an IP address, subnet mask, and
default gateway:
1. Use a ping command to verify network connectivity between the blade switch
and your workstation or laptop computer.
2. Using a terminal program such as HyperTerminal or TeraTerm, open a
connection using the IP address, telnet protocol, and port 23 of the blade switch.
3. Press Enter twice. The blade switch CLI prompt displays. You can now issue
blade switch commands.
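As a sketch, from a Windows workstation the first two steps might look like this (the switch address 10.1.1.20 is illustrative):

```
C:\> ping 10.1.1.20
C:\> telnet 10.1.1.20 23
```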




Assigning an IP address to a blade switch


By default, the blade switch tries to acquire an IP address from a DHCP or Bootp
server. The IP address for the blade switch can be configured using the CLI, through
the Onboard Administrator, or through a mini-USB port on the blade switch.
To set a static IP address manually:
1.	From the operator’s CLI prompt (>) on the blade switch, enter:
	enable
	Supply a user name and password if you are prompted to do so.
2.	From the manager’s CLI prompt (#) on the blade switch, enter:
	config
3.	Specify the VLAN of the port that attaches to the network. By default, all ports
	are in VLAN 1.
	vlan <vlan_id>
4.	Enter an IP address and subnet mask for the switch. Both the IP address and
	subnet mask are in the x.x.x.x format.
	ip address <ip_address> <subnet_mask>
5.	Enter a default gateway IP address in the x.x.x.x format.
	ip default-gateway <ip_address>
6.	Return to the operator or manager prompt by using a series of exit commands.
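Steps 1–6 combine into a short session like the following sketch (the VLAN number, addresses, and the ProCurve prompts are illustrative; note that the default gateway is set from the global configuration context):

```
ProCurve> enable
ProCurve# config
ProCurve(config)# vlan 1
ProCurve(vlan-1)# ip address 10.1.1.20 255.255.255.0
ProCurve(vlan-1)# exit
ProCurve(config)# ip default-gateway 10.1.1.1
ProCurve(config)# exit
ProCurve# exit
```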






IP addressing with multiple VLANs


In the factory-default configuration, the switch has one permanent default VLAN
(named DEFAULT_VLAN) that includes all ports on the switch. Thus, when only the
default VLAN exists in the switch, if you assign an IP address and subnet mask to the
switch, you are actually assigning the IP addressing to the DEFAULT_VLAN.
•	If multiple VLANs are configured, then each VLAN can have its own IP address.
	This is because each VLAN operates as a separate broadcast domain and
	requires a unique IP address and subnet mask. A default gateway (IP) address
	for the switch is optional, but recommended.
•	In the factory-default configuration, the default VLAN (named DEFAULT_VLAN) is
	the primary VLAN of the switch. The switch uses the primary VLAN for learning
	the default gateway address. The switch can also learn other settings from a
	DHCP or BOOTP server, such as (packet) Time-To-Live (TTL) and TimeP or SNTP
	settings.

	Note
	Other VLANs can also use DHCP or BOOTP to acquire IP addressing. However,
	the gateway, TTL, and TimeP or SNTP values of the switch, which are applied
	globally and not per-VLAN, will be acquired through the primary VLAN only,
	unless manually set by using the CLI, Menu, or web browser interface. If these
	parameters are manually set, they will not be overwritten by alternate values
	received from a DHCP or BOOTP server.

•	The IP addressing used in the switch should be compatible with your network.
	That is, the IP address must be unique and the subnet mask must be appropriate
	for your IP network.
•	If you change the IP address through either telnet access or the web browser
	interface, the connection to the switch will be lost. You can reconnect by either
	restarting telnet with the new IP address or entering the new address as the URL
	in your web browser.




IP Preserve: Retaining VLAN-1 IP addressing across configuration file downloads
IP Preserve enables you to copy a configuration file to multiple switches while
retaining the individual IP address and subnet mask on VLAN 1 in each switch and
the gateway IP address assigned to the switch. This enables you to distribute the
same configuration file to multiple switches without overwriting their individual
IP addresses.
Operating rules for IP Preserve

When ip preserve is entered as the last line in a configuration file stored on a
TFTP server, the following conditions are true:
•	If the current IP address for VLAN 1 was not configured by DHCP/BOOTP,
	IP Preserve retains the current IP address, subnet mask, and IP gateway address
	of the switch when the switch downloads the file and reboots. The switch adopts
	all other configuration parameters in the configuration file into the startup-config
	file.
•	If the current IP addressing for VLAN 1 of the switch is from a DHCP server,
	IP Preserve is suspended. In this case, whatever IP addressing the configuration
	file specifies is implemented when the switch downloads the file and reboots. If
	the file includes DHCP/BOOTP as the IP addressing source for VLAN 1, the
	switch will configure itself accordingly and will use DHCP/BOOTP. If instead, the
	file includes a dedicated IP address and subnet mask for VLAN 1 and a specific
	gateway IP address, the switch will implement these settings in the startup-config
	file.
•	The ip preserve statement does not appear in the show config listings. To
	verify IP Preserve in a configuration file, open the file in a text editor and view
	the last line.
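For example, the tail of a configuration file distributed to multiple switches might end like this sketch (the VLAN stanza is illustrative; what matters is that ip preserve is the last line):

```
vlan 1
   name "DEFAULT_VLAN"
   ip address 10.1.1.20 255.255.255.0
   exit
ip preserve
```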




Learning check
1.	On the GbE2c Layer 2/3 Ethernet Blade Switch, the operator account is
	disabled by default and has no password.
	□	True
	□	False
2.	List the conditions that must be met for the switch module to obtain an IP address
	for the fa0 interface through the Onboard Administrator.
	.................................................................................................................
	.................................................................................................................
	.................................................................................................................
	.................................................................................................................
3.	List the available management interfaces for the HP 6120XG and 6120G/XG
	switches.
	.................................................................................................................
	.................................................................................................................
	.................................................................................................................
	.................................................................................................................
	.................................................................................................................
4.	List three types of privileges available on the 1:10Gb Ethernet BL-c switch.
	.................................................................................................................
	.................................................................................................................
	.................................................................................................................





Configuring Storage Connectivity Options
Module 8

Objectives
After completing this module, you should be able to explain how to configure the
following switches:
•	Brocade 8Gb SAN Switch for HP BladeSystem
•	Cisco MDS 9124e Fabric Switch for HP BladeSystem
•	HP 3Gb SAS BL Switch for HP BladeSystem


Configuring a Brocade 8Gb SAN switch


The Brocade 8Gb SAN Switch for HP BladeSystem is a Fibre Channel switch that
supports link speeds of up to 8 Gb/s. The 8Gb SAN switch can operate in a fabric
containing multiple switches or as the only switch in a fabric.

Setting the switch Ethernet IP address


To set the Ethernet IP address on the 8Gb SAN switch:
•	Verify that the enclosure is powered on
•	Verify that the switch is installed
•	Choose one of the following methods to set the Ethernet IP address:
	•	Using Enclosure Bay IP Addressing (EBIPA)
	•	Using the external Dynamic Host Configuration Protocol (DHCP)
	•	Setting the IP address manually
Using EBIPA
To set the Ethernet IP address using EBIPA:
1.	Open a web browser and connect to the active HP Onboard Administrator.
2.	Enable EBIPA for the corresponding interconnect bay.
3.	Click Apply to restart the switch.
4.	Verify the IP address using a Telnet or Secure Shell (SSH) login to the switch, or
	by selecting the switch in the Rack Overview window of the Onboard
	Administrator GUI.

Using external DHCP


To set the Ethernet IP address using the external DHCP:
1.	Connect to the active Onboard Administrator through a web browser.
2.	Document the DHCP-assigned address by selecting the switch from the Rack
	Overview window of the Onboard Administrator GUI.
3.	Verify the IP address using a telnet or SSH login to the switch, or select the
	switch in the Rack Overview window.


Setting the IP address manually


To set the IP address manually:
1.	Obtain the following items to set the IP address with a serial connection:
	•	Computer with a terminal application (such as HyperTerminal in a Microsoft
		Windows environment or TERM in a UNIX environment)
	•	Null modem serial cable
2.	Replace the default IP address (if present) and related information with the
	information provided by your network administrator. By default, the IP address is
	set to 10.77.77.77 for switches with revision levels earlier than 0C.
3.	Verify that the enclosure is powered on.
4.	Identify the active Onboard Administrator in the BladeSystem enclosure.
5.	Connect a null modem serial cable from your computer to the serial port of the
	active Onboard Administrator.
6.	Configure the terminal application as follows:
	•	In a Windows environment, enter:
		Bits per second — 9600
		Data bits — 8
		Parity — None
		Stop bits — 1
		Flow control — None
	•	In a UNIX environment, enter: tip /dev/ttyb –9600
7.	Log in to the Onboard Administrator.
8.	Press Enter to display the switch console.
9.	Identify the interconnect bay number where the switch is installed. At the
	Onboard Administrator command line, enter:
	connect interconnect x
	where x is the interconnect bay slot where the switch is installed.


10.	Enter the following login credentials:
	•	User: admin
	•	Password: password
	Alternatively, follow the onscreen prompts to change your password.
	The Onboard Administrator connects its serial line to the switch in the specified
	interconnect bay. A prompt displays, indicating that the escape character for
	returning to the Onboard Administrator is Ctrl _ (underscore).
11.	At the command line, enter: ipaddrset
12.	Enter the remaining IP addressing information, as prompted.
13.	Optionally, enter ipaddrshow at the command prompt to verify that the
	IP address is set correctly.
14.	Record the IP addressing information, and store it in a safe place.
15.	Enter Exit, and press Enter to log out of the serial console.
16.	Disconnect the serial cable.
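An ipaddrset dialog typically resembles the following sketch (the addresses are illustrative, and the exact prompts vary with the Fabric OS version on the switch):

```
switch:admin> ipaddrset
Ethernet IP Address [10.77.77.77]: 192.168.1.40
Ethernet Subnetmask [255.255.255.0]: 255.255.255.0
Gateway IP Address [0.0.0.0]: 192.168.1.1
DHCP [Off]: off
switch:admin> ipaddrshow
```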


Configuring the 8Gb SAN switch


The 8Gb SAN switch must be configured to ensure correct operation within a
network and fabric.

Note
For instructions about configuring the switch to operate in a fabric containing
switches from other vendors, refer to the HP SAN Design Reference Guide
available from: http://www.hp.com/go/sandesignguide

Items required for configuration

The following items are required for configuring and connecting the 8Gb SAN
switch for use in a network and fabric:
•	8Gb SAN switch installed in the enclosure
•	IP address and corresponding subnet mask and gateway address recorded
	during the setting of the IP address
•	Ethernet cable
•	Small form-factor pluggable (SFP) transceivers and compatible optical cables, as
	required
•	Access to an FTP server for backing up the switch configuration (optional)

Setting the date and time



The date and time are used for logging events. The operation of the 8Gb SAN
switch does not depend on the date and time; a switch with an incorrect date and
time value will function properly. To set the date and time, use the command line
interface (CLI).

Verifying installed licenses


To determine the type of licensing included with your 8Gb SAN switch, enter
licenseshow at the command prompt.




Modifying the Fibre Channel domain ID (optional)


If desired, you can modify the Fibre Channel domain ID. The default Fibre Channel
domain ID is domain 1. If the 8Gb SAN switch is not powered on until after it is
connected to the fabric, and the default Fibre Channel domain ID is already in use,
the domain ID for the new switch is automatically reset to a unique value. If the
switch is connected to the fabric after it has been powered on and the default
domain ID is already in use, the fabric segments.
Enter fabricshow to determine the domain IDs that are currently in use. The
maximum number of domains with which the 8Gb SAN switch communicates is
determined by the fabric license of the switch.

Disabling and enabling a switch
By default, the switch is enabled after power on and after the diagnostics and
switch initialization routines complete. You can disable and re-enable the switch as
necessary.

Using DPOD

Dynamic Ports On Demand (DPOD) functionality does not require a predefined
assignment of ports. Port assignment is determined by the total number of ports in
use as well as the number of purchased ports.
In summary, the DPOD feature simplifies port management by:
•	Automatically detecting server ports or cabled ports connected through a host
	bus adapter (HBA)
•	Automatically enabling ports
•	Automatically assigning port licenses
To initiate DPOD, use the licensePort command.
For the 8Gb SAN Switch, DPOD works only if the server blade is installed with an
HBA present. A server blade that does not have a functioning HBA will not be
treated as an active link for the purpose of initial DPOD port assignment.
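As a sketch, the current DPOD port-license assignment can be displayed and the dynamic assignment method selected with the licensePort command (option names vary by Fabric OS version):

```
switch:admin> licenseport --show
switch:admin> licenseport --method dynamic
```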

Backing up the configuration



To back up the switch configuration to an FTP server, enter configupload and
follow the prompts. The configupload command copies the switch configuration to
the server, making it available for downloading to a replacement switch, if
necessary.
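A configupload dialog follows roughly this pattern (the server address, account, and file name are illustrative, and the exact prompts depend on the Fabric OS version):

```
switch:admin> configupload
Protocol (scp or ftp) [ftp]: ftp
Server Name or IP Address [host]: 192.168.1.50
User Name [user]: ftpuser
File Name [config.txt]: bay3_config.txt
Password: ********
```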


Reset button

Reset button location

The Reset button on the Brocade SAN switches is located to the left of the status
LEDs. It is a small, recessed micro switch that is accessed by inserting a pin or
similarly sized object in the small hole to push the button.
The Reset button enables you to reboot the switch when the switch is not responding
or if you have forgotten the password.
To reboot the switch, press the Reset button for up to five seconds.

Note
The Reset button does not return the switch to factory-default settings.


Management tools

The management tools built into the 8Gb SAN switch can be used to monitor fabric
topology, port status, physical status, and other information used for performance
analysis and system debugging. When running IP over Fibre Channel, these
management tools must be run on both the Fibre Channel host and the switch, and
they must be supported by the Fibre Channel host driver.


Configuring a Cisco MDS 9124e Fabric Switch


The Cisco MDS 9124e Fabric Switch for HP BladeSystem is a Fibre Channel switch
that supports link speeds of up to 4 Gb/s. The Cisco MDS 9124e Fabric Switch can
operate in a fabric containing multiple switches or as the only switch in a fabric.

Setting the IP address


To set the IP address by means of a serial connection, you need:
•	A computer with a terminal application (such as HyperTerminal in a Windows
	environment or TERM in a UNIX environment)
•	A null modem serial cable
To set the IP address:
1.	Verify that the enclosure is powered on.
2.	Identify the active Onboard Administrator in the BladeSystem enclosure.
3.	Connect a null modem serial cable from the computer to the serial port of the
	active Onboard Administrator.
4.	Configure the terminal application as follows:
	•	In a Windows environment, enter:
		Baud rate: 9600 bits per second
		8 data bits
		None (No parity)
		1 stop bit
		No flow control
	•	In a UNIX environment, enter: tip /dev/ttyb –9600
5.	Log in to the Onboard Administrator.
6.	Identify the interconnect bay number where the switch is installed.
7.	Enter the following command:
	OA> connect interconnect x
	where x is the interconnect bay number where the switch is installed.
	If you are using the switch for the first time, the switch setup utility starts
	automatically. If this is not the first time the switch has been used, enter the setup
	command at the system prompt.


8. Enter a password for the system administrator. (There is no default password.)


9. Follow the instructions in the switch setup utility to configure the IP address, the
netmask, and other parameters for the switch.
10. When you have finished with the switch setup utility, log out and disconnect the
serial cable.
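Within the setup utility, the IP addressing portion of the dialog looks roughly like this sketch (the values are illustrative, and the exact wording varies with the switch firmware version):

```
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: y
  Mgmt0 IPv4 address: 192.168.1.60
  Mgmt0 IPv4 netmask: 255.255.255.0
Configure the default gateway? (yes/no) [y]: y
  IPv4 address of the default gateway: 192.168.1.1
```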

Configuring the fabric switch


The Cisco MDS 9124e Fabric Switch must be configured to ensure correct operation
within a network and fabric.

Items required for configuration

To configure and connect the Cisco MDS 9124e Fabric Switch for use in a network
and fabric, you need:
•	Switch installed in a BladeSystem enclosure
•	IP address and corresponding subnet mask and gateway address
•	Ethernet cable
•	SFP transceivers and compatible optical cables, as required
•	Access to an FTP server for backing up the switch configuration (optional)

Setting the date and time



The date and time are used for logging events. The operation of the Cisco MDS
9124e Fabric Switch does not depend on the date and time; a switch with an
incorrect date and time value will function properly. Use the CLI to set the date and
time.

Verifying installed licenses


To determine the type of licensing included with the Cisco MDS 9124e Fabric Switch,
enter show license usage at the command prompt using the following syntax:

switch# show license usage






Modifying the Fibre Channel domain ID (optional)


If desired, you can modify the Fibre Channel domain ID. If the Cisco MDS 9124e
Fabric Switch is not powered on until after it is connected to the fabric and the
default Fibre Channel domain ID is already in use, the domain ID for the new switch
is automatically reset to a unique value. If the switch is connected to the fabric after it
has been powered on and the default domain ID is already in use, the fabric
segments.

Recovering the administrator password

You might need to recover the administrator password on the Cisco MDS 9124e
switch if the user does not have another user account on the switch with network-
administrator privileges. Refer to the Cisco MDS 9000 Family Fabric Manager
Configuration Guide and to the Cisco MDS 9000 Family CLI Configuration Guide
for detailed instructions.





Fabric switch management tools

Cisco MDS 9124e Fabric Switch management features table

The management tools built in to the Cisco MDS 9124e Fabric Switch can be used to
monitor fabric topology, port status, physical status, and other information used for
performance analysis and system debugging. When running IP over Fibre Channel,
these management tools must be run on both the Fibre Channel host and the switch,
and they must be supported by the Fibre Channel host driver.
You can connect a management station to one switch through Ethernet while
managing other switches connected to the first switch through Fibre Channel. To do
so, set the Fibre Channel gateway address of each of the other switches to be
managed to the Fibre Channel IP address of the first switch.


Configuring an HP 3Gb SAS BL Switch


The HP 3Gb SAS BL Switch is a single-wide interconnect module for HP BladeSystem
enclosures. The 3Gb SAS BL Switch is a key component of HP Direct-Connect
External SAS Storage for HP BladeSystem Solutions, with firmware and hardware
capabilities that enable the connection of external storage and tape devices to
BladeSystem enclosures.

Configuration rules for the 3Gb/s SAS Switch

The 3Gb/s SAS Switch is only supported in BladeSystem enclosures (c7000 and
c3000). You can install the 3Gb SAS BL Switch in up to four interconnect bays in the
c7000 and in up to two interconnect bays in the c3000.
Two SAS switches are required in the same BladeSystem enclosure interconnect bay
row for redundancy. Single-switch configurations are supported as nonredundant.
•	For the c3000 enclosure, the SAS switch can be placed in interconnect bays 3
	and 4 only.
•	For the c7000 enclosure, the SAS switch can be placed in interconnect bays 3
	and 4, 5 and 6, or 7 and 8 only.
For the c3000 enclosure, using mezzanine slot 1 of a server along with enclosure
interconnect bay 2 is not supported. Use the 3Gb/s SAS BL Switch Virtual SAS
Manager (VSM) software to configure external SAS storage.

Note
Supported Internet browser versions are Microsoft Internet Explorer 6.0/7.0 and
Mozilla Firefox 3.

A half-height server blade must have a P700m controller installed in server
mezzanine slot 1 or mezzanine slot 2. A full-height server blade must have a P700m
controller installed in server mezzanine slot 1, mezzanine slot 2, or mezzanine slot 3.
A double-density server blade must have a P700m controller installed in server
mezzanine slot 2 of each server. Four SAS switches in interconnect bays 5, 6, 7, and
8 must also be included in this configuration.




Configuring the 3Gb SAS BL Switch

Zoning procedures

Key configuration tasks include:
•	Enabling or disabling multi-initiator mode.
•	Creating the following zone groups:
	•	Switch-port zone groups — For shared SAS storage enclosures and tape
		libraries
	•	Drive-bay zone groups — For zoned SAS storage enclosures
•	Assigning zone groups to servers.
•	Capturing the configuration for safekeeping. HP strongly recommends this step,
	especially in single-domain configurations (available only in the VSM CLI).
For firmware versions earlier than 2.0.0.0, no configuration tasks are available. The
switch is configured using the VSM application. As shown in the preceding table,
configuration (zoning) procedures are the same for shared SAS storage enclosures
and tape devices, but differ for zoned SAS storage enclosures.


Accessing the 3Gb SAS BL Switch

The switch is configured and managed through the Onboard Administrator and
VSM applications.
To access VSM:
1.	Access the Onboard Administrator of the enclosure. (The 3Gb SAS BL Switch is
	supported on Onboard Administrator 2.40 and later.)
2.	In the Onboard Administrator Systems and Devices tree, expand the Interconnect
	Bays option and select the 3Gb SAS BL Switch.
3.	After selecting the SAS switch to manage, click Management Console and wait
	a few moments for the VSM application to open.

Confirming the firmware version

Firmware version position

Firmware is preinstalled on each switch in the factory, but an updated, alternative, or
preferred version might be available. The following types of firmware are available
for the 3Gb SAS BL Switch:
•	Firmware versions earlier than 2.0.0.0 — Provide single zone support and
	support connections only to shared SAS storage enclosures such as the HP
	Storage 2000sa Modular Smart Array (MSA2000sa) and tape devices such as
	the MSL G3 tape libraries. All server blade bays have access to all storage
	enclosures connected to the switch. These settings are preconfigured and cannot
	be altered. To restrict access to the storage, use the features provided with the
	storage management software.
•	Firmware versions 2.0.0.0 and later — Provide multizone support for use with
	shared SAS storage enclosures such as the MSA2000sa, tape devices such as
	MSL G3 tape libraries, and zoned SAS storage enclosures such as the MDS600.
	Server bays access the storage enclosures through zone groups created in the
	VSM application embedded in the switch firmware. The zone groups provide
	user-defined assignment of server bays to one or more desired zone groups.
The currently installed firmware version is displayed in the VSM near the center of
the HP Virtual SAS Manager banner. Access the VSM and make note of the installed
firmware version on each 3Gb SAS BL Switch. As needed, update firmware on the
switches to the desired version. Firmware is installed using the VSM application.
When two 3Gb SAS BL Switches are installed in the same row of an enclosure,
ensure that they are running the same firmware version.


Learning check
1.	List the tools used to configure the 3Gb SAS switch.
	…………………………………………………………………………………………
	…………………………………………………………………………………………
2.	How do you set the IP address on a Cisco MDS switch?
	…………………………………………………………………………………………
	…………………………………………………………………………………………
	…………………………………………………………………………………………
	…………………………………………………………………………………………
	…………………………………………………………………………………………
3.	With Dynamic Ports on Demand (DPOD), port assignment is determined by the
	total number of ports in use as well as the number of purchased ports.
	□	True
	□	False



Virtual Connect
Installation and Configuration
Module 9

Objectives
After completing this module, you should be able to:
•	Describe the HP Virtual Connect portfolio and the basic technology
•	Plan and implement a Virtual Connect environment
•	Configure a Virtual Connect module
•	Manage a Virtual Connect domain
•	Explain how to use Virtual Connect modules in a real-world environment


HP Virtual Connect portfolio

Virtual Connect is an industry-standard-based implementation of server-edge I/O
virtualization. It puts an abstraction layer between the servers and the external
networks so that the LAN and storage area network (SAN) see a pool of servers
rather than individual servers.

HP 1/10Gb VC Ethernet

Simplify and make the customer’s data center change-ready. The innovative HP
1/10Gb Virtual Connect Ethernet Module for the HP BladeSystem is the simplest,
most flexible connection to networks. The Virtual Connect Ethernet Module is a new
class of blade interconnect that simplifies server connections by cleanly separating
the server enclosure from LAN, simplifies networks by reducing cables without
adding switches to manage, and allows a change in servers in just minutes, not
days.

HP 1/10Gb-F VC Ethernet

The HP 1/10Gb Virtual Connect Ethernet Module for the HP BladeSystem is the
simplest, most flexible network connection. The Virtual Connect Ethernet Module is a
class of blade interconnect that simplifies server connections by cleanly separating
the server enclosure from LAN, simplifies networks by reducing cables without
adding switches to manage, and allows changing servers in just minutes, not days.
This model is similar to the HP 1/10Gb VC Ethernet Module, but offers optical
uplinks.
HP Virtual Connect Flex-10 10Gb Ethernet

The HP Virtual Connect Flex-10 10Gb Ethernet Module is a class of blade
interconnects that simplifies server connections by cleanly separating the server
enclosure from LAN. It simplifies networks by reducing cables without adding
switches to manage, allowing a server change in just minutes, and tailors network
connections and speeds based on application needs.
HP Flex-10 technology significantly reduces infrastructure costs by increasing the
number of NICs per connection without adding extra blade I/O modules, and
reducing cabling uplinks to the data center network.
Rev. 12.3
31 9 –3
Implemen
nting HP BladeS
System Solutions

HP Virtual Co
onnect 4G
Gb Fibre Channeel Module
e

ly
on
The HP Virtual Connect 4Gb FC Module expands existing Virtual Connect capabilities by allowing up to 128 virtual machines (VMs) running on the same physical server to access separate storage resources. Provisioned storage resource is associated directly to a specific VM, even if the virtual server is re-allocated within the BladeSystem. Storage management is no longer constrained to a single physical HBA on a server blade. SAN administrators can manage virtual HBAs with the same methods and viewpoint of physical HBAs.

The HP Virtual Connect 4Gb Fibre Channel Module cleanly separates the server enclosure from the SAN, simplifies SAN fabrics by reducing cables without adding switches to the domain, and allows a fast change in servers.

HP Virtual Connect 8Gb 20-port Fibre Channel Module

The HP Virtual Connect 8Gb 20-port FC Module enables up to 128 VMs running on the same physical server to access separate storage resources. Provisioned storage resource is associated directly to a specific VM, even if the VM is re-allocated within the BladeSystem. Storage management of VMs is no longer limited by the single physical HBA on a server blade: SAN administrators can manage virtual HBAs with the same methods and viewpoint of physical HBAs.


HP Virtual Connect 8Gb 24-port Fibre Channel Module
HP Virtual Connect 8Gb 24-port Fibre Channel Module key features include:
 Eight 2/4/8Gb Auto-negotiating Fibre Channel uplinks connected to external SAN switches
 Two Fibre Channel SFP+ Transceivers included with the Virtual Connect Fibre Channel Module
 Sixteen 2/4/8Gb Auto-negotiating Fibre Channel downlink ports for maximum HBA performance
 HBA aggregation on uplink ports using ANSI T11 standards-based N_Port ID Virtualization (NPIV) technology
 Up to 255 VMs running on the same physical server can access separate storage resources
 Extremely low-latency throughput for switch-like performance

This module is compatible with current releases of ProLiant and Integrity server blades that support the QLogic QMH2462 4Gb FC HBA and QMH2562 8Gb FC HBA, or the Emulex LPe1105-HP 4Gb HBA and LPe1205 8Gb HBA for HP BladeSystem.

HP Virtual Connect FlexFabric modules
FlexFabric connection options

The HP Virtual Connect FlexFabric module is a logical combination of Flex-10 technology with industry-standard VC Fibre Channel technology in a single interconnect module.

The VC FlexFabric Module and FlexFabric Adapters converge Ethernet and Fibre Channel traffic within the BladeSystem enclosure and then separate the two at the enclosure edge. Connectivity to both the external Ethernet and native Fibre Channel from the same module allows customers to reduce complexity without disrupting existing LAN and SAN infrastructure, and eliminates the need for separate Fibre Channel modules and adapters.

The internal-facing ports (downlinks) on the FlexFabric module can adapt to whatever they are connected to in the servers:
 G7 LOM — 10Gb CEE with three FlexNICs and one FlexHBA, or four FlexNICs if you do not configure a storage connection in the profile
 G6 LOM — 10Gb Ethernet with four FlexNICs
 G1/G5 LOM — 1Gb Ethernet, one NIC only

You can connect the VC FlexFabric uplinks to 1GbE networks using SFP transceivers for an easy transition to 10Gb later.

FlexFabric adapter — Physical functions

FlexFabric LOM overview

Each FlexFabric adapter has two 10Gb physical ports that can be partitioned into four physical functions (PF):
 PF 1, 3, and 4 on each port are always and can only be Ethernet.
 The second PCIe function (PF) can be Ethernet, Fibre Channel over Ethernet (FCoE), or iSCSI. It absolutely must have the same configuration between ports 1 and 2 on the same FlexFabric adapter.
 Therefore, port 1 FCoE and port 2 iSCSI cannot be on the same adapter.
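The pairing rule above can be expressed as a small validation sketch (illustrative Python; the function name and labels are assumptions, not part of any HP tool):

```python
# Illustrative model of the FlexFabric partitioning rule: PFs 1, 3, and 4
# are always Ethernet; the second PF may be Ethernet, FCoE, or iSCSI, but
# must be configured identically on both ports of the same adapter.

ALLOWED_PF2 = {"ethernet", "fcoe", "iscsi"}

def validate_adapter(port1_pf2: str, port2_pf2: str) -> bool:
    """Return True if the two ports' second physical functions are a
    legal combination for one FlexFabric adapter."""
    if port1_pf2 not in ALLOWED_PF2 or port2_pf2 not in ALLOWED_PF2:
        raise ValueError("PF 2 must be ethernet, fcoe, or iscsi")
    # The second PF must match on both ports of the adapter.
    return port1_pf2 == port2_pf2

print(validate_adapter("fcoe", "fcoe"))    # True
print(validate_adapter("fcoe", "iscsi"))   # False: mixed FCoE/iSCSI
```

A mixed FCoE/iSCSI pair fails the check, matching the last bullet above.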

Flex-10 adapter mapping with VC Flex-10 modules

FlexFabric LOM and VC Flex-10 module

With the Flex-10 modules, the FlexFabric LOM functions the same as Flex-10 network cards. Four Ethernet ports are available from any LOM.

FlexFabric adapter mapping with VC FlexFabric modules

FlexFabric LOM and VC FlexFabric module
You can extend this technology to combine with Fibre Channel and iSCSI data storage connectivity all in a single interconnect module from a single FlexFabric adapter: One of the physical functions now has multi-configuration and can be utilized as a NIC, FCoE, or iSCSI device and is recognized as such by the server operating system.


Rx side allocation for FlexFabric

Individual Ethernet, iSCSI, or FCoE function received traffic (Rx) flows are not limited and could consume up to the full line rate of 10Gb. With FCoE, however, Enhanced Transmission Selection flow control management guarantees the minimum bandwidth set by the Virtual Connect Manager. Thus, when there is no congestion, FCoE or LAN bandwidth can exceed the specified data rates for traffic flowing from VC to the FlexFabric adapter. Under congested conditions, the VC module will enforce a fair allocation of bandwidth as determined by the FCoE function rate limit defined in the server profile. The remainder will be set as the aggregate rate limit for the FlexNICs.

On the transmitted traffic (Tx) side, a FlexNIC is limited by the server profile definition and set as the maximum in the network definition.
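The congested-case split described above can be sketched as a quick calculation (the 10Gb line rate comes from the text; the function name is an assumption):

```python
# Sketch of the Rx allocation rule under congestion: the FCoE function is
# guaranteed its profile rate limit, and the remaining line rate becomes
# the aggregate limit shared by the FlexNICs.

LINE_RATE_GB = 10.0  # 10Gb FlexFabric port

def rx_allocation(fcoe_rate_limit_gb: float) -> dict:
    if not 0 <= fcoe_rate_limit_gb <= LINE_RATE_GB:
        raise ValueError("FCoE rate limit must fit within the line rate")
    return {
        "fcoe_guaranteed": fcoe_rate_limit_gb,
        "flexnic_aggregate": LINE_RATE_GB - fcoe_rate_limit_gb,
    }

print(rx_allocation(4.0))
# {'fcoe_guaranteed': 4.0, 'flexnic_aggregate': 6.0}
```

Without congestion, either traffic class may exceed its share up to the full line rate, as the text notes.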

FlexFabric adapter mapping with 10G Pass-Thru modules

y
er
liv
de

FlexFabric LOM and 10G


Gb Pass-Thru mo
odule

When connecting to 10G Pass-Thru Modules, the FlexFabric adapters lose most of their advanced features:
 The PCIe functions have fixed configurations and cannot be easily changed or disabled.
 The only two available configurations are one NIC and one storage (either FCoE or iSCSI depending on the server model).
 The only adjustable bandwidth control is between NIC and FCoE.
 There is no:
  Virtualization of Ethernet MAC addresses or Fibre Channel World Wide Names (WWNs)
  Centralized management of SAN (Fibre Channel or iSCSI) boot parameters
  Integration with BladeSystem Matrix or upper layer software tools such as HP Infrastructure Orchestration


Planning and implementing Virtual Connect


Before beginning installation, complete the following tasks:
 Determine which mezzanine cards and interconnect modules will be used and where they will be installed in the enclosure.
 Determine the Ethernet stacking cable layout, and ensure that the proper cables are ordered. Stacking cables allow any Ethernet NIC from any server to be connected to any of the Ethernet networks defined for the domain.
 Determine which Ethernet networks will be connected to or contained within the domain. Most installations have multiple Ethernet networks, each typically mapped to a specific IP subnet. The VC Manager enables definition of up to 64 different Ethernet networks that can be used to provide network connectivity to server blades. Each physical NIC on a server blade can be connected to any one of these Ethernet networks.
 Virtual Connect Ethernet networks can be completely contained within the domain for server-to-server communication or connected to external networks through rear panel port cable connections (uplinks). For each network, the administrator must use the VC Manager to identify the network by name and to define any external port connections.
 Determine the Ethernet MAC address and Fibre Channel WWN range to be used for the servers within the enclosure. Server and networking administrators should fully understand the selection and use of MAC address ranges before configuring the enclosure.
 Name the fabric that servers will connect to. The setup wizard enables you to specify the Fibre Channel fabrics that will be made available. Each VC-FC module supports a single SAN fabric and is connected to a Fibre Channel switch that has been configured to run in NPIV mode.
 Set the Fibre Channel oversubscription rate using the Virtual Connect setup wizard. Oversubscription degrades Fibre Channel performance and occurs when hosts require more bandwidth than a port can provide. As devices send frames through more switches and hops, other data traffic in the fabric routed through the same interswitch link (ISL) or path can cause oversubscription. It is also referred to as the ratio of potential port bandwidth to available backplane slot bandwidth.
 Identify the administrators for the Virtual Connect environment and identify which roles and administrative privileges they require. VC Manager classifies each operation as requiring server, network, domain, or storage privileges. A single user can have any combination of these privileges.
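The closing definition of oversubscription can be illustrated with a short calculation (the port counts and speeds below are made-up example values, not recommendations):

```python
# Oversubscription as defined above: the ratio of potential (downlink)
# bandwidth to the bandwidth actually available through the uplinks.

def oversubscription(downlink_count: int, downlink_gb: float,
                     uplink_count: int, uplink_gb: float) -> float:
    return (downlink_count * downlink_gb) / (uplink_count * uplink_gb)

# 16 server downlinks at 4Gb sharing 4 uplinks at 4Gb -> 4:1 oversubscription
print(oversubscription(16, 4.0, 4, 4.0))   # 4.0
```

A ratio above 1.0 means hosts can collectively request more bandwidth than the uplinks can carry.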


Building a Virtual Connect environment


Typically, a Virtual Connect environment is built in the following manner:
 The lab technician sets up the enclosure by:
  Installing the Virtual Connect modules
  Cabling the stacked modules
  Running the enclosure setup wizard
  Running the Virtual Connect setup wizard
 The LAN administrator defines the Ethernet networks and connections.
 The SAN administrator defines the storage fabrics and connections.
 The network administrator:
  Ensures that the appropriate uplink cables are dropped to the rack (for example, using two 10Gb links or a bundle of two 8 x 1Gb links, primary and standby)
  Configures the data center switch so that selected networks are made available to the enclosure
  Documents the network names and VLAN IDs
 The server administrator:
  Defines the server profiles and connections
  Makes additions, changes, and moves whenever needed
  Confirms that the enclosure is properly installed in the rack
  Configures stacking links
  Obtains the list of network names and VLAN IDs from the network administrator
  Connects the data center network cables to the enclosure
  Uses VC Manager to set up a shared uplink set
  Can define private or dedicated networks


Virtual Connect out-of-the-box steps


The steps to install Virtual Connect are:
1. Install the interconnect modules.
2. Install the stacking links.

Note
Stacking links are used to interconnect VC-Enet modules when more than two modules are installed in a single enclosure. This feature enables all Ethernet NICs on all servers in the Virtual Connect domain to have access to any VC-Enet module uplink port. By using these module-to-module links, a single pair of uplinks can function as the data center network connections for the entire Virtual Connect domain.

3. Cable the Virtual Connect Ethernet uplinks to the data center networks.
4. Connect the data center Fibre Channel fabric links (if applicable).
5. Note the default network settings for the VC-Enet module in bay 1 (from the tear-off tag).
6. Note the default network settings for the Onboard Administrator.
7. Apply power to the enclosures.
8. Use Onboard Administrator for basic setup of the enclosures (enclosure name, passwords, and so forth).
9. Access Virtual Connect (through the Onboard Administrator or dynamic DNS name from the tear-off tag).

Virtual Connect Ethernet stacking

VC with stacking links

Virtual Connect stacking rules:
 Any port can be used for stacking. Stacking cables are auto-detected.
 All VC Ethernet modules have at least one internal stacking link through the midplane. The 1/10Gb VC-Ethernet module has two internal stacking links for a total of 20Gb of cable-free stacking.
 Best practice for stacking is to connect each Ethernet module to two different Ethernet modules. In the preceding graphic, every module is connected to two different modules. Each module connects to the adjacent bay using the internal midplane path (the orange lines). Then, either 1Gb or 10Gb cables are used to stack to another module (the blue lines).
stackk to another module (the blue lines).
rT
Fo

Rev. 12.3
31 9 –13
Implemen
nting HP BladeS
System Solutions

Virtual Connect Ethernet module stacking

Virtual Connect modules stacking links

In the preceding graphic, stacking links are shown to be both external and internal. The internal 10Gb links are connected by way of the signal midplane inside the enclosure and connect the modules horizontally. The external links are both 10Gb (CX4) and 1Gb (RJ-45) and can extend a server's network connections across multiple VC modules. These external links can also connect to the external infrastructure switches.

Notice that all the modules in the graphic are Ethernet-based, indicating that the VC Fibre Channel modules do not participate in the stacking example.

Using VC-FC modules


HP offers a few Virtual Connect modules for virtualization of the SAN environment.
To configure a VC-FC module, you must use a VC-Ethernet module.

Virtual Connect Fibre Channel WWNs


A Fibre Channel WWN is a 64-bit value used during login to uniquely identify a Fibre Channel HBA port and get a port ID.

Each server blade Fibre Channel HBA mezzanine card ships with factory-default port and node WWNs for each Fibre Channel HBA port. Although the hardware ships with default WWNs, Virtual Connect can assign WWNs that will override the factory default WWNs while the server remains in that Virtual Connect enclosure. When configured to assign WWNs, Virtual Connect securely manages the WWNs by accessing the physical Fibre Channel HBA through the enclosure Onboard Administrator and the iLO interfaces on the individual server blades.

When assigning WWNs to a Fibre Channel HBA port, Virtual Connect assigns both a port WWN and a node WWN. Because the port WWN is typically used for configuring fabric zoning, it is the WWN displayed throughout the Virtual Connect user interface. The assigned node WWN is always the same as the port WWN incremented by 1.

Configuring Virtual Connect to assign WWNs in server blades maintains a consistent storage identity even when the underlying server hardware is changed. This method allows server blades to be replaced without affecting the external Fibre Channel SAN administration.

The naming convention is as follows:
 The first 4 bits identify the naming authority.
 When the first two bytes are either hex 10:00 or 2x:xx (where the xs are vendor-specified), they are then followed by the 3-byte vendor identifier and the 3-byte vendor-specified serial number.
 When the first nibble is either 5 or 6, it is then followed by a 3-byte vendor identifier (IEEE OUI) and 4.5 bytes for a vendor-specified serial number.

HP has set aside a dedicated range of Fibre Channel WWNs. You can set each Virtual Connect domain to either a WWN defined by Virtual Connect or a factory-default WWN.
 50:06:0B:00:00:C2:62:00 to 50:06:0B:00:00:C3:61:FF
 Equals 64K WWNs
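The quoted range and the port-to-node WWN convention can be checked with a short script (illustrative only):

```python
# Check of the HP-reserved WWN range quoted above and the
# "node WWN = port WWN + 1" convention described in this section.

def wwn_to_int(wwn: str) -> int:
    return int(wwn.replace(":", ""), 16)

def int_to_wwn(value: int) -> str:
    raw = f"{value:016X}"
    return ":".join(raw[i:i + 2] for i in range(0, 16, 2))

start = wwn_to_int("50:06:0B:00:00:C2:62:00")
end = wwn_to_int("50:06:0B:00:00:C3:61:FF")

print(end - start + 1)        # 65536 -> the "64K WWNs" in the text
print(int_to_wwn(start + 1))  # node WWN paired with the first port WWN
```

The second print shows the node WWN (port WWN incremented by 1) for the first address in the range.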


Virtual Connect Fibre Channel port types and logins

Configuration of SAN with VC modules and without VC modules

Key Fibre Channel port types:
 N_Port (End Port)
 F_Port (Fabric Port)—Addressable by the N_Port attached to it with a common well-known address identifier (hex 'FF FF FE')
 FL_Port—An F_Port that contains arbitrated loop functions
 NL_Port (Loop End Port)—An N_Port that contains arbitrated loop support
 E_Port (Expansion Port)—A switch port used for switch-to-switch connections

Fibre Channel logins

N_Port devices must log in to the fabric. Logins enable a node to determine the fabric/topology type and enable assignment of an N_Port Identifier. They also set up buffer-to-buffer credits.

The login sequence is as follows:
1. Link is established.
2. N_Port sends a Fabric Login (FLOGI) frame to the well-known fabric address (FF FF FE).
3. Fabric responds with an Accept (ACC) frame.


Fibre Channel zoning and SSP


Fabric zoning enables a Fibre Channel fabric to be separated into different
segments. It is performed within the switched fabric. Zoning types are by node or by
port.
Selective Storage Presentation (SSP) is implemented in storage targets. SSP enables a
target to show only certain logical unit numbers (LUNs) to certain initiators (typically
World Wide Port Names).

Important
! Each switch must have its own domain ID between 1 and 254 with no duplicate IDs in the same fabric.


N_Port_ID virtualization

N_Port_ID virtualization

A VC-FC module functions as an HBA aggregator and uses NPIV, which assigns multiple N_Port_IDs to a single N_Port, thereby enabling multiple distinguishable entities.

NPIV functions within a Fibre Channel HBA and enables unique WWNs and IDs for each virtual machine within a server. A VC-FC module functions as a transparent HBA aggregator device; NPIV enables it to reduce cables in a vendor-neutral fashion.

NPIV is defined by the ANSI T11 Fibre Channel standards:
 Fibre Channel Device Attach (FC-DA) Specification, Section 4.13
 Fibre Channel Link Services (FC-LS) Specification


Fabric login using the HBA aggregator’s WWN
 1—Fabric login using HBA aggregator WWN (WWN X)
  Establishes the buffer credits for the overall link
  Receives an overall Port ID
 2a to 4a—Server HBA logs in normally using the WWNs
 2b to 4b—Server HBA fabric logins are translated to Fabric Discovery (FDISC)
 5—Traffic for all four N_Port IDs is carried on the same link
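The flow above can be modeled as a toy simulation (illustrative Python; the class and the WWN names are assumptions, not real fabric software):

```python
# Toy model of NPIV aggregation: the aggregator performs one FLOGI for the
# shared link, then each downstream server HBA login is forwarded as an
# FDISC, so every WWN receives its own N_Port ID on the same physical link.

class FabricPort:
    """Stand-in for the F_Port: hands out sequential N_Port IDs."""
    def __init__(self):
        self.next_id = 0x010001
        self.logins = {}          # WWN -> N_Port ID

    def login(self, wwn: str, frame: str) -> int:
        assert frame in ("FLOGI", "FDISC")
        self.logins[wwn] = self.next_id
        self.next_id += 1
        return self.logins[wwn]

fabric = FabricPort()
fabric.login("WWN_X", "FLOGI")            # step 1: aggregator logs in first
for server_wwn in ("WWN_A", "WWN_B", "WWN_C"):
    fabric.login(server_wwn, "FDISC")     # steps 2b-4b: translated logins

print(len(fabric.logins))                 # 4
```

All four N_Port IDs are distinct yet share one uplink, which is the cable reduction NPIV provides.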

N_Port_ID virtualization
NPIV is independent of both operating systems and device drivers, and the standard QLogic and Emulex Fibre Channel HBAs support NPIV.

NPIV does not interfere with server/SAN compatibility. After the server is logged in, Fibre Channel frames pass through unchanged.

Important
! When installing a VC-FC module, you must enable NPIV on the fabric switch that is attached to the VC-FC module uplinks before the server blade HBAs can log in to the fabric.

Configuring Virtual Connect
Virtual Connect connections

Each Virtual Connect Ethernet module has several numbered Ethernet connectors. All of these connectors can be used to connect to data center switches, or they can be used to stack Virtual Connect modules and enclosures as part of a single Virtual Connect domain.
Networks must be defined within the VC Manager so that specific, named networks
can be associated with specific external data center connections. These named
networks can then be used to specify networking connectivity for individual servers.
A single external network can be connected to a single enclosure uplink, or it can
make use of multiple uplinks to provide improved throughput or higher availability. In
addition, multiple external networks can be connected over a single uplink (or set of
uplinks) through the use of VLAN tagging.

The simplest approach to connecting the defined networks to the data center is to
map each network to a specific external port. An external port is defined by the following:
 Enclosure name
 Interconnect bay containing the Virtual Connect Ethernet module
 Selected port on that module (1-8, X1, X2, . . .)
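The three identifiers can be combined into a single port label, as in this sketch (the `enclosure:bay:port` style string is illustrative, not an official VC format):

```python
# Encode an external port from the three identifiers listed above:
# enclosure name, interconnect bay, and the port on that module.

def external_port_id(enclosure: str, bay: int, port: str) -> str:
    return f"{enclosure}:{bay}:{port}"

print(external_port_id("enc0", 1, "X1"))   # enc0:1:X1
```

Any uplink in the domain can then be referenced unambiguously by one such label.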


Virtual Connect logical flow

The Virtual Connect configuration process uses a consistent methodology.

Create a VC domain
One of the first requirements in setting up a VC environment is to establish a VC domain through the web-based VC Manager interface.
A Virtual Connect domain consists of an enclosure and a set of associated modules and server blades that are managed together by a single instance of the VC Manager. The Virtual Connect domain contains specified networks, server profiles, and user accounts that simplify the setup and administration of server connections.

Establishing a Virtual Connect domain enables administrators to upgrade, replace, or move servers within their enclosures without changes being visible to the external LAN/SAN environments.



Virtual Connect multi-enclosure VC domains


Starting with firmware 2.10, a VC domain can contain more than one enclosure.
A multi-enclosure VC domain requires:
 One base enclosure (primary VC Managers in bays 1 and 2). This rule does not
have to be followed when FlexFabric modules are used.
 Onboard Administrator and VC modules must be on the same management
network.

 All Ethernet modules interconnected.
 All enclosures must have the identical VC-FC configuration (no stacking of Fibre Channel modules).
It supports:
 Up to four c7000 enclosures

 Up to 16 Virtual Connect Ethernet modules

 Up to 16 Virtual Connect Fibre Channel modules
Stacking cable options:
 Fibre cables (SFP+)
 10Gb copper Ethernet cables with CX-4 connectors. (Do not use InfiniBand
cables because they are tuned differently.)
de

 1Gb Ethernet cables


 DAC cables (SFP+)

Note
HP currently limits each domain to 16 Ethernet modules and 16 Fibre Channel modules. If
more than 16 are detected, the domain will be degraded with a
DOMAIN_OVERPROVISIONED statement.
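The limits in this section can be expressed as a small status check (a sketch; the function name and the return strings other than DOMAIN_OVERPROVISIONED are illustrative):

```python
# Sketch of the multi-enclosure domain limits quoted above: up to four
# c7000 enclosures, and at most 16 Ethernet and 16 Fibre Channel modules;
# exceeding the module limits degrades the domain.

MAX_ENCLOSURES = 4
MAX_ENET_MODULES = 16
MAX_FC_MODULES = 16

def domain_status(enclosures: int, enet_modules: int, fc_modules: int) -> str:
    if enclosures > MAX_ENCLOSURES:
        return "UNSUPPORTED"
    if enet_modules > MAX_ENET_MODULES or fc_modules > MAX_FC_MODULES:
        # More than 16 of either module type degrades the domain.
        return "DOMAIN_OVERPROVISIONED"
    return "OK"

print(domain_status(4, 16, 16))   # OK
print(domain_status(4, 17, 8))    # DOMAIN_OVERPROVISIONED
```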

Ethernet stacking connections


For each Virtual Connect Ethernet network (vNet), Virtual Connect creates a loop-free tree with the uplink as the root. Each VC-Ethernet hop adds between 2 and 4 milliseconds of latency. Extra links can reduce hops and provide additional redundancy.

Note
Latency is a function of the bridge chip used in the module. Both of the VC 1/10 modules
use the same bridge chip and, therefore, will have identical latency. The Flex-10 module
uses a bridge chip with much lower latency.

Important
! A switch that does not understand Link Aggregation Control Protocol (LACP) or Link Layer Discovery Protocol (LLDP) (such as Nortel 8500-series switches) can introduce a loop. If the switch does not support LACP, change the uplink port mode from Auto to Failover.

PortFast

The Spanning Tree PortFast feature was designed for Cisco switch ports connected to
edge devices, such as server NIC ports. This feature allows a Cisco switch port to
bypass the “listening” and “learning” stages of spanning tree and quickly transition
liv
to the “forwarding” stage. By enabling this feature, edge devices are allowed to
immediately begin communicating on the network instead of having to wait for
Spanning Tree to determine whether it needs to block the port to prevent a loop—a
de

process that can take 30+ seconds with default Spanning Tree timers. Because edge
devices do not present a loop on the network, Spanning Tree is not needed to
prevent loops and can be effectively bypassed by using the PortFast feature. The
benefit of this feature is that server NIC ports can immediately communicate on the
TT

network when plugged in rather than timing out for 30 or more seconds. This
strategy is especially useful for time-sensitive protocols such as PXE and DHCP.

Important
rT

! Using features such as Portfast and BPDU Guard enable uplink failover to occur more
quickly and offer protection against the possibility of a loop.
Fo

Because VC uplinks operate on the network as an edge device (like teamed server
NICs), Spanning Tree is not needed on the directly connected Cisco switch ports.
Thus, PortFast can be enabled on the Cisco switch ports directly connected to VC
uplinks.

Note
The interface command to enable PortFast on a Cisco access port is: spanning-tree
portfast
The interface command to enable PortFast on a Cisco trunk port is: spanning-tree
portfast trunk


BPDU Guard
BPDU Guard is a safety feature for Cisco switch ports that have PortFast enabled.
Enabling BPDU Guard allows the switch to monitor for the reception of Bridge
Protocol Data Unit (BPDU) frames (spanning tree configuration frames) on the port
configured for PortFast. When a BPDU is received on a switch port with PortFast and
BPDU Guard enabled, BPDU Guard will cause the switch port to err-disable (shut
down). Since ports with PortFast enabled should never be connected to another
switch (which transmits BPDUs), BPDU Guard protects against PortFast-enabled ports
from being connected to other switches. This arrangement prevents:

ly
 Loops caused by bypassing Spanning Tree on that port
 Any device connected to that port from becoming the root bridge
Because Virtual Connect behaves as an edge device on the network, and because VC does not participate in the data center spanning tree (that is, does not transmit BPDUs on VC uplinks), BPDU Guard can be used, if desired, on Cisco switch ports connected to VC uplinks.

Note
The interface command to enable BPDU Guard on a Cisco port is: spanning-tree
bpduguard enable.
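The err-disable behavior described above can be modeled as a minimal state machine (illustrative Python, not Cisco software):

```python
# Toy model of BPDU Guard: a port with PortFast and BPDU Guard enabled
# goes err-disabled as soon as it receives a BPDU (i.e., another switch
# appears on what should be an edge port).

class SwitchPort:
    def __init__(self, portfast: bool, bpdu_guard: bool):
        self.portfast = portfast
        self.bpdu_guard = bpdu_guard
        self.state = "forwarding" if portfast else "listening"

    def receive_bpdu(self):
        # Another switch is on the wire; shut the port if guarded.
        if self.portfast and self.bpdu_guard:
            self.state = "err-disabled"

uplink = SwitchPort(portfast=True, bpdu_guard=True)
uplink.receive_bpdu()
print(uplink.state)   # err-disabled
```

Because VC uplinks never transmit BPDUs, a guarded port facing Virtual Connect stays forwarding in normal operation.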

VC base enclosure
Multiple enclosures stacked together

Each stacked domain consists of a base enclosure and one or more remote enclosures. The base enclosure provides important functionality because it houses the primary and secondary VC Manager. Therefore, all the management operations for the domain come from one enclosure. If that entire enclosure fails and goes offline, the remote enclosures will continue to operate normally, but you will be unable to make administrative changes to the domain because the two management modules will be offline. This situation is analogous to losing both the primary and secondary VC Manager in a single-enclosure domain. The other modules in that domain continue to operate normally, but you will be unable to make changes to the configuration.

Merging existing domains into one is not supported. To create a multiple-enclosure or stacked domain, start with an unimported enclosure and create a domain on the base enclosure. Having a base enclosure with a Virtual Connect domain in place will not affect this; there is no reason to delete the domain on the enclosure you will use as your base enclosure. However, remote enclosures must be unimported and any domains deleted on the remote enclosures you want to add to a stacked domain.

Enclosure removal

Before removing enclosures, you need to know the location of the uplinks and which are active. If the active uplinks are in the base enclosure, the following steps should be non-disruptive:
 From the VC Manager’s Domain Settings window, delete the enclosure (be sure to remove the right one)
 Unplug the interenclosure stacking links
VC-Fibre Channel configuration

VC-FC configuration with multiple enclosures
de

VC-FC is not a Fibre Channel switch, so you cannot stack VC-FC modules. If one enclosure in a stack is connected to four Fibre Channel SANs, as shown here, the second enclosure must be connected to the same four SANs, and so on. In other words, all enclosures must have the same configuration.

Enclosure stacking can minimize Ethernet uplink cables, but it will not save Fibre Channel uplink cables beyond a single-enclosure VC domain.
e stacking ca but it will not save Fibre
Channel uplink cablees beyond a single-enclossure VC dom main.
rT
Fo


VC-FC does not stack

VC-FC connected to the same SAN
liv
Although VC-FC does not stack, the same number of connections to each SAN is not required. In the example, enclosure 1 and enclosure 2 are both connected to SAN_A, but enclosure 1 has four connections to that SAN, while enclosure 2 has only one connection to that SAN. This set-up is acceptable because both enclosures are connected to the same SANs.

t that SAN. This set-up iis acceptablee because both enclosure
es
are connected to the same SANs..

Note
TT

The uplink po
ort assignment within
w the VC D anager.
Domain is enforrced by VC Ma

Note
rT

A single-enclo
osure VC Doma ain that containns multiple VC-FFC modules witthin the SAME
chassis does not have this re
estriction, as lonng as it is not w
within a VC Dom
main stack.
Fo

9 –28 Rev. 12
2.31
Virtual Connect Installation
n and Configura
ation

c3000 stacking

c3000 enclosure

You can connect a Flex-10 Module directly to another Flex-10 Module in another c3000 enclosure if the following requirements are met:
 There is a maximum of two enclosures.
 Both enclosures have their own Virtual Connect domain.
 The data center is self-contained with no external uplinks.

Stacking c3000 enclosures as if they were c7000 enclosures is not possible.

Define Ethernet networks

Defining Ethernet networks flow

After the domain has been created, you can define the Ethernet networks. The Network Setup Wizard establishes external Ethernet network connectivity for a BladeSystem enclosure using Virtual Connect. A user account with network privileges is required to perform these operations.

Use this wizard to:
 Identify the MAC addresses to be used on the servers deployed within this Virtual Connect domain
 Set up connections from the BladeSystem enclosure to the external Ethernet networks

These connections can be uplinks dedicated to a specific Ethernet network or shared uplinks that carry multiple Ethernet networks with the use of VLAN tags.
works
These connections caan be uplinkss dedicated tto a specific Ethernet nettwork or sha
ared
hat carry multiple Etherne
uplinks th et networks w
with the use of VLAN tag gs.
rT
Fo

9 –30 Rev. 12
2.31
Virtual Connect Installation
n and Configura
ation

Define
e Fibre Ch
hannel SAN connecctions

ly
on
Defining FC SAN co
onnections flow

y
The Virtuaal Connect Fibre
F Channeel Setup Wizzard configurres external FFibre Channel

er
connectivvity for a BladeSystem en nclosure using
g Virtual Connect. A useer account wiith
storage privileges
p is required
r to perform
p thesee operations..
liv
Use this wizard
w to:
 Identify WWNs to be used on
o the serverr blades dep
ployed within this Virtual
Connect domain
de

 Defin
ne available SAN fabricss

Create server profiles

The Virtual Connect Manager Server Profile Wizard allows you to quickly set up and
configure network/SAN connections for the server blades within your enclosure.
With the wizard, you can define a server profile template that identifies the server
connectivity to use on server blades within the enclosure. The template can then be
used to automatically create and apply server profiles to up to 16 server blades. The
individual server profiles can be edited independently.

Before beginning the server profile wizard, do the following:
 Complete the Network Setup Wizard.
 Complete the Fibre Channel Setup Wizard (if applicable).
 Ensure that any blades to be configured using this wizard are powered off.

This wizard walks you through the following tasks:
 Define a server profile template.
 Assign server profiles.
 Name server profiles.

Implementing the server profile

Setting up a profile workflow

Follow these steps to set up the server profile:
1. Configure the server profile using the VC Manager user interface.
2. Insert the server blade.
3. VC Manager detects that a server blade was inserted and reads the field-
replaceable unit (FRU) data for each interface.
4. VC Manager writes the server profile information to the server.
5. Power on the server.
6. CPU BIOS and NIC/HBA option ROM software write the profile information to
the interface.
7. The successful write is communicated to the VC Manager through the Onboard
Administrator.
8. The server boots using the server profile provided.

Important! When a blade is inserted into a bay that has a VC Manager profile assigned, the VC
Manager detects the insertion through communications with the Onboard Administrator
and must generate profile instructions for that server before the server is allowed to power
on. If VC Manager is not communicating with the Onboard Administrator at the time the
server is inserted, the Onboard Administrator will continue to deny the server iLO power
request until the VC Manager has updated the profile. If a server is not powering on,
verify that the VC Manager has established communications with that Onboard
Administrator.
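The power-on gating described in this note can be modeled as a small state sketch. The class and method names below are illustrative inventions for teaching purposes, not the actual Onboard Administrator or VC Manager firmware interfaces:

```python
class OnboardAdministrator:
    """Denies iLO power requests until VC Manager has written the bay's profile."""

    def __init__(self):
        self.profile_written = {}  # bay -> True once VC Manager updates it

    def profile_updated(self, bay):
        # VC Manager reports the successful profile write (step 7 above).
        self.profile_written[bay] = True

    def request_power_on(self, bay):
        # The OA keeps denying the request until the profile write is confirmed.
        return bool(self.profile_written.get(bay, False))

oa = OnboardAdministrator()
denied = oa.request_power_on("bay3")   # blade inserted, profile not yet written
oa.profile_updated("bay3")             # VC Manager writes the profile
granted = oa.request_power_on("bay3")  # power request now succeeds
```

This mirrors the troubleshooting advice: if a server will not power on, the missing piece is usually the profile update from VC Manager, not the iLO itself.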


Manage data center changes

Now that the VC Domain has been created with Ethernet networks, Fibre Channel
SANs, and assigned server profiles, you can:
 Replace a failed server without logging in to VC Manager because the server
profile is assigned to the bay
 Copy a server profile from one bay to another
 Change a server's network or SAN connections while the system is running
 Move a profile for a failed server to a spare server
 Assign a profile to an empty server bay for future growth

Virtual Connect – Server profile migration

Virtual Connect can take a server profile from server A and migrate that profile to a
spare server if server A were to fail or go offline.

The profile contains the “personality” of the server, including:
 Virtual Connect MAC addresses
 Virtual Connect Fibre Channel WWNs
 LAN and SAN assignments
 Boot parameters
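Conceptually, the profile is a portable bundle of identity that moves between bays. The sketch below models that idea only; the class, field names, and sample addresses are illustrative and are not VC Manager's internal representation:

```python
from dataclasses import dataclass, field

@dataclass
class ServerProfile:
    """Conceptual model of the 'personality' a Virtual Connect profile carries."""
    name: str
    mac_addresses: list = field(default_factory=list)  # Virtual Connect MACs
    wwns: list = field(default_factory=list)           # Fibre Channel WWNs
    networks: list = field(default_factory=list)       # LAN assignments
    fabrics: list = field(default_factory=list)        # SAN assignments
    boot_params: dict = field(default_factory=dict)

def migrate(profile, bays, source, target):
    """Move a profile between bays; the identity travels with the profile."""
    if bays.get(source) is not profile:
        raise ValueError("profile is not assigned to the source bay")
    bays[source] = None
    bays[target] = profile
    return bays

profile = ServerProfile("web01",
                        mac_addresses=["00-17-A4-77-00-00"],
                        wwns=["50:06:0B:00:00:C2:62:00"],
                        networks=["Prod_VLAN10"], fabrics=["SAN_A"],
                        boot_params={"boot": "SAN", "lun": 1})
bays = {"enc1:bay3": profile, "enc1:bay8": None}
bays = migrate(profile, bays, "enc1:bay3", "enc1:bay8")
```

After the move, the spare bay carries web01's MACs, WWNs, and boot settings, which is why the rest of the network and SAN see no change in identity.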

Server profile migration for a failed server

Server profile migration

After the migration has completed, the spare blade assumes the settings of the failed
blade including the MAC addresses, Fibre Channel WWNs, SAN, and network
connections.

In a boot from SAN situation, the spare blade then boots to the logical unit number
(LUN) that contains the failed server's operating system. In a local boot situation, the
hard drives of the failed server can be brought over to the spare for local booting,
provided the hard drives were not the cause of the failover.

Virtual Connect Manager

Virtual Connect Manager homepage

The VC Manager runs embedded on the VC-Ethernet Module in bay 1 or 2 of the
base enclosure and is accessible through the Onboard Administrator management
interface. The VC Manager connects directly to the active Onboard Administrator
module in the enclosure and has the following functions:
 Manages enclosure connectivity
 Defines available LANs and SANs
 Sets up enclosure connections to the LAN or SAN
 Defines and manages server I/O profiles

The VC Manager contains utilities and a Profile Wizard to develop templates to
create and assign profiles to multiple servers at one time. The I/O profiles include the
physical NIC MAC addresses, Fibre Channel HBA WWNs, and the SAN boot
configurations.

The VC Manager profile summary page includes a view of server status, port, and
network assignments. You can also edit the profile details, reassign the profile, and
examine how HBAs and NICs are connected.


Accessing the Virtual Connect Manager

Accessing VC Manager from Onboard Administrator

Access to the VC Manager is over the same Ethernet connection used to access the
enclosure Onboard Administrator and server blade iLO connections. To access the
VC Manager for the first time, you can either log in using a web browser to the
Onboard Administrator and then select the VC Manager link, or use the dynamic
DNS name printed on the tear-off tag for the VC-Ethernet Module in Interconnect bay
1 (enter the DNS name in the browser address text field).

Optionally, you can set up a static IP address for the VC Manager, which will enable
you to maintain access to the VC Manager in the event that it fails over to the VC-
Ethernet Module in bay 2.

Note
The VC Manager typically runs on the Virtual Connect Ethernet module in bay 1 unless
that module is unavailable, causing a failover to the VC Manager running in bay 2. If you
cannot connect to the VC Manager in Interconnect bay 1, use the Onboard Administrator
to obtain the IP address of the Virtual Connect module in bay 2.



Virtual Connect Manager login page

Log on using the user name (Administrator) and password from the Default Network
Settings toe tag for Interconnect bay 1. After you log in for the first time, the Virtual
Connect Manager Setup Wizard screen displays.

To set up the Virtual Connect domain and network, follow these steps:
 Log in and run the Domain Setup Wizard.
 Import the enclosure. (The user must provide Onboard Administrator login
information to enable enclosure import.)
 Name the Virtual Connect domain.
 Set up a static IP address for the VC Manager (optional).
 Set up local user accounts and privileges.
 Confirm the stacking links provide the needed connectivity and redundancy.
 Launch the Network Setup Wizard.
 Select a MAC address range.
 Confirm the stacking links (if steps 1 and 2 are performed by different
administrators).
 Set up the networks.
 Launch the SAN Setup Wizard.
 Launch the Profile Setup Wizard, depending on the firmware release.


After an enclosure is imported into a Virtual Connect domain, server blades that
have not been assigned a server profile are isolated from all networks to ensure that
only properly configured server blades are attached to data center networks.

A predeployment profile can be defined for each device bay so that the server blade
can be powered on and connected to a deployment network. These profiles can later
be modified or replaced by another server profile.

Virtual Connect Manager home page

This screen provides access for the management of enclosures, servers, and
networking. It also serves as the launch point for the initial setup of VC Manager.

The VC Manager navigation system consists of a tree view on the left side of the
page that lists all of the system devices and available actions. The tree view remains
visible at all times.

The right side of the page displays details for the selected device or activity, which
includes a pull-down menu at the top. To view detailed product information, select
About HP VC Manager from the Help pull-down menu.

Note
The Home Page will look slightly different depending on the firmware revision.


Virtual Connect role-based privileges

Virtual Connect supports four levels of role-based access:
 Domain
   Define local users, set passwords, and set roles
   Name the Virtual Connect domain
   Import enclosures
   SNMP configuration, HP SIM configuration
   Update firmware (Virtual Connect Ethernet and Virtual Connect Fibre
  Channel)
 Networking
   Configure network default settings
   Select the MAC address range to be used by the Virtual Connect domain
   Create/delete/edit networks
   Create/delete/edit shared uplink sets
 Storage (SAN)
   Configure storage-related default settings
   Select the WWN range to be used by the Virtual Connect domain
   Create/delete/edit Fibre Channel SAN fabrics
 Server bay
   Create/edit/delete server Virtual Connect profiles
   Select and use available networks
   Select and use available Fibre Channel fabrics
   Set Fibre Channel SAN boot settings for a server
   Enable/disable Preboot Execution Environment (PXE) on each server NIC

By default, all users have read privileges in all roles (not being in any of the privilege
classes gives read-only access). Each user can have any combination of the four
privileges. The Administrator account is defined by default, and additional local user
accounts can be created.

Note
The VC Manager user account is an internal Onboard Administrator account created and
used by VC Manager to communicate with the Onboard Administrator. This account can
appear in the Onboard Administrator system log and cannot be changed or deleted.
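This privilege model (read access for everyone, writes gated by any combination of the four roles) can be sketched as a short check. The function and role names are illustrative, not the actual VC Manager implementation:

```python
# The four Virtual Connect privilege classes.
ROLES = {"domain", "networking", "storage", "server"}

def can(user_roles, action_role=None):
    """Reads are always allowed; a write needs the matching privilege."""
    if action_role is None:            # a read-only operation
        return True
    return action_role in (set(user_roles) & ROLES)

# A user may hold any combination of the four privileges.
operator = {"networking", "storage"}
read_ok = can(operator)                # read-only access is the default
net_ok = can(operator, "networking")   # may edit networks and uplink sets
domain_ok = can(operator, "domain")    # may NOT import enclosures
```

A user with no roles at all still passes the read check, which matches the "read-only by default" behavior described above.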


Virtual Connect Manager failover

Redundant pair of VC modules

The VC Manager runs as a high-availability pair on Virtual Connect Ethernet
modules in bay 1 and bay 2. The active VC Manager usually runs on bay 1.
Redundancy daemons on modules 1 and 2 determine the active manager.

Heartbeats can be maintained over multiple paths:
 Backplane
 Ethernet link

Each time a configuration changes, it is written to local flash memory and check-
pointed to the standby module (and written to its flash memory). Configurations can
also be backed up to a workstation.

A failover causes a restart of the VC Manager and a restore from the saved
configuration, and requires any web users to log in again.

Note
A single static IP address may be configured for the VC Manager.

Note
The c7000/c3000 enclosure configuration and setup is stored on the Onboard
Administrator (in flash). These include enclosure name, Enclosure Bay IP Address (EBIPA)
settings, power mode, SNMP settings, and so on. If you have a second OA, that
information is also kept there, so that if you lose or replace an OA, you do not lose your
settings. In addition, the OA does keep some VC profile-related signatures. VC Ethernet
Modules store VC-related configurations such as networks and profiles. Other Ethernet
(Cisco, BNT) and Fibre Channel modules store their configurations in their own flash.


Virtual Connect Enterprise Manager

HP Virtual Connect Enterprise Manager (VCEM) is a software application that
centralizes connection management and workload mobility for BladeSystem server
blades that use Virtual Connect to control access to LANs, SANs, and converged
network infrastructures. VCEM helps organizations increase productivity, respond
faster to infrastructure and workload changes, and reduce operating costs. VCEM
seamlessly integrates with existing Virtual Connect infrastructures, and discovers and
aggregates Virtual Connect resources into a central console.

Built on the Virtual Connect architecture integrated into every BladeSystem enclosure,
the VCEM central console enables you to programmatically administer LAN and
SAN address assignments, perform group-based configuration management, and
rapidly deploy, move, and fail over server connections and their workloads for up to
250 Virtual Connect domains (1,000 enclosures and 16,000 servers when used with
Virtual Connect Ethernet enclosure stacking).


The central VCEM address repository is an extension of the HP Systems Insight
Manager (HP SIM) database. This repository provides programmatic administration
of MAC addresses and WWNs to establish server connections to LANs and SANs.
The VCEM repository reduces manual management overhead and eliminates the risk
of address conflicts. Within the VCEM repository, you can use the unique HP defined
addresses, create your own custom ranges, and also establish exclusion zones to
protect existing MAC and WWN assignments. The central VCEM repository supports
128K address ranges for MAC and WWN assignments, for a total of 256K network
addresses per VCEM console.
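Custom ranges with exclusion zones can be modeled conceptually as follows. The code is illustrative only (addresses are treated as plain integers, and the base address is an example value, not a documented VCEM default):

```python
def usable_addresses(start, count, exclusions):
    """Yield addresses from a custom range, skipping excluded zones.

    `exclusions` is a list of (first, last) inclusive integer pairs
    protecting existing MAC/WWN assignments from reuse.
    """
    for addr in range(start, start + count):
        if any(lo <= addr <= hi for lo, hi in exclusions):
            continue
        yield addr

def mac(n):
    """Format a 48-bit integer as a MAC address string."""
    return "-".join(f"{(n >> s) & 0xFF:02X}" for s in range(40, -8, -8))

base = 0x0017A4770000                # example base address
excluded = [(base + 2, base + 3)]    # protect two already-assigned addresses
macs = [mac(a) for a in usable_addresses(base, 6, excluded)]
```

Of a six-address range, the two protected addresses are skipped, so only four are handed out; this is the conflict-avoidance idea behind exclusion zones.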

For more information, visit: http://www.hp.com/go/vcem/

VCEM presents its own dedicated homepage to perform the following core tasks:
 Discover and import existing Virtual Connect domains
 Aggregate individual Virtual Connect address names for LAN and SAN
connectivity into a centrally administered VCEM address repository
 Create Virtual Connect domain groups
 Assign and unassign Virtual Connect domains to Virtual Connect domain groups
 Define server profiles and link to available LAN and SAN resources
 Assign server profiles to BladeSystem enclosures, enclosure bays, and Virtual
Connect domain groups
 Change, reassign, and automatically fail over server profiles to spare servers
 Rapidly install new bare-metal BladeSystem enclosures by assigning to a Virtual
Connect domain group

Additional management tasks are available from VCEM:
 Managing bays — Administrator can power down a server inside a bay, assign a
profile, or designate a spare bay.
 Managing MAC and WWN addresses — Administrator can choose between
VCEM-defined MAC address ranges or user-defined MAC address ranges.
 Working with Logical Serial Numbers — Administrator can use virtual serial
numbers inside server profiles.
 Tracking VCEM job status — The Jobs list provides detailed information about
jobs that have occurred and are related to VCEM.


VCEM compared with VC Manager

Virtual Connect Manager is a web console built into the firmware of Virtual Connect
Ethernet modules, designed to configure and manage a single Virtual Connect
domain of up to 64 servers. This could be a single enclosure, or a multi-enclosure
domain containing up to four physically linked enclosures in the same or adjacent
racks. This option is ideal for environments with up to four enclosures that have no
plans to expand further. VC Manager does not work across multiple domains. It
configures and manages only its local domain.

In contrast, Virtual Connect Enterprise Manager is the primary HP application for
managing smaller or larger infrastructures and groups of Virtual Connect domains
across the data center.

VCEM also enables you to create domain groups that use a master configuration
profile for multiple Virtual Connect domains that connect to the same networks. With
VCEM, administrators can move profiles and server workloads between any
enclosures that belong to the same or different domain group, which could be in the
same rack, across the data center, or even in a different physical location. The
domain group functionality in VCEM also simplifies the addition of new/bare metal
enclosures, helping organizations develop more consistent infrastructure
configurations as the data center expands.

Important! If a customer plans to only use the freely included VC Manager instead of purchasing
VCEM to manage a Virtual Connect Fibre Channel module, then at least one Virtual
Connect Ethernet module must also be installed in the Virtual Connect domain. The reason
for this requirement is that the VC Manager software only runs on a Virtual Connect
Ethernet module.


VCEM licensing

VCEM is licensed per BladeSystem enclosure, with separate options for c3000 and
c7000 enclosures. One VCEM license is required for each enclosure to be managed
in both single and multi-enclosure domain configurations. Licenses are non-
transferable. Full details are contained in the End User License Agreement. Licenses
are additive such that multiple licenses can be combined together for the total
number of BladeSystem enclosure licenses you have purchased.

For each purchased license, a license entitlement certificate is delivered. The license
entitlement certificate contains information needed to redeem license activation keys
online or via fax. This electronic redemption process enables easy license
management and service and support tracking.

For available license options and more licensing information, see the VCEM
QuickSpecs at: http://www.hp.com/go/vcem

VCEM includes one year of 24 x 7 HP Software Technical Support and Update
Service. This service provides access to HP technical resources for assistance in
resolving software implementation or operations problems. The service also provides
access to software updates and reference manuals either in electronic form or on
physical media as they are made available from HP. With this service, customers will
benefit from expedited problem resolution plus proactive notification and delivery of
software updates.

For more information about 24 x 7 HP Software Technical Support and Update Service,
visit: http://www.hp.com/services/insight

Installing VCEM
VCEM can be installed in a variety of configurations that include a physical stand-
alone console, as a plug-in to HP SIM 6.0 or later, and as a virtual machine.
Use the Insight Software DVD to install VCEM. Run the Insight Software Advisor to
test and evaluate the hardware and software configuration before beginning the
installation.
When an upgrade to a new and different central management server (CMS) is
performed, or VCEM is moved to a 64-bit CMS, it may be necessary to migrate old
data using the HP SIM data migration tool. If an upgrade to a new version of VCEM
on the same CMS is performed, data migration with the HP SIM data migration tool
is not necessary.

Note
Installation of VCEM requires Virtual Connect firmware 2.10 or later. For complete
hardware and software minimum requirements and other information, see the HP Systems
Insight Manager Installation and Configuration Guide for Microsoft Windows available
from the HP website.

Typical environments for VCEM
VCEM is designed to scale as the infrastructure grows and simplifies the addition of
new and bare metal enclosures. Small environments with goals to expand should use
VCEM from the beginning to get the most benefit. Ideal environments include:
 Multiple-rack or distributed BladeSystem environments with more than one rack
of enclosures and with plans to expand
 Medium to large BladeSystem environments that use Virtual Connect
 BladeSystem environments that extend to multiple locations
 Organizations that require centralized control of server-to-network connectivity
 Organizations that require rapid server movement between enclosures

Note
For more information about Virtual Connect and Virtual Connect Manager, see
the HP Virtual Connect for c-Class BladeSystem User Guide.


VCEM user interfaces

You can access VCEM through either a graphical user interface (GUI) or a command
line interface (CLI). The VCEM GUI enables you to:
 Manage Virtual Connect domains and domain groups
 Manage server profiles and profile failover
 Perform central address management (MAC, WWN, serial numbers)

The CLI can be used as an alternative method or if no browser is available.
Available operations from the CLI include:
 Perform profile failover on a specified Virtual Connect domain bay server
 List details for a specified VCEM job
 Show CLI usage online help

Using the CLI can be useful in the following scenarios:
 HP management applications, such as HP SIM or Insight Control tools, can
query for information; these tools need to present a complete management view
of BladeSystem enclosures and devices. The CLI is also used by the management
tools to execute provisioning and configuration tasks to devices in the enclosure.
 Users can develop tools that use VCEM functions for data collection and for
executing provisioning and configuration tasks.

The CLI returns a numeric value that indicates success or a particular error or failure.
The CLI also displays an associated error message. A zero return value indicates
success. Values greater than zero indicate an error or failure.
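A calling script can build on that convention by treating zero as success and raising on anything else. In the sketch below, `run_cli` is a stub standing in for an invocation of the real VCEM CLI; its behavior and the command strings are invented for illustration:

```python
def check_vcem_result(code, message=""):
    """Interpret a VCEM CLI return value: zero is success, greater is failure."""
    if code == 0:
        return "success"
    raise RuntimeError(f"VCEM CLI error {code}: {message}")

def run_cli(command):
    """Stub standing in for a subprocess call to the real VCEM CLI.

    Returns (return_code, error_message), mimicking the documented
    convention of zero for success and a value above zero for failure.
    """
    if command.startswith("show"):
        return 0, ""
    return 1, "unrecognized operation"

status = check_vcem_result(*run_cli("show job 42"))
try:
    check_vcem_result(*run_cli("bogus"))
    raised = False
except RuntimeError:
    raised = True
```

Wrapping the return-code check in one place keeps automation scripts from silently ignoring a failed provisioning task.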


VCEM profile failover

Using a spare server with VCEM

System administrators can use Virtual Connect Profile Failover to perform the rapid
and cost-effective recovery of physical servers within the same Virtual Connect
domain group with minimal administrator intervention.

Virtual Connect Profile Failover is a VCEM feature that enables the automated
movement of Virtual Connect server profiles and associated network connections to
customer-defined spare servers in a Virtual Connect domain group. The manual
movement of a Virtual Connect server profile requires the following steps to complete
the operation, but Virtual Connect Profile Failover combines these separate steps into
one seamless task:
1. Power down the original or source server.
2. Select a new target server.
3. Move the Virtual Connect server profile to the target server.
4. Power up the new server.

When selecting a target server from a pool of defined spare systems, Virtual Connect
Profile Failover automatically chooses the same server model as the source server.
The process can be initiated from the VCEM GUI as a one-button operation or from
the CLI. When used with the automatic event handling functionality in HP SIM,
Virtual Connect Profile Failover operations can be automatically triggered by
user-defined events.

Preconditions for a Virtual Connect Profile Failover are:
 Source and designated spare servers must be part of the same Virtual Connect
domain.
 The source and target servers must be configured to boot from SAN.
 The designated spare servers must be powered off.
 A spare server must be the same model as the source server.
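These preconditions map directly onto a spare-selection check. The sketch below is conceptual; the field names and sample server models are illustrative, not VCEM data structures:

```python
def pick_spare(source, spares):
    """Choose a valid spare for `source`, honoring the failover preconditions."""
    if source["boot"] != "SAN":          # source must boot from SAN
        return None
    for spare in spares:
        if (spare["domain_group"] == source["domain_group"]  # same domain group
                and spare["model"] == source["model"]        # same server model
                and spare["boot"] == "SAN"                   # target boots from SAN
                and not spare["powered_on"]):                # spare must be off
            return spare
    return None

source = {"name": "bl460-a", "model": "BL460c", "domain_group": "DG1",
          "boot": "SAN", "powered_on": True}
spares = [
    {"name": "s1", "model": "BL490c", "domain_group": "DG1",
     "boot": "SAN", "powered_on": False},   # wrong model
    {"name": "s2", "model": "BL460c", "domain_group": "DG1",
     "boot": "SAN", "powered_on": True},    # not powered off
    {"name": "s3", "model": "BL460c", "domain_group": "DG1",
     "boot": "SAN", "powered_on": False},   # valid spare
]
chosen = pick_spare(source, spares)
```

Only the third spare satisfies every precondition, which mirrors how the feature automatically restricts its choice to same-model, powered-off spares.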


Learning check
1. List the components of Virtual Connect technology.
.................................................................................................................
.................................................................................................................
.................................................................................................................
.................................................................................................................
7. To install Fibre Channel in a Virtual Connect environment, the enclosure must
have at least one Virtual Connect Ethernet module because the VC Manager
software runs on a processor resident on the Ethernet module.
 True
 False
8. Match each item with its correct description.
a. FL_Port ......... A switch port used for switch-to-
switch connections
b. NL_Port ......... An F_Port that contains arbitrated
loop functions
c. E_Port ......... An N_Port that contains arbitrated
loop support
9. A single user can have any combination of server, network, domain, or storage
privileges.
 True
 False

