Implementing HP BladeSystem Solutions
Student Guide
Volume 1
Rev. 12.31
This is an HP copyrighted work that may not be reproduced without the written
permission of HP. You may not use these materials to deliver training to any
person outside of your organization without the written permission of HP.

Printed in USA

Implementing HP BladeSystem Solutions
Student Guide
July 2012
Contents
Volume 1
Module 1 — Portfolio Introduction
Objectives ...................................................................................................... 1
HP BladeSystem positioning .............................................................................. 2
BladeSystem evolution................................................................................ 3
Transitioning to the ProLiant Gen8 servers .................................................... 4
Key Gen8 technologies ....................................................................... 4
BladeSystem portfolio ....................................................................................... 6
BladeSystem enclosures.............................................................................. 7
BladeSystem c3000 enclosure .............................................................. 7
BladeSystem c7000 enclosure .............................................................. 7
BladeSystem server blades ......................................................................... 8
HP ProLiant Blade Workstation Solutions ...................................................... 9
HP ProLiant WS460c G6 Blade Workstation ........................................ 10
HP ProLiant xw2x220c Blade Workstation ........................................... 10
BladeSystem storage and expansion ........................................................... 11
Rev. 12.31 i
Learning check .............................................................................................. 44
Objectives ...................................................................................................... 1
BladeSystem enclosure family ............................................................................ 2
BladeSystem enclosure features ................................................................... 3
BladeSystem enclosure comparison ............................................................. 4
BladeSystem c7000 enclosure .................................................................... 6
BladeSystem c3000 enclosure .................................................................... 7
BladeSystem c3000 enclosure — Rear view ........................................... 8
BladeSystem enclosure management hardware and software ............................... 9
HP Onboard Administrator ......................................................................... 9
Onboard Administrator module components............................................... 10
The c3000 enclosure ........................................................................ 34
Fan failure rules ................................................................................ 35
Fan quantity versus power.................................................................. 36
Self-sealing BladeSystem enclosure...................................................... 37
Cooling multiple enclosures................................................................ 38
Thermal Logic ......................................................................................... 39
Power Regulator technologies ................................................................... 40
Power Regulator for ProLiant ...............................................................41
Power Regulator for Integrity .............................................................. 42
iLO 4 power management ................................................................. 43
Dynamic Power Saver........................................................................ 47
Dynamic Power Capping ................................................................... 48
Power delivery modes .............................................................................. 49
Integrity BL890c i2 .................................................................................. 10
Learning check ...............................................................................................12
Module 4 — HP BladeSystem Storage and
Expansion Blades
Objectives ...................................................................................................... 1
HP BladeSystem storage and expansion blades ................................................... 2
HP storage blades ..................................................................................... 2
HP D2200sb Storage Blade ................................................................. 3
HP X1800sb G2 Network Storage Blade ............................................... 4
HP X3800sb G2 Network Storage Gateway Blade................................. 5
Direct Connect SAS Storage for HP BladeSystem..................................... 6
Managing HP blade switches ..................................................................... 7
Cisco Catalyst Blade Switch 3020 features .................................................. 8
Catalyst Blade Switch 3020 front bezel ................................................ 9
Cisco Catalyst Blade Switch 3120 features ................................................ 10
Catalyst Blade Switch 3120 front bezel ............................................... 11
HP GbE2c Layer 2/3 Ethernet Blade Switch................................................12
GbE2c Layer 2/3 Ethernet Blade Switch front bezel ..............................13
HP 1:10Gb Ethernet BL-c Switch ................................................................14
1:10Gb Ethernet BL-c Switch front bezel ..............................................15
HP 1Gb Ethernet Pass-Thru Module ............................................................16
HP 10GbE Pass-Thru Module.....................................................................17
HP 10GbE Pass-Thru Module components ............................................18
Learning check .............................................................................................. 19
Objectives ...................................................................................................... 1
Fibre Channel interconnect options .................................................................... 2
Cisco MDS 9124e Fabric Switch for BladeSystem .......................................... 2
Cisco MDS 9124e Fabric Switch features and components ....................... 4
HP IB QDR/EN 10 Gb 2P 544M Mezzanine Adaptor ................................ 23
Learning check .............................................................................................. 24
Module 7 — Configuring Ethernet Connectivity
Options
Objectives ...................................................................................................... 1
Configuring an HP GbE2c Layer 2/3 Ethernet Blade Switch ................................. 2
User, operator, and administrator access rights ............................................. 2
Access-level defaults ............................................................................ 3
Accessing the GbE2c switch ....................................................................... 4
Logging in through the Onboard Administrator ............................................. 5
Configuring redundant switches .................................................................. 6
Administrator ...........................................................................................12
Switch IP configuration ..............................................................................17
Using the CLI Manager-level prompt .....................................................17
Configuring the IP address by using a web browser interface ..................17
Accessing a blade switch from the Onboard Administrator .....................18
Accessing a blade switch through the mini-USB interface (out of band) ... 19
Accessing a blade switch from the Ethernet interface (in band) .............. 19
Assigning an IP address to a blade switch ........................................... 20
IP addressing with multiple VLANs ...................................................... 21
IP Preserve: Retaining VLAN-1 IP addressing across configuration file
downloads ....................................................................................... 22
Learning check .............................................................................................. 23
Options
Objectives ...................................................................................................... 1
Configuring a Brocade 8Gb SAN switch ............................................................ 2
Configuration rules for the 3Gb/s SAS Switch .............................................13
Configuring the 3Gb SAS BL Switch ...........................................................14
Accessing the 3Gb SAS BL Switch .............................................................15
Confirming the firmware version .................................................................15
Learning check ...............................................................................................16
Module 9 — Virtual Connect Installation and
Configuration
Objectives ...................................................................................................... 1
HP Virtual Connect portfolio.............................................................................. 2
Create a VC domain ......................................................................... 22
Virtual Connect multi-enclosure VC domains ......................................... 23
Define Ethernet networks.................................................................... 30
Define Fibre Channel SAN connections ................................................31
Create server profiles ........................................................................ 32
Implementing the server profile ........................................................... 33
Manage data center changes ............................................................ 34
Virtual Connect – Server profile migration .................................................. 35
Server profile migration for a failed server ........................................... 36
Virtual Connect Manager ............................................................................... 37
Accessing the Virtual Connect Manager .................................................... 38
Virtual Connect Manager login page ........................................................ 39
Virtual Connect Manager home page........................................................ 40
Volume 2
Module 10 — Introduction to HP SAN Solutions
Objectives ...................................................................................................... 1
HP MSA2000/P2000 portfolio ......................................................................... 2
P2000 G3 MSA ....................................................................................... 2
Key features.............................................................................................. 5
HP 2000i MSA ......................................................................................... 6
Management tools .................................................................................... 7
EcoStore technology .................................................................................. 7
Active/active controllers ............................................................................. 8
Unified LUN presentation ........................................................................... 8
HP P4000 overview ......................................................................................... 9
P4000 product suite ................................................................................ 10
HP SAN/iQ software ........................................................................ 10
P4000 centralized management console ............................................. 10
Storage software ........................................................................................... 20
HP P4000 snapshots ............................................................................... 20
HP P4000 SAN/iQ SmartClone ............................................................... 21
HP P4000 SAN Remote Copy .................................................................. 23
Learning check .............................................................................................. 24
Objectives ...................................................................................................... 1
How does virtualization work? .......................................................................... 2
What is a virtual machine? ........................................................................ 3
BladeSystem
Objectives ...................................................................................................... 1
Placement rules and installation guidelines.......................................................... 2
c7000 enclosure zoning ............................................................................ 2
c7000 enclosure placement rules—Half-height server blades .......................... 4
c7000 enclosure placement rules—Full-height server blades ........................... 5
c7000 interconnect bays ............................................................................ 6
c3000 enclosure zoning ............................................................................ 7
c3000 enclosure placement rules—Half-height server blades .......................... 8
c3000 enclosure placement rules—Full-height server blades ........................... 9
c3000 interconnect bays ......................................................................... 10
Installation rules for partner blades ............................................................. 11
Enclosure settings .....................................................................................41
Enclosure information .............................................................................. 42
Verifying the firmware version ................................................................... 43
Rebooting the Onboard Administrator ....................................................... 44
Blade and port information ...................................................................... 45
Blade information ............................................................................. 46
Port Info view from Insight Display ....................................................... 47
USB Menu .............................................................................................. 48
iLO Management Engine ................................................................................ 49
Configuring iLO ...................................................................................... 49
iLO RBSU ......................................................................................... 49
Browser-based setup.......................................................................... 49
HPONCFG ...................................................................................... 50
Insight Control virtual machine management .......................................... 8
Insight Control performance management ............................................ 10
Insight Control remote management ..................................... 11
Insight Control power management .....................................12
Hardware and software requirements .........................................................14
Insight Software server hardware requirements ......................................14
Database..........................................................................................16
Web browser ....................................................................................16
Virtualization platform ........................................................................16
HP Systems Insight Manager ............................................................................17
HP SIM overview ......................................................................................17
HP SIM architecture ..................................................................................18
Central Management Server ...............................................................18
Deployment Server database.............................................................. 10
PXE server ......................................................................................... 11
Deployment Share .............................................................................12
DHCP server .....................................................................................12
Client components .............................................................................13
Scripted and imaged installation ......................................................................15
Jobs and tasks .........................................................................................15
Jobs .................................................................................................15
Tasks ................................................................................................15
Jobs and tasks working together ................................................................15
Building jobs .....................................................................................15
Scheduling jobs .......................................................................................16
Job categories .........................................................................................17
Software .......................................................................................... 21
Scripted deployment ................................................................................ 21
Windows configuration file ................................................................ 22
Configuration flow for scripting........................................................... 23
Imaging ................................................................................................. 24
Advantages and disadvantages.......................................................... 25
Imaging preparation ......................................................................... 25
HP UPS features ........................................................................................ 5
UPS options .............................................................................................. 6
Enhanced battery management .................................................................. 6
HP rack and power management software ................................................... 8
HP Power Manager ............................................................................. 8
HP Power Protector UPS Management Software ...................................... 8
Rack and Power Manager ................................................................... 9
HP UPS Management Module .................................................................. 10
HP Modular Cooling System G2 ................................................................12
Data Protection software ..................................................................................14
HP Data Protector.....................................................................................14
Key benefits ......................................................................................14
Key features ......................................................................................15
Extended support duration ................................................................. 24
General best practices ............................................................................. 25
HP Services for BladeSystem ........................................................................... 26
Important safety information ..................................................................... 26
Safety symbols ................................................................................. 26
Server warnings and cautions ............................................................ 27
Preventing electrostatic discharge ........................................................ 28
Grounding methods to prevent electrostatic discharge ........................... 28
Troubleshooting flowcharts ....................................................................... 29
Example of troubleshooting power-on problems .......................................... 29
Implementing preventive measures ............................................................. 30
Learning check ...............................................................................................31
Objectives
After completing this module, you should be able to:
Describe the HP BladeSystem positioning
Identify the components of the BladeSystem portfolio
List the key HP BladeSystem Generation 8 (Gen8) server technologies
Name the BladeSystem management and deployment tools
HP BladeSystem positioning

Three major features of HP BladeSystem

BladeSystem solutions provide complete infrastructures that include servers,
storage, networking, and power to facilitate data center integration and
transformation. They enable data center customers to respond more quickly and
effectively to changing business conditions, lighten the load on the IT staff,
and cut total ownership costs.

BladeSystem has kept pace with the changing needs of data center customers.
These business requirements include:

Reduce connectivity complexity and costs
Lower purchase and operations costs when adding or replacing compute/storage
capacity without disrupting local area network (LAN) and storage area network
(SAN) domains
Allow faster modification or addition of applications
Support grid computing and service-oriented architecture (SOA)
Support third-party component integration with well-defined interfaces, such as
Ethernet NICs/switches, Fibre Channel host bus adapters (HBAs)/switches, and
InfiniBand host channel adapters (HCAs)/switches
HP BladeSystem Portfolio Introduction
BladeSystem evolution
Many changes have been made since BladeSystem was first introduced to the
market. The BladeSystem infrastructure was designed to reduce the number of
cables, centralize management, and reduce the space occupied by servers. All
these features were designed to reduce the operational and maintenance costs of
the server environment.

In 2007, HP introduced Virtual Connect, which simplified connection management
(both Ethernet and Fibre Channel). Using Virtual Connect, administrators can
design networks and SANs on a virtual level. This means that cabling is done
only once, and all other changes are made at the Virtual Connect level. Virtual
Connect can replace the physical MAC address and WWN of a server blade with
virtual ones; the server is visible to the external world using these virtual
addresses. When a network card or Fibre Channel card has to be replaced,
administrators have nothing else to change in the configuration because the new
MAC addresses and WWNs are overwritten with the virtual addresses previously
assigned to that blade.
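The address-virtualization idea described above can be sketched as a toy model.
This is only an illustration of the concept, not HP's implementation; the class
names and the example MAC/WWN values are invented for the sketch:

```python
# Illustrative model: a Virtual Connect-style server profile owns the virtual
# addresses, so swapping the physical hardware does not change what the
# external LAN/SAN sees.

class ServerProfile:
    """Holds the virtual MAC/WWN that stay with the enclosure bay."""
    def __init__(self, virtual_mac, virtual_wwn):
        self.virtual_mac = virtual_mac
        self.virtual_wwn = virtual_wwn

class ServerBlade:
    """A physical blade with factory-burned addresses."""
    def __init__(self, physical_mac, physical_wwn):
        self.physical_mac = physical_mac
        self.physical_wwn = physical_wwn

def apply_profile(profile, blade):
    # The addresses presented to the network come from the profile,
    # not from the blade's factory-burned values.
    return {"mac": profile.virtual_mac, "wwn": profile.virtual_wwn}

profile = ServerProfile("02:00:0A:00:00:01", "50:06:0B:00:00:C2:62:00")
old_blade = ServerBlade("9C:8E:99:11:22:33", "10:00:38:63:BB:44:55:66")
new_blade = ServerBlade("9C:8E:99:AA:BB:CC", "10:00:38:63:BB:DD:EE:FF")

# Replace the blade: the presented addresses are unchanged.
assert apply_profile(profile, old_blade) == apply_profile(profile, new_blade)
```

Because the profile, not the hardware, owns the addresses, a card replacement
requires no reconfiguration on the LAN or SAN side.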
In 2008, HP announced Virtual Connect Flex-10, which has all the features of
the original Virtual Connect but presents one 10Gb network port as four
independent network ports. Administrators can assign bandwidth to each port
from 100Mb to 10Gb. ProLiant G6 servers are equipped with a dual-port Flex-10
network card. As a result, customers using Virtual Connect Flex-10 have eight
NICs with flexible speeds integrated into a half-height server blade instead of
two 1Gb ports.
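The partitioning rule described above can be modeled as a small validity check.
This is a sketch under assumed constraints (four FlexNICs per physical port, a
100Mb minimum per FlexNIC, and a shared 10Gb ceiling), not the actual Flex-10
firmware behavior:

```python
# Illustrative sketch: one 10Gb physical port is presented as four FlexNICs;
# each FlexNIC gets an assigned bandwidth, and the four allocations cannot
# exceed the 10Gb the physical port provides.

def validate_flexnic_allocation(allocations_gb):
    """Return True if a per-port FlexNIC bandwidth split is plausible."""
    if len(allocations_gb) != 4:
        return False                      # Flex-10 exposes four FlexNICs per port
    if any(not (0.1 <= a <= 10.0) for a in allocations_gb):
        return False                      # each FlexNIC: 100Mb up to 10Gb
    return sum(allocations_gb) <= 10.0    # cannot oversubscribe the 10Gb port

print(validate_flexnic_allocation([4.0, 3.0, 2.0, 1.0]))   # True
print(validate_flexnic_allocation([6.0, 4.0, 2.0, 1.0]))   # False: 13Gb total
```

The point of the check is the shared ceiling: the four FlexNICs carve up one
physical 10Gb link rather than each getting a dedicated 10Gb.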
In 2010, HP announced Virtual Connect FlexFabric. This technology was designed
to converge LAN and SAN connections into a single interconnect module.

In 2012, HP introduced a refresh to its ProLiant server blade line of products.
Updates to the ProLiant Gen8 server blades include a faster memory chipset, a
lower-voltage memory option, and HP SmartMemory for enhanced support through HP
Active Health.
Key Gen8 server technology includes:
Multicore processors — Multi-core Intel Xeon, AMD Opteron, or Intel Itanium 2
processors enable greater system scalability. Customers benefit from software
applications that are developed to take advantage of multi-core processor
technology.
HP SmartMemory — Lower voltage DIMMs allow faster operation speeds and
greater DIMM counts. SmartMemory enhances memory performance and can
be managed through the HP Active Health system. SmartMemory verifies that the
memory has been tested and performance-tuned specifically for HP ProLiant
servers. Types of HP SmartMemory include:
Registered DIMMs (RDIMM)
HP iLO – The core foundation for the iLO Management Engine. iLO
management simplifies server setup, health monitoring, and power
and thermal control. iLO enables you to access, deploy, and manage
servers anytime from anywhere.
HP Agentless Management – Begins to work as soon as the server has
power and data connections. The base hardware monitoring and alerting
capability is built into the iLO chipset.
Multifunction network interface cards (NICs) — HP multifunction NICs provide a
high-performance network interface with support for TCP/IP Offload Engine
(TOE), iSCSI, and Remote Direct Memory Access (RDMA) over a single network
connection. Previously, the typical server environment required separate
connectivity products for networking, storage, interconnects, and
infrastructure management. HP multifunction NICs present a single connection
supporting multiple functions, enabling you to manage an entire infrastructure
as a single, unified fabric. They provide high network performance with upgrade
options to enhance memory and storage utilization. Multifunction NICs support
multiple fabric protocols, including Ethernet, iSCSI, and Fibre Channel.
Note
The NICs in Integrity BL860c/BL870c server blades are not multifunction.
Internal card ports and the TPM provide expansion and security options in Gen8
server blades.
Power Regulator for ProLiant and Dynamic Power Capping — Power Regulator
and Dynamic Power Capping double the capacity of servers in the data center
through dynamic control of power consumption.
BladeSystem portfolio
The BladeSystem portfolio offers multiple server options, different enclosures for server
blades, and a wide choice of interconnect options including Fibre Channel, Ethernet,
SAS, and InfiniBand options.
The BladeSystem portfolio consists of server blades, blade workstations,
interconnects, and multiple storage options such as tape drives and storage blades.
The two BladeSystem enclosures can accommodate any type of server blade that is
available on the market. Any of the server blades can be enhanced with a variety of
mezzanine cards including Ethernet, SAS, Fibre Channel, and InfiniBand options. For
each type of connection, HP offers appropriate interconnect modules including
BladeSystem enclosures

BladeSystem c3000 enclosure

The BladeSystem c3000 enclosure can scale from a single enclosure holding up to
eight blades, to a rack containing seven enclosures holding up to 56 blades.

BladeSystem c7000 enclosure

The BladeSystem c7000 enclosure holds up to 16 server and storage blades plus
redundant network and storage switches. It includes a shared, multi-terabit
high-speed midplane for wire-once connectivity of server blades to network and
shared
BladeSystem server blades

BladeSystem server blades portfolio

BladeSystem server blades are delivered in two form factors: half-height and
full-height. Server blades can be installed (and mixed with other server
blades) in c3000 and c7000 enclosures. Different series are designed for
different usage models.

All models can be categorized into four groups:

2xx series – High-density, low-cost servers optimized for high-performance
computing (HPC) clusters

processing

HP has server blades that meet customer needs, from a small business to the
largest enterprise firm. ProLiant server blades support the latest AMD Opteron
and Intel Xeon processors and a wide variety of I/O options. Integrity server
blades feature Intel Itanium processors. HP server blades also feature:

Virtual Connect technology
A variety of network interconnect alternatives
Integrated Lights-Out (iLO) 4 (Gen8 servers)
Multiple redundant features
Embedded RAID controllers
HP ProLiant Blade Workstation Solutions

With an HP Blade Workstation Solution, the computing power, in the form of
blade workstations, is moved to the data center where the workstations can be
more easily, securely, and inexpensively managed.

The HP Blade Workstation Solution consists of three primary components:

ProLiant xw460c Blade Workstation or ProLiant xw2x220c Blade Workstation
(based on ProLiant server blade architecture)
The client computer (the HP Compaq t5730 Thin Client is shown in the graphic;
an HP dc73 Blade Workstation Client is also supported)
HP Remote Graphics Software (HP RGS)

Blade workstations can be installed in c3000 or c7000 enclosures. Other
positioning rules and configurations are the same as for server blades,
including management procedures.
HP ProLiant WS460c G6 Blade Workstation

The HP ProLiant WS460c G6 Workstation Blade is ideal for desktop power users with computing environments that require the use of high-performance graphics applications from remote locations. The small form factor of the HP ProLiant xw460c Blade Workstation allows installation of up to 64 blade workstations in a single 42U rack.

ProLiant WS460c G6 Blade Workstations support the following operating systems:
Microsoft Windows
Red Hat Enterprise Linux (RHEL)

The optional HP Graphics Expansion Blade module is an expansion blade that attaches to the top of the ProLiant xw460c blade and enables use of a full-size standard PCIe graphics card, such as the NVIDIA Quadro FX 5600.
HP ProLiant xw2x220c Blade Workstation

The xw2x220c packs workstations densely into a c7000 enclosure, allowing up to 128 workstations in a standard 42U rack.

A single HP xw2x220c is essentially two workstations in terms of software licensing. If you purchase a Windows operating system with the blade workstation, you will be purchasing two licenses and will receive two Certificate of Authenticity stickers. All software, both HP and third-party, treats one HP xw2x220c Blade Workstation as two systems.
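The density figure above can be sanity-checked with simple arithmetic. This is an illustrative calculation, not HP data: it assumes the xw2x220c is a half-height blade carrying two workstation nodes, that a c7000 enclosure offers 16 half-height bays, and that a 10U c7000 fits four times into a 42U rack.

```python
# Hypothetical sanity check of blade-workstation density.
# Assumptions (not stated in this section): 2 nodes per xw2x220c blade,
# 16 half-height bays per c7000, 10U per c7000, 42U rack.
NODES_PER_BLADE = 2
BAYS_PER_C7000 = 16
RACK_U, C7000_U = 42, 10

workstations_per_enclosure = BAYS_PER_C7000 * NODES_PER_BLADE
enclosures_per_rack = RACK_U // C7000_U
print(workstations_per_enclosure)                        # 32
print(workstations_per_enclosure * enclosures_per_rack)  # 128, matching the text
```

Under these assumptions the 128-workstations-per-rack figure falls out directly.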
BladeSystem storage and expansion

BladeSystem is built not only on servers, but also on storage and expansion modules. BladeSystem can also consolidate other network equipment, including storage and backup options.

HP storage blades

D2200sb Storage Blade

The HP storage blades include:
HP Storage D2200sb Storage Blade
HP Storage X3800sb G2 Network Storage Gateway Blade
HP Storage X1800sb G2 Network Storage Blade
HP Storage IO Accelerator
Direct Connect SAS Storage for HP BladeSystem
Ultrium Tape Blades

Ultrium SB3000c Tape Blade

The HP Storage Ultrium Tape Blades offer a complete data protection, disaster recovery, and archiving solution for BladeSystem customers who need an integrated data protection solution. These half-height tape blades provide direct-attach data protection for the adjacent server and network backup protection for all data residing within the enclosure.

Each HP Storage Ultrium Tape Blade solution ships standard with HP Data Protector Express Software Single Server Edition.
PCI Expansion Blade

The HP BladeSystem PCI Expansion Blade provides PCI card expansion slots to an adjacent server blade. This blade expansion unit uses the midplane to pass standard PCI signals between adjacent enclosure bays, allowing a server blade to add off-the-shelf PCI-X or PCIe cards. Customers need one PCI Expansion Blade for each server blade needing PCI card expansion. Any PCI card from third-party manufacturers that works in HP ProLiant ML and HP ProLiant DL servers should work in this PCI Expansion Blade.
Ethernet interconnects

HP 10GbE Pass-Thru Module

To connect embedded and added network cards to the production network, HP provides a number of Ethernet interconnects for BladeSystem.

Ethernet interconnects allow administrators to connect server blades in a variety of ways. Most interconnects reduce cabling, with internal downlinks to individual server blades and consolidated uplinks.

The HP portfolio of interconnects includes:
HP 10Gb Pass-Thru Module
HP GbE2c switch and HP GbE2c Layer 2/3
Cisco Catalyst 3020 Blade Switch and Cisco Catalyst Blade Switch 3120G/X
HP 1:10Gb Ethernet switch
HP ProCurve 6120XG
HP ProCurve 6120G/XG
Ethernet mezzanine cards

HP NC360m Dual Port 1GbE BL-c Adapter

Mezzanine cards are used to add more network connections to a server blade. The current portfolio of HP Ethernet mezzanine cards includes:
HP NC325m PCI Express Quad Port Gigabit Server Adapter
HP NC326m PCI Express Dual Port 1Gb Server Adapter
HP NC360m Dual Port 1GbE BL-c Adapter
HP NC364m Quad Port 1GbE BL-c Adapter
HP NC382m Dual Port 1GbE Multifunction BL-c Adapter
HP NC522m Dual Port Flex-10 10GbE Multifunction BL-c Adapter
HP 530m Dual Port Flex-10 10GbE Ethernet Adapter
HP NC532m Dual Port Flex-10 10GbE Multifunction BL-c Adapter
HP NC542m Dual Port Flex-10 10GbE Multifunction BL-c Adapter
HP NC550m Dual Port Flex-10 10GbE Multifunction BL-c Adapter
HP NC552m Dual Port Flex-10 10GbE Multifunction BL-c Adapter
HP 554FLB Dual Port FlexFabric 10GbE Adapter
HP 554M Dual Port FlexFabric 10Gb Adapter
HP 10GbE Dual Port Mezzanine Adapter

Important! You must install an appropriate interconnect for a mezzanine card.
Storage interconnects

To connect server blades to external SAN or other storage solutions, specific storage interconnects must be used. HP offers a full portfolio of such devices, including Cisco and Brocade Fibre Channel switches.

The HP storage interconnects include:
Brocade 8Gb SAN switch
Cisco MDS 9124e
HP InfiniBand switch
3Gb SAS switch
Storage mezzanine cards

HP Smart Array P700m Controller

Storage mezzanine cards are used to connect server blades to external SANs. HP offers hardware iSCSI controllers in a mezzanine form factor and the P700m Smart Array Controller to connect an MDS600 to the enclosure. 3Gb SAS switches are required to use the P700m and external storage devices.

The current portfolio includes:
Brocade 804 8Gb FC HBA for HP c-Class BladeSystem
Emulex LPe1105-HP 4Gb FC HBA for HP c-Class BladeSystem
Integrity NonStop BladeSystem

The Integrity NonStop BladeSystem offers double the performance and improved response time using multicore and storage subsystem technology (when compared to the NonStop NS16000 in HP labs). The cost per transaction is cut in half, and response time and throughput are improved with standards-based IP communications.

The system leverages Intel improvements in chip-level data integrity and also prevents end-to-end data corruption (with Fletcher checksum).

Better performance, lower cost per transaction, and improved scalability make the NonStop BladeSystem ideal for increasing transaction volumes in finance, healthcare, telecommunications, and other applications.

BladeSystem uses NonStop Multi-core Architecture (NSMA)—a performance-oriented architecture that runs relational database and transaction processing software. In addition, the NonStop operating system, named the J-series, has been integrated with and customized for use in a multi-core architecture environment.

Together, the NSMA and NonStop operating system J-series help you achieve double the performance of other Integrity NonStop NS-series systems. To achieve such high levels of performance, both cores in a dual-core Integrity logical processor are deployed.

The Integrity NonStop BladeSystem uses a novel I/O infrastructure with a standard SAS storage adapter called the Cluster I/O Module (Storage CLIM) and a standard Ethernet controller called the IP Cluster I/O Module (IP CLIM). The Storage CLIM supports more storage capacity at a lower cost, provides fault tolerance, and delivers improved performance.

Integrity NonStop BladeSystem NB5000c — The NB5000c features Itanium 9100 series dual-core 1.66 GHz processors with 18 MB L3 cache.

Integrity NonStop BladeSystem NB54000c — The NB54000c features Itanium processors with improved scalability and latency. It also includes an improved I/O offload engine (incorporating dual CLIM OS disks) with a SAS 2.0 storage subsystem that is aligned with current industry advancements in disk technology.
Integrity Superdome 2

The Integrity Superdome 2 represents a category of modular, mission-critical systems that scale up, out, and within to consolidate all tiers of critical applications on a common platform. Designed around the BladeSystem architecture for the Converged Infrastructure, the Superdome 2 uses modular building blocks that enable customers to "pay as they grow" from mid-range to high-end. The modular design, supporting up to 1,500 nodes, leverages standard eight-socket and 16-socket building blocks, and is managed from a single console.

The Superdome 2 uses a 19-inch standard rack and features a bladed design.

Some of the business needs that the Superdome 2 addresses include:
Meets researchers' needs for a high-performance computing environment
Can accommodate peak workload requirements
Provides an always-on infrastructure without resorting to redundant and failover configurations
Reduces the number of software licenses required to do business
Reduces the cost and complexity of the infrastructure
Positions the company to rapidly accommodate and leverage dynamic market conditions
Provides the agility that the existing mainframe environment cannot offer
Virtual Connect technology

Virtual Connect mapping concept

Virtual Connect is a form of I/O virtualization. It puts an abstraction layer between the servers and the external networks so that the LAN and storage area network (SAN) see a pool of servers rather than individual servers.

After the LAN and SAN connections are made to the pool of servers, the server administrator uses a VC Manager user interface to create an I/O connection profile for each server. Instead of using the default Media Access Control (MAC) addresses for all NICs and default World Wide Names (WWNs) for all host bus adapters (HBAs), the VC Manager creates bay-specific I/O profiles and assigns unique MAC addresses and WWNs to them. Network and storage administrators can establish all LAN and SAN connections once during deployment and need not make connection changes later if servers are changed. When servers are deployed, added, or changed, Virtual Connect keeps the I/O profile for that LAN and SAN connection constant.
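The bay-centric profile idea can be sketched in a few lines of code. This is a minimal illustration of the concept, not VC Manager's implementation: the class, the address-pool ranges, and the WWN format are invented for the example. The point is that the identity lives with the enclosure bay, so swapping the physical blade leaves the LAN/SAN view unchanged.

```python
from dataclasses import dataclass, field

def mac_pool(base=0x02_00_00_00_00_00, count=16):
    """Yield locally administered MAC addresses (illustrative pool)."""
    for i in range(count):
        v = base + i
        yield ":".join(f"{(v >> s) & 0xFF:02x}" for s in range(40, -1, -8))

@dataclass
class IOProfile:
    bay: int                       # the enclosure bay owns the identity
    macs: list = field(default_factory=list)
    wwns: list = field(default_factory=list)

def build_profiles(num_bays, nics_per_bay=2):
    pool = mac_pool(count=num_bays * nics_per_bay)
    profiles = []
    for bay in range(1, num_bays + 1):
        macs = [next(pool) for _ in range(nics_per_bay)]
        # WWNs shown as placeholders derived from the bay number
        wwns = [f"50:06:0b:00:00:c2:62:{bay:02x}"]
        profiles.append(IOProfile(bay, macs, wwns))
    return profiles

profiles = build_profiles(4)
# The profile for bay 1 stays put even if the blade in bay 1 is replaced
print(profiles[0].bay, profiles[0].macs)
```

A replacement blade inserted into bay 1 would simply inherit `profiles[0]`, which is the behavior the paragraph above describes.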
Virtual Connect FlexFabric

FlexFabric portfolio

Fibre Channel over Ethernet (FCoE) maps Fibre Channel natively over Ethernet while being independent of the Ethernet forwarding scheme. The FCoE protocol specification replaces the FC0 and FC1 layers of the Fibre Channel stack with Ethernet. By retaining the native Fibre Channel constructs, FCoE allows a seamless integration with existing Fibre Channel networks and management software.

Computers connect to FCoE with Converged Network Adapters (CNAs), which contain both Fibre Channel HBA and Ethernet NIC functionality on the same adapter.

VC FlexFabric:
Connects data, Fibre Channel, and iSCSI
Works with existing LAN and SAN

Virtual Connect FlexFabric features include:
Embedded dual-port 10Gb Converged Network Adapter (CNA) with iSCSI/FCoE on ProLiant G7 server blades
Eight connections on the system board
Emulex-based CNA
Flex-10 LAN/Accelerated iSCSI/FCoE
Virtual Connect Flex-10 technology

Flex-10 technology comprises two components:
HP VC Flex-10 10Gb Ethernet module
10Gb Flex-10 server NICs

The HP VC Flex-10 10Gb Ethernet module:
Manages the server FlexNIC connections to the data center network. Each FlexNIC is part of a server profile.
Includes:
Single-wide form factor
Full-duplex 240Gb/s bridging fabric with nonblocking architecture
Sixteen internal-facing 10GBASE-KR Ethernet ports that connect to the system board NIC in each device bay, providing support for up to eight FlexNICs per server blade

Note
Because of a hardware limitation, the Broadcom 10Gb devices do not support 9KB jumbo frames—4KB is the largest jumbo frame size.
How Flex-10 works

VC Flex-10 device discovery

The operating system discovers up to four PCI functions per Flex-10 port, each with an individual send/receive queue.
Flex-10 configuration before boot

For each FlexNIC, the VC profile configures the:
Bandwidth (from 0.1 to 10Gb/s)
Link state
MAC address
Each Flex-10 network card can be mapped to any Ethernet network defined in Virtual Connect. The cards function as completely independent devices.
NIC configuration

Eight FlexNICs share two 10Gb pipes, and you can individually assign bandwidth per FlexNIC from 0.1Gb/s to 10Gb/s. The minimum bandwidth is 100Mb/s. You cannot have a FlexNIC without an assigned bandwidth.
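The assignment rules just described can be expressed as a small validation routine. This is a toy illustration of the stated constraints, not HP code; the function name and structure are invented for the example.

```python
# Toy check of the Flex-10 assignment rules described above: every FlexNIC
# must carry an explicit bandwidth between 0.1 and 10 Gb/s, and the FlexNICs
# sharing one 10Gb physical port cannot be assigned more than the pipe
# provides. Invented helper, not HP software.

PIPE_GBPS = 10.0
MIN_GBPS = 0.1            # 100 Mb/s minimum per FlexNIC
FLEXNICS_PER_PIPE = 4

def validate_pipe(assignments):
    """assignments: list of Gb/s values for the FlexNICs on one 10Gb port."""
    if len(assignments) > FLEXNICS_PER_PIPE:
        raise ValueError("a 10Gb port carries at most four FlexNICs")
    for bw in assignments:
        if not (MIN_GBPS <= bw <= PIPE_GBPS):
            raise ValueError(f"{bw} Gb/s is outside the 0.1-10 Gb/s range")
    if sum(assignments) > PIPE_GBPS:
        raise ValueError("assignments oversubscribe the 10Gb pipe")
    return True

print(validate_pipe([4.0, 4.0, 1.0, 1.0]))   # a valid split of one pipe
```

A FlexNIC with no bandwidth, or a set of assignments totaling more than 10Gb/s on one pipe, fails the check, mirroring the rules in the text.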
Bandwidth and network allocation screen

Bandwidth is programmed through the VC Manager.
Every connection gets a minimum of 100Mb/s.
When more bandwidth is requested than the pipe provides, those connections get a proportional piece of the pipe.
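A rough sketch of the proportional-share behavior described above, under the stated rules (100Mb/s guaranteed minimum, proportional scaling when a 10Gb pipe is oversubscribed). The helper is invented for illustration and is not the VC Manager algorithm.

```python
# Illustrative proportional allocation for one 10Gb pipe. Real VC Manager
# behavior may differ in detail; this only demonstrates the stated rules.
PIPE_MBPS = 10_000
MIN_MBPS = 100

def allocate(requests_mbps):
    """Scale requested bandwidths down proportionally to fit the pipe."""
    total = sum(requests_mbps)
    if total <= PIPE_MBPS:
        return list(requests_mbps)        # everything fits as requested
    scale = PIPE_MBPS / total
    # Proportional share, but never below the guaranteed minimum
    return [max(MIN_MBPS, round(r * scale)) for r in requests_mbps]

print(allocate([8000, 8000, 4000]))  # 20Gb requested on a 10Gb pipe
```

With 20Gb/s requested on a 10Gb/s pipe, each connection receives half of its request, which is the proportional split the screen displays.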
Virtual Connect modules

The Virtual Connect Ethernet Module is a blade interconnect that:
Simplifies server connections by cleanly separating the server enclosure from the LAN
Streamlines networks by reducing cables without adding switches to manage
Allows technicians to change servers in just minutes, not days

HP Virtual Connect offers converged LAN and storage connectivity. Flex-10 networking simplifies data connections and consumes the least amount of power. HP continues to expand this technology and its capabilities across the ProLiant, Integrity, and Storage product lines. Virtual Connect can simplify and converge your server edge connections, integrate into any standards-based networking infrastructure, and reduce complexity while cutting costs.

The HP Virtual Connect modules include:
HP 1/10Gb VC Ethernet
HP 1/10Gb-F VC Ethernet
HP Virtual Connect FlexFabric
HP Virtual Connect Flex-10 10Gb Ethernet
HP Virtual Connect 8Gb 20-port Fibre Channel Module
HP Virtual Connect 8Gb 24-port Fibre Channel Module

Important! To install Fibre Channel in a Virtual Connect environment, the enclosure must have at least one Virtual Connect Ethernet module, because the VC Manager software runs on a processor resident on the Ethernet module.
Virtual Connect environment — Three key components

Three key components of the VC environment

Virtual Connect Fibre Channel module:
Uses N_Port ID virtualization (NPIV) on Fibre Channel uplinks
Connects the enclosure to Brocade, Cisco, McDATA, or QLogic data center Fibre Channel switches
Displays as a set of HBA ports to external Fibre Channel switches

Virtual Connect Manager (embedded):
Manages server connections to the data center without impacting the LAN or SAN
Moves/upgrades/changes servers without impacting the LAN or SAN

Virtual Connect FlexFabric does not require VC-FC.
HP BladeSystem 10Gb KR Ethernet

KR Ethernet connection

KR is the current IEEE 10Gb standard
One-lane technology
One transmit pair
One receive pair
Available as LAN on motherboard (LOM) and as a dual-port mezzanine
Auto-sensing 1Gb/10Gb

Compatibility notes:
Not compatible with XAUI-based c-Class 10Gb switches
Compatible with these interconnects:
HP 1Gb Ethernet switches and pass-thru
Cisco 1Gb Ethernet switches
HP Advanced Error Detection Technology
Early Video Progress Indicators
Early Fault Detection Messaging
System ROM can detect third-party DIMMs
Active Health System Log Support
Error Fault Logging without Health Driver
Onboard Administrator modules

c7000 Onboard Administrator with KVM

c3000 enclosure with OA tray marked

With the Onboard Administrator with KVM for c7000, you can directly access the Onboard Administrator and server video from the VGA connections on the rear Onboard Administrator.

The rear of each module has an LED (blue UID) that can be enabled (locally and remotely) and used to identify the enclosure from the back of the rack.

The Onboard Administrator features enclosure-resident management capability and is required for electronic keying configuration. It performs initial configuration steps for the enclosure, enables run-time management and configuration of the enclosure components, and informs users of problems within the enclosure through email, SNMP, or the Insight Display.

The Onboard Administrator monitors and manages elements of the enclosure such as shared power, shared cooling, I/O fabric, and iLO.

The Onboard Administrator can be managed locally, remotely, and through HP SIM tools. The Onboard Administrator also provides local and remote management capability through the Insight Display and browser access.
Insight Display

Insight Display view

The BladeSystem Insight Display panel is designed for configuring and troubleshooting while standing next to the enclosure in a rack. It provides a quick visual view of enclosure settings and at-a-glance health status. Green indicates that everything in the enclosure is properly configured and running within specification.

Main Menu
From the Insight Display Main Menu you can navigate to the main submenus. For example, if you want to look at the enclosure settings, press the Down button to move to the next menu item. The Main Menu items include:
Health Summary
Enclosure Settings
Enclosure Info
Blade or Port Info
Turn Enclosure UID on
View User Note
Chat Mode
USB Menu

The Enclosure Settings submenu includes items such as:
Rack Name
Insight Display Lockout PIN#
HP SmartMemory (Gen8 DIMMs) will include a special identifier
System ROM can detect non-HP DIMMs
Active Health System Log Support
Error Fault Logging without Health Driver

The hardware monitoring and alerting capability is built in to the system. It starts working as soon as a power cord and an Ethernet cable are connected to the server.

The iLO management processor is embedded on the system board and ships standard in every ProLiant Gen8 server, including the ProLiant BL, DL, ML, and SL series. It is the core foundation of the iLO Management Engine, which is a set of embedded management features that support the complete lifecycle of the individual server, from initial deployment through ongoing management to service alerting and remote support. The iLO Management Engine enables you to access, deploy, and manage a server anytime from anywhere with a smartphone device.
HP Insight Control

Delivered on DVD media, Insight Control uses an integrated installer to deploy and configure HP Systems Insight Manager (HP SIM) and essential infrastructure management software rapidly and consistently, reducing manual installation procedures and speeding time to production. These solutions deliver complete lifecycle management for HP ProLiant and BladeSystem infrastructure. HP Insight Control brings a single, consistent management environment for rapid deployment of the operating system and hardware configuration.

HP Insight Control also includes full capabilities to migrate complete servers (both physical and virtual) to new servers (both virtual and physical), supporting conversion from physical to virtual and vice versa, and conversion between different virtualization environments.

In addition, Insight Control provides proactive health and performance monitoring, power management, performance analysis, lights-out remote management, and virtual machine management for HP ProLiant ML/DL 300-700 series servers and BladeSystem infrastructure.

Insight Control also extends the functionality of Microsoft System Center and VMware vCenter Server by providing seamless integration of the unique ProLiant and BladeSystem manageability features into the Microsoft System Center and VMware vCenter Server management consoles.

HP Insight Control is based on HP SIM as the primary management console.
For customers who have chosen Microsoft System Center as their primary console, we offer HP Insight Control for Microsoft System Center, which is based on HP Insight Control but adds several extensions to make the ProLiant management information available through the System Center consoles. It also adds monitoring, alerting, proactive virtual machine management, and ProLiant operating system deployment and update capabilities to the System Center consoles.

For customers who have chosen VMware vCenter Server as their primary console, we offer HP Insight Control for VMware vCenter Server, which is based on HP Insight Control but adds several extensions to make the ProLiant management information available through the VMware vCenter Server console, enabling comprehensive monitoring, remote control, and power optimization directly from the vCenter console.

Features of Insight Control 7.x include:
Support for the latest ProLiant Gen8 servers
Data Center Power Control (DCPC) support for Superdome 2
HP Systems Insight Manager

HP Systems Insight Manager (HP SIM) is the foundation for the HP unified server-storage management strategy. HP SIM is a hardware-level management product that supports multiple operating systems on HP ProLiant, Integrity, and HP 9000 servers, HP Storage MSA, EVA, and XP arrays, and third-party arrays. Through a single management console, HP SIM provides:
Inventory data collection
Reporting
Using HP Integrity Essentials you can choose plug-in applications that deliver complete lifecycle management for your hardware assets:
Workload management
Capacity management
Virtual machine management
Partition management

HP Systems Insight Manager can be installed on three different operating systems: Windows, Linux, and HP-UX. Basic functionality is the same for all versions, but the Windows version has the greatest scalability and expansion possibilities. HP SIM can also be easily integrated with other Insight software components, such as HP Insight Server Migration software, HP Insight Control Server Deployment software, and others.

HP SIM updates
HP SIM 7.0 and ProLiant Gen8 server blades introduce new features to the management software.

Updates to the HP SIM management software include:
Shifting Host Health and Alerting to iLO — Shifting the health and alerting tasks to iLO provides more processing resources for applications.
Agentless Monitoring and Alerting — Hardware health and inventory information is available without host-based agents.
Service Pack for ProLiant — SPP is a combination of the ProLiant Support Pack and the firmware maintenance DVD and is available as an ISO for both Windows and Linux.
Learning check
1. Which operating systems are supported on ProLiant server blades? (Select two.)
a. Microsoft Windows
b. OpenVMS
c. Linux
d. HP-UX
2. A customer with a requirement for InfiniBand and more than 2Gb Fibre Channel would be a good candidate for which platform?
a. ProLiant rack-mount servers
b. HP VDI systems
c. HP BladeSystem
d. HP Superdome 2
3. The HP Storage tape blades have a full-height form factor.
True
False
4. Name the tools that you can use to manage a BladeSystem.
.................................................................................................................
.................................................................................................................
.................................................................................................................
.................................................................................................................
.................................................................................................................
5. List three key Gen8 technologies.
.................................................................................................................
.................................................................................................................
.................................................................................................................
Objectives

After completing this module, you should be able to:
Identify and describe the HP BladeSystem enclosures
Explain how HP Onboard Administrator modules are used
Describe the power architecture used in HP BladeSystem systems, including:
Power modes
Power supplies
Power modules
Power distribution units (PDUs)
Describe Thermal Logic technology
BladeSystem enclosure family

HP offers two BladeSystem enclosures:
BladeSystem c3000 enclosure — A lower-cost, smaller version targeted for remote sites and small and medium businesses (SMBs)
BladeSystem c7000 enclosure — An enterprise version designed for data center applications

The BladeSystem c3000 enclosure has a smaller rack footprint, spanning 6U compared to the 10U of the c7000 enclosure. Seven c3000 enclosures per 42U rack is the maximum number of c3000 enclosures in a fully populated rack.

The c3000 enclosure is designed for a small to mid-size company, branch office, or remote sites that have little or no rack space. The c3000 enclosure is the right choice if:
The server environment is growing rapidly, with frequent server purchases
Power requirements include rack-level PDUs or data center UPSs
The highest levels of availability and redundancy are required
The server blades need multiple rack-based shared storage arrays

Additional c3000 features include:
Onboard Administrator for remote management
Multiple enclosure setup functions
Choice of power input: -48VDC, 110VAC, or 220VAC
Ability to handle higher ambient temperatures
Enclosure-based CD/DVD drive and 3-inch LCD Insight Display
Interconnect fabrics of up to 8Gb/s
Choice of redundant and non-redundant fabrics
RoHS compliance
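The rack-capacity figures above follow from dividing rack height by enclosure height. A quick check of the stated numbers (the c7000 count is derived here from its 10U height, not quoted from the text):

```python
# Enclosures per 42U rack, from the enclosure heights given above.
RACK_U = 42
c3000_per_rack = RACK_U // 6    # 7, as the text states
c7000_per_rack = RACK_U // 10   # 4 (derived from the 10U height)
print(c3000_per_rack, c7000_per_rack)
```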
BladeSystem enclosure comparison

Both BladeSystem enclosures can hold common critical components such as servers, interconnects, mezzanine cards, storage blades, and fans. Key differences in the BladeSystem enclosures include rack size, redundancy options, and scalability.

Following is a feature comparison of BladeSystem c7000 and c3000 enclosures:

Height
c3000 — 6U height
c7000 — 10U height

Form factor, when fully populated
c3000 — Eight half-height blades, four full-height blades, or six half-height and one full-height
c7000 — Vertical blade orientation

Power supplies

System fans
c3000 — Six HP Active Cool 100 fans
c7000 — Ten Active Cool 200 fans

Interconnect bays
c3000 — Four interconnect bays
c7000 — Eight interconnect bays

Onboard Administrator
c3000 — Dual Onboard Administrator option
c7000 — Single or dual Onboard Administrator capability with KVM

Midplane
c3000 — Tested up to 6 Gbit
c7000 — Tested up to 10 Gbit

Connection
c3000 — Onboard Administrator serial/USB connections in front
c7000 — Onboard Administrator serial/USB connections in rear

KVM support
BladeSystem c7000 enclosure

Following are the features of the c7000 enclosure:

2400W power supplies
Increased power output—2400W; supports more blades with fewer power supplies
High efficiency to save energy; provides 90% efficiency from as low as 10% load
Low standby power that facilitates reduced power consumption when servers are idle
Onboard Administrator 2.30 or later
200-240V high-line operation only

Ten fans

Six 2400W high-efficiency power supplies
BladeSystem c3000 enclosure

c3000 Onboard Administrator tray

The c3000 enclosure includes four full-height device bays or eight half-height device bays, accommodating the full array of BladeSystem server, storage, tape, and PCI Expansion blades.

An integrated Insight Display is linked to the Onboard Administrator for local enclosure management.

Note
If you are using full-height server blades in the enclosure, any empty full-height device bays should be filled with blade blanks. To make a full-height blank, join two half-height blanks together.
BladeSystem c3000 enclosure — Rear view

The rear of the c3000 enclosure offers four interconnect bays. The available bays can support a variety of pass-thru modules and switch technologies, including Ethernet, Fibre Channel, and InfiniBand. The enclosure supports up to three independent I/O fabrics, with the ability to combine interconnect bays 3 and 4 for a fully redundant fabric.

The HP InfiniBand switch module is double-wide; two neighboring bays are required.

Note
The KVM module is an optional component that must be ordered separately.
BladeSystem enclosure management hardware and software

HP Onboard Administrator

Onboard Administrator device view

The Onboard Administrator provides access to the switch management (VSM) application on switches installed in the BladeSystem enclosure. After selecting a switch, you can use the Onboard Administrator to:
View switch status information
View other switch information
Click virtual buttons to:
Power off the switch
Reset the switch
Toggle the Unit Identification (UID) light on or off
Open the Management Console (VSM)
Open the Port Mapping window to view detailed port mapping information
Onbo
oard Adm
ministrato
or module compo
onents
c7000 Onboard
O Admiinistrator module
ly
The Onboard Administrator module provides a single point of control for intelligent management of the entire enclosure. It has been designed for both local and remote administration of a BladeSystem enclosure.
Each Onboard Administrator module has a network, USB, and serial port, and some models also have a VGA connector.
Network port — Ethernet 1000BaseT RJ45 connector, which provides Ethernet access to the Onboard Administrator and the iLO processor on each server blade. It also supports interconnect modules with management processors configured to use the enclosure management network. It autonegotiates 1000/100/10 or can be configured to force 100Mb or 10Mb full duplex.
USB port — USB 2.0 Type A connector used for connecting supported USB devices such as DVD drives, USB key drives, or a keyboard or mouse for enclosure KVM use. To connect multiple devices, a USB hub (not included) is required.
Serial port — Serial RS232 DB-9 connector with PC standard pin-out. It connects a computer with a null-modem serial cable to the Onboard Administrator command line interface (CLI).
VGA connector — VGA DB-15 connector with PC standard pin-out. To access the KVM menu or Onboard Administrator CLI, connect a VGA monitor or rack KVM switch.
The uppermost enclosure uplink port functions as a service port that provides access
to all the BladeSystem enclosures in a rack. If no enclosures are linked together, the
service port is the top enclosure uplink port on the enclosure link module. Linking the
enclosures enables the rack technician to access all the enclosures through the open
uplink port.
If you add more BladeSystem enclosures to the rack, you can use the open enclosure
up port on the top enclosure or the down port on the bottom enclosure to link to the
new enclosure.
The Onboard Administrator module for the c7000 enclosure is available with or without KVM support. Firmware for both versions is the same, but the part numbers are different.
Redundant Onboard Administrator modules
When two Onboard Administrator modules are present in an enclosure, they work in an active-standby mode, ensuring fully redundant integrated management. Either module can be the active module. The other becomes the standby module.
If you install two Onboard Administrator modules of the same firmware revision, the one on the left of the enclosure will be the active one. If two Onboard Administrator modules installed in the same enclosure have different firmware versions, the automatic configuration sync is disabled. Both Onboard Administrator modules will put a clear entry into syslog stating exactly which version is on which Onboard Administrator and how to upgrade them. However, the different firmware versions do not affect which module is active or standby; the same rules apply.
Configuration data is constantly replicated from the active Onboard Administrator
module to the standby Onboard Administrator module, regardless of the bay in
which the active module currently resides.
When the active Onboard Administrator module fails, the standby Onboard
Administrator module automatically becomes active. This happens regardless of the
position of the active Onboard Administrator module. This automatic failover occurs
only when the currently active module comes completely offline and the standby
module can no longer communicate with it. In all other cases, the administrator must
initiate the failover by logging into the standby module and promoting it to active.
Note
You can hot plug (add without powering down the system) Onboard Administrator
modules but they are not hot-swappable (replaceable without powering down the system).
Dual Onboard Administrator tray
c3000 tray for Onboard Administrator modules
An enclosure ships with one Onboard Administrator module and supports up to two Onboard Administrator modules.
The standard Onboard Administrator module is preinstalled in a front-loading tray, which houses the module and the BladeSystem Insight Display. The Onboard Administrator tray:
Supports dual Onboard Administrator modules for an enclosure
Requires a blank module if there is no redundant Onboard Administrator module
The Onboard Administrator link module is separate from the Onboard Administrator module. It is contained within the Onboard Administrator module sleeve. The rear-loading Onboard Administrator link module contains RJ-45 ports for enclosure linking.
HP Insight Display
Insight Display main screen
Insight Display is a standard component of c3000 and c7000 enclosures, presenting enclosure information through an LCD display conveniently sited on the front of the system. It provides an interface that can be used for initial enclosure configuration and is a valuable tool during the troubleshooting process. If a problem occurs, the display changes color and starts to blink to get the attention of an administrator. The Insight Display can even be used to upgrade the Onboard Administrator firmware.
The menus available to an administrator standing in front of the blade enclosure are:
Health Summary — Displays the current condition of the enclosure.
Enclosure Settings — Enables configuration of the enclosure, including Power Mode, Power Limit, Dynamic Power, IP addresses for Onboard Administrator modules, enclosure name, and rack name. It is also used for connecting a DVD drive to the blades and setting the lockout PIN.
Enclosure Info — Displays the current enclosure configuration.
Turn Enclosure UID On — When this option is selected, the display background color changes to blue, and a blue LED is visible at the rear of the enclosure.
View User Note — Displays six lines of text, each containing a maximum of
16 characters. This screen can be used to display contact information or other
important information for users working on-site with the enclosure.
Chat Mode — Enables communication between the person in front of the
enclosure and the administrator managing the enclosure through the Onboard
Administrator.
USB Menu — Can be used to update Onboard Administrator firmware or to
save or restore the Onboard Administrator configuration when using a USB stick
plugged into the USB port on an Onboard Administrator module.
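As a small illustration of the View User Note limits described above, the following sketch (illustrative only, not an HP utility; the function name is hypothetical) checks whether a note fits the six-line, 16-character screen:

```python
# Illustrative sketch (not an HP utility): check text intended for the
# Insight Display "View User Note" screen, which shows at most six lines
# of 16 characters each.
MAX_LINES = 6
MAX_CHARS = 16

def fits_user_note(text: str) -> bool:
    """Return True if the text fits the 6 x 16 character note screen."""
    lines = text.splitlines()
    return (len(lines) <= MAX_LINES
            and all(len(line) <= MAX_CHARS for line in lines))

print(fits_user_note("Contact: J. Doe\nx1234"))            # True: fits
print(fits_user_note("This line is far too long to fit"))  # False: too wide
```

A check like this is handy when scripting enclosure setup, so the note is rejected before it is truncated on the display.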
iLO Management Engine
The HP iLO Management Engine is a set of embedded management features that support the complete lifecycle of the individual server, from initial deployment through ongoing management to service alerting and remote support. The iLO Management Engine enables you to access, deploy, and manage a server anytime from anywhere with a smartphone device. It supports a complete separation of system management and data processing, not just on the LAN connections, but also within the system itself.
Through the use of key iLO technologies such as remote console with DVR, virtual media, virtual power, and virtual serial port, you can remotely control iLO-managed servers as efficiently as if you were actually at the remote site. The iLO firmware innovations include:
iLO management processor — Is the core foundation of the iLO Management Engine. It is embedded on the system board and ships standard in every ProLiant Generation 8 (Gen8) server blade. HP iLO simplifies server setup.
Agentless Management — Is the base hardware monitoring and alerting capability built into the system (running on the iLO chipset) and starts working as soon as a power cord and an Ethernet cable are connected to the server.
Active Health System — Provides diagnostics tools and scanners in one bundle.
Intelligent Provisioning (previously known as SmartStart) — Offers out-of-the box
single-server deployment and configuration without the need for media.
Embedded Remote Support — Builds on Insight Remote Support, which runs on
a stand-alone system or as a plug-in to HP Systems Insight Manager (HP SIM). It
provides phone-home capabilities that can either interface directly with the
backend (which is ideal for smaller customers, or for remote sites without a
permanent connection to the main site), or can use an HP Insight Remote
Support host server as an aggregator.
For more information about the iLO Management Engine, go to:
http://www.hp.com/go/ilo
You also can enable Power Regulator on supported server models from the iLO Standard browser, CLP, and script interfaces. On supported server models, iLO displays the present power consumption in Watts. The present power is a five-minute average that is calculated and displayed through all iLO interfaces.
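The five-minute average can be modeled as a simple rolling window. This is an illustrative sketch only; the 10-second sampling interval is an assumption for the example, not a documented iLO value:

```python
from collections import deque

# Sketch of a five-minute rolling average like the one iLO reports.
# Assumption (not from the guide): one power sample every 10 seconds,
# so a 5-minute window holds 30 samples.
SAMPLE_PERIOD_S = 10
WINDOW_SAMPLES = (5 * 60) // SAMPLE_PERIOD_S  # 30

class PowerMeter:
    def __init__(self):
        # A fixed-length deque discards the oldest sample automatically.
        self.samples = deque(maxlen=WINDOW_SAMPLES)

    def record(self, watts: float) -> None:
        self.samples.append(watts)

    def present_power(self) -> float:
        """Average of the samples currently in the window, in Watts."""
        return sum(self.samples) / len(self.samples)

meter = PowerMeter()
for w in [250, 260, 255, 270]:
    meter.record(w)
print(meter.present_power())  # 258.75
```

Because the deque is bounded, the reported value always reflects only the most recent five minutes of readings.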
Agentless Management
HP iLO 4 Agentless Management Console
For customers who want to enrich the hardware management with operating system information and alerting, the iLO Management Engine features Agentless Management, an optional application loaded into the operating system that routes the operating system management information and alerts over the management network. Operating system agents are not required; all SNMP traps and alerting take place from the iLO architecture. Agentless Management provides:
Increased security and stability, even when systems are not yet powered on
Detailed information that speeds time to issue diagnosis and resolution
Active Health System
The Active Health System is an essential component of the HP iLO Management Engine. It monitors and records changes in the server hardware and system configuration. It assists in diagnosing problems and delivering rapid resolution when system failures occur. This technology monitors and securely logs more than 1,600 system parameters and 100% of configuration changes for accurate problem resolution. Because Active Health is agentless, it does not impact application performance.
Previously, whenever you had a system issue without an obvious root cause, you would rely on running diagnostic tools to try to isolate the cause. Although these tools often do a good job of providing the necessary information, they can only be used after the fact and often just look at subsystems individually. Circumstances occur where these tools cannot provide the information needed to isolate the root cause.
Active Health System technology:
Monitors and securely logs more than 1,600 system parameters and 100% of configuration changes for more accurate problem resolution
Enables you to deploy updates three times faster with 93% less downtime using HP Smart Update Manager (SUM)
Runs as an agentless system and does not impact application performance
Continuous monitoring for increased stability and shorter downtimes
Rich configuration history
Health and service alerts
Integrated diagnostics tools and scanners
Easy export and upload to HP Service and Support
HP Intelligent Provisioning
HP Intelligent Provisioning enables single-server deployment and configuration without the need for additional media. Previous-generation server provisioning and maintenance capability is now embedded in the iLO Management Engine across all ProLiant Gen8 servers.
Intelligent Provisioning is targeted for provisioning and deploying single servers and provides these operating system installation options:
Recommended/Express Installation
Assisted/Guided Installation
Manual Installation
With Intelligent Provisioning, you can choose from numerous options:
Perform firmware updates and install an operating system in the same step
Roll back firmware from within the HP Intelligent Provisioning maintenance menu
Install Windows, Linux, and VMware quickly
Provision a server remotely using iLO
Remote Support registration
Full system integration and operating system configuration eliminates 45% of steps, allowing you to deploy a server three times faster.
any communication from iLO to the Onboard Administrator module. There is no path from an iLO processor on one server blade to the iLO processor on another blade. The iLO processor has information only about the presence of other server blades in the infrastructure and whether there is enough amperage available from the power subsystem to boot the iLO host server blade.
Within BladeSystem enclosures, the server blade iLO network connections are accessed through a single physical port on the rear of the enclosure. This greatly simplifies and reduces cabling.
Note
The iLO on a server blade maintains an independent IP address.
HP iLO Advanced for HP BladeSystem
HP iLO features comparison
snap-in and extend an existing directory schema to enable directory support for iLO.
A directory migration tool is available to automate setup for both methods of integration.
Integration also supports LDAP nested groups.
You can configure a redundant domain controller when using Active
Directory and iLO.
iLO can use a backup domain controller if the primary domain controller is unavailable. In an Active Directory configuration, there is no need to configure the actual iLO device to allow a backup domain controller. The Microsoft Domain Name System (DNS) server will automatically update the DNS name to reflect domain controller availability.
You should configure iLO to reference the DNS name of the domain, not the
specific IP address of the domain controller. If the primary DC is unavailable, the
DNS lookup of the domain will not return that server’s IP, so that iLO can
connect to the next available domain controller. Alternatively, in the iLO
Console Replay captures and stores for replay the console video during a server's last major fault or boot sequence. Server faults include an ASR, server boot sequence, Linux panic, or Windows blue screen. Additionally, users are able to manually record and save any console video sequence to their client hard drive for replay from the ProLiant Onboard Administrator/iLO Integrated Remote Console.
iLO Text Console — Onboard Administrator/iLO text consoles provide server
access via a text console, similar to a graphical remote console.
iLO Video Player — Onboard Administrator/iLO allows you to view
automatically captured server video footage or on-demand captured footage
within an iLO session or separately through the iLO video player.
mode and average, peak and minimum power consumption over 24-hour
intervals. Check the server QuickSpecs to verify specific system support for
Power Regulator and power monitoring.
Virtual folders — This feature allows you to mount a local folder on a remote
server.
BladeSystem power and cooling
In the past, better data center performance was the goal; today, power and cooling is effectively preventing many companies from achieving their IT goals.
Power and cooling are issues regardless of form factor. However, increased server and processor density have accelerated the demands.
To achieve a controllable balance between power and cooling while boosting data center energy efficiency, significant tradeoffs must be made:
Larger fans move more air but take more power.
BladeSystem enclosure design challenges
Apertures in backplanes/signal midplanes of BladeSystem enclosures
Challenges faced by the BladeSystem design engineers included:
Small apertures in the backplane assembly meant that getting sufficient air from the server blades required high pressure.
The Xeon processor E5 series in the ProLiant BL460c Gen8 server blade requires up to 30 cubic feet per minute (CFM) to cool and therefore can require high airflow.
Up to 16 half-height blades per chassis require large air volumes to be moved.
HP Active Cool Fans and Thermal Logic are the solutions to these challenges.
PARSEC architecture
The BladeSystem c7000 enclosure uses a parallel, redundant, scalable, enclosure-based cooling (PARSEC) architecture:
the rear of the c7000 enclosure. The enclosure supports up to 10 fans so that cooling capacity can scale as needs change.
Enclosure-based — By managing cooling throughout the entire enclosure, zone cooling minimizes the power consumption of the fan subsystem and increases fan efficiency in a single zone if one of the server blades requires more cooling. This saves operating costs and minimizes fan noise. HP recommends using at least eight fans. Using 10 fans optimizes power and cooling.
Cooling is managed by the Thermal Logic technology, which features Active Cool Fans. These fans provide adaptive flow for maximum power efficiency, air movement, and acoustics.
The PARSEC architecture is designed to draw air through the interconnect bays. This
allows the interconnect modules to be smaller and less complex.
The power supplies are designed to be highly efficient and self-cooling. Single- or three-phase enclosures and N+N or N+1 redundancy yield the best performance per watt.
BladeSystem c7000 enclosure airflow
Schema of airflow inside a c7000 enclosure
Thermal Logic uses a control algorithm to optimize for any configuration based on the following customer parameters:
Airflow
Acoustics
Power
Performance
Airflow through the enclosure is managed to ensure that every device gets cool air and does not sit in the hot exhaust air of another device, and to ensure that air only goes where it is needed for cooling. Fresh air is pulled into the interconnect bays through a side slot in the front of the enclosure. Ducts move the air from the front to the rear of the enclosure, where it is then pulled into the interconnect modules and the central plenum, and then exhausted out the rear of the system.
Active Cool Fans
HP Active Cool Fans are an innovative design that can cool 16 blades using as little as 100W of power. The design is based on aircraft technology that generates high fan-tip speeds.
preset cooling thresholds
Easy scalability to even the most stringent future roadmap requirements
Eight server blades — Install fans in bays 1, 2, 4, 5, 6, 7, 9, and 10.
Ten server blades — Install fans in all bays.
Important
If the fans are not in these exact locations, the thermal subsystem will be degraded and no newly inserted server will be allowed to power up.
The c3000 enclosure
The c3000 enclosure ships with a minimum of four fans and supports up to six. The c3000 supports Active Cool 100 Fans. To ensure proper cooling, HP recommends that you distribute fans based on these fan location rules:
Four-fan configuration — Fans in bays 2, 4, 5, and 6 support a maximum of four half-height blades or two full-height blades.
Six-fan configuration — Fans in all six bays support population of all server bays.
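The c3000 fan-location rules above can be expressed as a small validation sketch (illustrative only; this is not Onboard Administrator code, and the function name is hypothetical):

```python
# Illustrative sketch of the c3000 fan-placement rules described above:
# a four-fan configuration must use bays 2, 4, 5, and 6; a six-fan
# configuration must populate all six bays.
VALID_C3000_LAYOUTS = {
    4: {2, 4, 5, 6},
    6: {1, 2, 3, 4, 5, 6},
}

def c3000_fans_ok(populated_bays: set) -> bool:
    """Return True if the populated fan bays match a supported layout."""
    required = VALID_C3000_LAYOUTS.get(len(populated_bays))
    return required is not None and populated_bays == required

print(c3000_fans_ok({2, 4, 5, 6}))   # True: valid four-fan layout
print(c3000_fans_ok({1, 2, 3, 4}))   # False: wrong bays for four fans
```

Comparing sets rather than lists makes the check order-independent, which matches how the physical bays are populated.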
Fan population
The c7000 enclosure
In a four-fan configuration, fan bays 4, 5, 9, and 10 are used to support devices in device bays 1, 2, 3, 4, 9, 10, 11, or 12.
In an eight-fan configuration, fan bays 1, 2, 4, 5, 6, 7, 9, and 10 are used to support devices in all device bays.
Important
Install fan blanks in any unused fan bays.
The c3000 enclosure
Fan population (c3000)
Base c3000 enclosures ship with four Active Cool 100 Fans installed, supporting up to four half-height devices or two full-height server blades. Adding two additional fans to the enclosure allows population of eight half-height devices or four full-height server blades.
Four-fan configuration requires population of fan bays 2, 4, 5, and 6.
Six-fan configuration enables population of all fan bays.
In a four-fan configuration, the Onboard Administrator prevents blade devices in bays 3, 4, 7, and 8 from powering on and identifies the fan subsystem as degraded. To populate blade devices in these bays, populate c3000 enclosures with six fans.
Important
Remove and replace this fan to correct the failure condition. Replacing the failed fans will result in automatically returning the fan subsystem health to OK.
If the fan subsystem is marked degraded, another fan failure will result in marking the fan subsystem as failed. In this circumstance, the Onboard Administrator probably cannot prevent a server from overheating.
Caution
Failure to replace the affected fans could result in loss of data or damage to hardware.
In all cases of fan failure, the Onboard Administrator continues to monitor server temperatures and provides adequate cooling. In extreme cases, such as fan failure or elevated enclosure or server ambient temperatures, the system resorts to maximum enclosure fan rpm. When the failed fan is replaced, fan subsystem redundancy is restored and the fan rpm returns to a controlled rpm.
Fan redundancy rules control system behavior in the event of the loss of a fan:
If the 10-fan rule (c7000) is in place and the failed fan is in bay 1, 2, 6, or 7, and no blades are powered on in the right half of the enclosure (bays 5 through 8 and 13 through 16):
The fan subsystem is still redundant.
redundant. If one fan fails, the Onboard Administrator will not prompt you to step down to the four-fan configuration, because some of the server blades would have to be powered down. Instead, the Onboard Administrator allows the server to run with five fans, provided that adequate cooling continues.
Fan quantity versus power
Fan quantity versus power
The preceding graph represents the number of fans versus power draw. The circled area indicates the point at which 10 fans are more efficient than eight fans for the same airflow delivered.
For sounds with similar frequency content, most people consider a 3dB change in sound pressure a 2x difference in sound. Similarly, people typically perceive a 10dB increase as roughly twice as loud.
Self-sealing BladeSystem enclosure
The c7000 enclosure and the components within it optimize the cooling capacity through unique mechanical designs. Airflow through the enclosure is managed to ensure that every device gets cool air, devices do not sit in the hot exhaust air of another device, and air only goes where it is needed for cooling. Fresh air is pulled into the interconnect bays through a slot in the front of the enclosure.
Cooling multiple enclosures
Multiple c7000 enclosures cooling requirements
The c7000 enclosure can operate with four enclosures in a rack if the data center is equipped to deliver sufficient airflow at the front of the rack and no air recirculation occurs over the top or around the sides of the racks.
HP recommends that you run the Power Sizer before installing enclosures to estimate power and cooling requirements.
Thermal Logic
Thermal Logic is the portfolio of technologies embedded throughout HP servers to
produce an energy efficient data center. Thermal Logic reduces energy consumption,
reclaims capacity, and extends the life of the data center.
Thermal Logic innovations include:
Dynamic Power Capping – Reclaim trapped power and cooling capacity by safely "capping" server power consumption. The result: triple server capacity.
Sea of Sensors – Up to 32 sensors adjust fan speeds and power only the slots that are in use. The result: 2.5x more efficient than ProLiant G5 servers and much quieter.
Common Slot Power Supplies – Reduce spares with standardized form factors and "right-size" to match capacity. The result: up to 92% efficiency.
Power Management Tools – Insight Control suite management software delivers deep insight, precise control, and ongoing optimization to unlock the potential of the infrastructure.
Intelligent Power Discovery – The industry's first automated, energy-aware network to bring together facilities and IT by combining HP Intelligent PDUs, Platinum common slot power supplies, and Insight Control software.
Schema of ProLiant Power Regulator operation
HP Power Regulator technologies improve server energy efficiency by giving CPUs full power for applications when they need it and power savings without performance degradation when application activity is reduced. It enables you to reduce power consumption and generate less data center heat, resulting in compounded cost savings. You save first by using less power in racks and second by producing less work for air cooling systems. These factors can save on operational expenses and enable greater density in the data center environment, and do not necessarily result in loss of system performance.
Power Regulator Static Low Power and Dynamic Power Savings modes, as well as operating system-based modes (AMD PowerNow or Intel Demand Based Switching), can be enabled to save on server power and cooling costs. On supported ProLiant servers, Power Regulator allows CPUs to operate at lower frequency and voltage during periods of reduced application activity.
This power management technology enables dynamic or static changes in CPU performance and power states. In dynamic mode, Power Regulator automatically adjusts the server's processor power usage and performance to match CPU application activity. Power Regulator effectively executes automated policy-based power management at the individual server level. In addition, a unique static low power mode is available.
Note
For additional information about Power Regulator, visit:
http://h18004.www1.hp.com/products/servers/management/ilo/power-regulator.html
when they need it and reducing power when they do not. This power management feature allows ProLiant servers with policy-based power management to control processor power states.
Important
Dynamic Power Savings mode is not available on all processor models. To determine which processors are supported, consult the Power Regulator website at:
http://www.hp.com/servers/power-regulator
Because Power Regulator resides in the BIOS, it is independent of the operating system and can be deployed on any supported ProLiant server without waiting for an operating system upgrade. HP has also made deployment easy by supporting Power Regulator settings in the HP iLO scripting interface.
The Power Regulator for ProLiant feature enables iLO 4 to dynamically modify processor power states.
Note
In addition to Power Regulator, ProLiant servers also support operating system-based power management using Intel Demand Based Switching and AMD Opteron PowerNow.
Note
Consult operating system documentation for details on power management support for a
given system.
Static Low Power Mode — Power Regulator for Integrity sets the processors to the p-state with the lowest power consumption and forces them to stay in that state. This mode saves the maximum amount of resources, but it might affect system performance if processor utilization stays at 75% or more.
Static High Performance Mode — Power Regulator for Integrity sets the processors to the p-state with the highest performance and forces them to stay in that state. This mode ensures maximum performance, but it does not save any resources. This mode is useful for creating a baseline of power consumption data without Power Regulator for Integrity.
Dynamic Power Savings Mode — Allows the system to dynamically change processor p-states when needed based on current operating conditions. The implementation of this mode is operating system specific, so consult your
equipped with Dual-Core Intel Itanium Processor 9100 series 1.6 GHz dual-core parts.
Note
The user must have the Configure iLO 4 Settings privilege to change these settings.
iLO 4 power management
iLO 4 power management enables you to view and control the power state of the server, monitor power usage, monitor the processor, and modify power settings. The Power Management page in the iLO 4 interface has three menu options:
Note
Some of the power control options do not gracefully shut down the operating system.
Power Meter — The Power Meter page displays server power utilization as a
graph. This page has two sections:
Power Meter Readings
Power History
Power Settings — The iLO Power Settings page enables you to view and control the Power Regulator mode of the server. Power Regulator for ProLiant settings are:
Static Low Power mode — Sets the processor to minimum power, reducing processor speed and power usage. Guarantees a lower maximum power usage for the system.
Static High Performance mode — Processors will run in their maximum
power/performance state at all times regardless of the operating system
power management policy.
Note
Selecting Static High Performance mode usually causes the system to use more power, especially when it is lightly loaded. Most applications benefit from the power savings offered by Dynamic Power Savings mode with little or no impact on performance. Therefore, if choosing Static High Performance mode does not increase performance, HP recommends that you re-enable Dynamic Power Savings mode to reduce power use.
management policy.
Note
With the exception of the OS Control mode, Power Regulator modes configured through iLO do not require a reboot and are effective immediately. OS Control mode changes become effective on the next reboot.
The Power Capping Settings section displays measured power values and enables you to set a power cap and disable power capping. Measured power values include the server power supply maximum value, the server maximum power, and the server idle power. The power supply maximum power value refers to the maximum amount of power that the server power supply can provide. The server maximum and idle power values are determined by two power tests run by the ROM during POST.
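As a rough illustration of how these measured values relate to a cap, the check below assumes a sensible cap lies between the measured idle power and the server maximum power. This is an assumption for illustration, not iLO's documented validation rule:

```python
# Illustrative sketch (assumption, not iLO's exact rule): a sensible
# power cap should lie between the measured server idle power and the
# server maximum power determined during the POST power tests.
def cap_is_sensible(cap_w: float, idle_w: float, max_w: float) -> bool:
    """Return True if the requested cap is within the measured range."""
    return idle_w <= cap_w <= max_w

print(cap_is_sensible(cap_w=350, idle_w=180, max_w=460))  # True
print(cap_is_sensible(cap_w=150, idle_w=180, max_w=460))  # False: below idle
```

A cap below the idle figure could never be honored, and a cap above the server maximum would never take effect, so bounding the request this way catches both mistakes early.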
Note
The iLO command line interface (CLI) gives you command line access to the same
functions available through the iLO browser-based interface.
Power efficiency
HP iLO 4 enables you to improve power efficiency using High Efficiency Mode (HEM). HEM improves the power efficiency of the system by placing the
secondary power supplies into step-down mode. When the secondary supplies are in
step-down mode, the primary supplies provide all the DC power to the system. The
power supplies are more efficient (more DC output Watts for each Watt of AC input)
at higher power output levels, and the overall power efficiency improves.
When the system begins to draw more than 70% of the maximum power output of the primary supplies, the secondary supplies return to normal operation (out of step-down mode). When power use drops below 60% of the primary supplies' capacity, the secondary supplies return to step-down mode.
HEM enables systems to achieve power consumption equal to the maximum power
output of the primary and the secondary supplies, while maintaining improved
efficiency at lower power usage levels. HEM does not affect power redundancy. If
the primary supplies fail, then the secondary supplies immediately begin supplying
DC power to the system, preventing any downtime.
HEM can only be configured through the ROM-Based Setup Utility (RBSU); these settings cannot be modified through iLO. The settings for HEM are Enabled or Disabled (also called Balanced Mode), and Odd or Even supplies as primary. These settings are visible in the High Efficiency Mode & Standby Power Save Mode section of the System Information Power tab.
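The 70%/60% thresholds described above form a simple hysteresis loop. As an illustration only (this is not HP firmware logic, and the 7200W primary capacity used in the example assumes three 2400W primary supplies), the behavior can be sketched as:

```python
# Illustrative sketch of the HEM step-down hysteresis described above.
# Assumed values: three 2400W primary supplies = 7200W primary capacity.

def hem_secondary_state(load_w, primary_capacity_w, in_step_down):
    """Return True if the secondary supplies should be in step-down mode."""
    utilization = load_w / primary_capacity_w
    if in_step_down and utilization > 0.70:
        return False          # secondaries return to normal operation
    if not in_step_down and utilization < 0.60:
        return True           # secondaries drop back into step-down
    return in_step_down       # between 60% and 70%: no change (hysteresis)

state = True                                     # light load, step-down active
state = hem_secondary_state(5200, 7200, state)   # ~72% load: leaves step-down
state = hem_secondary_state(4200, 7200, state)   # ~58% load: re-enters step-down
```

The band between 60% and 70% prevents the supplies from rapidly toggling in and out of step-down mode as the load fluctuates.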
Dynamic Power Saver
The Dynamic Power Saver feature takes advantage of the fact that most power supplies operate inefficiently when lightly loaded and more efficiently when heavily loaded. A typical power supply running at 20% load could have efficiency as low as 60%. However, at 50% load, it could be 90% efficient.
In the graphic, the top example shows the power demand spread inefficiently across six power supplies. The second example demonstrates that with Dynamic Power Saver, the power load is shifted to two power supplies for more efficient operation.
When the Dynamic Power Saver feature is enabled, the total enclosure power consumption is monitored and the load is consolidated onto fewer supplies; when power supplies are placed in standby mode, their LEDs flash.
Dynamic Power Capping
Benefits of Dynamic Power Capping include:
Reduces costly power and cooling overhead by efficiently using the power and cooling resources budgeted to each rack
Maximizes utilization of data center floor space by fitting more servers or enclosures in each rack
Postpones the need for costly data center expansions or facilities upgrades
Before using Dynamic Power Capping, ensure the enclosure contains redundant Onboard Administrator modules and is in an N+N Redundant power mode.
For more information on Power Capping, refer to:
http://h18013.www1.hp.com/products/servers/management/dynamic-power-capping/support.html?jumpid=reg_R1002_USEN
Power delivery modes
BladeSystem enclosures can be configured in one of three power delivery modes:
Non-Redundant Power
Power Supply Redundant
AC Redundant

Non-Redundant Power
The Non-Redundant Power mode provides no power redundancy; any power supply or AC line failure will cause the system to power off. Total power is the power available from all power supplies installed.
Six power supplies installed in a BladeSystem c7000 enclosure = 14400W
Six power supplies installed in a BladeSystem c3000 enclosure = 7200W
This scenario is used to demonstrate simple enclosure setups or in classrooms for training purposes. It is not recommended for a production environment.
Power Supply Redundant
BladeSystem enclosures with DC redundant configuration
The most basic power configuration has two power supplies.
Note
The graphic shows two circuits (circuit A and B) being used. This is possible but not necessary for the Power Supply Redundant mode.
Total power for the c7000 enclosure is the total power available, less one power supply. A 5+1 configuration = 12000W. The c3000 enclosures can provide up to 6000W in a 5+1 configuration.
HP BladeSystem Enclosures
AC Redundant
BladeSystem enclosures with AC redundant configuration
In the N+N AC Redundant power mode, a minimum of two power supplies is required.
Power line communication is a feature that allows the power supply to communicate with the iPDU. The communication between the power supply and the iPDU helps:
Automatically discover the server when it is plugged into a power source
Map the server to the individual outlet on the iPDU
When combined with the HP line of Platinum-level high-efficiency power supplies, the Intelligent PDU communicates with the attached servers to collect asset information for the automatic mapping of the power topology inside a rack. This capability greatly reduces the risk of human errors that can cause power outages.
HP Thermal Discovery Services help you reduce energy usage and increase compute capacity. This feature helps you squeeze the most IT out of every bit of data center power and cooling capacity and reduce energy consumption by 10% compared to a ProLiant G6 server.
The automated energy optimization capabilities in the ProLiant Gen8 family are enabled by HP 3D Sea of Sensors technology. Embedded intelligence senses location, power utilization, and thermal demand, providing a high level of visibility and control over the energy efficiency of the data center.
http://www.hp.com/go/ipd
HP Intelligent PDUs
Rear view of 12 Outlet iPDU
The HP Intelligent PDU (iPDU) is a power distribution unit with full remote outlet control, outlet-by-outlet power tracking, and automated documentation of power configuration. HP iPDUs track outlet power usage at 99% accuracy, showing system-by-system power usage and available power. The iPDU records server ID information by outlet and forwards this information to HP Insight Control, saving hours of manual spreadsheet data-entry time and eliminating human wiring and documentation errors.
HP iPDUs provide power to multiple objects from a single source. In a rack, the iPDU distributes power to the servers, storage units, and other peripherals.
Using the popular core-and-stick architecture of the HP modular PDU line, the iPDU monitors power consumption at the core, load segment, stick, and outlet level, with unmatched precision and accuracy. Remote management is built in. This iPDU offers power cycle ability of individual outlets on the Intelligent Extension Bars.
The iPDU:
Discovers the server before it is powered on
Discovers and maps servers to specific outlets, ensuring correlation between equipment and power data collected, as a function of Intelligent Power Discovery
HP power distribution units
Monitored PDU
HP PDUs provide power to multiple objects from a single source. In a rack, the PDU distributes power to the servers, storage units, and other peripherals.
PDU systems:
Address issues of power distribution to components within the computer cabinet
Reduce the number of power cables coming into the cabinet
Provide a level of power protection through a series of circuit breakers
PDU benefits
Benefits of the modular PDUs from HP include:
Increased number of outlet receptacles
Modular design
Superior cable management
Flexible 1U/0U rack mounting options
Easy accessibility to outlets
Limited three-year warranty
HP 16A to 48A Modular PDUs
HP Modular PDUs have a unique modular architecture designed specifically for data
center customers who want to maximize power distribution and space efficiencies in
the rack.
Modular PDUs consist of two building blocks—the Control Unit (core) and the
Extension Bars (sticks). The Control Unit is 1U/0U, and the Extension Bars mount
directly to the frame of the rack in multiple locations.
Available models range from 16A to 48A current ratings, with output connections
ranging from four outlets to 28 outlets.
HP Monitored PDUs
The monitored vertical rack-mount power distribution units provide both single- and three-phase monitored power, as well as full-rack power utility ranging from 4.9 kVA to 22 kVA.
BladeSystem c7000 PDUs
Available power distribution units for a c7000 enclosure
BladeSystem c3000 PDUs
Available power distribution units for a c3000 enclosure
Note
A pair of PDUs must be ordered for AC feed redundancy. If AC redundancy is not required, a single PDU may be acceptable.
BladeSystem enclosure power supplies
HP Common Slot Power Supplies
HP Common Slot (CS) Power Supplies share a common electrical and physical design that allows for hot-swap, tool-less installation into HP server and storage solutions. CS power supplies are available in multiple high-efficiency input and output options, allowing users to "right-size" a power supply for specific configurations. The portfolio includes the following models:
Common Slot Platinum Plus Power Supplies
Are compatible with ProLiant Gen8 servers only
Provide up to 94% power efficiency at 50% server utilization level
Common Slot Platinum Power Supplies
Common Slot Platinum Plus Power Supplies
The CS Platinum Plus Power Supply family is ideal for ProLiant Gen8 customers operating mid-to-large data center environments with a focus on reducing power, downtime, and human resource expenses. The CS Platinum Plus Power Supply:
Enables HP Intelligent Power Discovery — Creates an energy-aware network that helps to reduce data center outages, shrink deployment times from hours to minutes, reclaim stranded power, and maximize IT compute density.
Provides certified best power efficiency (94%) in the industry — Reduces data center power requirements by up to 60W/server (as compared to ProLiant G6 power estimates). This can save up to $80 annually per server.
Supports redundant High Efficiency and Load-Balancing modes — Maximizes the power efficiency capabilities of power supplies.
Provides compatibility — Is compatible with a wide range of ProLiant and Integrity servers, as well as HP Storage solutions. Easily accessible, hot-plug design.
Note
One CS Platinum Hot Plug Power Supply Kit is required for each server.
The CS 750W -48VDC Power Supply supports the following servers:
DL360p Gen8
DL380p Gen8
DL385p Gen8
ML350p Gen8
SL6500 Gen8
It is the lowest-cost power solution available in the CS design for 48VDC input. The CS 750W -48VDC Power Supply offers a higher-efficiency DC power solution with improved power input cabling options.
Improved power efficiency to 94% at 50% utilization — Reduces power waste and consumption when compared to the previous generation 1200W -48VDC (90%) power supply option.
BladeSystem c7000 enclosure power supplies
Power supplies for a c7000 enclosure
The c7000 enclosure bundled with the HP Insight Management suite provides six HP 2400W high-efficiency hot-plug power supplies.
Important
! The 2400W power supplies do not operate with 2250W power supplies. Therefore, to use the 2400W power supplies with a c7000 enclosure that uses 2250W power supplies, you need to replace all the 2250W power supplies with the 2400W power supplies.
Different input power modules for a c7000 enclosure
The c7000 enclosure can be installed in both AC and DC environments:
Three-phase (3Ø) AC power
Single-phase (1Ø) AC power
-48V DC power
Each type of power environment requires a specific power module.
IEC-C19 — 16A 208V = 3328VA
NEMA L15-30p — 24A 3Ø 208V = 8646VA
IEC 309 5-pin — 16A 3Ø 230V = 11040VA
In the BladeSystem, one L15-30p line cord can power one enclosure populated with 16 half-height blades.
As a point of comparison, if it had been designed for rack-based power, BladeSystem enclosures would require 60A to 100A three-phase power.
Single-phase AC power supply placement
Power supply placement for a c7000 enclosure
Install the power supplies based on the total number of supplies needed:
Two power supplies — Power supplies in bays 1 and 4
Note
The Insight Display panel slides left or right to allow access to power supply bays 3 and 4.
The preceding graphic further defines the power supply placement based on the power redundancy mode.
Note
In single-phase configurations, you can use fewer than six power supplies.
Important
! Three-phase AC power requires that all six power supplies be installed.
Mixing of AC and DC components within the same system is prohibited.
Caution
! To prevent damage to components in the enclosure, never mix AC and DC power in the same enclosure.
Total available power
Power supplies in a c7000 enclosure
Total power available to the enclosure, assuming 2400W are available from each power supply, depends on the power mode configured for the enclosure.
If no power redundancy is configured, the total power available is defined as the power available from all supplies installed. Therefore, if six power supplies are installed in an enclosure, 14400W of power will be available to the enclosure.
If the N+1 power mode is configured, then the total power available is defined as the total power available, less one power supply. Therefore, an enclosure with a 5+1 configuration will receive 12000W of power.
If the N+N AC Redundant mode is configured, then the total power available is the amount from the A or B side with the lesser number of supplies. Therefore, an enclosure with a 3+3 configuration will receive 7200W of power.
Important
! HP strongly recommends that you run the HP BladeSystem Power Sizer to determine the power and cooling requirements of your configuration.
Example
Single-phase power runs on 30A circuits in North America. When you apply the 80% rule (in an NA/JPN environment, you can only pull 80% of the total power available on a circuit), this translates to 24A available. Therefore, you would use a 24A modular PDU, which can only support 4992VA, or two power supplies.
With redundant AC feeds, you can support four power supplies per enclosure. Four power supplies can provide 9600W of power to the components.
A full enclosure of 16 blades requires up to 3700W. Four power supplies enable N+N AC redundancy as long as you have redundant AC feeds.
Note
3700W averages 231.25W per blade. If the 3700W figure does not include the Onboard Administrator, fans, and interconnects, you still have overhead of 800W per AC feed to cover the additional need and remain N+N redundant.
BladeSystem c3000 enclosure power supplies
The c3000 enclosure power supplies are single-phase power supplies that support both low-line and high-line environments. Wattage output per power supply depends on the rated AC input voltage:
200VAC to 240VAC input = 1200W DC output
120VAC input = 900W DC output
Important
! Wall outlet power cords should only be used with low-line (100V to 110V) power sources. If high-line power outlets are required, safety regulations require the use of a PDU or a UPS between the c3000 enclosure's power supplies and wall outlets.
Caution
! DC and AC power supplies cannot be mixed inside one c3000 enclosure.
c3000 enclosure power supply numbering and placement
The quantity of power supplies is a function of the power redundancy mode versus the quantity, type, and configuration of the devices installed in the enclosure. The tables display the proper location for power supplies in the Power Supply Redundant and AC Redundant power modes. For proper functionality, the AC Redundant power mode requires two AC circuits; one connected to power supplies 1, 2, and 3, and the other connected to power supplies 4, 5, and 6.
Note
There is no Onboard Administrator-enforced rule that dictates the power supply placement based on the number of server blades; however, there is one for the fans. Power supply population is dependent on the power supply redundancy level and the quantity and configuration of server blades and interconnects.
In AC Redundant mode, six power supplies provide a total of 3600W DC.
Important
! HP strongly recommends that you run the Power Sizer (available from
http://www.hp.com/go/bladesystem/powercalculator) or HP Power Advisor to determine
the power and cooling requirements of your configuration.
Install additional software
Perform critical operating system updates and patches
Update server platform firmware
The DVD-ROM drive can be attached using the:
DVD-ROM drive bay in the front of the c3000 enclosure
USB port on the c7000 enclosure Onboard Administrator module
Local I/O cable connection to the individual server blades
ISO images on a locally attached USB key
The DVD-ROM drive offers local drive access to server blades by using the virtual media scripting capability of iLO. The DVD-ROM drive is connected directly to the server blade's USB port and provides significantly improved data throughput compared to iLO virtual media using physical disks or ISO files, especially over long distances.
Learning check
1. List three factors that distinguish an ideal deployment for a c3000 enclosure.
.................................................................................................................
.................................................................................................................
.................................................................................................................
2. An HP c7000 enclosure can use standard wall-outlet power.
a. True
b. False
3. What is the difference between the Onboard Administrator in the c3000 and
the c7000 enclosures?
a. The c3000 Onboard Administrator is not a DDR2 module.
b. The c3000 Onboard Administrator does not have USB ports.
c. The c3000 Onboard Administrator has the same components, but they are
in different locations.
d. The c3000 does not support a redundant Onboard Administrator.
4. The Onboard Administrator module for the c7000 enclosure is available with
KVM support and without KVM, and these two versions require different
firmware.
True
False
6. What are the benefits of using an industry-standard 12V infrastructure for the
BladeSystem?
.................................................................................................................
.................................................................................................................
7. What BladeSystem challenges are met by Thermal Logic and Active Fans
technology?
.................................................................................................................
.................................................................................................................
.................................................................................................................
Objectives
After completing this module, you should be able to describe the HP ProLiant
Generation 8 (Gen 8) and Integrity server blades that constitute the HP BladeSystem
portfolio.
ProLiant Gen8 server blade portfolio
ProLiant BL420c Gen8 server blade
The HP ProLiant BL420c Gen8 server blade is an entry-level blade. The BL420c workload spans from single applications for mid-market solutions to large enterprise requirements.
This server blade features two eight-core Xeon processors with the Intel C600 series chipset. Additional features of the ProLiant BL420c Gen8 server include:
SAS/SATA/SSD hot-plug drives
Maximum 2 TB storage configuration
HP BladeSystem Server Blades
ProLiant BL460c Gen8 server blade
The HP ProLiant BL460c Gen8 server blade offers a balance of performance, scalability, and expandability, making it a standard for data center computing. This server blade features two eight-core Xeon processors with the Intel C600 series chipset. Additional features include:
Up to 512 GB of DDR3 LRDIMMs — With LRDIMMs, a ProLiant BL460c Gen8 server can be configured with up to 512 GB of memory.
I/O expansion slots — The BL460c Gen8 server supports two I/O expansion mezzanine slots.
ProLiant BL465c Gen8 server blade
The HP ProLiant BL465c Gen8 server blade is an ideal server for virtualization and consolidation. The BL465c Gen8 is the first server blade to achieve more than 2,000 cores per rack by using AMD Opteron 6200 series processors with up to 16 cores each.
Features of the BL465c Gen8 include:
Smart Array controller with 512 MB flash-backed write cache
SmartMemory
SAS and SATA solid-state drives
iLO Management Engine
Integrity i2 server blade portfolio
Integrity BL860c i2
Note
The Integrity BL860c i2 only supports identical processors in a two-processor configuration.
The server uses Registered CAS9 memory modules. These memory modules support error correcting code (ECC), as well as double chip sparing technology. Double chip sparing can detect and correct an error in DRAM bits, practically eliminating the downtime needed to replace failed DIMMs.
Note
The Integrity BL860c i2 requires a minimum of 8 GB of RAM to operate. Double chip technology is not enabled with 2 GB memory modules.
The server features two SFF SAS hot-plug hard drive bays. Hardware RAID is
provided by an embedded HP P410i RAID controller, which supports RAID 1 for
HP-UX and Linux. Because the SAS controller does not support Microsoft Windows,
Windows internal disk mirroring requires a Smart Array controller and cannot use the
internal hard drives.
Important
! RAID 1 configuration requires two identical hard drives.
The server also features four autosensing 1Gb/10Gb NIC ports through two dual-port NC532 Flex-10 adapters, plus an additional 100Mb NIC dedicated to Integrity iLO management.
Integrity BL870c i2
The Integrity BL870c i2 server blade is a full-height, double-wide form factor server blade that occupies two device bay slots in a BladeSystem enclosure.
The BL870c i2 server blade features the Intel 7500 chipset and:
Processors — The Integrity BL870c i2 may contain up to four Itanium 9300 quad-core processors, with up to 24 MB of L3 cache. Processor kits include:
Quad-core processors
Itanium 9320 (1.33GHz/4-core/16MB/155W; up to 1.46 GHz with Turbo) processor
Dual-core processor
Itanium 9310 (1.6GHz/2-core/10MB/130W) processor
Note
The Integrity BL870c i2 supports two-, three-, or four-processor configurations. Processors must be identical.
Note
Memory for the BL870c i2 must be installed in groups of four DIMMs.
Integrity iLO 3 makes it easier and less costly to remotely manage Integrity servers. Integrity iLO 3 ships with a built-in Advanced Pack License. iLO Advanced features include Virtual Media, LDAP directory services, iLO power measurement, and integration with Insight Power Manager. No additional iLO licensing is needed.
Note
The iLO Management Engine with iLO 4 is not supported on Integrity server blades.
Storage
Up to four SFF SAS hot-plug hard drive bays, providing up to 3.6 TB of internal storage using four 900 GB SAS drives.
Note
Mixed disk configurations are supported, although not for RAID configurations. RAID 1 configuration requires two identical hard drives.
Two P410i 3Gb SAS controllers provide support for RAID 0, RAID 1, and HBA mode options.
Important
! Flex-10 capability requires operating system drivers and the use of an HP Virtual
Connect Flex-10 10GbE Ethernet module.
HP QMH 2562 8Gb FC BL-c HBA (2-port 8Gb QLogic FC HBA)
HP Smart Array P711m/1G FBWC Controller
HP Smart Array P700m/512 Controller
HP 4X QDR IB CX-2 Dual Port Mezz HCA for HP BladeSystem
HP NC364m 4-port 1GbE BLc Adapter
HP NC360m 2-port 1GbE BLc Adapter
Integrity BL890c i2
The Integrity BL890c i2 is a full-height, quadruple-wide form factor server blade that occupies four device bay slots in the BladeSystem enclosure. It features the Intel 7500 chipset and supports Integrity iLO 3. It also features:
Processors — The Integrity BL890c i2 may contain up to eight Itanium 9300 quad-core processors, with up to 24MB of L3 cache. Processor kits include:
Quad-core processors
Itanium 9320 (1.33GHz/4-core/16MB/155W; up to 1.46 GHz with Turbo) processor
Dual-core processor
Itanium 9310 (1.6GHz/2-core/10MB/130W) processor
Note
The Integrity BL890c i2 supports up to eight processors. Processors must be identical.
Note
Memory for the BL890c i2 must be installed in groups of four DIMMs.
Storage — Up to eight SFF SAS hot-plug hard drive bays, providing up to 7.2 TB of internal storage using eight 900GB SAS drives. Four P410i 3Gb SAS controllers provide support for RAID 0, RAID 1, and HBA mode options.
Note
Mixed disk configurations are supported, although not for RAID configurations. RAID 1 configuration requires two identical hard drives.
NICs — The Integrity BL890c i2 ships with 16 autosensing 1Gb/10Gb NICs via eight embedded NC532i dual-port Flex-10 adapters.
Important
! Flex-10 capability requires operating system drivers and the use of a Virtual Connect Flex-10 10GbE Ethernet module.
Important
! A maximum of eight of these additional adapters are supported with the BL890c i2 server blade.
Learning check
1. How many processors can be installed in an Integrity BL860c i2 server blade?
a. 1
b. 2
c. 4
d. 8
2. What is the maximum memory supported in a ProLiant BL460c Gen8 server
blade?
a. 96 GB
b. 128 GB
c. 256 GB
d. 512 GB
3. The ProLiant BL460c Gen8 supports up to two Intel Xeon processors.
True
False
Objectives
After completing this module, you should be able to:
Describe the features and functions of HP:
Storage blades
Tape blades
Expansion blades
Describe the features and functions of HP Smart Array controllers
HP BladeSystem storage and expansion blades
HP BladeSystem is built not only on servers, but also on storage and expansion modules. HP offers many storage solutions that increase either storage capacity or storage performance for server blades. A BladeSystem can also consolidate other network equipment, including storage and backup options.
HP storage blades
D2200sb Storage Blade
HP offers storage solutions designed to fit inside the BladeSystem enclosure, as well as external expansion for virtually unlimited storage capacity. HP storage blades offer flexible expansion and work side-by-side with ProLiant and Integrity server blades.
The HP storage portfolio for BladeSystems includes:
D2200sb Storage Blade
X3800sb G2 Network Storage Gateway Blade
X1800sb G2 Network Storage Blade
IO Accelerator
HP BladeSystem Storage and Expansion Blades
HP D2200sb Storage Blade
The HP D2200sb Storage Blade delivers direct-attached storage (DAS) for server blades. The enclosure backplane provides a PCIe connection to the adjacent server blade and enables high-performance storage access without additional cables.
The D2200sb storage blade features an onboard Smart Array P410i controller with 1GB flash-backed write cache (FBWC) for increased performance and data protection. Features include:
Storage environment inside a BladeSystem enclosure
Ability to configure the storage blade for RAID levels 0, 1, 1+0, 5, and 6 (RAID ADG) by using the internal Smart Array P410i controller with 1 GB flash-backed write cache
Note
RAID 6 and RAID 60 require purchase of a Smart Array Advanced Pack (SAAP) license.
HP X1800sb G2 Network Storage Blade
HP X1800sb G2 Network Storage Blade is a flexible storage server solution for BladeSystem environments. File serving inside the BladeSystem enclosure is available when the X1800sb G2 is paired with the D2200sb storage blade. The X1800sb G2 can also be used as an affordable SAN gateway to provide consolidated file-service access to Fibre Channel, SAS, or iSCSI SANs.
Features include:
6 GB (3 x 2 GB) PC3-10600R RDIMMs
HP Smart Array P410i Controller (RAID 0/1)
One additional 10/100 NIC dedicated to iLO 3 management
Two I/O expansion mezzanine slots
Supports up to two mezzanine cards
Functionality of the X1800sb G2 can be enhanced with optional software such as HP Mirroring Software or Data Protector Express.
HP X3800sb G2 Network Storage Gateway Blade
The X3800sb G2 Network Storage Gateway Blade is used to access Fibre Channel, SAS, or iSCSI SAN storage, translating file data from the server into blocks for storage to provide consolidated file, print, and management hosting services in a cluster-able package.
Key features include:
One quad-core Intel Xeon Processor E5640 (2.66 GHz, 80W)
6 GB (3 x 2 GB) PC3-10600R RDIMMs
Integrated NC553i Dual Port FlexFabric 10GbE Converged Network Adapter
One additional 10/100 NIC dedicated to iLO 3
Two I/O expansion mezzanine slots
Support for up to two mezzanine cards
Direct Connect SAS Storage for BladeSystem allows customers to build local server storage quickly with zoned storage or low-cost shared storage within the rack. The high-performance 3Gb/s SAS architecture consists of a Smart Array P700m controller in each server and 3Gb SAS BL switches connected to an HP Modular Disk System (MDS) 600.
BladeSystem tape blade portfolio

HP Ultrium Tape Blades
The HP Ultrium Tape Blades are ideal for BladeSystem customers who need an integrated data protection solution. These half-height tape blades provide direct-attach data protection for the adjacent server and network backup protection for all data residing within the enclosure. Ultrium Tape Blades offer a complete data protection, disaster recovery, and archiving solution for BladeSystem customers.

Each Ultrium Tape Blade solution ships standard with Data Protector Express Basic backup and recovery software. In addition, each tape blade supports HP One-Button Disaster Recovery (OBDR), which allows quick recovery of the operating system, applications, and data from the latest full backup set. Ultrium Tape Blades are the industry's first tape blades and are developed exclusively for BladeSystem enclosures.
The current BladeSystem tape blade portfolio consists of:

HP Ultrium 448c Tape Blade — Includes LTO-2 Ultrium tape technology with 400 GB of capacity on a single data cartridge (2:1 compression) and performance up to 173 GB/hr (2:1 compression)
HP SB1760c Tape Blade — Includes LTO-4 Ultrium tape technology with 1.6 TB of capacity on a single data cartridge (2:1 compression) and performance up to 576 GB/hr (2:1 compression)
HP SB3000c Tape Blade — Includes LTO-5 Ultrium tape technology with 3 TB of capacity on a single data cartridge (2:1 compression) and performance up to 1 TB/hr (2:1 compression)
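All of the figures above are quoted at a 2:1 compression ratio, so native capacity is half the quoted value, and the time to fill a cartridge follows from capacity divided by transfer rate. A small illustrative sketch (values copied from the list above; real backup times vary with compressibility of the data):

```python
# Native capacity and full-cartridge write time from the 2:1 compressed
# figures quoted above. Illustrative sketch only.

RATIO = 2.0  # quoted compression ratio

tape_blades = {
    # model: (compressed capacity in GB, compressed rate in GB/hr)
    "Ultrium 448c (LTO-2)": (400, 173),
    "SB1760c (LTO-4)": (1600, 576),
    "SB3000c (LTO-5)": (3000, 1000),
}

for model, (cap_gb, rate_gb_hr) in tape_blades.items():
    native_cap = cap_gb / RATIO           # native GB per cartridge
    hours_full = cap_gb / rate_gb_hr      # hours to write one full cartridge
    print(f"{model}: native {native_cap:.0f} GB, "
          f"~{hours_full:.1f} h per full cartridge")
```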
BladeSystem tape blades — Feature comparison

Comparing HP BladeSystem tape blades
Ultrium 448c, SB1760c, and SB3000c tape blade features are listed in the preceding table. The main differences are in the recording technology (LTO-2, LTO-4, or LTO-5), compressed capacity on a single data cartridge (400GB, 1.6TB, or 3.0TB, at a 2:1 data compression ratio), and sustained transfer rate (173GB/hr, 576GB/hr, or 1TB/hr, at the 2:1 data compression ratio).
Therefore, the tape blades are seen exactly the same as if they were directly connected (by way of SCSI, for example) to that server blade.
For information about the compatibility of these tape blades with BladeSystem server blades, refer to the BladeSystem Compatibility section of the QuickSpecs for the respective tape blade, or visit: http://www.hp.com/go/connect
HP Storage Library and Tape Tools
HP Library and Tape Tools (L&TT) is a free, robust diagnostic tool for all HP tape storage and magneto-optical storage products. Targeted for a wide range of users, it is ideal for customers who want to verify their installation, ensure product reliability, perform their own diagnostics, and achieve faster resolution of tape device issues. L&TT performs firmware upgrades, verification of device operation, and failure analysis.

Operating systems currently supported include HP-UX, Windows, Linux, OpenVMS, Solaris, and Mac OS X.
Reduced product downtime through preventative maintenance, fast issue diagnosis with corrective actions
Automated, smart firmware downloads, updates and notifications
Comprehensive device analysis and troubleshooting tests
First-level failure analysis of both the device and system without HP involvement
Troubleshoot system performance issues through the use of analysis tools
A direct link to the ITRC web-based troubleshooting content
Seamless integration with HP hardware support organization
Ability to generate and email support tickets to the support center for faster service and support
An all-inclusive source of device information for HP support center
PCI Expansion Blades
HP offers an expansion blade to support cards that are not offered in a mezzanine form factor.

The BladeSystem PCI Expansion Blade provides PCI card expansion slots to an adjacent server blade. PCI cards supported in ProLiant ML and ProLiant DL servers should work in this PCI Expansion Blade.
Note
HP does not offer any warranty or support for third-party PCI products.
HP PCI Expansion Blade — PCI card details
Each PCI expansion blade can hold one or two PCI-X cards (3.3V or universal) or one or two PCIe cards (x1, x4, or x8). It cannot hold one of each type of PCI card; that is, one PCI-X and one PCIe card at the same time.

Installed PCI-X cards must use less than 25W per card. Installed PCIe cards must use less than 75W per PCIe slot, or a single PCIe card can use up to 150W with a special power connector enabled on the PCI expansion blade.
Customers typically install SSL or XML accelerator cards, voice over IP (VoIP) cards, special purpose telecommunication cards, and graphic acceleration cards in the PCI expansion blade.
HP IO Accelerator
The HP IO Accelerator is part of a comprehensive solid state storage portfolio. This storage device is targeted for markets and applications requiring high transaction rates and real-time data access that will benefit from application performance enhancement.

Three models are available:
HP 80GB IO Accelerator for BladeSystem c-Class
HP 160GB IO Accelerator for BladeSystem c-Class
HP 320GB IO Accelerator for BladeSystem c-Class
Size (bytes)  (Megabytes)
8,192         400
4,096         800
2,048         1,500
1,024         2,900
512           5,600
Databases that historically were run in memory or across many disk spindles for performance reasons
Seismic data processing
Business intelligence and data mining
Real-time financial data processing and verification
Content caching for near-static data for file/web servers
3D animation/rendering
CAD/CAM
Virtual Desktop Infrastructure (VDI) solutions
Hypervisor running multiple virtual machines
Solid state technology can be implemented in various ways within a server. The two most common implementations are as a solid state drive (SSD) (in a SATA or SAS form factor) or as an I/O card attached to the PCI Express bus.

As an I/O card, the IO Accelerator is not a typical SSD; rather, it is attached directly to the server's PCI Express fabric to offer extremely low latency and high bandwidth. The card is also designed to offer high I/O operations per second (IOPS) and nearly symmetric read/write performance. The IO Accelerator uses a dedicated PCI Express x4 link with nearly 800MB/s of usable bandwidth. Each mezzanine slot in an enclosure offers at least that amount of bandwidth, so by combining cards, you can easily scale the storage to match an application's bandwidth needs.

The IO Accelerator's driver and firmware provide a block-storage interface to the operating system that can easily be used in place of legacy disk storage. The storage can be used as a raw disk device, or it can be partitioned and formatted with standard file systems. You can also combine multiple cards using RAID (up to three cards with a full-height server blade) for increased reliability, capacity, or performance in a single server blade.
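The scaling argument above can be sketched numerically. This is an illustrative calculation only: the 800MB/s per-card figure is the one quoted in the text, and real throughput depends on the workload and transfer size.

```python
# Aggregate bandwidth grows linearly with the number of IO Accelerator
# cards (each on its own PCIe x4 link), and the IOPS achievable at a given
# bandwidth follows from the transfer size. Illustrative sketch only.

MB_PER_CARD = 800  # usable MB/s per dedicated PCIe x4 link, per the text

def aggregate_mb_s(cards: int) -> int:
    """Combined usable bandwidth across independent cards."""
    return cards * MB_PER_CARD

def iops(bandwidth_mb_s: float, block_bytes: int) -> int:
    """I/O operations per second sustainable at this bandwidth."""
    return int(bandwidth_mb_s * 1_000_000 // block_bytes)

print(aggregate_mb_s(3))              # three cards in a full-height blade
print(iops(aggregate_mb_s(1), 4096))  # 4 KiB transfers on a single card
```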
Smart Array controller portfolio
The HP array controller portfolio consists of several models with differing SAS channels, memory sizes, and performance. All Smart Array products share a common set of configuration, management, and diagnostic tools, including the Array Configuration Utility (ACU), Array Diagnostic Utility (ADU), and HP SIM. These software tools reduce the cost of training for each successive generation of product and take much of the guesswork out of troubleshooting field problems. These tools lower the total cost of ownership by reducing the training and technical expertise necessary to install and maintain HP server storage.

The graphic outlines the enhancements of the Smart Array controllers shipping in the ProLiant Gen8 servers.
More information about Smart Array controllers is available at:
http://h18006.www1.hp.com/products/servers/proliantstorage/arraycontrollers/index.html
Consistent configuration and management tools — Smart Array products use a standard set of configuration and management tools and utility software that minimize training requirements and simplify maintenance tasks.
Universal hard drive standards — Form-factor compatibility across many enterprise platforms enables easy upgrades, data migration between systems, and management of spare drives.
Online spares — You can configure spare drives before a drive failure occurs. If a drive fails, recovery begins with an online spare and data is reconstructed automatically.
Recovery ROM — Recovery ROM provides a unique redundancy feature that protects from ROM image corruption. A new version of firmware can be flashed to the ROM while the controller maintains the last known working version of the firmware. If the firmware becomes corrupt, the controller reverts to the previous version of firmware and continues operating. This reduces the risk of flashing firmware to the controller.

Note
Although common in most new controllers, Recovery ROM is not a standard feature of all Smart Array controllers.
The ProLiant ML350p server increases I/O expansion by 50% and I/O capacity by 200% with PCIe Gen3, providing more I/O bandwidth to the processor and resulting in lower latency (Gen8 = 40 lanes/processor, G7 = 24 lanes/processor).
The ProLiant DL380p server has 200% the I/O capacity with PCIe Gen3, providing more I/O bandwidth to the processor and resulting in lower latency (Gen8 = 40 lanes/processor, G7 = 24 lanes/processor).
HP 331FLR and 331T adapters feature the next generation of Ethernet integration, which reduces power requirements for four ports of 1Gb Ethernet and optimizes I/O slot utilization.

Other features include:
Choice of FlexLOM adapter tailored to meet the system workload

Note
This is important because it meets the performance demands of consolidated virtual workloads.
available, it is provided as an upgrade as opposed to shipping standard with the controller.

High-performance controllers — Smart Array controllers generally have write cache as a standard feature, and it is often upgradeable in this category of controllers. This group also supports RAID 60 and RAID 6, with the optional SAAP2.

HP Smart Array P822 controller

The HP Smart Array P822 controller supported on ProLiant Gen8 servers can support two times more total drives internally and externally over previous generations, for up to 227 drives (108 drives are supported with the Smart Array P812 controller). Additional features include:

Global online spare
Pre-failure warning
controllers that provide improved performance, internal scalability, and lower maintenance. The P420 controller is ideal for RAID 0/1, 1+0, 5, 50, 6 and 60. Additional advanced features are upgradable by SAAP2. The P420 delivers increased server uptime by providing advanced storage functionality, including online RAID level migration with FBWC, global online spare, and pre-failure warning.

Smart Array P420 and P420i controllers:
Deliver high performance and data bandwidth with 6Gb/s SAS technology; retain full compatibility with 3Gb/s SATA technology
Feature x8 PCI Express Gen 3 host interface technology for high performance
Enable array expansion, logical drive extension, RAID migration and strip size migration with the addition of the flash backed cache upgrade

Note
A minimum of 512 MB cache is required to enable RAID 5 and 5+0 support with the Smart Array P420i controller.
Learning check
1. What enables the server blades to partner with storage and expansion blades within the HP BladeSystem enclosures?
.................................................................................................................
.................................................................................................................
.................................................................................................................
.................................................................................................................
2. The HP PCI Expansion Blade can partner with one full-height server blade and two half-height server blades.
True
False
3. You can connect a storage blade and a tape blade to a single, full-height server blade.
True
False
4. The SB40c storage blade requires a dedicated Smart Array controller. This controller is:
Objectives
After completing this module, you should be able to describe the following HP BladeSystem Ethernet interconnect modules for HP ProLiant server blades:
HP 6120XG Ethernet Blade Switch
HP 1:10Gb Ethernet BL-c Switch
HP 1Gb Ethernet Pass-Thru Module
uplinks and a single 10Gb cross-connect.
Cisco Catalyst Blade Switch 3020 (410916-B21) — Flexible to fit the needs of a variety of customers, the Cisco Catalyst Blade Switch 3020 for BladeSystem provides an integrated switching platform with Cisco resiliency, advanced security, enhanced manageability, and reduced cabling requirements.
Cisco Catalyst Blade Switch 3120 (451438-B21/451439-B21) — As the next generation in switching technology, the Cisco Catalyst Blade Switch 3120 Series introduces a switch stacking technology that treats individual physical switches within a rack as one logical switch. This innovation simplifies switch operations and management. The Cisco Catalyst Blade Switch 3120 Series is supported by ProLiant server blades only.
be either copper or fiber using optional SX SFP fiber modules. This switch is supported by ProLiant server blades only.
HP 1:10Gb Ethernet BL-c Switch (438031-B21) — This easy-to-manage interconnect provides sixteen 1Gb downlinks and four 1Gb uplinks, along with three 10Gb uplinks and a single 10Gb cross-connect. This switch is supported by ProLiant server blades only.
HP 1Gb Ethernet Pass-Thru Module (406740-B21) — This 16-port Ethernet interconnect provides 1:1 connectivity between the server and the network. A pair of pass-thru modules offers a redundant connection from the servers to the external switches. It is supported by both ProLiant and Integrity server blades.
HP 10GbE Pass-Thru Module (538113-B21) — The HP 10GbE Pass-Thru Module is designed for BladeSystem customers requiring a nonblocking, one-to-one connection between each server and the network. This pass-thru module provides 16 uplink ports that accept both SFP and SFP+ connectors.
Ethernet Connectivity Options for HP BladeSystem
HP 6120XG Ethernet Blade Switch
Designed for the BladeSystem enclosure, the HP 6120XG Blade Switch provides sixteen 10Gb downlinks and eight 10Gb SFP+ uplinks (including a dual-personality CX4 and SFP+ 10Gb uplink, and two 10Gb cross-connects). A robust set of industry-standard Layer 2 switching functions, quality of service (QoS) metering, security, and high-availability features round out this extremely capable switch offering.

The 6120XG switch is suited for data centers migrating to next-generation 10Gb high-performance architectures. With the support of dual speeds (1Gb and 10Gb) on the uplinks and Converged Enhanced Ethernet (CEE) hardware capability, the 6120XG provides true future-proofing and investment protection.

The 6120XG blade switch brings consistency and interoperability across existing network investments to help reduce the complexity of network management through resilient core-to-edge connectivity and automated provisioning technologies. With a variety of connection interfaces, the 6120XG switch offers excellent investment protection, flexibility, and scalability, as well as ease of deployment and reduced operational expense.
HP 6120XG Ethernet Blade Switch — Front panel
The following table identifies the front panel components of the HP 6120XG Blade Switch.

Description
1 Port 17 (10GBASE-CX4)*
2 Console port (USB 2.0 mini-AB connector)
3 Clear button
4 Port 17 SFP+ (10GbE) slot*†
5 Port 18 SFP+ (10GbE) slot†
6 Port 19 SFP+ (10GbE) slot†
7 Port 20 SFP+ (10GbE) slot†
8 Port 21 SFP+ (10GbE) slot†
9 Port 22 SFP+ (10GbE) slot†
10 Port 23 SFP+ (10GbE) slot*†
11 Port 24 SFP+ (10GbE) slot*†
Port 17 consists of a CX4 port multiplexed with an SFP+ port. Only one port can be active. The SFP+ port takes precedence—if it contains a module, it is the active port and the CX4 port is inactive.

Ports 23 and 24 are each multiplexed with interswitch link ports on the blade switch backplane. Either the SFP+ port on the front panel or the backplane port can be active, but both cannot be active at the same time. The SFP+ port on the front panel takes precedence—if it contains a module, it is the active port and its corresponding backplane port is inactive.
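The multiplexing rules above amount to a simple precedence check, sketched here as illustrative logic only (this is not HP firmware code; the port numbers are the ones named in the text):

```python
# Which physical path is active on the 6120XG's multiplexed ports:
# the SFP+ slot always wins when a module is installed; otherwise the
# alternate path (CX4 for port 17, the backplane interswitch link for
# ports 23 and 24) carries the traffic.

def active_path(port: int, sfp_module_present: bool) -> str:
    alternates = {17: "CX4", 23: "backplane", 24: "backplane"}
    if port not in alternates:
        return "SFP+"  # ports 18-22 are plain SFP+ slots
    return "SFP+" if sfp_module_present else alternates[port]

print(active_path(17, True))    # SFP+ module installed: SFP+ is active
print(active_path(17, False))   # empty SFP+ slot: CX4 is active
print(active_path(23, False))   # empty SFP+ slot: backplane link is active
```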
HP 6120G/XG Blade Switch
Designed for the BladeSystem enclosure, the HP 6120G/XG Blade Switch provides sixteen 1Gb downlinks, four 1Gb copper uplinks, and two 1Gb SFP uplinks, along with three 10Gb uplinks and a single 10Gb cross-connect. It also includes a robust set of industry-standard Layer 2 switching functions, QoS metering, security, and high-availability features.

The 6120G/XG blade switch is ideal for data centers in transition, where a mix of 1Gb and 10Gb network connections is required.

The 6120G/XG blade switch provides consistency and interoperability across existing network investments to help reduce the complexity of network management through resilient core-to-edge connectivity and automated provisioning technologies. With a variety of connection interfaces, the 6120G/XG blade switch offers excellent investment protection.
The following table identifies the front panel components of the HP 6120G/XG Blade Switch.

Description
1 Port C1 (10GBASE-CX4)
2 Port X1 XFP (10GbE) slot*
3 Port X2 XFP (10GbE) slot*
4 Port S1 SFP (1GbE) slot**
5 Port S2 SFP (1GbE) slot**
6 Console port (USB 2.0 mini-AB connector)
7 Clear button
8 Ports 1–4 (10/100/1000BASE-T)
9 Reset button (recessed)
* Supports 10GBASE-SR XFP and 10GBASE-LR XFP pluggable optical transceiver modules
** Supports 1000BASE-T SFP, 1000BASE-SX SFP, and 1000BASE-LX SFP
Managing HP blade switches

Menu interface view
switch status and performance. HP offers the following interfaces for its blade switches:

Menu interface — A menu-driven interface offering a subset of switch commands through the built-in VT100/ANSI console.
Command line interface (CLI) — An interface offering the full set of switch commands through the VT100/ANSI console built into the switch.
ProCurve Manager (PCM) — Included in-box with all manageable ProCurve devices. Features include automatic device discovery, network status summary, topology and mapping, and device management.
ProCurve Manager Plus (PCM+) — A complete Windows-based network management solution that provides both the basic features offered with PCM as well as more advanced management features such as in-depth traffic analysis, group and policy management, configuration management, and device software updates.
from Cisco Systems
One external console port
One Fast Ethernet connection to the BladeSystem Onboard Administrator
The Fa0 (port 0) is dedicated to OA management. No data is routed to the Fa0 port.

Note
Ports 23 and 24 are configured by default as external-facing ports, but they can be configured to provide an internal crossover connection to an associated Cisco Catalyst Blade Switch 3020. If the cross-connects are enabled, the external ports 23 and 24 are automatically disabled.
Catalyst Blade Switch 3020 front bezel
The switch module has 18 LEDs. You can use the switch module LEDs to monitor switch module activity and performance. Graphical representations of the LEDs are shown in the preceding diagram.
The Cisco Catalyst Blade Switch 3120 series includes the Cisco Catalyst Blade Switch 3120G and Cisco Catalyst Blade Switch 3120X models. The Catalyst Blade Switch 3120 series introduces the Cisco stacking technology that eliminates the need to manage multiple switches per rack. Key features are:

Cisco stacking technology
Combine up to nine switches into a single logical switch
Use a single IP address and routing domain
Enable 64Gb stack bandwidth
Mix and match any combination of 3120 series switches
Enhanced performance
Improved manageability
Manage multiple switches as a single logical switch with a single IP address and a single Spanning Tree Protocol (STP) node
Support CiscoWorks software, which provides multilayer feature configurations such as routing protocols, ACLs, and QoS parameters
Support the Embedded Events Manager (EEM) and Generic On-line Diagnostics (GOLD)
Support the Cisco Network Assistant
Catalyst Blade Switch 3120 front bezel
Note
The preceding diagram displays 13 LEDs. The rest of the LEDs are visible in the device manager.
HP GbE2c Layer 2/3 Ethernet Blade Switch
The HP GbE2c Layer 2/3 Ethernet Blade Switch provides Layer 2 switching plus the additional capabilities of Layer 3 routing.

Using Layer 3 routing, inter-VLAN routing becomes more scalable and more efficient than equivalent Layer 2 networks that rely on STP alone. IP forwarding enables traffic to be forwarded between VLANs without an external router or Layer 3 switch. This reduces traffic in the core network by making Layer 3 routing decisions within the BladeSystem enclosure. Layer 3 routing also reduces the number of broadcast domains, increasing network performance and efficiency.

The Virtual Router Redundancy Protocol (VRRP) maximizes availability in complex network environments by allowing multiple switches to process traffic in an active-active configuration. All switches in a VRRP group can process traffic simultaneously. The switch supports:

4096 Address Resolution Protocol (ARP) entries
Global default route
Static routing support with 128 routing table entries
Dynamic routing support with up to 4,000 entries in a routing table
Routing Information Protocol (RIP) and Open Shortest Path First (OSPF)
The GbE2c Layer 2/3 switch provides 16 internal downlinks and two internal cross-connects in a single low-cost blade switch. It features five uplinks, four of which can be copper or fiber using optional SFP fiber modules.

Note
The HP GbE2c Layer 2/3 Fiber SFP Option Kit (440627-B21) contains two SX SFP fiber modules. Only SFP modules with this part number operate in the Layer 2/3 switch.
The front bezel of the GbE2c Layer 2/3 Ethernet Blade Switch features two LEDs (health and UID), one serial port, and five Ethernet ports.

The health LED in the GbE2c Layer 2/3 Ethernet Blade Switch can be in one of three states:

Off—Not powered up
The HP 1:10Gb Ethernet BL-c Switch is designed specifically for the data center transitioning from 1Gb to 10Gb. It enables customers to use an existing 10/100/1000Mb infrastructure to move to 10Gb as the need develops.

Designed for the BladeSystem enclosure, the 1:10Gb Ethernet BL-c Switch provides more than 34Gb of uplink bandwidth to handle the most demanding applications. It delivers sixteen 1Gb downlinks, four 1Gb uplinks along with three 10Gb uplinks (CX4, XFP), and a 10Gb cross-connect in a single-bay form factor. Performance features include low latency, wire speed performance for Layer 2 and Layer 3 packets, and low power consumption.

The XFP (10Gb SFP) Multi-Source Agreement (MSA) is a specification for a pluggable, hot-swappable optical interface for 10Gb SONET/SDH and Fibre Channel. CX4 is a copper technology also used in InfiniBand technology. It is designed to work up to a distance of 15 m (49 ft.). This technology has the lowest cost per port of all 10Gb interconnects, but at the expense of range. Each device capable of supporting a 10GbE module uses some MSA to provide the actual module connectivity within the device to the outside connector.
Additional features include:
Industry-standard Ethernet Layer 2 switching and Layer 3 routing functions
QoS
Security
High-availability features

The 1:10Gb Ethernet BL-c Switch reduces cabling and power and cooling requirements compared to stand-alone switches.

It is compatible with all server blades in a BladeSystem c7000 enclosure.
1:10Gb Ethernet BL-c Switch front bezel
The front bezel of the 1:10Gb Ethernet BL-c Switch provides the following two LEDs per port for the front panel Ethernet ports:

RJ-45 port speed LED
RJ-45 and 10Gb link/activity LED
HP 1Gb Ethernet Pass-Thru Module
used in an enclosure; however, the actual performance depends on end-to-end connectivity.

The 1Gb Ethernet Pass-Thru Module delivers 16 internal 1Gb downlinks and 16 external 1Gb RJ-45 copper uplinks. Designed to fit into a single I/O bay of the BladeSystem enclosure, the 1Gb Ethernet Pass-Thru module should be installed in pairs to provide redundant uplink paths.

Note
The 1Gb Ethernet Pass-Thru module (PN: 406740-B21) ships as a single unit and should be ordered in quantities of two. Cables are not included.

This Ethernet pass-thru module is designed for customers who want an unmanaged direct connection between each server blade within the enclosure and an external network device such as a switch, router, or hub. There is no need for an extra layer of LAN switches.
Because of the additional cost of cabling and extra ports on director-class switches, the 1Gb Ethernet Pass-Thru Module is an expensive way for a customer to connect to networks. It is targeted toward customers limited to direct 1:1 connections between the server and networks. Pass-thru modules also offer direct pass-through for customers who do not want embedded switching or an extra layer of LAN managed switches; however, HP Virtual Connect is a more cost-effective alternative.
Important
Pass-thru approaches are simple, but adding many cables could lead to reliability problems and risks of human error. No Virtual Connect support is available for pass-thru modules.
The HP 10GbE Pass-Thru Module is designed for BladeSystem and HP Integrity Superdome 2 customers requiring a nonblocking, one-to-one connection between each server and the network. This pass-thru module provides 16 uplink ports that accept both SFP and SFP+ connectors.

The HP 10GbE Pass-Thru Module can support 1Gb and 10Gb connections on a port-by-port basis. Optical as well as Direct Attach Copper (DAC) cables are also supported. Both standard Ethernet as well as Converged Enhanced Ethernet (CEE) traffic to an FCoE-capable switch is possible when using the appropriate NIC or adapter module. This module supports all NICs and mezzanine adapters including FlexNICs.

This module contains the following ports:
Sixteen internal 1Gb/10Gb downlinks
HP 10GbE Pass-Thru Module components

Front view of an HP 10GbE Pass-Thru Module
Description
UID LED
Blue light on—The pass-thru module is activated.
Blue light off—The pass-thru module is deactivated.
Health LED
Off—The pass-thru module is powered off.
Green—The pass-thru module is powered up, and all ports match.
Amber—An issue exists, such as a port mismatch. For more information, see the HP BladeSystem Enclosure Setup and Installation Guide.
Ethernet port
Green—Link is 10G.
Flashing green—10G link activity is detected.
Amber—Link is 1G.
Flashing amber—1G link activity is detected.
Flashing alternately green and amber—A link mismatch condition exists. For more information, see the HP BladeSystem Enclosure Setup and Installation Guide.

SFP+ ports to support SFP and SFP+ transceiver modules and Direct Attach Cables (DACs)
Learning check
1. List the available interconnect modules supported for ProLiant server blades in a c7000 enclosure.
.................................................................................................................
.................................................................................................................
.................................................................................................................
2. Match each interconnect with its description.
a. Cisco Catalyst Blade Switch 3020    ............ A 16-port Ethernet interconnect that provides 1:1 connectivity between the server and an external switch port
b. GbE2c Layer 2/3 Ethernet Blade Switch    ............ An integrated Layer 2+ switch that features 16 internally facing ports
c. 1:10Gb Ethernet Switch    ............ A high-performance, affordably priced, low-latency switch with 20 ports (16 downlinks and 4 uplinks)
d. 1Gb Ethernet Pass-Thru Module    ............ A switch with a full set of Layer 3 routing that uses optional SX SFP fiber modules
e. Cisco Catalyst Blade Switch 3120    ............ A switch that provides switch stacking technology, which combines up to nine switches into a single logical switch
3. Name the key differences between the Cisco Catalyst 3120G and 3120X.
.................................................................................................................
.................................................................................................................
4. How many VLAN IDs does the Cisco Catalyst Blade Switch 3120 support?
a. 1,024
b. 1,005
c. 1,000
d. 1,010
5. Which Ethernet module features five uplinks, four of which can be copper or
fiber using optional SFP fiber modules?
a. Virtual Connect Flex-10 10Gb module
b. 1/10Gb VC-Enet module
c. GbE2c Layer 2/3 Ethernet Blade Switch
d. 10GbE Pass-Thru Module
Objectives

After completing this module, you should be able to:
  Describe the HP BladeSystem Fibre Channel interconnect modules available for HP BladeSystems:
    Cisco MDS 9124e Fabric Switch
    Brocade 8Gb SAN Switch
  Describe the Serial-Attached SCSI (SAS) switches available for BladeSystems
Fibre Channel interconnect options

The BladeSystem architecture offers several choices for connecting server blades to Fibre Channel networks. These Fibre Channel interconnect modules are currently available for the BladeSystem:

Cisco MDS 9124e Fabric Switch — A Fibre Channel switch that supports link speeds up to 4Gb/s. The Cisco MDS 9124e Fabric Switch can operate in a fabric containing multiple switches or as the only switch in a fabric.

Brocade 8Gb SAN Switch — An easy-to-manage embedded Fibre Channel switch with 8Gb/s performance. The Brocade 8Gb SAN Switch hot-plugs into the back of the BladeSystem enclosure. The integrated design frees up rack space, enables shared power and cooling, and reduces cabling and the number of small form factor pluggable (SFP) transceivers. The Brocade 8Gb SAN Switch provides enhanced trunking support and new features in the Power Pack+ option.

Cisco MDS 9124e Fabric Switch for BladeSystem
The Cisco MDS 9124e Fabric Switch for BladeSystem features 16 logical internal ports (numbered 1 through 16) that connect sequentially to server bays 1 through 16 through the enclosure midplane. Server bay 1 is connected to switch port 1, server bay 2 is connected to switch port 2, and so forth. The external ports are labeled EXT1 through EXT4 (left bank) and EXT5 through EXT8 (right bank).

Up to six zero-footprint switches are supported per enclosure. The hot-swappable
Storage Connectivity Options for HP BladeSystem
The Cisco MDS 9124e Fabric Switch is available in two port-count options as well as with the option of an upgrade license for lower cost of entry:

Important
These are the same physical switch; available port options are dependent on the license purchased.

  Four external 4Gb ports
  Two preinstalled short wavelength small form-factor pluggable (SFP) modules
  Licensing for port activation in eight-port increments (the first eight ports are licensed by default)

Cisco MDS 9124e 24-port Fabric Switch (PN: AG642A)
  Sixteen internal 4Gb ports
  Eight external 4Gb ports
  Four preinstalled short wavelength SFPs
  Licensing available for port activation in eight-port increments (the first eight ports are licensed by default)
Note
PortChannel includes up to eight ports in one logical bundle.

  Universal ports with self discovery
  Nondisruptive software upgrades
  SAN-OS level 3.1(2) or later

Standard and optional software

The standard software components are:

SAN-OS — Delivers advanced storage networking capabilities.

Cisco Fabric Manager — Provides integrated, comprehensive management of larger storage area network (SAN) environments, enabling you to perform vital

advanced security features recommended for all enterprise SANs. The following additional features are bundled together in the Cisco MDS 9000 Enterprise package:
The preceding graphic shows the Cisco MDS 9124e Fabric Switch layout and components.
Dynamic Ports on Demand

Static mapping configuration

Static mapping describes the relationship between the device bays and the internal switch ports. Specific device bays must be populated to match a corresponding active switch port. This configuration significantly enhances usability for low-touch server customers.

With Dynamic Ports on Demand (DPOD), you can map any device bay to an active port. Ports are allocated on a first-come, first-served basis to any location, including external ports. The number of pre-reserved ports decreases the number of ports from the pool of ports. Removing a server or external port (except a pre-reserved port)
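The first-come, first-served behavior that DPOD describes can be sketched as a simple license pool. This is only an illustration of the allocation idea, not the switch firmware logic; the port names and the eight-license pool are assumptions.

```python
# Illustrative sketch: a DPOD-style pool that hands out port licenses
# first-come, first-served and returns them when a port is removed.

class DpodAllocator:
    """Allocate a fixed pool of port licenses on a first-come, first-served basis."""

    def __init__(self, licenses):
        self.free = licenses          # licenses remaining in the pool
        self.assigned = {}            # port name -> True once licensed

    def activate(self, port):
        """License a port (server bay or external) if the pool allows it."""
        if port in self.assigned:
            return True               # already licensed
        if self.free == 0:
            return False              # pool exhausted
        self.free -= 1
        self.assigned[port] = True
        return True

    def release(self, port):
        """Removing a non-reserved port returns its license to the pool."""
        if self.assigned.pop(port, None):
            self.free += 1

pool = DpodAllocator(licenses=8)      # e.g., the first eight ports licensed
for bay in ["bay1", "bay2", "ext1"]:
    pool.activate(bay)
print(pool.free)                      # 5 licenses left in the pool
```

Any combination of server bays and external ports draws from the same pool, which mirrors the "any location, including external ports" wording above.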
Brocade SAN switches

Brocade 8Gb SAN Switch

The switch hot-plugs into the enclosures, uses power and cooling provided by the enclosures, and features 24 auto-sensing ports (16 internal and 8 external). The switches can be managed locally and remotely using the HP BladeSystem Onboard Administrator and Brocade Fabric OS configuration and management tools.

This switch also supports DPOD, a feature that automatically discovers online ports and assigns an available license to them. This feature enables you to connect server blades to switch ports without regard for the server slot populated; the associated switch ports automatically activate as the server ports are deployed. Ports are activated on a first-come, first-served basis for any combination of locations, including external ports.

The Brocade 8Gb SAN Switch features:
  8Gb performance
  Management features in the Power Pack+ bundle
  Full fabric connectivity

HP B-Series 8/24c SAN Switch (PN: AJ821A)
  8Gb SAN Switch with 24 ports enabled (16 internal and 8 external ports)
  Four short-wave 8Gb SFPs
  Full fabric connectivity

HP B-Series 8/24c SAN Switch Power Pack+ (PN: AJ822A)
  8Gb SAN Switch with 24 ports enabled (16 internal and 8 external ports)
  Four short-wave 8Gb SFPs
  Full fabric connectivity
  Power Pack+ bundle
Advanced Performance Monitoring (APM)

Advanced zoning — Enables administrators to organize a physical fabric into logical groups and prevent unauthorized access by devices outside the zone

Web tools — Enable organizations to monitor and manage single Fibre Channel switches and small SAN fabrics

Dynamic Path Selection — Improves performance by routing data traffic dynamically across multiple links and trunk groups using the most efficient path in the fabric

Secure Fabric OS — Provides policy-based security protection for more predictable change management, assured configuration integrity, and reduced risk of downtime

Security methods include digital certificates and digital signatures, multiple
Fabric Manager — Manages up to 80 switches across multiple fabrics in real time, helping SAN administrators with SAN configuration, monitoring, dynamic provisioning, and daily management—all from a single seat
SAS storage solutions for BladeSystem servers

HP 3Gb SAS BL Switch

The HP 3Gb SAS BL Switch for HP BladeSystem enclosures is an integral part of HP direct-connect SAS storage, enabling a straightforward, external zoned SAS or shared SAS storage solution. The SAS architecture combines an HP P700m Smart Array controller in each server with 3Gb SAS BL switches connected to either an HP 600 Modular Disk System (MDS600) enclosure for zoned SAS or an HP 2000sa Modular Smart Array (MSA2000sa) for shared SAS storage.

The 3Gb SAS BL Switch enables two external architectures for BladeSystem servers:

Zoned SAS — Use the HP Virtual SAS Manager (VSM) software of the 3Gb SAS

Shared SAS — …storage provided by the MSA2000sa. The MSA2000sa creates a shared storage environment where more than one server blade can access a storage logical unit.
HP Virtual SAS Manager

Example of the VSM Maintain tab

HP Virtual SAS Manager (VSM) is embedded in the 3Gb SAS BL Switch firmware and is the software application used to create hardware-based zone groups to control access to external SAS storage enclosures and tape devices.

VSM enables you to perform the following tasks:
  Enter switch parameters
  Create zone groups
  Assign zone groups to servers
  Reset the switch
  Update firmware

Note
Storage is configured, formatted, and partitioned using software utilities such as

Note
For more information about HP Virtual SAS Manager, consult the HP Virtual SAS Manager 2.2.4.x User Guide available from the HP website.
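The zone-group model that VSM applies can be pictured as an access-control table: a server bay may reach a storage enclosure only if some zone group contains both. This is a conceptual sketch only; the zone-group, bay, and enclosure names are hypothetical and do not reflect VSM's actual data structures.

```python
# Illustrative sketch: hardware zone groups as a simple membership check.

zone_groups = {
    "zg_db":  {"enclosures": {"mds600_drawer1"}, "servers": {"bay1", "bay2"}},
    "zg_web": {"enclosures": {"mds600_drawer2"}, "servers": {"bay3"}},
}

def can_access(server_bay, enclosure):
    """A server bay may reach an enclosure only via a zone group containing both."""
    return any(
        server_bay in zg["servers"] and enclosure in zg["enclosures"]
        for zg in zone_groups.values()
    )

print(can_access("bay1", "mds600_drawer1"))  # True: bay1 is zoned to drawer 1
print(can_access("bay3", "mds600_drawer1"))  # False: bay3 is zoned to drawer 2 only
```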
4X InfiniBand switch modules

QLogic BLc 4X QDR IB Switch for HP BladeSystem

The 4X InfiniBand Switch modules for BladeSystem are double-wide switch modules based on the Mellanox technology. The 4X InfiniBand Switch module has 16 downlink ports to connect up to 16 server blades in the enclosure.

A subnet manager is required to manage and control an InfiniBand fabric. The subnet manager functionality can be provided either by a rack-mount InfiniBand switch with an embedded fabric manager (also known as an internally managed switch) or by host-based subnet manager software on a server connected to the fabric.

The 4X InfiniBand switch modules available for a BladeSystem environment are:

HP 4X QDR InfiniBand Switch Module
  Compatible with only the BladeSystem c7000 enclosure
  inter-switch links or to connect to external servers
  Supports 40Gb/s (QDR) bandwidth

HP 4X DDR InfiniBand Gen2 Switch Module
  Includes 16 internal 4X QDR downlink ports
  Includes 16 external 4X QDR QSFP uplink ports
  Uses the QLogic TrueScale ASIC architecture
  Designed to cost-effectively link workgroup resources into a cluster or provide an edge switch option for a larger fabric
  Supports an optional management module that includes an embedded subnet manager
  Supports optional InfiniBand Fabric Suite software
  Enables up to a 288-node fabric using only the management capability of the unit

Depending on the mezzanine connectors used for the InfiniBand host channel adapter (HCA), the switch module must be inserted into interconnect bays 3 and 4, 5 and 6, or 7 and 8.
Mezzanine cards and adapters

Similar to the PCI slots and cards used in the ProLiant servers, mezzanine slots and cards in the BladeSystem provide a connection from the server blades to Ethernet, Fibre Channel, and InfiniBand switches.

Mezzanine card and slot options available for BladeSystem

HP NC550m 10Gb 2-port PCIe x8 Flex-10 Ethernet Adapter

Note
Both Type I and Type II mezzanine cards use the same 450-pin connector (200 signal/250 gnd) to connect the power and PCIe signals and the connections from the server blades to the interconnect bays.
The Onboard Administrator detects any mismatch between the mezzanine card and the switch ports and will not allow the connection if it is misconfigured.

The basic architecture of the Type I mezzanine card has the following specifications:
  PCIe x4 or x8 bus width
  Maximum power is 15W
  3.97" (100.84mm) x 4.46" (113.28mm)

Type II cards and slots are eight-lane (x8) only. Type II mezzanine slots:
  Are the higher-positioned slots on the server blade system board
  Accept either Type I or Type II mezzanine cards

The Type II mezzanine card operates only in Type II mezzanine slots and is typically used for high-powered Gigabit applications such as 10Gb Ethernet.

All Type II mezzanine cards support eight lanes of connections to:

The basic architecture of the Type II mezzanine card has the following features:
  25W maximum power
  PCIe x8 bus width
HBAs available

The host bus adapter (HBA) mezzanine cards available for BladeSystems are:
  QLogic QMH2562 8Gb Fibre Channel HBA (PN: 451871-B21)
  Emulex LPe1205-HP 8Gb/s Fibre Channel HBA (PN: 456972-B21)
  Brocade 804 8Gb FC HBA for HP BladeSystem (PN: 590647-B21)

QLogic QMH2562 8Gb Fibre Channel HBA

QLogic QMH2562 8Gb Fibre Channel HBA

The QLogic QMH2562 8Gb Fibre Channel HBA is a dual-channel PCIe mezzanine form factor card designed for BladeSystem solutions. It delivers twice the data throughput of the previous-generation 4Gb mezzanine card. It is optimized for virtualization, low power usage, management, security, reliability, availability, and serviceability. It is also backward compatible with 4Gb and 2Gb Fibre Channel speeds and is compatible with all BladeSystem server blades. It is optimized for HP storage devices and is supported by third-party SAN vendors.
Reduced power consumption
  Saves power with the latest-generation technology
  Reduces overall power consumption by reducing the number of components on each Fibre Channel HBA
  Requires lower airflow, which lowers power consumption

Multipath support for redundant HBAs and paths, including Linux driver failover

Optimized reliability, availability, and serviceability (RAS), security, and manageability
Emulex LPe1205-HP 8Gb/s Fibre Channel HBA

Emulex LPe1205-HP 8Gb/s Fibre Channel HBA

The Emulex LPe1205-HP dual-port Fibre Channel HBA provides reliable, high-performance 8Gb/s connectivity. In addition to providing greater bandwidth, the LPe1205-HP HBA also provides features such as data integrity, security, and virtualization, which are all complementary to initiatives important to the enterprise data center.

Comprehensive virtualization capabilities with support for N_Port ID Virtualization (NPIV) and Virtual Fabric — Provides support for up to 255 VPorts, which improves server consolidation capabilities and asset utilization

Superior performance capable of sustaining up to 200,000 I/Os per second per
Message Signaled Interrupts eXtended (MSI-X) support for greater host CPU utilization — Streamlines interrupt routing to improve overall server efficiency
Brocade 804 8Gb Fibre Channel Host Bus Adapter

Brocade 804 8Gb Fibre Channel Host Bus Adapter

The Brocade 804 8Gb Fibre Channel HBA offers high-performance connectivity, extends fabric features to the server and applications, and integrates seamlessly with management software such as HP Data Center Fabric Manager to provide a complete end-to-end data center solution.

  Up to 1600MB/s throughput per port
  Fabric-based boot LUN discovery, which enables simplified deployment of boot-over-SAN environments
HP 4X InfiniBand Mezzanine HCAs

QLogic 4X QDR IB Dual-Port Mezzanine HCA

The 4X InfiniBand Mezzanine HCAs for BladeSystem enclosures include:

HP 4X QDR IB Dual-Port Mezzanine HCA
  Based on the ConnectX-2 technology from Mellanox or on the TrueScale technology from QLogic
  Designed as a dual-port 4X QDR InfiniBand PCI Express G2 Mezzanine card
  Designed for PCI Express 2.0 x8 connectors on BladeSystem G6 server blades
  Supported on ProLiant BL280c G6, BL2x220c G6, BL460c G6, and BL490c G6 server blades
  Supported with the Voltaire OFED Linux driver stack and WinOF 2.0 on Microsoft Windows HPC Server 2008
  Supported on ProLiant and Integrity server blades

up to 40Gbps (QDR) bandwidth (dual port) for performance-driven server and storage clustering applications in High-Performance Computing (HPC) and enterprise data centers. Key features include:

  Based on the Mellanox ConnectX-3 IB technology
Learning check

1. Advanced zoning and frame filtering are standard software with the Brocade 8Gb SAN Switch.
   True
   False

2. Which switch supports Dynamic Ports on Demand?
   a. Cisco MDS 9124e Fabric Switch
   b. Brocade 8Gb SAN Switch
   c. HP 4Gb VC-FC Module
   d. HP 4Gb FC Pass-Thru Module

3. The QLogic QMH2462 4Gb Fibre Channel HBA fits all ProLiant server blades.
   True
   False

4. Which feature enables the Brocade 8Gb SAN Switch to facilitate interoperability with other SAN fabrics and eliminate domain considerations while improving SAN scalability?
   a. ISL Trunking
   b. Frame filtering
   c. Access Gateway mode
   d. Fabric QoS

5. Which HBA provides support for up to 255 VPorts, which improves server consolidation capabilities and asset utilization?
   a. QLogic QMH2562 8Gb Fibre Channel HBA
Objectives

After completing this module, you should be able to explain how to configure:
  An HP GbE2c Layer 2/3 Ethernet Blade Switch
  A Cisco Catalyst Blade Switch 3020 or 3120
  An HP 1:10Gb Ethernet BL-c Switch
  An HP 6120XG or 6120G/XG switch
User, operator, and administrator access rights

The user interface provides multilevel password-protected user accounts. To enable better switch management and user accountability, three levels or classes of user access have been implemented on the switch. Levels of access to the command line interface (CLI), web management functions, and screens increase as needed to perform various switch management tasks. Access classes are:
  User
  Operator
  Administrator

Access to switch functions is controlled through the use of unique user names and passwords. After you are connected to the switch through the local console, telnet, or Secure Shell (SSH), you are prompted to enter a password.
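The three access classes can be modeled as a small permission table. This is only a conceptual sketch of the privilege levels described above; the permission names are illustrative and are not switch commands.

```python
# Illustrative sketch: the switch's three access classes as permission sets.

PERMISSIONS = {
    "user":          {"view"},                          # passive: status and statistics only
    "operator":      {"view", "temporary_change"},      # changes lost on reboot or reset
    "administrator": {"view", "temporary_change", "permanent_change"},
}

def allowed(access_class, action):
    """Return True if the given access class may perform the action."""
    return action in PERMISSIONS.get(access_class, set())

print(allowed("user", "permanent_change"))           # False
print(allowed("operator", "temporary_change"))       # True
print(allowed("administrator", "permanent_change"))  # True
```

Note that each class is a superset of the one below it, which matches the idea that an administrator can also make operator-level (temporary) changes.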
Note
HP recommends that you change default switch passwords after the initial
configuration and as regularly as required under your network security policies.
For more information, see “Setting Passwords” in the GbE2c Ethernet Blade
Configuring Ethernet Connectivity Options
Access-level defaults

The default user names and passwords for each access level are:

User
  User interaction with the switch is completely passive. The user has no direct responsibility for switch management. He or she can view all switch status information and statistics, but cannot make any configuration changes to the switch.
  The password is user.

Operator
  Operators can only make temporary changes on the switch. These changes will be lost when the switch is rebooted or reset. Operators have access to the switch management features used for daily switch operations. Because any changes an operator makes are undone by a reset of the switch, operators cannot severely impact switch operation.
  By default, the operator account is disabled and has no password.

Admin
  Only administrators can make permanent changes to the switch configuration; these changes are persistent across a reboot or reset of the switch. The administrator has complete access to all menus, information,
  9600 baud rate, 8 data bits
  No parity, 1 stop bit
  No flow control

To access the GbE2c switch remotely:

1. By default, the switch is set up to obtain its IP address from a Bootstrap Protocol (BOOTP) server existing on the attached network. From the BOOTP server, use the interconnect media access control (MAC) address to obtain the switch IP address.

Important
By default, BOOTP is enabled at the factory. To establish a static IP address, you

Note
The GbE2c switch can obtain its IP address from either BOOTP or DHCP.

2. From a computer connected to the same network, use the IP address to access the switch by using a web browser or telnet application, which enables you to access the switch Browser-Based Interface (BBI) or CLI.

To access the switch remotely, you must set an IP address in one of the following ways:

Management port access — This is the most direct way to access the switch.
workstation/server or to the network containing the workstation.

Important
Verify that the interconnect is not being modified from any other connections during the remaining steps.

3. Open a telnet connection by using the IP address set earlier. When the login prompt displays, the connection locates the switch in the network.

4. Enter the password. The default password is admin. If passwords have not been changed from the default value, you are prompted to change them. You can do one of the following:
   Enter new system passwords.

Note
You can create up to two simultaneous admin sessions and four user sessions.

5. Verify that the login was successful. A successful login displays the switch name and user ID to which you are connected.
to 31 network cables from the back of the server blade enclosure.

Note
On a heavily used system, using a single uplink port for 32 Ethernet signals can cause a traffic bottleneck. For optimum performance, HP recommends that at least one uplink port per switch be used.

Redundant crosslinks

The two switches are connected through redundant 10/100/1000 crosslinks. These two crosslinks provide an aggregate throughput of 2Gb/s for traffic between the switches.

Redundant paths to server bays

Redundant Ethernet signals from each server blade are routed through the enclosure backplane to separate switches within the enclosure. Two Ethernet signals are routed to Switch 1 and two are routed to Switch 2. This configuration provides redundant paths to each server bay; however, specific switch port to server mapping varies,
After a switch is configured, you can back up the configuration to a TFTP server as a text file. You can then download the backup configuration file from the TFTP server to restore the switch back to the original configuration. This restoration could be necessary if:
  The switch configuration becomes corrupted during operation.
  The switch must be replaced because of a hardware failure.

Configuring multiple GbE2c Switches

You can configure multiple switches by using scripted CLI commands through telnet or by downloading a configuration file using a TFTP server.

Using scripted CLI commands through telnet — The switch CLI enables you to execute customized configuration scripts on multiple switches. You can tailor a configuration script for one of the multiple switches and then deploy that
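The scripted-CLI approach above can be sketched as one command script replayed against each switch. This is a conceptual illustration only: the command strings, addresses, and mock send function are hypothetical stand-ins, not actual GbE2c CLI syntax or a real telnet session.

```python
# Illustrative sketch: replay one configuration script against many switches.

SCRIPT = [
    "enable-port 1",            # hypothetical command: enable downlink port 1
    "vlan 10 add-port 1",       # hypothetical command: put port 1 in VLAN 10
    "apply",                    # activate the changes
    "save",                     # make them persistent across a reset
]

def deploy(switch_addrs, script, send):
    """Send every script line to every switch; returns lines sent per switch."""
    sent = {}
    for addr in switch_addrs:
        for line in script:
            send(addr, line)    # in practice, this would write to a telnet session
        sent[addr] = len(script)
    return sent

log = []
result = deploy(["192.168.10.11", "192.168.10.12"], SCRIPT,
                send=lambda addr, line: log.append((addr, line)))
print(result["192.168.10.11"], len(log))  # 4 8
```

Swapping the mock `send` for a real telnet client is what turns the sketch into a deployment tool; the script itself stays identical across switches, which is the point of the technique.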
Obtaining an IP address

IP addresses can be assigned to two of the switch interfaces:

The fa0 Ethernet interface — This Layer 3 Ethernet interface is connected to the Onboard Administrator. It is used only for switch management traffic, not for data traffic.

The VLAN 1 interface — You can manage the switch module from any of its external ports through virtual LAN (VLAN) 1.

For the switch module to obtain an IP address for the fa0 interface through the Onboard Administrator, these conditions must be met:

The c7000 enclosure must be powered on and the Onboard Administrator must be connected to the network.

After you install the switch, it powers on and begins the power-on self-test (POST). You can verify that the POST has completed by confirming that the system and status LEDs remain green.

Important
If the switch module fails the POST, the system LED turns amber. POST errors are usually fatal. Call Cisco Systems immediately if your switch module fails POST.

After you install the switch module in the interconnect bay, the switch automatically obtains an IP address for its fa0 interface through the Onboard Administrator.
Host name, system contact, and system location

After completing the initial setup, you can configure these optional parameters through the Cisco Express Setup program:
  Local access password
  Telnet access password
  SNMP read and write community strings (if you plan to use a network-management program such as CiscoWorks)

When you first set up the switch module, you can use Express Setup to enter the initial IP information. Doing this enables the switch to connect to local routers and the Internet. You can then access the switch through the IP address for further configuration.
Assigning the VLAN 1 IP address

Mode button location on Cisco switch

To assign the VLAN 1 IP address:

1. Verify that no devices are connected to the switch, because during Express Setup, the switch listens for a DHCP server.

2. If your laptop has a static IP address, before you begin, change your laptop settings to temporarily use DHCP.

Important
You must initiate this process immediately after installing the switch module in the server blade enclosure. If you miss the opportunity to assign the IP address this way, you will need to remove and then reinstall the switch module.

…green. This takes approximately three seconds. Release the Mode button.

Note
If you have held the Mode button for more than two minutes and the LEDs have
5. Connect a CAT-5 Ethernet cable to any Ethernet port on the switch module front panel. Connect the other end to the Ethernet port on the laptop or workstation.

Caution
Do not connect the switch module to any device other than the laptop or workstation being used to configure it.

6. Verify that the port status LEDs on both connected Ethernet ports are green.

7. After the port LEDs turn green, wait at least 30 seconds and launch a web browser on your laptop or workstation.

8. Enter the IP address 10.0.0.1 (or 10.0.1.3 or 10.0.2.3, depending on the firmware version).

9. Continue the configuration by completing the Express Setup fields.
blade resides. The Onboard Administrator must be configured to run as a DHCP server, or the EBIPA feature must be enabled for the appropriate interconnect bay.

3. Install the switch in the interconnect bay. After approximately two minutes, the switch automatically obtains an IP address for its fa0 interface through the Onboard Administrator.

4. After you have installed the switch, it powers on. When it powers on, the switch begins the POST, which might take several minutes. Verify that the POST has completed by confirming that the system and status LEDs remain green. If the switch fails the POST, the system LED turns amber. POST errors are usually fatal. Call Cisco Systems immediately if the switch fails the POST.

5. Wait approximately two minutes for the switch to get the software image from its

displays.

9. On the left side of the Device Manager GUI, click Configuration Express Setup. The Express Setup home page displays.
Configuring an HP 1:10Gb Ethernet BL-c Switch

The HP 1:10Gb Ethernet BL-c Switch is a single-wide switch with 10Gb uplinks.

Planning the 1:10Gb Ethernet BL-c switch configuration

1:10Gb Ethernet BL-c Switch

HP recommends that you plan the configuration before you actually configure the switch. When you develop your plan, consider your default settings and assess the particular server environment to determine any requirements.

Default settings are:
  All downlink and uplink ports enabled
  A default VLAN assigned to each port
  The VLAN ID (VID) set to 1

This default configuration enables you to connect the server blade enclosure to the network by using a single uplink cable from any external Ethernet connector.

  Server type
  Operating system
  NICs enabled on the server

Note
Port 18 is reserved for the connection to the Onboard Administrator module. The Onboard Administrator performs the following functions:
  Enables you to perform future firmware upgrades
  Controls all port enabling by matching ports between the server and the interconnect bay
  Verifies that the server NIC option matches the switch bay that is selected and enables all ports for the NICs installed before power up
IP address.

3. Use the IP address to access the switch BBI or CLI, from a computer connected to the same network, using a web browser or telnet application.

To access the switch locally:

1. Connect the switch DB-9 serial connector, using a null modem serial cable, to a local client device with VT100 terminal emulation software.

2. Open a VT100 terminal emulation session with these settings: 9600 baud rate, eight data bits, no parity, one stop bit, and no flow control.
To enable better switch management and user accountability, three levels or classes of user access have been implemented on the switch. Levels of access to the CLI, web management functions, and screens increase as needed to perform various switch management tasks. Conceptually, access classes are defined as:

User interaction with the switch is completely passive. Nothing can be changed on the switch. Users can display information that has no security or privacy implications, such as switch statistics and current operational state information.

impact switch operation.

Administrators are the only users who can make permanent changes to the switch configuration; these changes are persistent across a reboot or reset of the switch. Administrators can access switch functions to configure and troubleshoot problems on the switch. Because administrators can also make temporary (operator-level) changes, they must be aware of the interactions between temporary and permanent changes.
Note
See the HP 1:10Gb Ethernet BL-c Switch Command Reference Guide available on
on
the HP website for more information on using these management interfaces to
configure the switch.
y
Configure multiple switches by using scripted CLI commands through telnet or by
er
downloading a configuration file by using a TFTP server.
and then that configuration can be deployed to other switches from a central
deployment server.
If you are planning for the base configuration of multiple switches in a network to be the same, manually configure one switch, upload the configuration to a TFTP server, and use that configuration as a base configuration template file. Switch IP addresses are acquired by default using BOOTP; therefore, each switch has a unique IP address. Each switch is remotely accessed from a central deployment server, and an individual switch configuration is downloaded to meet specific network requirements.
Note
See the HP 1:10Gb Ethernet BL-c Switch Command Reference Guide on the HP website for additional information on using a TFTP server to upload and download configuration files.
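The base-template workflow described above can be sketched in a few lines of Python. This is a minimal illustration only: the {SWITCH_IP} placeholder token, the host name, and the file contents are assumptions for the sketch, not actual switch configuration syntax.

```python
# Sketch of the base-template workflow: configure one switch manually,
# save its configuration as a template, then generate per-switch copies
# to be served from a TFTP server on the central deployment host.

def render_config(template: str, ip_address: str) -> str:
    """Fill a per-switch IP address into the base configuration template."""
    return template.replace("{SWITCH_IP}", ip_address)

# Illustrative template; real files come from the manually configured switch.
base_template = "hostname blc-switch\nip address {SWITCH_IP} 255.255.255.0\n"

for ip in ("10.1.1.11", "10.1.1.12"):
    rendered = render_config(base_template, ip)
    print(rendered.splitlines()[1])  # ip address 10.1.1.x 255.255.255.0
```

Each rendered file would then be placed on the TFTP server and downloaded by the corresponding switch.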
Switch IP configuration
Configuring the switch with an IP address expands your ability to manage the switch and use its features. By default, the switch is configured to automatically receive IP addressing on the default VLAN from a DHCP/BOOTP server that has been correctly configured with information to support the switch. However, if you are not using a DHCP/BOOTP server to configure IP addressing, use the menu interface or the CLI to manually configure the initial IP values. After you have network access to a device, you can use the web browser interface to modify the initial IP configuration if needed.
The switch IP address can be assigned by:
Using the CLI Manager-level prompt
Using a web browser interface
3. If you need further information on using the web browser interface, click [?] to access the web-based help available for the switch.
To connect to the CLI interface through the Onboard Administrator:
1. Connect a workstation or laptop computer to the serial port on the HP BladeSystem c3000 or c7000 OA module using a null-modem serial cable (RS-232).
2. Using a terminal program such as HyperTerminal or TeraTerm, open a connection to the serial port using connection parameters of 9600, 8, N, 1.
3. Press Enter. OA prompts you for administrator login credentials.
4. Enter a valid user name and password. The OA system prompt displays.
5. Enter the command:
connect interconnect <bay_number>
where <bay_number> is the number of the bay containing the blade switch. OA connects you to the initial screen of the blade switch CLI.
6. Press Enter. The blade switch CLI prompt displays. You can now enter blade switch CLI commands.
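As a usage sketch, steps 5 and 6 combine into a short console session. The bay number and the OA prompt shown here are illustrative examples, not output captured from a device:

```
OA> connect interconnect 1
(the initial screen of the blade switch CLI appears; press Enter for the CLI prompt)
```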
4. In the text box, enter 6120XG and then click Go.
5. Click the link for the correct operating system.
6. Download the Utilities package.
7. Install the driver by double-clicking the HPProCurve_USBConsole.msi file. Connect the small end of the supplied USB console cable to the mini-USB port.
8. Connect the standard end of the supplied USB console cable to a workstation or laptop computer. The computer will recognize the presence of a new USB device and will load the driver for it.
9. Using a terminal program such as HyperTerminal or TeraTerm, open a connection to the USB port. (By default, this port will appear as COM4.)
10. Press Enter twice. The blade switch CLI prompt displays. You can now enter blade switch CLI commands.
You can also access the blade switch remotely, through telnet from a PC or UNIX computer on the network and a VT100 terminal emulator. This method requires the blade switch to have an IP address, subnet mask, and default gateway. These can be supplied by a Dynamic Host Configuration Protocol (DHCP) or BOOTP server, or you can manually configure them using the CLI. By default, the blade switch gets its IP address through DHCP or BOOTP; see the next section for instructions on manually configuring a static IP address.
To communicate with a blade switch that has an IP address, subnet mask, and default gateway:
1. Use a ping command to verify network connectivity between the blade switch and your workstation or laptop computer.
2. Using a terminal program such as HyperTerminal or TeraTerm, open a connection using the IP address, telnet protocol, and port 23 of the blade switch.
3. Press Enter twice. The blade switch CLI prompt displays. You can now issue blade switch commands.
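From a Windows workstation, for example, the checks in steps 1 and 2 look like the following (10.10.10.20 is a placeholder address):

```
C:\> ping 10.10.10.20
C:\> telnet 10.10.10.20 23
(press Enter twice to reach the blade switch CLI prompt)
```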
2. From the manager’s CLI prompt (#) on the blade switch, enter:
config
3. Specify the VLAN of the port that attaches to the network. By default, all ports are in VLAN 1.
vlan <vlan_id>
4. Enter an IP address and subnet mask for the switch. Both the IP address and subnet mask are in the x.x.x.x format.
ip address <ip_address> <subnet_mask>
5. Enter a default gateway IP address in the x.x.x.x format.
ip default-gateway <ip_address>
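Put together, the static addressing steps above form a short CLI session. The VLAN number, addresses, and mask below are examples only, and the prompt is shown simplified (the actual prompt changes as you enter the configuration and VLAN contexts):

```
# config
# vlan 1
# ip address 10.10.10.20 255.255.255.0
# ip default-gateway 10.10.10.1
```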
In the factory-default configuration, the default VLAN (named DEFAULT_VLAN) is the primary VLAN of the switch. The switch uses the primary VLAN for learning the default gateway address. The switch can also learn other settings from a DHCP or BOOTP server, such as (packet) Time-To-Live (TTL) and TimeP or SNTP settings.
Note
Other VLANs can also use DHCP or BOOTP to acquire IP addressing. However, the gateway, TTL, and TimeP or SNTP values of the switch, which are applied globally and not per-VLAN, will be acquired through the primary VLAN only, unless manually set by using the CLI, menu, or web browser interface. If these parameters are manually set, they will not be overwritten by alternate values received from a DHCP or BOOTP server.
The IP addressing used in the switch should be compatible with your network. That is, the IP address must be unique and the subnet mask must be appropriate for your IP network.
If you change the IP address through either telnet access or the web browser interface, the connection to the switch will be lost. You can reconnect by either restarting telnet with the new IP address or entering the new address as the URL in your web browser.
When ip preserve is entered as the last line in a configuration file stored on a TFTP server, the following conditions are true:
If the current IP address for VLAN 1 was not configured by DHCP/BOOTP, IP Preserve retains the current IP address, subnet mask, and IP gateway address of the switch when the switch downloads the file and reboots. The switch adopts all other configuration parameters in the configuration file into the startup-config file.
If the current IP addressing for VLAN 1 of the switch is from a DHCP server, IP Preserve is suspended. In this case, whatever IP addressing the configuration file specifies is implemented when the switch downloads the file and reboots. If the file includes DHCP/BOOTP as the IP addressing source for VLAN 1, the switch will configure itself accordingly and will use DHCP/BOOTP. If instead the file includes a dedicated IP address and subnet mask for VLAN 1 and a specific gateway IP address, the switch will implement these settings in the startup-config file.
The ip preserve statement does not appear in the show config listings. To verify IP Preserve in a configuration file, open the file in a text editor and view the last line.
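Because ip preserve must be the last line of the file and never appears in show config output, a configuration file can be checked in a script before it is deployed. This is a minimal sketch; the sample file contents are illustrative:

```python
# Verify that a switch configuration file ends with 'ip preserve',
# mirroring the manual "view the last line" check described above.

def uses_ip_preserve(config_text: str) -> bool:
    """Return True if the last non-blank line of the file is 'ip preserve'."""
    lines = [line.strip() for line in config_text.splitlines() if line.strip()]
    return bool(lines) and lines[-1] == "ip preserve"

sample = "vlan 1\nip address 10.10.10.20 255.255.255.0\nip preserve\n"
print(uses_ip_preserve(sample))  # True
```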
Learning check
1. On the GbE2c Layer 2/3 Ethernet Blade Switch, the operator account is disabled by default and has no password.
True
False
2. List the conditions that must be met for the switch module to obtain an IP address for the fa0 interface through the Onboard Administrator.
.................................................................................................................
.................................................................................................................
.................................................................................................................
.................................................................................................................
3. List the available management interfaces for the HP 6120XG and 6120G/XG switches.
.................................................................................................................
.................................................................................................................
.................................................................................................................
.................................................................................................................
.................................................................................................................
4. List the three types of privileges available on the 1:10Gb Ethernet BL-c switch.
.................................................................................................................
.................................................................................................................
.................................................................................................................
Objectives
After completing this module, you should be able to explain how to configure the following switches:
Brocade 8Gb SAN Switch for HP BladeSystem
Cisco MDS 9124e Fabric Switch for HP BladeSystem
HP 3Gb SAS BL Switch for HP BladeSystem
Verify that the switch is installed
Choose one of the following methods to set the Ethernet IP address:
Using Enclosure Bay IP Addressing (EBIPA)
Using the external Dynamic Host Configuration Protocol (DHCP)
Setting the IP address manually

Using EBIPA
To set the Ethernet IP address using EBIPA:
1. Open a web browser and connect to the active HP Onboard Administrator.
2. Enable EBIPA for the corresponding interconnect bay in the Onboard Administrator GUI.
3. Verify the IP address using a telnet or SSH login to the switch, or select the switch in the Rack Overview window.
Configuring Storage Connectivity Options
Setting the IP address manually
To set the IP address manually, use the IP addressing information provided by your network administrator. By default, the IP address is set to 10.77.77.77 for switches with revision levels earlier than 0C.
3. Verify that the enclosure is powered on.
4. Identify the active Onboard Administrator in the BladeSystem enclosure.
5. Connect a null modem serial cable from your computer to the serial port of the active Onboard Administrator.
6. Configure the terminal application as follows:
In a Windows environment, enter:
Bits per second — 9600
Data bits — 8
Parity — None
Stop bits — 1
Flow control — None
9. Identify the interconnect bay number where the switch is installed. At the Onboard Administrator command line, enter:
connect interconnect x
12. Enter the remaining IP addressing information, as prompted.
13. Optionally, enter ipaddrshow at the command prompt to verify that the IP address is set correctly.
14. Record the IP addressing information, and store it in a safe place.
15. Enter Exit, and press Enter to log out of the serial console.
16. Disconnect the serial cable.
Note
For instructions about configuring the switch to operate in a fabric containing
switches from other vendors, refer to the HP SAN Design Reference Guide
available from: http://www.hp.com/go/sandesignguide
The following items are required for configuring and connecting the 8Gb SAN switch for use in a network and fabric:
8Gb SAN switch installed in the enclosure
IP address and corresponding subnet mask and gateway address recorded during the setting of the IP address
Ethernet cable
Small form-factor pluggable (SFP) transceivers and compatible optical cables, as required
Access to an FTP server for backing up the switch configuration (optional)
The date and time are used for logging events. The operation of the 8Gb SAN switch does not depend on the date and time; a switch with an incorrect date and time value will still function properly. To set the date and time, use the command line interface (CLI).
determined by this fabric license of the switch.

Disabling and enabling a switch
By default, the switch is enabled after power on and after the diagnostics and switch initialization routines complete. You can disable and re-enable the switch as necessary.

Using DPOD
Dynamic Ports On Demand (DPOD) functionality does not require a predefined assignment of ports. Port assignment is determined by the total number of ports in use as well as the number of purchased ports.
In summary, the DPOD feature simplifies port management by:

HBA present. A server blade that does not have a functioning HBA will not be treated as an active link for the purpose of initial DPOD port assignment.
Reset button

Reset button location

The Reset button on the Brocade SAN switches is located to the left of the status LEDs. It is a small, recessed micro switch that is accessed by inserting a pin or similarly sized object in the small hole to push the button.

Note
The Reset button does not return the switch to factory-default settings.
Management tools

The management tools built into the 8Gb SAN switch can be used to monitor fabric topology, port status, physical status, and other information used for performance analysis and system debugging. When running IP over Fibre Channel, these management tools must be run on both the Fibre Channel host and the switch, and they must be supported by the Fibre Channel host driver.
A terminal application, such as HyperTerminal in a Windows environment or TERM in a UNIX environment
A null modem serial cable

3. Connect a null modem serial cable from the computer to the serial port of the active Onboard Administrator.
4. Configure the terminal application as follows:
In a Windows environment, enter:
Baud rate: 9600 bits per second
8 data bits
None (No parity)
1 stop bit
No flow control
In a UNIX environment, enter: tip /dev/ttyb -9600
Items required for configuration
To configure and connect the Cisco MDS 9124e Fabric Switch for use in a network and fabric, you need:
Switch installed in a BladeSystem enclosure
IP address and corresponding subnet mask and gateway address
Ethernet cable
SFP transceivers and compatible optical cables, as required
Access to an FTP server for backing up the switch configuration (optional)
The date and time are used for logging events. The operation of the Cisco MDS 9124e Fabric Switch does not depend on the date and time; a switch with an incorrect date and time value will still function properly. Use the CLI to set the date and time.
You might need to recover the administrator password on the Cisco MDS 9124e switch if the user does not have another user account on the switch with network-administrator privileges. Refer to the Cisco MDS 9000 Family Fabric Manager Configuration Guide and to the Cisco MDS 9000 Family CLI Configuration Guide for detailed instructions.
Cisco MDS 9124e Fabric Switch management features table

The management tools built in to the Cisco MDS 9124e Fabric Switch can be used to monitor fabric topology, port status, physical status, and other information used for performance analysis and system debugging. When running IP over Fibre Channel, these management tools must be run on both the Fibre Channel host and the switch, and they must be supported by the Fibre Channel host driver.
You can connect a management station to one switch through Ethernet while managing other switches connected to the first switch through Fibre Channel. To do
The 3Gb/s SAS Switch is supported only in BladeSystem enclosures (c7000 and c3000). You can install the 3Gb SAS BL Switch in up to four interconnect bays in the c7000 and in up to two interconnect bays in the c3000.
Two SAS switches are required in the same BladeSystem enclosure interconnect bay row for redundancy. Single-switch configurations are supported as nonredundant.
For the c3000 enclosure, the SAS switch can be placed in interconnect bays 3 and 4 only.
For the c7000 enclosure, the SAS switch can be placed in interconnect bays 3 and 4, 5 and 6, or 7 and 8 only.
For the c3000 enclosure, using mezzanine slot 1 of a server along with enclosure interconnect bay 2 is not supported. Use the 3Gb/s SAS BL Switch Virtual SAS Manager (VSM) software to configure external SAS storage.
Note
Supported Internet browser versions are Microsoft Internet Explorer 6.0/7.0 and Mozilla Firefox 3.
Configuring the 3Gb SAS BL Switch

Zoning procedures

Key configuration tasks include:
Enabling or disabling multi-initiator mode.
Creating the following zone groups:
Switch-port zone groups — For shared SAS storage enclosures and tape libraries
Drive-bay zone groups — For zoned SAS storage enclosures
Assigning zone groups to servers.
For firmware versions earlier than 2.0.0.0, no configuration tasks are available. The switch is configured using the VSM application. As shown in the preceding table, configuration (zoning) procedures are the same for shared SAS storage enclosures and tape devices, but differ for zoned SAS storage enclosures.
Accessing the 3Gb SAS BL Switch
The switch is configured and managed through the Onboard Administrator and VSM applications.
To access VSM:
1. Access the Onboard Administrator of the enclosure. (The 3Gb SAS BL Switch is supported on Onboard Administrator 2.40 and later.)
2. In the Onboard Administrator Systems and Devices tree, expand the Interconnect Bays option and select the 3Gb SAS BL Switch.
3. After selecting the SAS switch to manage, click Management Console and wait a few moments for the VSM application to open.

Firmware version position

Firmware is preinstalled on each switch at the factory, but an updated, alternative, or preferred version might be available. The following types of firmware are available:

the MSL G3 tape libraries. All server blade bays have access to all storage enclosures connected to the switch. These settings are preconfigured and cannot be altered. To restrict access to the storage, use the features provided with the storage management software.
Learning check
1. List the tools used to configure the 3Gb SAS switch.
…………………………………………………………………………………………
…………………………………………………………………………………………
2. How do you set the IP address on a Cisco MDS switch?
…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
3. With Dynamic Ports on Demand (DPOD), port assignment is determined by the total number of ports in use as well as the number of purchased ports.
True
False
Objectives
After completing this module, you should be able to:
Describe the HP Virtual Connect portfolio and the basic technology
Plan and implement a Virtual Connect environment
Configure a Virtual Connect module
Manage a Virtual Connect domain
Explain how to use Virtual Connect modules in a real-world environment
HP Virtual Connect portfolio
Virtual Connect is an industry-standard-based implementation of server-edge I/O virtualization. It puts an abstraction layer between the servers and the external networks so that the LAN and storage area network (SAN) see a pool of servers rather than individual servers.

HP 1/10Gb VC Ethernet

Simplify and make the customer’s data center change-ready. The innovative HP 1/10Gb Virtual Connect Ethernet Module for the HP BladeSystem is the simplest, most flexible connection to networks. The Virtual Connect Ethernet Module is a new class of blade interconnect that simplifies server connections by cleanly separating the server enclosure from the LAN, simplifies networks by reducing cables without adding switches to manage, and allows a change in servers in just minutes, not days.
HP 1/10Gb-F VC Ethernet

The HP 1/10Gb-F Virtual Connect Ethernet Module for the HP BladeSystem is the simplest, most flexible network connection. The Virtual Connect Ethernet Module is a class of blade interconnect that simplifies server connections by cleanly separating the server enclosure from the LAN, simplifies networks by reducing cables without adding switches to manage, and allows changing servers in just minutes, not days. This model is similar to the HP 1/10Gb VC Ethernet Module, but offers optical uplinks.
Virtual Connect Installation and Configuration
HP Virtual Connect Flex-10 10Gb Ethernet

The HP Virtual Connect Flex-10 10Gb Ethernet Module is a class of blade interconnects that simplifies server connections by cleanly separating the server enclosure from the LAN. It simplifies networks by reducing cables without adding switches to manage, allows a server change in just minutes, and tailors network connections and speeds based on application needs.
HP Flex-10 technology significantly reduces infrastructure costs by increasing the number of NICs per connection without adding extra blade I/O modules, and by reducing cabling uplinks to the data center network.
HP Virtual Connect 4Gb Fibre Channel Module

The HP Virtual Connect 4Gb FC Module expands existing Virtual Connect capabilities by allowing up to 128 virtual machines (VMs) running on the same physical server to access separate storage resources. Provisioned storage resource is associated directly to a specific VM, even if the virtual server is re-allocated within the BladeSystem. Storage management is no longer constrained to a single physical HBA on a server blade. SAN administrators can manage virtual HBAs with the same methods and viewpoint as physical HBAs.
The HP Virtual Connect 4Gb Fibre Channel Module cleanly separates the server enclosure from the SAN, simplifies SAN fabrics by reducing cables without adding switches to the domain, and allows a fast change in servers.
HP Virtual Connect 8Gb 20-port Fibre Channel Module

The HP Virtual Connect 8Gb 20-port FC Module enables up to 128 VMs running on the same physical server to access separate storage resources. Provisioned storage resource is associated directly to a specific VM, even if the VM is re-allocated within the BladeSystem. Storage management of VMs is no longer limited by the single physical HBA on a server blade: SAN administrators can manage virtual HBAs with the same methods and viewpoint as physical HBAs.
HP Virtual Connect 8Gb 24-port Fibre Channel Module

HP Virtual Connect 8Gb 24-port Fibre Channel Module key features include:
Eight 2/4/8Gb auto-negotiating Fibre Channel uplinks connected to external SAN switches
Two Fibre Channel SFP+ transceivers included with the Virtual Connect Fibre Channel Module
Sixteen 2/4/8Gb auto-negotiating Fibre Channel downlink ports for maximum HBA performance
HBA aggregation on uplink ports using ANSI T11 standards-based N_Port ID Virtualization (NPIV) technology
Up to 255 VMs running on the same physical server can access separate storage resources
Extremely low-latency throughput for switch-like performance
This module is compatible with current releases of ProLiant and Integrity server blades that support the QLogic QMH2462 4Gb FC HBA and QMH2562 8Gb FC HBA, or the Emulex LPe1105-HP 4Gb HBA and LPe1205 8Gb HBA for HP BladeSystem.
HP Virtual Connect FlexFabric modules

FlexFabric connection options

The HP Virtual Connect FlexFabric module is a logical combination of Flex-10 technology with industry-standard VC Fibre Channel technology in a single interconnect module. The VC FlexFabric Module and FlexFabric Adapters converge Ethernet and Fibre Channel traffic within the BladeSystem enclosure and then separate the two at the enclosure edge. Connectivity to both the external Ethernet and native Fibre Channel from the same module allows customers to reduce complexity without disrupting existing LAN and SAN infrastructure, and eliminates the need for Fibre Channel

G1/G5 LOM — 1Gb Ethernet, one NIC only

You can connect the VC FlexFabric uplinks to 1GbE networks using SFP transceivers for an easy transition to 10Gb later.
FlexFabric adapter — Physical functions

FlexFabric LOM overview

Each FlexFabric adapter has two 10Gb physical ports that can be partitioned into four physical functions (PFs):
PF 1, 3, and 4 on each port are always, and can only be, Ethernet.
The second PCIe function (PF) can be Ethernet, Fibre Channel over Ethernet (FCoE), or iSCSI. It must have the same configuration between ports 1 and 2 on the same FlexFabric adapter.
Therefore, port 1 FCoE and port 2 iSCSI cannot be on the same adapter.
Flex-10 adapter mapping with VC Flex-10 modules

FlexFabric LOM and VC Flex-10 module

cards. Four Ethernet ports are available from any LOM.

FlexFabric adapter mapping with VC FlexFabric modules

utilized as a NIC, FCoE, or iSCSI device and is recognized as such by the server operating system.
Rx side allocation for FlexFabric

Individual Ethernet, iSCSI, or FCoE function received traffic (Rx) flows are not limited and could consume up to the full line rate of 10Gb. With FCoE, however, Enhanced Transmission Selection flow control management guarantees the minimum bandwidth set by the Virtual Connect Manager. Thus, when there is no congestion, FCoE or LAN bandwidth can exceed the specified data rates for traffic flowing from VC to the FlexFabric adapter. Under congested conditions, the VC module will enforce a fair allocation of bandwidth as determined by the FCoE function rate limit defined in the server profile. The remainder will be set as the aggregate rate limit for the FlexNICs.
On the transmitted traffic (Tx) side, a FlexNIC is limited by the server profile definition and set as the maximum in the network definition.

FlexFabric adapter mapping with 10G Pass-Thru modules
When connecting to 10G Pass-Thru Modules, the FlexFabric adapters lose most of their advanced features:
The PCIe functions have fixed configurations and cannot be easily changed or disabled.
The only two available configurations are one NIC and one storage function (either FCoE or iSCSI, depending on the server model).
The only adjustable bandwidth control is between NIC and FCoE.
There is no:
Virtualization of Ethernet MAC addresses or Fibre Channel World Wide Names (WWNs)
Centralized management of SAN (Fibre Channel or iSCSI) boot parameters
Integration with BladeSystem Matrix or upper-layer software tools such as HP Infrastructure Orchestration
Determine which Ethernet networks will be connected to or contained within the domain. Most installations have multiple Ethernet networks, each typically mapped to a specific IP subnet. The VC Manager enables definition of up to 64 different Ethernet networks that can be used to provide network connectivity to server blades. Each physical NIC on a server blade can be connected to any one of these Ethernet networks.
Virtual Connect Ethernet networks can be completely contained within the domain for server-to-server communication or connected to external networks through rear panel port cable connections (uplinks). For each network, the administrator must use the VC Manager to identify the network by name and to define any external port connections.
Determine the Ethernet MAC address and Fibre Channel WWN range to be used for the servers within the enclosure. Server and networking administrators should fully understand the selection and use of MAC address ranges before configuring the enclosure.
Name the fabric that servers will connect to. The setup wizard enables you to specify the Fibre Channel fabrics that will be made available. Each VC-FC
The LAN administrator defines the Ethernet networks and connections.
The SAN administrator defines the storage fabrics and connections.
The network administrator:
Ensures that the appropriate uplink cables are dropped to the rack (for example, using two 10Gb links or a bundle of two 8 x 1Gb links, primary and standby)
Configures the data center switch so that selected networks are made available to the enclosure
Documents the network names and VLAN IDs
The server administrator:
Note
Stacking links are used to interconnect VC-Enet modules when more than two modules are installed in a single enclosure. This feature enables all Ethernet NICs on all servers in the Virtual Connect domain to have access to any VC-Enet module uplink port. By using these module-to-module links, a single pair of uplinks can function as the data center network connections for the entire Virtual Connect domain.

3. Cable the Virtual Connect Ethernet uplinks to the data center networks.
4. Connect the data center Fibre Channel fabric links (if applicable).
5. Note the default network settings for the VC-Enet module in bay 1 (from the tear-off tag).
6. Note the default network settings for the Onboard Administrator.
7. Apply power to the enclosures.
8. Use Onboard Administrator for basic setup of the enclosures (enclosure name, passwords, and so forth).
Virtual Connect Ethernet stacking

VC with stacking links

Virtual Connect stacking rules:
Any port can be used for stacking. Stacking cables are auto-detected.
All VC Ethernet modules have at least one internal stacking link through the midplane. The 1/10Gb VC-Ethernet module has two internal stacking links for a total of 20Gb of cable-free stacking.
Best practice for stacking is to connect each Ethernet module to two different Ethernet modules. In the preceding graphic, every module is connected to two different modules. Each module connects to the adjacent bay using the internal midplane path (the orange lines). Then, either 1Gb or 10Gb cables are used to stack to another module (the blue lines).
Virtual Connect modules stacking links

In the preceding graphic, stacking links are shown to be both external and internal. The internal 10Gb links are connected by way of the signal midplane inside the enclosure and connect the modules horizontally. The external links are both 10Gb (CX4) and 1Gb (RJ-45) and can extend a server’s network connections across multiple VC modules. These external links can also connect to the external infrastructure switches.
Notice that all the modules in the graphic are Ethernet-based, indicating that the VC Fibre Channel modules do not participate in the stacking example.
Virtual Connect Installation and Configuration
and node WWNs for each Fibre Channel HBA port. Although the hardware ships
with default WWNs, Virtual Connect can assign WWNs that will override the
factory default WWNs while the server remains in that Virtual Connect enclosure.
When configured to assign WWNs, Virtual Connect securely manages the WWNs
by accessing the physical Fibre Channel HBA through the enclosure Onboard
Administrator and the iLO interfaces on the individual server blades.
When assigning WWNs to a Fibre Channel HBA port, Virtual Connect assigns both
a port WWN and a node WWN. Because the port WWN is typically used for
configuring fabric zoning, it is the WWN displayed throughout the Virtual Connect
user interface. The assigned node WWN is always the same as the port WWN
incremented by 1.
Configuring Virtual Connect to assign WWNs in server blades maintains a
consistent storage identity even when the underlying server hardware is changed.
This method allows server blades to be replaced without affecting the external Fibre
Channel SAN administration.
HP has set aside a dedicated range of Fibre Channel WWNs. You can set each
Virtual Connect domain to either a WWN defined by Virtual Connect or a factory-
default WWN.
The naming convention is as follows:
50:06:0B:00:00:C2:62:00 to 50:06:0B:00:00:C3:61:FF
Equals 64K WWNs
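The arithmetic behind these numbers can be checked with a short sketch (illustrative Python, not HP tooling; the helper names are ours):

```python
# Illustrative sketch: how a Virtual Connect node WWN relates to the
# port WWN, and the size of the HP-reserved WWN range quoted above.

def wwn_to_int(wwn):
    """Parse a colon-separated WWN string into an integer."""
    return int(wwn.replace(":", ""), 16)

def int_to_wwn(value):
    """Format a 64-bit integer as a colon-separated WWN string."""
    text = f"{value:016X}"
    return ":".join(text[i:i + 2] for i in range(0, 16, 2))

# The assigned node WWN is the port WWN incremented by 1
port_wwn = "50:06:0B:00:00:C2:62:00"
node_wwn = int_to_wwn(wwn_to_int(port_wwn) + 1)
print(node_wwn)  # 50:06:0B:00:00:C2:62:01

# Size of the HP-reserved range (inclusive)
first = wwn_to_int("50:06:0B:00:00:C2:62:00")
last = wwn_to_int("50:06:0B:00:00:C3:61:FF")
print(last - first + 1)  # 65536, that is 64K WWNs
```

Because fabric zoning is usually built on the port WWN, only the port WWN is shown in the VC user interface; the node WWN can always be derived from it as above.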
Virtual Connect Fibre Channel port types and logins

Configuration of SAN with VC modules and without VC modules

Key Fibre Channel port types:
N_Port (End Port)
F_Port (Fabric Port)—A fabric port addressable by the N_Port attached to it
E_Port (Expansion Port)—A switch port used for switch-to-switch connections
Fibre Channel logins
1. Link is established.
2. The N_Port sends a Fabric Login (FLOGI) frame to the well-known fabric address (FFFFFE).
3. The fabric responds with an Accept (ACC) frame.
Important
! Each switch must have its own domain ID between 1 and 254 with no duplicate IDs in the
same fabric.
N_Port_ID virtualization

A VC-FC module functions as an HBA aggregator and uses NPIV, which assigns
multiple N_Port_IDs to a single N_Port, thereby enabling multiple distinguishable
entities.
NPIV functions within a Fibre Channel HBA and enables unique WWNs and IDs for
each virtual machine within a server. A VC-FC module functions as a transparent
aggregator as described in:
Fibre Channel Device Attach (FC-DA) Specification, Section 4.13
Fibre Channel Link Services (FC-LS) Specification
1—Fabric login using HBA aggregator WWN (WWN X)
Establishes the buffer credits for the overall link
Receives an overall Port ID
2a to 4a—Server HBA logs in normally using the WWNs
2b to 4b—Server HBA fabric logins are translated to Fabric Discovery (FDISC)
5—Traffic for all four N_Port IDs is carried on the same link
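The flow above can be sketched as a toy model (illustrative Python, not a real Fibre Channel stack; the class name and the N_Port_ID values are placeholders):

```python
# Conceptual sketch of NPIV login translation on one physical link:
# the first login is the aggregator's FLOGI, and each subsequent
# server HBA login is translated to an FDISC on the same N_Port.

class NPIVLink:
    def __init__(self):
        self.logins = []  # (request type, WWN, assigned N_Port_ID)

    def login(self, wwn):
        # First login on the link is a FLOGI; later ones become FDISC
        request = "FLOGI" if not self.logins else "FDISC"
        n_port_id = 0x010100 + len(self.logins)  # fabric-assigned (placeholder)
        self.logins.append((request, wwn, n_port_id))
        return request

link = NPIVLink()
print(link.login("WWN_X"))   # FLOGI - HBA aggregator login
for wwn in ("WWN_A", "WWN_B", "WWN_C"):
    print(link.login(wwn))   # FDISC - server HBA logins, translated

# All four N_Port_IDs now share the same physical link
print(len(link.logins))      # 4
```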
N_Port_ID virtualization
NPIV is independent of both operating systems and device drivers, and the standard
QLogic and Emulex Fibre Channel HBAs support NPIV.
NPIV does not interfere with server/SAN compatibility. After the server is logged in,
Fibre Channel frames pass through unchanged.
Configuring Virtual Connect
used to stack Virtual Connect modules and enclosures as part of a single Virtual
Connect domain.
Networks must be defined within the VC Manager so that specific, named networks
can be associated with specific external data center connections. These named
networks can then be used to specify networking connectivity for individual servers.
A single external network can be connected to a single enclosure uplink, or it can
make use of multiple uplinks to provide improved throughput or higher availability. In
addition, multiple external networks can be connected over a single uplink (or set of
uplinks) through the use of VLAN tagging.
The simplest approach to connecting the defined networks to the data center is to
map each network to a specific external port. An external port is defined by the
following:
Enclosure name
Interconnect bay containing the Virtual Connect Ethernet module
Selected port on that module (1-8, X1, X2, . . .)
Virtual Connect logical flow
The Virtual Connect configuration process uses a consistent methodology.
Create a VC domain

One of the first requirements in setting up a VC environment is to establish a VC
domain through the web-based VC Manager interface.
A Virtual Connect domain consists of an enclosure and a set of associated modules
and server blades that are managed together by a single instance of the VC
Manager. The Virtual Connect domain contains specified networks, server profiles,
and user accounts that simplify the setup and administration of server connections.
Establishing a Virtual Connect domain enables administrators to upgrade, replace,
or move servers within their enclosures without changes being visible to the external
LAN and SAN environments.
All Ethernet modules interconnected.
All enclosures must have the identical VC-FC configuration (no stacking of Fibre
Channel modules).
It supports:
Up to four c7000 enclosures
Up to 16 Virtual Connect Ethernet modules
Up to 16 Virtual Connect Fibre Channel modules
Stacking cable options:
Fibre cables (SFP+)
10Gb copper Ethernet cables with CX-4 connectors. (Do not use InfiniBand
cables because they are tuned differently.)
Note
HP currently limits each domain to 16 Ethernet modules and 16 Fibre Channel modules. If
more than 16 are detected, the domain will be degraded with a
DOMAIN_OVERPROVISIONED statement.
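A minimal sketch of the limit check described in the note (the status string comes from the note; the function and constants are illustrative, not HP firmware logic):

```python
# Illustrative check of the documented per-domain module limits.
MAX_ETHERNET_MODULES = 16
MAX_FC_MODULES = 16

def domain_status(ethernet_modules, fc_modules):
    """Return a degraded status when a domain exceeds the module limits."""
    if ethernet_modules > MAX_ETHERNET_MODULES or fc_modules > MAX_FC_MODULES:
        return "DOMAIN_OVERPROVISIONED"
    return "OK"

print(domain_status(16, 16))  # OK
print(domain_status(17, 16))  # DOMAIN_OVERPROVISIONED
```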
Note
Latency is a function of the bridge chip used in the module. Both of the VC 1/10 modules
use the same bridge chip and, therefore, will have identical latency. The Flex-10 module
uses a bridge chip with much lower latency.
Important
! A switch that does not understand Link Aggregation Control Protocol (LACP) or Link Layer
Discovery Protocol (LLDP) (such as Nortel 8500-series switches) can introduce a loop. If
the switch does not support LACP, change the uplink port mode from Auto to Failover.
PortFast
The Spanning Tree PortFast feature was designed for Cisco switch ports connected to
edge devices, such as server NIC ports. This feature allows a Cisco switch port to
bypass the “listening” and “learning” stages of spanning tree and quickly transition
to the “forwarding” stage. By enabling this feature, edge devices are allowed to
immediately begin communicating on the network instead of having to wait for
Spanning Tree to determine whether it needs to block the port to prevent a loop—a
process that can take 30+ seconds with default Spanning Tree timers. Because edge
devices do not present a loop on the network, Spanning Tree is not needed to
prevent loops and can be effectively bypassed by using the PortFast feature. The
benefit of this feature is that server NIC ports can immediately communicate on the
network when plugged in rather than timing out for 30 or more seconds. This
strategy is especially useful for time-sensitive protocols such as PXE and DHCP.
Important
! Using features such as PortFast and BPDU Guard enables uplink failover to occur more
quickly and offers protection against the possibility of a loop.
Because VC uplinks operate on the network as an edge device (like teamed server
NICs), Spanning Tree is not needed on the directly connected Cisco switch ports.
Thus, PortFast can be enabled on the Cisco switch ports directly connected to VC
uplinks.
Note
The interface command to enable PortFast on a Cisco access port is: spanning-tree portfast
The interface command to enable PortFast on a Cisco trunk port is: spanning-tree portfast trunk
BPDU Guard
BPDU Guard is a safety feature for Cisco switch ports that have PortFast enabled.
Enabling BPDU Guard allows the switch to monitor for the reception of Bridge
Protocol Data Unit (BPDU) frames (spanning tree configuration frames) on the port
configured for PortFast. When a BPDU is received on a switch port with PortFast and
BPDU Guard enabled, BPDU Guard will cause the switch port to err-disable (shut
down). Since ports with PortFast enabled should never be connected to another
switch (which transmits BPDUs), BPDU Guard protects PortFast-enabled ports from
being connected to other switches. This arrangement prevents:
Loops caused by bypassing Spanning Tree on that port
Any device connected to that port from becoming the root bridge
Because Virtual Connect behaves as an edge device on the network, and because
VC does not participate in the data center spanning tree (that is, does not transmit
BPDUs on VC uplinks), BPDU Guard can be used, if desired, on Cisco switch ports
connected to VC uplinks.
Note
The interface command to enable BPDU Guard on a Cisco port is: spanning-tree
bpduguard enable.
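Putting the commands from the notes above together, a Cisco switch port facing VC uplinks might be configured as in the following sketch (the interface name and description are illustrative; trunk-mode PortFast is shown, so use plain spanning-tree portfast on an access port instead):

```
interface TenGigabitEthernet1/0/1
 description Edge port facing VC uplinks (illustrative)
 spanning-tree portfast trunk
 spanning-tree bpduguard enable
```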
VC base enclosure

Multiple enclosures stacked together
Enclosure removal
Before removing enclosures, you need to know the location of the uplinks and which
are active in the base enclosure. If the active uplinks are in the base enclosure, the
following steps should be non-disruptive:
From the VC Manager’s Domain Settings window, delete the enclosure (be sure
to remove the right one)
Unplug the interenclosure stacking links
VC-Fibre Channel configuration

VC-FC configuration with multiple enclosures
VC-FC connected to the same SAN

Although VC-FC does not stack, the same number of connections to each SAN is not
required. In the example, enclosure 1 and enclosure 2 are both connected to
SAN_A, but enclosure 1 has four connections to that SAN, while enclosure 2 has
Note
The uplink port assignment within the VC Domain is enforced by VC Manager.
Note
A single-enclosure VC Domain that contains multiple VC-FC modules within the SAME
chassis does not have this restriction, as long as it is not within a VC Domain stack.
c3000 stacking

c3000 enclosure

Stacking is supported with the c3000 enclosure if the following requirements are met:
There is a maximum of two enclosures.
Define Ethernet networks

Defining Ethernet networks flow

After the domain has been created, you can define the Ethernet networks. The
Network Setup Wizard establishes external Ethernet network connectivity for a
BladeSystem enclosure using Virtual Connect. A user account with network
privileges is required to perform these operations.
Use this wizard to:
Define available networks
These connections can be uplinks dedicated to a specific Ethernet network or shared
uplinks that carry multiple Ethernet networks with the use of VLAN tags.
Define Fibre Channel SAN connections

Defining FC SAN connections flow

The Virtual Connect Fibre Channel Setup Wizard configures external Fibre Channel
connectivity for a BladeSystem enclosure using Virtual Connect. A user account with
storage privileges is required to perform these operations.
Use this wizard to:
Identify WWNs to be used on the server blades deployed within this Virtual
Connect domain
Define available SAN fabrics
Create server profiles

The Virtual Connect Manager Server Profile Wizard allows you to quickly set up and
configure network/SAN connections for the server blades within your enclosure.
With the wizard, you can define a server profile template that identifies the server
connectivity to use on server blades within the enclosure. The template can then be
used to automatically create and apply server profiles to up to 16 server blades. The
individual server profiles can be edited independently.
Before beginning the server profile wizard, do the following:
Complete the Network Setup Wizard.
Define a server profile template.
Assign server profiles.
Name server profiles.
Implementing the server profile

Setting up a profile workflow

Follow these steps to set up the server profile:
1. Configure the server profile using the VC Manager user interface.
4. VC Manager writes the server profile information to the server.
5. Power on the server.
6. CPU BIOS and NIC/HBA option ROM software write the profile information to
the interface.
7. The successful write is communicated to the VC Manager through the Onboard
Administrator.
8. The server boots using the server profile provided.
Important
! When a blade is inserted into a bay that has a VC Manager profile assigned, the VC
Manager detects the insertion through communications with the Onboard Administrator
and must generate profile instructions for that server before the server is allowed to power
on. If VC Manager is not communicating with the Onboard Administrator at the time the
server is inserted, the Onboard Administrator will continue to deny the server iLO power
request until the VC Manager has updated the profile. If a server is not powering on,
verify that the VC Manager has established communications with that Onboard
Administrator.
Now that the VC Domain has been created with Ethernet networks, Fibre Channel
SANs, and assigned server profiles, you can:
Replace a failed server without logging in to VC Manager because the server
profile is assigned to the bay
Copy a server profile from one bay to another
Virtual Connect – Server profile migration

Virtual Connect can take a server profile from server A and migrate that profile to a
spare server if server A were to fail or go offline.
The profile contains the “personality” of the server, including:
Virtual Connect MAC addresses
Virtual Connect Fibre Channel WWNs
LAN and SAN assignments
Boot parameters
Server profile migration
Virtual Connect Manager

Virtual Connect Manager homepage

The VC Manager runs embedded on the VC-Ethernet Module in bay 1 or 2 of the
base enclosure and is accessible through the Onboard Administrator management
interface. The VC Manager connects directly to the active Onboard Administrator
module in the enclosure and has the following functions:
Defines and manages server I/O profiles
The VC Manager contains utilities and a Profile Wizard to develop templates to
create and assign profiles to multiple servers at one time. The I/O profiles include the
physical NIC MAC addresses, Fibre Channel HBA WWNs, and the SAN boot
configurations.
The VC Manager profile summary page includes a view of server status, port, and
network assignments. You can also edit the profile details, reassign the profile, and
examine how HBAs and NICs are connected.
Accessing the Virtual Connect Manager

Accessing VC Manager from Onboard Administrator

Access to the VC Manager is over the same Ethernet connection used to access the
enclosure Onboard Administrator and server blade iLO connections. To access the
VC Manager for the first time, you can either log in using a web browser to the
Onboard Administrator and then select the VC Manager link, or use the dynamic
DNS name printed on the tear-off tag for the VC-Ethernet Module in Interconnect bay
1 (enter the DNS name in the browser address text field).
de
Note
The VC Mana ager typically runs
r on the Virtuual Connect Ethhernet module iin bay 1 unlesss
that module is
i unavailable, causing a failo over to the VC MManager runnin ng in bay 2. If yyou
cannot conne ect to the VC Manager
M ard Administrator
in Interrconnect bay 1,, use the Onboa
Virtual Connect Manager login page

Log on using the user name (Administrator) and password from the Default Network
Settings tear-off tag for Interconnect bay 1. After you log in for the first time, the Virtual
Connect Manager Setup Wizard screen displays.
To set up the Virtual Connect domain and network, follow these steps:
Log in and run the Domain Setup Wizard.
Launch the Network Setup Wizard.
Select a MAC address range.
After an enclosure is imported into a Virtual Connect domain, server blades that
have not been assigned a server profile are isolated from all networks to ensure that
only properly configured server blades are attached to data center networks.
A predeployment profile can be defined for each device bay so that the server blade
can be powered on and connected to a deployment network. These profiles can later
be modified or replaced by another server profile.
Virtual Connect Manager home page
This screen provides access for the management of enclosures, servers, and
networking. It also serves as the launch point for the initial setup of VC Manager.
The VC Manager navigation system consists of a tree view on the left side of the
page that lists all of the system devices and available actions. The tree view remains
visible at all times.
Note
The Home Page will look slightly different depending on the firmware revision.
Update firmware (Virtual Connect Ethernet and Virtual Connect Fibre
Channel)
Networking
Configure network default settings
Select the MAC address range to be used by the Virtual Connect domain
Create/delete/edit networks
Create/delete/edit shared uplink sets
Storage (SAN)
Configure storage-related default settings
By default, all users have read privileges in all roles (not being in any of the privilege
classes gives read-only access). Each user can have any combination of the four
privileges. The Administrator account is defined by default, and additional local user
accounts can be created.
Note
The VC Manager user account is an internal Onboard Administrator account created and
used by VC Manager to communicate with the Onboard Administrator. This account can
appear in the Onboard Administrator system log and cannot be changed or deleted.
Virtual Connect Manager failover

Redundant pair of VC modules

The VC Manager runs as a high-availability pair on Virtual Connect Ethernet
modules in bay 1 and bay 2. The active VC Manager is usually on bay 1.
Redundancy daemons on modules 1 and 2 determine the active manager.
Heartbeats can be maintained over multiple paths:
Backplane
Ethernet link
Each time a configuration changes, it is written to local flash memory and check-
pointed to the standby module (and written to flash memory). Configurations can
also be backed up to a workstation.
A failover will cause a restart of the VC Manager, a restore from the saved
configuration, and will require re-login by any web users.
Note
A single static IP address may be configured for the VC Manager.
Note
The c7000/c3000 enclosure configuration and setup is stored on the Onboard
Administrator (in flash). These include enclosure name, Enclosure Bay IP Address (EBIPA)
settings, power mode, SNMP settings, and so on. If you have a second OA, that
information is also kept there, so that if you lose or replace an OA, you do not lose your
settings. In addition, the OA does keep some VC profile-related signatures. VC Ethernet
Modules store VC-related configurations such as networks and profiles. Other Ethernet
(Cisco, BNT) and Fibre Channel modules store their configurations in their own flash.
Virtual Connect Enterprise Manager

HP Virtual Connect Enterprise Manager (VCEM) is a software application that
centralizes connection management and workload mobility for BladeSystem server
blades that use Virtual Connect to control access to LANs, SANs, and converged
network infrastructures. VCEM helps organizations increase productivity, respond
faster to infrastructure and workload changes, and reduce operating costs. The
VCEM central console enables you to programmatically administer LAN and
SAN address assignments, perform group-based configuration management, and
rapidly deploy, move, and fail over server connections and their workloads for up to
250 Virtual Connect domains (1,000 enclosures and 16,000 servers when used with
multi-enclosure domains).
For more information, visit: http://www.hp.com/go/vcem/
VCEM presents its own dedicated homepage to perform the following core tasks:
Discover and import existing Virtual Connect domains
Aggregate individual Virtual Connect address names for LAN and SAN
connectivity into a centrally administered VCEM address repository
Create Virtual Connect domain groups
Assign and unassign Virtual Connect domains to Virtual Connect domain groups
Define server profiles and link to available LAN and SAN resources
Assign server profiles to BladeSystem enclosures, enclosure bays, and Virtual
Connect domain groups
Manage MAC, WWN, and serial numbers inside server profiles
Tracking VCEM job status — The Jobs list provides detailed information about
jobs that have occurred and are related to VCEM.
VCEM compared with VC Manager

Virtual Connect Manager is a web console built into the firmware of Virtual Connect
Ethernet modules, designed to configure and manage a single Virtual Connect
domain, up to 64 servers. This could be a single enclosure, or a multi-enclosure
domain containing up to four physically linked enclosures in the same or adjacent
racks.
profile for multiple Virtual Connect domains that connect to the same networks. With
VCEM, administrators can move profiles and server workloads between any
enclosures that belong to the same or different domain group, which could be the
same rack, across the data center, or even a different physical location. The domain
group functionality in VCEM also simplifies the addition of new/bare metal
enclosures, helping organizations develop more consistent infrastructure
configurations as the datacenter expands.
Important
! If a customer plans to only use the freely included VC Manager instead of purchasing
VCEM to manage a Virtual Connect Fibre Channel module, then at least one Virtual
Connect Ethernet module must also be installed in the Virtual Connect domain. The reason
for this requirement is that the VC Manager software only runs on a Virtual Connect
Ethernet module.
VCEM licensing

VCEM is licensed per BladeSystem enclosure, with separate options for c3000 and
c7000 enclosures. One VCEM license is required for each enclosure to be managed
in both single and multi-enclosure domain configurations. Licenses are non-
transferable. Full details are contained in the End User License Agreement. Licenses
are additive such that multiple licenses can be combined together for the total
number of BladeSystem enclosure licenses you have purchased.
For each purchased license, a license entitlement certificate is delivered. The license
entitlement certificate contains information needed to redeem license activation keys
online or via fax. This electronic redemption process enables easy license
management and service and support tracking.
Installing VCEM
VCEM can be installed in a variety of configurations that include a physical stand-
alone console, as a plug-in to HP SIM 6.0 or later, and as a virtual machine.
Use the Insight Software DVD to install VCEM. Run the Insight Software Advisor to
test and evaluate the hardware and software configuration before beginning the
installation.
When an upgrade to a new and different central management server (CMS) is
performed, or VCEM is moved to a 64-bit CMS, it may be necessary to migrate old
data using the HP SIM data migration tool. If an upgrade to a new version of VCEM
on the same CMS is performed, data migration with the HP SIM data migration tool
is not necessary.
Note
Installation of VCEM requires Virtual Connect firmware 2.10 or later. For complete
hardware and software minimum requirements and other information, see the HP Systems
y
Insight Manager Installation and Configuration Guide for Microsoft Windows available
from the HP website.
Note
For more information about Virtual Connect and Virtual Connect Manager, see
VCEM user interfaces

You can access VCEM through either a graphical user interface (GUI) or a
command-line interface (CLI). The VCEM GUI enables you to:
Manage Virtual Connect domains and domain groups
Manage server profiles and profile failover
Perform central address management (MAC, WWN, serial numbers)
The CLI can be used as an alternative method or if no browser is available.
Available operations from the CLI include:
Perform profile failover on a specified Virtual Connect domain bay server
List details for a specified VCEM job
Show CLI usage online help
Using the CLI can be useful in the following scenarios:
HP management applications, such as HP SIM or Insight Control tools, can
VCEM profile failover

Using a spare server with VCEM

domain group with minimal administrator intervention.
Virtual Connect Profile Failover is a VCEM feature that enables the automated
movement of Virtual Connect server profiles and associated network connections to
customer-defined spare servers in a Virtual Connect domain group. The manual
movement of a Virtual Connect server profile requires the following steps to complete
the operation, but Virtual Connect Profile Failover combines these separate steps into
one seamless task:
1. Power down the original or source server.
4. Select a new target server.
5. Move the Virtual Connect server profile to the target server.
6. Power up the new server.
When selecting a target server from a pool of defined spare systems, Virtual Connect
Profile Failover automatically chooses the same server model as the source server.
The process can be initiated from the VCEM GUI as a one-button operation or from
the CLI. When used with the automatic event handling functionality in HP SIM,
Virtual Connect Profile Failover operations can be automatically triggered based on
user-defined events.
Preconditions for a Virtual Connect Profile Failover are:
Source and designated spare servers must be part of the same Virtual Connect
domain.
The source and target servers must be configured to boot from SAN.
The designated spare servers must be powered off.
A spare server must be the same model as the source server.
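The four preconditions can be expressed as a simple check (illustrative Python, not the VCEM implementation; the dictionary field names are assumptions):

```python
# Illustrative sketch of the documented profile-failover preconditions.

def can_fail_over(source, spare):
    """Return True only when all four documented preconditions hold."""
    return (
        source["domain"] == spare["domain"]    # same Virtual Connect domain
        and source["boot_from_san"]
        and spare["boot_from_san"]             # both configured to boot from SAN
        and not spare["powered_on"]            # spare must be powered off
        and source["model"] == spare["model"]  # spare must be the same model
    )

source = {"domain": "dom1", "boot_from_san": True, "powered_on": True, "model": "BL460c"}
spare = {"domain": "dom1", "boot_from_san": True, "powered_on": False, "model": "BL460c"}
print(can_fail_over(source, spare))  # True
```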
Learning check
1. List the components of Virtual Connect technology.
.................................................................................................................
.................................................................................................................
.................................................................................................................
.................................................................................................................
7. To install Fibre Channel in a Virtual Connect environment, the enclosure must
have at least one Virtual Connect Ethernet module because the VC Manager
software runs on a processor resident on the Ethernet module.
True
False
8. Match each item with its correct description.
a. FL_Port ......... A switch port used for switch-to-switch connections
b. NL_Port ......... An F_Port that contains arbitrated loop functions
9. A single user can have any combination of server, network, domain, or storage
privileges.
True
False