Implementing HP BladeSystem Solutions
Student Guide
Volume 1
Rev. 12.31
Use of this material to deliver training without prior written permission from HP is prohibited.
© Copyright 2012 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice. The only warranties for HP
products and services are set forth in the express warranty statements accompanying such products
and services. Nothing herein should be construed as constituting an additional warranty. HP shall
not be liable for technical or editorial errors or omissions contained herein.
Printed in USA
This is an HP copyrighted work that may not be reproduced without the written permission of HP.
You may not use these materials to deliver training to any person outside of your organization
without the written permission of HP.
Implementing HP BladeSystem Solutions
Student guide
July 2012
Contents
Volume 1
Module 1 — Portfolio Introduction
Objectives ...................................................................................................... 1
HP BladeSystem positioning .............................................................................. 2
BladeSystem evolution................................................................................ 3
Transitioning to the ProLiant Gen8 servers .................................................... 4
Key Gen8 technologies ....................................................................... 4
BladeSystem portfolio ....................................................................................... 6
BladeSystem enclosures.............................................................................. 7
BladeSystem c3000 enclosure .............................................................. 7
BladeSystem c7000 enclosure .............................................................. 7
BladeSystem server blades ......................................................................... 8
HP ProLiant Blade Workstation Solutions ...................................................... 9
HP ProLiant WS460c G6 Blade Workstation ........................................ 10
HP ProLiant xw2x220c Blade Workstation ........................................... 10
BladeSystem storage and expansion ........................................................... 11
HP storage blades ............................................................................. 11
Ultrium Tape Blades ...........................................................................12
PCI Expansion Blade ..........................................................................13
Ethernet interconnects ...............................................................................14
Ethernet mezzanine cards ...................................................................15
Storage interconnects ...............................................................................16
Storage mezzanine cards ...................................................................17
Integrity NonStop BladeSystem ..................................................................18
NonStop NB 54000c and NB5000c BladeSystems .............................. 19
Integrity Superdome 2 ............................................................................. 20
Virtual Connect technology ............................................................................. 22
Virtual Connect FlexFabric ........................................................................ 23
Virtual Connect FlexFabric ................................................................. 24
Virtual Connect Flex-10 technology ............................................................ 24
How Flex-10 works ............................................................................ 25
Virtual Connect modules .......................................................................... 30
Virtual Connect environment with BladeSystem enclosure ........................31
Virtual Connect environment—Three key components .............................31
Virtual Connect Ethernet modules........................................................ 32
HP BladeSystem 10Gb KR Ethernet ..................................................... 33
Management and deployment tools ................................................................. 34
ProLiant Onboard Administrator ................................................................ 34
Onboard Administrator modules ............................................................... 35
Insight Display ........................................................................................ 37
Main Menu ...................................................................................... 37
Enclosure Settings Menu .................................................................... 38
iLO Management Engine ......................................................................... 39
HP Insight Control ................................................................................... 40
HP Systems Insight Manager..................................................................... 42
Advantages of HP SIM ...................................................................... 43
Learning check .............................................................................................. 44
Module 2 — BladeSystem Enclosures
Objectives ...................................................................................................... 1
BladeSystem enclosure family ............................................................................ 2
BladeSystem enclosure features ................................................................... 3
BladeSystem enclosure comparison ............................................................. 4
BladeSystem c7000 enclosure .................................................................... 6
BladeSystem c3000 enclosure .................................................................... 7
BladeSystem c3000 enclosure — Rear view ........................................... 8
BladeSystem enclosure management hardware and software ............................... 9
HP Onboard Administrator ......................................................................... 9
Onboard Administrator module components............................................... 10
Redundant Onboard Administrator modules .......................................... 11
Dual Onboard Administrator tray ........................................................12
Onboard Administrator link module .....................................................13
HP Insight Display ....................................................................................14
iLO Management Engine ..........................................................................16
Agentless Management ......................................................................17
Active Health System ..........................................................................18
HP Intelligent Provisioning .................................................................. 20
Communication between iLO and server blades ................................... 21
HP iLO Advanced for HP BladeSystem ................................................. 22
BladeSystem power and cooling ...................................................................... 26
BladeSystem enclosure design challenges ................................................... 27
PARSEC architecture ................................................................................ 28
BladeSystem c7000 enclosure airflow ........................................................ 30
Active Cool Fans ......................................................................................31
Fan location rules .................................................................................... 32
The c7000 enclosure ......................................................................... 32
The c3000 enclosure ........................................................................ 32
Fan population ....................................................................................... 33
The c7000 enclosure ......................................................................... 33
The c3000 enclosure ........................................................................ 34
Fan failure rules ................................................................................ 35
Fan quantity versus power.................................................................. 36
Self-sealing BladeSystem enclosure...................................................... 37
Cooling multiple enclosures................................................................ 38
Thermal Logic ......................................................................................... 39
Power Regulator technologies ................................................................... 40
Power Regulator for ProLiant ...............................................................41
Power Regulator for Integrity .............................................................. 42
iLO 4 power management ................................................................. 43
Dynamic Power Saver........................................................................ 47
Dynamic Power Capping ................................................................... 48
Power delivery modes .............................................................................. 49
Non-Redundant Power ....................................................................... 49
Power Supply Redundant ................................................................... 50
AC Redundant ...................................................................................51
HP Intelligent Power Discovery Services ...................................................... 52
HP Intelligent PDUs ........................................................................... 53
HP power distribution units ....................................................................... 54
PDU benefits .................................................................................... 55
HP 16A to 48A Modular PDUs ........................................................... 55
HP Monitored PDUs .......................................................................... 55
BladeSystem c7000 PDUs .................................................................. 56
BladeSystem c3000 PDUs .................................................................. 57
BladeSystem enclosure power supplies ....................................................... 58
HP Common Slot Power Supplies ........................................................ 58
BladeSystem c7000 enclosure power supplies .......................................61
Power modules and cords .................................................................. 63
Single-phase AC power supply placement ........................................... 64
DC power configuration rules ............................................................. 65
Total available power ........................................................................ 66
BladeSystem c3000 enclosure power supplies ...................................... 67
Power supply placement .................................................................... 68
Total available power ........................................................................ 69
BladeSystem DVD-ROM drive options ............................................................... 70
Learning check .............................................................................................. 71
Module 3 — HP BladeSystem Server Blades
Objectives ...................................................................................................... 1
ProLiant Gen8 server blade portfolio .................................................................. 2
ProLiant BL420c Gen8 server blade ............................................................. 2
ProLiant BL460c Gen8 server blade ............................................................. 3
ProLiant BL465c Gen8 server blade ............................................................. 4
Integrity i2 server blade portfolio ....................................................................... 5
Integrity BL860c i2 .................................................................................... 5
Integrity BL870c i2 .................................................................................... 7
Integrity BL890c i2 .................................................................................. 10
Learning check ...............................................................................................12
Module 4 — HP BladeSystem Storage and
Expansion Blades
Objectives ...................................................................................................... 1
HP BladeSystem storage and expansion blades ................................................... 2
HP storage blades ..................................................................................... 2
HP D2200sb Storage Blade ................................................................. 3
HP X1800sb G2 Network Storage Blade ............................................... 4
HP X3800sb G2 Network Storage Gateway Blade................................. 5
Direct Connect SAS Storage for HP BladeSystem..................................... 6
BladeSystem tape blade portfolio ................................................................ 7
HP Ultrium Tape Blades ....................................................................... 7
BladeSystem tape blades — Feature comparison .................................... 8
HP Storage Library and Tape Tools ....................................................... 9
Features and benefits of L&TT ............................................................. 10
PCI Expansion Blades ............................................................................... 11
HP PCI Expansion Blade — PCI card details .........................................12
HP IO Accelerator .............................................................................13
Smart Array controller portfolio ........................................................................15
Standard features of Smart Array controllers ................................................16
I/O bandwidths in Smart Array controllers ............................................17
Smart Array controller classification ............................................................18
HP Smart Array P822 controller .................................................................18
HP Smart Array P220 and HP Smart Array P222 controllers ......................... 19
HP Smart Array P420 and P420i controllers ............................................... 19
Learning check .............................................................................................. 20
Module 5 — Ethernet Connectivity Options
for HP BladeSystem
Objectives ...................................................................................................... 1
Available Ethernet interconnect modules ............................................................. 2
HP 6120XG Ethernet Blade Switch ............................................................. 3
HP 6120XG Ethernet Blade Switch — Front panel .................................. 4
HP 6120G/XG Blade Switch ..................................................................... 5
HP 6120G/XG Ethernet Blade Switch — Front panel ............................. 6
Managing HP blade switches ..................................................................... 7
Cisco Catalyst Blade Switch 3020 features .................................................. 8
Catalyst Blade Switch 3020 front bezel ................................................ 9
Cisco Catalyst Blade Switch 3120 features ................................................ 10
Catalyst Blade Switch 3120 front bezel ............................................... 11
HP GbE2c Layer 2/3 Ethernet Blade Switch................................................12
GbE2c Layer 2/3 Ethernet Blade Switch front bezel ..............................13
HP 1:10Gb Ethernet BL-c Switch ................................................................14
1:10Gb Ethernet BL-c Switch front bezel ..............................................15
HP 1Gb Ethernet Pass-Thru Module ............................................................16
HP 10GbE Pass-Thru Module.....................................................................17
HP 10GbE Pass-Thru Module components ............................................18
Learning check .............................................................................................. 19
Module 6 — Storage Connectivity Options
for HP BladeSystems
Objectives ...................................................................................................... 1
Fibre Channel interconnect options .................................................................... 2
Cisco MDS 9124e Fabric Switch for BladeSystem .......................................... 2
Cisco MDS 9124e Fabric Switch features and components ....................... 4
Standard and optional software ........................................................... 4
Cisco MDS 9124e Fabric Switch layout .................................................. 5
Dynamic Ports on Demand ................................................................... 6
Brocade SAN switches............................................................................... 7
Brocade SAN switch licensing .............................................................. 8
Brocade SAN switch software .............................................................. 9
SAS storage solutions for BladeSystem servers .................................................... 11
HP 3Gb SAS BL Switch ............................................................................. 11
HP Virtual SAS Manager...........................................................................12
4X InfiniBand Switch modules ..........................................................................13
Mezzanine cards and adapters ........................................................................15
Mezzanine card and slot options available for BladeSystem ..........................15
Type I mezzanine cards and slots ........................................................16
Type II mezzanine cards and slots........................................................16
HBAs available ........................................................................................17
QLogic QMH2562 8Gb Fibre Channel HBA ...............................................17
Emulex LPe1205-HP 8Gb/s Fibre Channel HBA .......................................... 19
Brocade 804 8Gb Fibre Channel Host Bus Adapter .................................... 21
HP 4X InfiniBand Mezzanine HCAs .......................................................... 22
HP IB QDR/EN 10 Gb 2P 544M Mezzanine Adaptor ................................ 23
Learning check .............................................................................................. 24
Module 7 — Configuring Ethernet Connectivity
Options
Objectives ...................................................................................................... 1
Configuring an HP GbE2c Layer 2/3 Ethernet Blade Switch ................................. 2
User, operator, and administrator access rights ............................................. 2
Access-level defaults ............................................................................ 3
Accessing the GbE2c switch ....................................................................... 4
Logging in through the Onboard Administrator ............................................. 5
Configuring redundant switches .................................................................. 6
Redundant crosslinks ........................................................................... 6
Redundant paths to server bays ............................................................ 6
Manually configuring a GbE2c switch ......................................................... 7
Configuring multiple GbE2c switches .................................................... 7
Configuring a Cisco Catalyst Blade Switch 3020 or 3120 ..................................... 8
Obtaining an IP address ............................................................................ 8
Obtaining an IP address for the fa0 interface through the Onboard
Administrator ..................................................................................... 8
Using a console session to assign a VLAN 1 IP address .......................... 9
Cisco Express Setup ............................................................................ 9
Assigning the VLAN 1 IP address .............................................................. 10
Obtaining an IP address for the fa0 interface through the Onboard
Administrator ...........................................................................................12
Configuring an HP 1:10Gb Ethernet BL-c Switch ..................................................13
Planning the 1:10Gb Ethernet BL-c switch configuration ..................................13
Switch port mapping ................................................................................13
Accessing the 1:10Gb Ethernet BL-c switch ...................................................14
User, operator, and administrator access rights ............................................15
Manually configuring a switch ...................................................................16
Configuring multiple switches ....................................................................16
Using scripted CLI commands through telnet .........................................16
Using a configuration file ...................................................................16
Configuring an HP 6120XG or 6120G/XG switch ...............................................17
Switch IP configuration ..............................................................................17
Using the CLI Manager-level prompt .....................................................17
Configuring the IP address by using a web browser interface ..................17
Accessing a blade switch from the Onboard Administrator .....................18
Accessing a blade switch through the mini-USB interface (out of band) ... 19
Accessing a blade switch from the Ethernet interface (in band) .............. 19
Assigning an IP address to a blade switch ........................................... 20
IP addressing with multiple VLANs ...................................................... 21
IP Preserve: Retaining VLAN-1 IP addressing across configuration file
downloads ....................................................................................... 22
Learning check .............................................................................................. 23
Module 8 — Configuring Storage Connectivity
Options
Objectives ...................................................................................................... 1
Configuring a Brocade 8Gb SAN switch ............................................................ 2
Setting the switch Ethernet IP address ........................................................... 2
Using EBIPA ....................................................................................... 2
Using external DHCP .......................................................................... 2
Setting the IP address manually ............................................................ 3
Configuring the 8Gb SAN switch ............................................................... 5
Items required for configuration............................................................ 5
Setting the date and time ..................................................................... 5
Verifying installed licenses ................................................................... 5
Modifying the Fibre Channel domain ID (optional) ................................. 6
Disabling and enabling a switch .......................................................... 6
Using DPOD ...................................................................................... 6
Backing up the configuration ............................................................... 6
Reset button ....................................................................................... 7
Management tools .................................................................................... 8
Configuring a Cisco MDS 9124e Fabric Switch.................................................... 9
Setting the IP address ................................................................................ 9
Configuring the fabric switch .................................................................... 10
Items required for configuration.......................................................... 10
Setting the date and time ................................................................... 10
Verifying installed licenses ................................................................. 10
Modifying the Fibre Channel domain ID (optional)................................. 11
Recovering the administrator password ................................................. 11
Fabric switch management tools .................................................................12
Configuring an HP 3Gb SAS BL Switch .............................................................13
Configuration rules for the 3Gb/s SAS Switch .............................................13
Configuring the 3Gb SAS BL Switch ...........................................................14
Accessing the 3Gb SAS BL Switch .............................................................15
Confirming the firmware version .................................................................15
Learning check ...............................................................................................16
Module 9 — Virtual Connect Installation and
Configuration
Objectives ...................................................................................................... 1
HP Virtual Connect portfolio.............................................................................. 2
HP 1/10Gb VC Ethernet ............................................................................ 2
HP 1/10Gb-F VC Ethernet .......................................................................... 2
HP Virtual Connect Flex-10 10Gb Ethernet .................................................... 3
HP Virtual Connect 4Gb Fibre Channel Module ............................................ 4
HP Virtual Connect 8Gb 20-port Fibre Channel Module ................................ 4
HP Virtual Connect 8Gb 24-port Fibre Channel Module ................................ 5
HP Virtual Connect FlexFabric modules ........................................................ 6
FlexFabric adapter — Physical functions ................................................ 7
Planning and implementing Virtual Connect ...................................................... 10
Building a Virtual Connect environment ....................................................... 11
Virtual Connect out-of-the-box steps ............................................................12
Virtual Connect Ethernet stacking ...............................................................13
Virtual Connect Ethernet module stacking .............................................14
Using VC-FC modules .....................................................................................15
Virtual Connect Fibre Channel WWNs .......................................................15
Virtual Connect Fibre Channel port types and logins ....................................16
Fibre Channel logins ..........................................................................16
Fibre Channel zoning and SSP ..................................................................17
N_Port_ID virtualization ............................................................................18
Fabric login using the HBA aggregator’s WWN .................................. 19
N_Port_ID virtualization ..................................................................... 20
Configuring Virtual Connect ............................................................................ 20
Virtual Connect logical flow...................................................................... 22
Create a VC domain ......................................................................... 22
Virtual Connect multi-enclosure VC domains ......................................... 23
Define Ethernet networks.................................................................... 30
Define Fibre Channel SAN connections ................................................31
Create server profiles ........................................................................ 32
Implementing the server profile ........................................................... 33
Manage data center changes ............................................................ 34
Virtual Connect – Server profile migration .................................................. 35
Server profile migration for a failed server ........................................... 36
Virtual Connect Manager ............................................................................... 37
Accessing the Virtual Connect Manager .................................................... 38
Virtual Connect Manager login page ........................................................ 39
Virtual Connect Manager home page........................................................ 40
Virtual Connect role-based privileges ..........................................................41
Virtual Connect Manager failover ............................................................. 42
Virtual Connect Enterprise Manager ................................................................ 43
VCEM compared with VC Manager .......................................................... 45
VCEM licensing ...................................................................................... 46
Installing VCEM ...................................................................................... 47
Typical environments for VCEM ................................................................. 47
VCEM user interfaces .............................................................................. 48
VCEM profile failover ........................................................................ 49
Learning check .............................................................................................. 50
Volume 2
Module 10 — Introduction to HP SAN Solutions
Objectives ...................................................................................................... 1
HP MSA2000/P2000 portfolio ......................................................................... 2
P2000 G3 MSA ....................................................................................... 2
Key features.............................................................................................. 5
HP 2000i MSA ......................................................................................... 6
Management tools .................................................................................... 7
EcoStore technology .................................................................................. 7
Active/active controllers ............................................................................. 8
Unified LUN presentation ........................................................................... 8
HP P4000 overview ......................................................................................... 9
P4000 product suite ................................................................................ 10
HP SAN/iQ software ........................................................................ 10
P4000 centralized management console ............................................. 10
Storage software ........................................................................................... 20
HP P4000 snapshots ............................................................................... 20
HP P4000 SAN/iQ SmartClone ............................................................... 21
HP P4000 SAN Remote Copy .................................................................. 23
Learning check .............................................................................................. 24
Module 11 — HP Virtualization Basics
Objectives ...................................................................................................... 1
How does virtualization work? .......................................................................... 2
What is a virtual machine? ........................................................................ 3
ProLiant virtualization with VMware ................................................................... 4
Host operating system-based virtualization.................................................... 4
VMware ESXi: Virtualization platform .......................................................... 5
VMware ESX/ESXi ............................................................................. 6
VMware ESXi features ......................................................................... 7
VMware ESXi architecture .................................................................... 8
Configuring ESXi ................................................................................ 9
VMware vSphere .................................................................................... 10
Using the vSphere client .................................................................... 10
ProLiant virtualization with Citrix Xen and XenServer ...........................................13
Comparing Xen platforms ...................................................................13
Identifying the XenServer product line...................................................14
Citrix Xen architecture overview ...........................................................15
XenCenter overview ..................................................................................16
ProLiant virtualization with Microsoft products .....................................................17
Windows Server 2008 R2 Hyper-V ............................................................17
Learning check .............................................................................................. 19
Module 12 — Configuring and Managing HP
BladeSystem
Objectives ...................................................................................................... 1
Placement rules and installation guidelines.......................................................... 2
c7000 enclosure zoning ............................................................................ 2
c7000 enclosure placement rules—Half-height server blades .......................... 4
c7000 enclosure placement rules—Full-height server blades ........................... 5
c7000 interconnect bays ............................................................................ 6
c3000 enclosure zoning ............................................................................ 7
c3000 enclosure placement rules—Half-height server blades .......................... 8
c3000 enclosure placement rules—Full-height server blades ........................... 9
c3000 interconnect bays ......................................................................... 10
Installation rules for partner blades ............................................................. 11
HP PCI Express Mezzanine Pass-Thru card ............................................ 11
Using the Onboard Administrator .....................................................................12
Onboard Administrator user interfaces........................................................13
Local I/O cable connection ................................................................14
First Time Setup Wizard ............................................................................15
Rack and enclosure settings .......................................................................16
Enclosure bay IP addressing ......................................................................17
Using configuration scripts ....................................................................... 19
Active to standby transition ....................................................................... 20
Using the service port connection .............................................................. 21
Power Management settings ..................................................................... 23
Device Power Sequence device bays.......................................................... 24
Onboard Administrator authentication....................................................... 26
VLAN configuration ................................................................................. 28
VLAN configuration settings ............................................................... 29
Device Summary page............................................................................. 30
Rack firmware .......................................................................................... 31
Flashing the Onboard Administrator firmware ............................................. 32
Other firmware operations ................................................................. 34
Redundant flashing ........................................................................... 35
Recovering the administrator password ...................................................... 36
Resetting the Onboard Administrator to factory defaults ............................... 37
Preparing logs from the Onboard Administrator .......................................... 38
Using HP Insight Display ................................................................................ 39
Health Summary screen ........................................................................... 39
Enclosure settings .....................................................................................41
Enclosure information .............................................................................. 42
Verifying the firmware version ................................................................... 43
Rebooting the Onboard Administrator ....................................................... 44
Blade and port information ...................................................................... 45
Blade information ............................................................................. 46
Port Info view from Insight Display ....................................................... 47
USB Menu .............................................................................................. 48
iLO Management Engine ................................................................................ 49
Configuring iLO ...................................................................................... 49
iLO RBSU ......................................................................................... 49
Browser-based setup.......................................................................... 49
HPONCFG ...................................................................................... 50
HP Lights-Out Online Configuration Utility ............................................ 53
Important blade iLO settings ..................................................................... 54
General security recommendations ............................................................ 56
Attaching a DVD-ROM drive to BladeSystem enclosures ...................................... 57
Connecting to the enclosure DVD-ROM drive — Insight Display .................... 58
Connecting an ISO image as a CD/DVD ............................................ 59
Connecting to the enclosure DVD-ROM drive — Onboard Administrator ........ 60
Mounting an ISO image as a DVD ......................................................61
Enclosure-based DVD-ROM drive status – Insight Display .............................. 62
Enclosure-based DVD-ROM drive status – Onboard Administrator ................. 63
Learning check .............................................................................................. 64
Module 13 — Insight Control Management Software
Objectives ...................................................................................................... 1
Insight Control ................................................................................................. 2
Insight Control introduction ......................................................................... 2
Insight Control features .............................................................................. 3
Insight Control server deployment ......................................................... 4
Key server deployment features............................................................. 5
BladeSystem deployment optimizations .................................................. 5
Insight Control server migration ................................................................... 6
Insight Control virtual machine management .......................................... 8
Insight Control performance management ............................................ 10
Insight Control remote management ..................................................... 11
Insight Control power management .....................................................12
Hardware and software requirements .........................................................14
Insight Software server hardware requirements ......................................14
Database..........................................................................................16
Web browser ....................................................................................16
Virtualization platform ........................................................................16
HP Systems Insight Manager ............................................................................17
HP SIM overview ......................................................................................17
HP SIM architecture ..................................................................................18
Central Management Server ...............................................................18
Management console ........................................................................ 19
Managed systems ............................................................................. 19
HP SIM features ...................................................................................... 19
New features in HP SIM 7.0 ............................................................... 20
Easy and rapid installation ................................................................. 21
Two user interfaces ........................................................................... 22
Manage health proactively ................................................................ 23
Automatic system discovery and identification ...................................... 24
Fault management and event handling ...................................................... 26
Role-based security .................................................................................. 27
HP Version Control .................................................................................. 29
Version Control Repository Manager ......................................................... 30
Version Control Agent ........................................................................31
Learning check .............................................................................................. 32
Module 14 — Insight Control Server Deployment
Objectives ...................................................................................................... 1
Introducing Insight Control server deployment...................................................... 2
HP Insight Control Server Deployment software ............................................. 2
Benefits of Insight Control server deployment ................................................ 4
Insight Control server deployment architecture ............................................... 6
Server components .................................................................................... 7
Deployment Server .............................................................................. 8
Deployment Server Console ................................................................. 9
Deployment Server database.............................................................. 10
PXE server ......................................................................................... 11
Deployment Share .............................................................................12
DHCP server .....................................................................................12
Client components .............................................................................13
Scripted and imaged installation ......................................................................15
Jobs and tasks .........................................................................................15
Jobs .................................................................................................15
Tasks ................................................................................................15
Jobs and tasks working together ................................................................15
Building jobs .....................................................................................15
Scheduling jobs .......................................................................................16
Job categories .........................................................................................17
Firmware Flash ..................................................................................17
Hardware Configuration .....................................................................18
OS Installation.................................................................................. 19
OS Imaging ..................................................................................... 20
Software .......................................................................................... 21
Scripted deployment ................................................................................ 21
Windows configuration file ................................................................ 22
Configuration flow for scripting........................................................... 23
Imaging ................................................................................................. 24
Advantages and disadvantages.......................................................... 25
Imaging preparation ......................................................................... 25
Configuration flow for imaging ........................................................... 26
Advanced imaging options ...................................................................... 27
Media spanning ............................................................................... 27
Partition resizing ............................................................................... 28
Special functionality for HP BladeSystem........................................................... 29
Rip-and-Replace ...................................................................................... 29
Physical Devices view icons .......................................................................31
Creating virtual bays ......................................................................... 32
Learning check .............................................................................................. 34
Module 15 — Data Availability and Protection
for an HP Server Blade
Objectives ...................................................................................................... 1
Increasing availability through power protection .................................................. 2
Uninterruptible power supplies .................................................................... 2
HP power protection and management portfolio ........................................... 3
Tower UPS models .............................................................................. 3
Rack-mountable UPS models ................................................................. 4
HP UPS features ........................................................................................ 5
UPS options .............................................................................................. 6
Enhanced battery management .................................................................. 6
HP rack and power management software ................................................... 8
HP Power Manager ............................................................................. 8
HP Power Protector UPS Management Software ...................................... 8
Rack and Power Manager ................................................................... 9
HP UPS Management Module .................................................................. 10
HP Modular Cooling System G2 ................................................................12
Data Protection software ..................................................................................14
HP Data Protector.....................................................................................14
Key benefits ......................................................................................14
Key features ......................................................................................15
HP Data Protector Express .........................................................................16
Key features of Data Protector Express ..................................................16
Operating systems supported ..............................................................18
Learning check .............................................................................................. 19
Module 16 — HP BladeSystem Support
Objectives ...................................................................................................... 1
BladeSystem diagnostics ................................................................................... 2
Tools to collect data................................................................................... 2
HP Active Health System ............................................................................ 4
HP Insight Control performance management ............................................... 5
HP Insight Remote Support ......................................................................... 6
HP Insight Online ...................................................................................... 7
HP iLO Management Engine Event Log ........................................................ 9
Security audits .................................................................................... 9
Integrated Management Log ..................................................................... 10
Array Configuration Utility diagnostics ........................................................12
ACU diagnostic reports ......................................................................13
Automatic Server Recovery ........................................................................14
Firmware update tools and options ...................................................................16
Firmware overview ...................................................................................16
Firmware deployment methods ...................................................................17
Available tools for firmware updates ...........................................................17
HP Smart Update Manager ................................................................18
HP BladeSystem Firmware Deployment Tool ......................................... 20
Virtual Connect Support Utility ........................................................... 21
Service Pack for ProLiant .......................................................................... 22
Advantages ..................................................................................... 23
Obtaining firmware with Service Pack for ProLiant ................................. 24
Extended support duration ................................................................. 24
General best practices ............................................................................. 25
HP Services for BladeSystem ........................................................................... 26
Important safety information ..................................................................... 26
Safety symbols ................................................................................. 26
Server warnings and cautions ............................................................ 27
Preventing electrostatic discharge ........................................................ 28
Grounding methods to prevent electrostatic discharge ........................... 28
Troubleshooting flowcharts ....................................................................... 29
Example of troubleshooting power-on problems .......................................... 29
Implementing preventive measures ............................................................. 30
Learning check ...............................................................................................31
HP BladeSystem Portfolio Introduction
Module 1
Objectives
After completing this module, you should be able to:
	Describe the HP BladeSystem positioning
	Identify the components of the BladeSystem portfolio
	List the key HP BladeSystem Generation 8 (Gen8) server technologies
	Name the BladeSystem management and deployment tools
HP BladeSystem positioning

Three major features of HP BladeSystem
BladeSystem solutionss provide com
mplete infrasstructures tha
at include serrvers, storage
e,
ng, and pow
wer to facilitate data centeer integration and transfo
ormation. Th
hey
networkin
enable data center cu
ustomers to respond
r
moree quickly and
d effectively to changing
g
business conditions, lighten the lo
oad on the ITT staff, and ccut total owne
ership costs.
BladeSystem has keptt pace with the
t changing
g needs of da
ata center cu
ustomers. The
ese
business requirementss include:


Lowe
er purchase and
a operatio
ons costs wheen adding o
or replacing
compute/storage
e capacity
Lowe
er application deploymen
nt and infrasstructure operrations costs by reducing
g
the number
n
of IT architecture variants
Allow
w easier, faster, and morre economica
al changes to
o server and
d storage setu
ups
witho
out disrupting local area network (LA
AN) and stora
age area ne
etwork (SAN))
domains
rT

Redu
uce connectivvity complex
xity and costss
TT

Allow
w faster mod
dification or addition
a
of a
applications

Supp
port grid com
mputing and service-oriennted architeccture (SOA)
Fo


1 –2
Supp
port third-parrty compone
ent integration with well-d
defined interffaces, such a
as
Ethernet NICs/sw
witches, Fibre
e Channel ho
ost bus adap
pters (HBAs)/
/switches, and
c
adap
pters (HCAs) /switches
InfiniBand host channel
Rev. 12
2.31
BladeSystem has met those challenges by enabling IT to:
 Consolidate — A single modular infrastructure integrates servers, storage, networking, and management software that can be managed with a common, consistent user experience.
 Virtualize — Pervasive virtualization enables you to run any workload, meet high availability requirements, and support scale out and scale up. It also enables you to create logical, abstracted connections to the LAN/SAN.
 Automate — Freeing up IT resources for more important tasks enables you to simplify routine tasks and processes, saving time while maintaining control.
BladeSystem evolution
Many changes have been made since BladeSystem was first introduced to the market. The BladeSystem infrastructure was designed to reduce the number of cables, centralize management, and reduce the space occupied by servers. All these features help reduce the operational and maintenance costs of the server environment.
In 2007, HP introduced Virtual Connect, which simplified connection management (both Ethernet and Fibre Channel). Using Virtual Connect, administrators can design networks and SANs on a virtual level. This means that cabling is done only once, and all other changes are made at the Virtual Connect level. Virtual Connect can replace the physical MAC address and WWN of a server blade with virtual ones, and the server is visible to the external world using these virtual addresses. When a network card or Fibre Channel card has to be replaced, administrators do not need to change anything else in the configuration, because the new physical MAC addresses and WWNs are overridden by the virtual addresses previously assigned to that blade.
In 2008, HP announced Virtual Connect Flex-10, which has all the features of the original Virtual Connect, but one 10Gb network port is seen as four independent network ports. Administrators can assign bandwidth to a single port from 100Mb to 10Gb. ProLiant G6 servers are equipped with a dual-port Flex-10 network card. As a result, customers using Virtual Connect Flex-10 have eight NICs with flexible speeds integrated into a half-height server blade instead of two 1Gb ports.
In 2010, HP announced Virtual Connect FlexFabric. This technology was designed
for converging LAN and SAN connections into a single interconnect module.
In 2012, HP introduced a refresh to its ProLiant server blade line of products. Updates
to the ProLiant Gen8 server blades include a faster memory chipset, a lower voltage
memory option, and HP SmartMemory for enhanced support through HP Active
Health.
Transitioning to the ProLiant Gen8 servers
Current ProLiant BL490c customers can move to the ProLiant BL460c Gen8 server
because it combines the best of the two server blades. Also, current ProLiant BL460c
customers can move to Gen8 to benefit from the improved BL460c Gen8 server
performance, management features, and overall configuration flexibility.
Key Gen8 technologies
HP is continually upgrading its server portfolio with the latest technologies to meet customer requirements.
Key Gen8 server technology includes:
 Multicore processors — Multi-core Intel Xeon, AMD Opteron, or Intel Itanium 2 processors enable greater system scalability. Customers benefit from software applications that are developed to take advantage of multi-core processor technology.
 HP SmartMemory — Lower voltage DIMMs allow faster operation speeds and greater DIMM counts. SmartMemory enhances memory performance and can be managed through the HP Active Health system. SmartMemory verifies that the memory has been tested and performance-tuned specifically for HP ProLiant servers. Types of HP SmartMemory include:
   Registered DIMMs (RDIMM)
   Unbuffered with ECC DIMMs (UDIMM)
   Load-reduced DIMMs (LRDIMM)
HP SmartMemory allows for greater performance and greater capacity. Some Gen8 server blades can be equipped with up to 512 GB of memory.
 iLO Management Engine — The HP Integrated Lights-Out (iLO) Management Engine is a complete set of embedded management features that support the complete lifecycle of the server, from initial deployment, through ongoing management, to service alerting and remote support. The iLO Management Engine ships standard on all ProLiant Gen8 servers. The iLO Management Engine includes:
   HP iLO – Is the core foundation for the iLO Management Engine. iLO management simplifies server setup, health monitoring, and power and thermal control. iLO enables you to access, deploy, and manage servers anytime from anywhere.
   HP Agentless Management – Begins to work as soon as the server has power and data connections. The base hardware monitoring and alerting capability is built into the iLO chipset.
   HP Intelligent Provisioning – Enables out-of-the-box single-server deployment and configuration without the need for media.
   HP Embedded Remote Support – Builds on the existing functions established with HP Insight Remote Support, which either runs on a stand-alone central system or as a plug-in to HP Systems Insight Manager (HP SIM).
   HP Active Health System – Is an essential part of the iLO Management Engine. The Active Health System monitors and records changes in the server hardware and system configuration. It assists in diagnosing problems and delivering rapid resolution when system failures occur.
 Multifunction network interface cards (NICs) — HP multifunction NICs provide a high-performance network interface with support for TCP/IP Offload Engine (TOE), iSCSI, and Remote Direct Memory Access (RDMA) over a single network connection. Previously, the typical server environment required separate connectivity products for networking, storage, interconnects, and infrastructure management. HP multifunction NICs present a single connection supporting multiple functions, enabling you to manage an entire infrastructure as a single, unified fabric. They provide high network performance with upgrade options to enhance memory and storage utilization. Multifunction NICs support multiple fabric protocols, including Ethernet, iSCSI, and Fibre Channel.

Note
The NICs in Integrity BL860c/BL870c server blades are not multifunction.

 HP Smart Array P700m Controller — This Smart Array controller in a mezzanine card format allows you to connect external storage to the server blades.
 Internal USB and SD Card ports, plus a Trusted Platform Module (TPM) — Internal card ports and the TPM provide expansion and security options in Gen8 server blades.
 Flex-10 support — Gen8 servers have an embedded, dual-port Flex-10 network card. These two ports can function as eight independent network ports with adjustable bandwidth (VC Flex-10 modules are required to use this functionality).
 Power Regulator for ProLiant and Dynamic Power Capping — Power Regulator and Dynamic Power Capping double the capacity of servers in the data center through dynamic control of power consumption.
BladeSystem portfolio
The BladeSystem portfolio offers multiple server options, different enclosures for server blades, and a wide choice of interconnect options, including Fibre Channel, Ethernet, SAS, and InfiniBand.
The BladeSystem portfolio consists of server blades, blade workstations,
interconnects, and multiple storage options such as tape drives and storage blades.
The two BladeSystem enclosures can accommodate any type of server blade that is
available on the market. Any of the server blades can be enhanced with a variety of
mezzanine cards including Ethernet, SAS, Fibre Channel, and InfiniBand options. For
each type of connection, HP offers appropriate interconnect modules including
revolutionary Virtual Connect modules. The whole infrastructure can be managed
from a central location using HP Systems Insight Manager (HP SIM) and other HP
Insight software components.
BladeSystem enclosures
BladeSystem c3000 enclosure
The BladeSystem c3000 enclosure can scale from a single enclosure holding up to eight blades, to a rack containing seven enclosures holding up to 56 blades.
BladeSystem c7000 enclosure
The BladeSystem c7000 enclosure holds up to 16 server and storage blades plus redundant network and storage switches. It includes a shared, multi-terabit high-speed midplane for wire-once connectivity of server blades to network and shared storage. Power is delivered through a pooled power backplane that ensures the full capacity of the redundant hot-plug power supplies is available to all blades.
BladeSystem server blades
BladeSystem server blades portfolio
BladeSystem server blades are delivered in two form factors: half-height and full-height. Server blades can be installed (and mixed with other server blades) in c3000 and c7000 enclosures. Different series are designed for different usage models. All models can be categorized into four groups:
 2xx series – High-density, low-cost servers optimized for high-performance computing (HPC) clusters
 4xx series – Dual-socket machines for most typical use
 6xx series – Quad-socket servers for virtualization and demanding applications
 8xx series – Integrity servers supporting HP-UX and OpenVMS with true 64-bit processing
HP has server blades that meet customer needs, from a small business to the largest enterprise firm. ProLiant server blades support the latest AMD Opteron and Intel Xeon processors and a wide variety of I/O options. Integrity server blades feature Intel Itanium processors. HP server blades also feature:
 Virtual Connect technology
 A variety of network interconnect alternatives
 Integrated Lights Out (iLO) 4 (Gen8 servers)
 Multiple redundant features
 Embedded RAID controllers
HP ProLiant Blade Workstation Solutions
With an HP Blade Workstation Solution, the computing power, in the form of blade workstations, is moved to the data center where the workstations can be more easily, securely, and inexpensively managed.
The HP Blade Workstation Solution consists of three primary components:
 ProLiant xw460c Blade Workstation or ProLiant xw2x220c Blade Workstation (based on ProLiant server blade architecture)
 The client computer (the HP Compaq t5730 Thin Client is shown in the graphic; an HP dc73 Blade Workstation Client is also supported)
 HP Remote Graphics Software (HP RGS)
Blade workstations can be installed in c3000 or c7000 enclosures. Other positioning rules and configurations are the same as for server blades, including management procedures.
HP ProLiant WS460c G6 Blade Workstation
The HP ProLiant WS460c G6 Workstation Blade is ideal for desktop power users with computing environments that require the use of high-performance graphics applications from remote locations. The small form factor of the HP ProLiant xw460c Blade Workstation allows installation of up to 64 blade workstations in a single 42U rack.
ProLiant WS460c G6 Blade Workstations support the following operating systems:
 Microsoft Windows
 Red Hat Enterprise Linux (RHEL)
The optional HP Graphics Expansion Blade module is an expansion blade that attaches to the top of the ProLiant xw460c blade and enables use of full-size standard PCIe graphics cards such as the NVIDIA Quadro FX 5600. Without the expansion blade, small form-factor graphics adapters are installed internally in the blade workstation.
HP ProLiant xw2x220c Blade Workstation
The HP ProLiant xw2x220c Blade Workstation is a high-density mid-range workstation with two independent workstation nodes in a single half-height blade package. Each workstation node has its own processor, memory, disk drive, and mezzanine slot, which can be fitted with a graphics subsystem. This allows up to 32 workstations in a c7000 enclosure and 128 workstations in a standard 42U rack.
A single HP xw2x220c is essentially two workstations in terms of software licensing. If you purchase a Windows operating system with the blade workstation, you will be purchasing two licenses and receive two certificates of authenticity stickers on it. All software, both HP and third-party, treats one HP xw2x220c Blade Workstation as two systems.
BladeSystem storage and expansion
BladeSystem is built not only on servers, but also on storage and expansion modules. BladeSystem can also consolidate other network equipment, including storage and backup options.
HP storage blades
D2200sb Storage Blade
HP offers storage solutions designed to fit inside the BladeSystem enclosure, as well as external expansion to virtually unlimited storage capacity. HP storage blades offer flexible expansion and work side by side with ProLiant and Integrity server blades.
The HP portfolio of storage blades includes:
 HP Storage D2200sb Storage Blade
 HP Storage X3800sb G2 Network Storage Gateway Blade
 HP Storage X1800sb G2 Network Storage Blade
 HP Storage IO Accelerator
 Direct Connect SAS Storage for HP BladeSystem
Ultrium Tape Blades
Ultrium SB3000c Tape Blade
The HP Storage Ultrium Tape Blades offer a complete data protection, disaster recovery, and archiving solution for BladeSystem customers who need an integrated data protection solution. These half-height tape blades provide direct-attach data protection for the adjacent server and network backup protection for all data residing within the enclosure.
Each HP Storage Ultrium Tape Blade solution ships standard with HP Data Protector Express Software Single Server Edition software. In addition, each tape blade supports HP One-Button Disaster Recovery (OBDR), which allows quick recovery of the operating system, applications, and data from the latest full backup set. HP Ultrium Tape Blades are the industry's first tape blades and are developed exclusively for HP BladeSystem enclosures.
The following models are available:
 HP Storage SB3000c Tape Blade
 HP Storage SB1760c Tape Blade
PCI Expansion Blade
PCI Expansion Blade
The HP BladeSystem PCI Expansion Blade provides PCI card expansion slots to an adjacent server blade. This blade expansion unit uses the midplane to pass standard PCI signals between adjacent enclosure bays, allowing a server blade to add off-the-shelf PCI-X or PCI-E cards. Customers need one PCI Expansion Blade for each server blade needing PCI card expansion. Any PCI card from third-party manufacturers that works in HP ProLiant ML and HP ProLiant DL servers should work in this PCI Expansion Blade.

Note
HP does not offer any warranty or support for third-party manufactured PCI products.
Ethernet interconnects
HP 10GbE Pass-Thru Module
To connect embedded and added network cards to the production network, HP provides a number of Ethernet interconnects for BladeSystem.
Ethernet interconnects allow administrators to connect server blades in a variety of different ways. Most interconnects reduce cabling, with internal downlinks to individual server blades and consolidated uplinks.
The HP portfolio of interconnects includes:
 HP 10Gb Pass-Thru module
 HP GbE2c switch and HP GbE2c Layer 2/3
 Cisco Catalyst 3020 Blade Switch and Cisco Catalyst Blade Switch 3120G/X
 HP 1:10Gb Ethernet switch
 HP ProCurve 6120XG
 HP ProCurve 6120G/XG
Ethernet mezzanine cards
HP NC360m Dual Port 1GbE BL-c Adapter
Mezzanine cards are used to add more network connections to a server blade. The current portfolio of HP Ethernet mezzanine cards includes:
 HP NC325m PCI Express Quad Port Gigabit Server Adapter
 HP NC326m PCI Express Dual Port 1Gb Server Adapter
 HP NC360m Dual Port 1GbE BL-c Adapter
 HP NC364m Quad Port 1GbE BL-c Adapter
 HP NC382m Dual Port 1GbE Multifunction BL-c Adapter
 HP NC522m Dual Port Flex-10 10GbE Multifunction BL-c Adapter
 HP 530m Dual Port Flex-10 10GbE Ethernet Adapter
 HP NC532m Dual Port Flex-10 10GbE Multifunction BL-c Adapter
 HP NC542m Dual Port Flex-10 10GbE Multifunction BL-c Adapter
 HP NC550m Dual Port Flex-10 10GbE Multifunction BL-c Adapter
 HP NC552m Dual Port Flex-10 10GbE Multifunction BL-c Adapter
 HP 554FLB Dual Port FlexFabric 10GbE Adapter
 HP 554m Dual Port FlexFabric 10Gb Adapter
 HP 10GbE Dual Port Mezzanine Adapter

! Important
You must install an appropriate interconnect for a mezzanine card.
Storage interconnects
HP Brocade 8Gb SAN switch
To connect server blades to external SAN or other storage solutions, specific storage interconnects must be used. HP offers a full portfolio of such devices, including Cisco and Brocade Fibre Channel switches.
The HP storage interconnects include:
 Brocade 8Gb SAN switch
 Cisco MDS 9124e
 HP InfiniBand switch
 3Gb SAS switch
Storage mezzanine cards
HP Smart Array P700m Controller
Storage mezzanine cards are used to connect server blades to external SANs. HP offers hardware iSCSI controllers in a mezzanine form factor and the P700m Smart Array Controller to connect an MDS600 to the enclosure. 3Gb SAS switches are required to use the P700m and external storage devices.
The current portfolio includes:
 Brocade 804 8Gb FC HBA for HP c-Class BladeSystem
 Emulex LPe1105-HP 4Gb FC HBA for HP c-Class BladeSystem
 Emulex LPe1205-HP 8Gb FC HBA for HP c-Class BladeSystem
 QLogic QMH2462 4Gb FC HBA for HP c-Class BladeSystem
 QLogic QMH2562 8Gb FC HBA for HP c-Class BladeSystem
 QLogic QMH4062 1GbE iSCSI Adapter for HP BladeSystem c-Class
 HP Smart Array P700m Controller
Integrity NonStop BladeSystem
The Integrity NonStop BladeSystem offers double the performance and faster response times using multicore and storage subsystem technology (when compared to the NonStop NS16000 in HP labs). The cost per transaction is cut in half, and response time and throughput are improved with standards-based IP communications and a NonStop I/O infrastructure with the latest storage technology.
Manageability has also been improved with the HP SIM Blade Plug-in, NonStop Cluster Essentials with HP SIM, iLO technology, and Onboard Administrator. Improved middleware and the NonStop operating system enhance multiple-failure fault tolerance, increase online manageability, and ease upgrades. The Integrity NonStop BladeSystem:
 Provides the industry's best end-to-end transaction integrity for the most reliable data
 Leverages Intel improvements in chip-level data integrity and also prevents data corruption end-to-end (with Fletcher Check Sum)
Better performance, lower cost per transaction, and improved scalability make the NonStop BladeSystem ideal for increasing transaction volumes in finance, healthcare, telecommunications, and other applications.
NonStop NB 54000c and NB5000c BladeSystems
As is typical with other NonStop systems, the NonStop NB 54000c and NB5000c
BladeSystems scale out through built-in clustering of logical processors—up to 4,080
logical processors in the maximum number of clustered systems (8,160 cores). Both
BladeSystems feature 2 – 16 processors per node, with 192 TB maximum memory
per cluster.
Multi-core processing capabilities allow the Integrity NonStop BladeSystems to scale
up, providing nearly twice as much processing power per logical processor at a
lower per-transaction cost. To support these multi-core processors, the NonStop
BladeSystem uses NonStop Multi-core Architecture (NSMA)—a performance-oriented
architecture that runs relational database and transaction processing software. In
addition, the NonStop operating system named the J-series has been integrated with
and customized for use in a multi-core architecture environment.
Together, the NSMA and NonStop Operating System J-series help you achieve
double the performance of other Integrity NonStop NS-series systems. To achieve
such high levels of performance, both cores in a dual-core Integrity logical processor
are deployed resulting in improved performance.
The Integrity NonStop BladeSystem uses a novel I/O Infrastructure with a standard
SAS storage adapter called Cluster I/O Module (or Storage CLIM) and a standard
Ethernet controller called IP Cluster I/O Module (or IP CLIM). The Storage CLIM
supports more storage capacity at a lower cost, provides fault tolerance, and delivers
improved performance.
 Integrity NonStop BladeSystem NB54000c — The NB54000c features Itanium 9300 series quad-core 1.66 GHz processors with 20 MB L3 cache. Compared to the NB5000c, the NB54000c scale-up provides nearly twice as much performance capacity per logical processor at a lower per-transaction cost. The NB54000c system provides near-linear scalability up to 16,320 cores, with support for up to 192,000 program processes per node, and 48,960,000 program processes in an Expand network. Built on the Integrity BL860c i2 server blade, the NB54000c system ships with expanded availability, reliability, scalability, and latency features. It also includes an improved I/O offload engine (incorporating dual CLIM OS disks) with a SAS 2.0 storage subsystem that is aligned with current industry advancements in disk technology.
 Integrity NonStop BladeSystem NB5000c — The NB5000c features Itanium 9100 series dual-core 1.66 GHz processors with 18 MB L3 cache.
For more information, visit: http://www.hp.com/go/nonstopblade
Integrity Superdome 2
The Integrity Superdome 2 represents a category of modular, mission-critical systems that scale up, out, and within to consolidate all tiers of critical applications on a common platform. Designed around the BladeSystem architecture for the Converged Infrastructure, the Superdome 2 uses modular building blocks that enable customers to “pay as they grow” from mid-range to high-end. The modular design, supporting up to 1,500 nodes, leverages standard eight-socket and 16-socket building blocks, and is managed from a single console.
The Superdome 2 uses a 19-inch standard rack and features a bladed design, with the basic building block being the Superdome 2-16s enclosure. The enclosure is specific to the Superdome 2 but is based on the technology of the BladeSystem c7000 enclosure. It shares a common midplane, in addition to common fans and power supplies, to give customers common, easy-to-service spares.
The Superdome 2 is mission-critical by design, with innovations that provide a 450% boost to infrastructure reliability compared to its predecessor. These innovations include:
 Online, tool-free serviceability, supported by self-diagnosis and self-healing capabilities
 A power-once backplane that is 100% passive, with no single points of failure
 An internal high-performance crossbar network that connects processors and memory and can be replaced online
Some of the business needs that the Superdome 2 addresses include:
 Meets researchers' needs for a high-performance computing environment
 Can accommodate peak workload requirements
 Provides an always-on infrastructure without going to redundant and failover configurations
 Reduces the number of software licenses required to do business
 Reduces the cost and complexity of the infrastructure
 Positions the company to rapidly accommodate and leverage dynamic market conditions
 Provides the agility that the existing mainframe environment cannot offer
 Frees up space in the data center
 Meets a company's current needs and can scale to meet the future demands of their data warehouse
 Establishes a relationship with a service-oriented partner
 Supports large workloads in a large symmetric multiprocessing (SMP) system
 Moves large volumes of data quickly in and out of an Oracle database
Virtual Connect technology
Virtual Connect mapping concept
Virtual Connect is an industry-standard-based implementation of server-edge I/O virtualization. It puts an abstraction layer between the servers and the external networks so that the LAN and storage area network (SAN) see a pool of servers rather than individual servers.
After the LAN and SAN connections are made to the pool of servers, the server administrator uses a VC Manager user interface to create an I/O connection profile for each server. Instead of using the default Media Access Control (MAC) addresses for all NICs and default World Wide Names (WWNs) for all host bus adapters (HBAs), the VC Manager creates bay-specific I/O profiles, assigns unique MAC addresses and WWNs to these profiles, and administers them locally.
Local administration of network addresses is a common industry technique that Virtual Connect applies to a new purpose. Network and storage administrators can establish all LAN and SAN connections once during deployment and need not make connection changes later if servers are changed. When servers are deployed, added, or changed, Virtual Connect keeps the I/O profile for that LAN and SAN connection constant.
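To make the profile concept concrete, the following is a minimal, illustrative Python sketch of a bay-specific I/O connection profile that carries VC-assigned virtual addresses. It is not VC Manager code; the class names, addresses, and network names are hypothetical examples, and real profiles are created only through the VC Manager interface.

# Illustrative sketch only: models the idea of a bay-specific I/O profile that
# carries VC-assigned (virtual) MAC addresses and WWNs instead of the factory
# defaults burned into the NICs and HBAs. All values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class EthernetConnection:
    network: str          # VC Ethernet network the connection maps to
    virtual_mac: str      # MAC presented to the LAN instead of the factory MAC

@dataclass
class FcConnection:
    fabric: str           # VC SAN fabric name
    virtual_wwpn: str     # WWN presented to the SAN instead of the factory WWN

@dataclass
class ServerProfile:
    name: str
    device_bay: int       # enclosure bay the profile is applied to
    ethernet: list[EthernetConnection] = field(default_factory=list)
    fibre_channel: list[FcConnection] = field(default_factory=list)

# Hypothetical example: if the blade in bay 3 is replaced, the profile (and the
# addresses the LAN/SAN already know) stays with the bay, not with the hardware.
profile = ServerProfile(
    name="esx-host-01",
    device_bay=3,
    ethernet=[EthernetConnection("Prod_VLAN10", "00-17-A4-77-00-10")],
    fibre_channel=[FcConnection("SAN_A", "50:06:0B:00:00:C2:62:00")],
)
print(profile)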
Virtual Connect FlexFabric
FlexFabric portfolio
Fibre Channel over Ethernet (FCoE) maps Fibre Channel natively over Ethernet while being independent of the Ethernet forwarding scheme. The FCoE protocol specification replaces the FC0 and FC1 layers of the Fibre Channel stack with Ethernet. By retaining the native Fibre Channel constructs, FCoE allows a seamless integration with existing Fibre Channel networks and management software (a simplified sketch of this encapsulation follows this section).
Computers connect to FCoE with Converged Network Adapters (CNAs), which contain both Fibre Channel HBA and Ethernet NIC functionality on the same adapter card. CNAs have one or more physical Ethernet ports. FCoE encapsulation can be done in software with a conventional Ethernet network interface card; however, FCoE CNAs offload (from the CPU) the low-level frame processing and SCSI protocol functions traditionally performed by Fibre Channel host bus adapters.
Classical Ethernet has no flow control, so FCoE requires enhancements to the Ethernet standard to support a flow control mechanism (this prevents congestion and ensuing frame loss).
VC FlexFabric:
 Connects data, Fibre Channel, and iSCSI
 Works with existing LAN and SAN
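The sketch below illustrates the encapsulation idea described above: a Fibre Channel frame carried as the payload of an Ethernet frame. The field layout is deliberately simplified for teaching purposes; only the FCoE EtherType (0x8906) is taken from the standard, and the addresses shown are hypothetical examples.

# Conceptual sketch of FCoE encapsulation: a Fibre Channel frame carried as the
# payload of an Ethernet frame (EtherType 0x8906 is registered for FCoE).
# Simplified field layout for illustration only.
from dataclasses import dataclass

FCOE_ETHERTYPE = 0x8906

@dataclass
class FibreChannelFrame:
    source_id: str        # FC S_ID
    destination_id: str   # FC D_ID
    payload: bytes        # SCSI command or data

@dataclass
class EthernetFrame:
    dst_mac: str
    src_mac: str
    ethertype: int
    payload: object

def encapsulate(fc_frame: FibreChannelFrame, src_mac: str, dst_mac: str) -> EthernetFrame:
    """Wrap an FC frame for transport over a lossless (DCB-enhanced) Ethernet fabric."""
    return EthernetFrame(dst_mac=dst_mac, src_mac=src_mac,
                         ethertype=FCOE_ETHERTYPE, payload=fc_frame)

frame = encapsulate(
    FibreChannelFrame("0x010203", "0x040506", b"SCSI READ"),
    src_mac="00-17-A4-77-00-20", dst_mac="0E-FC-00-01-02-03",
)
print(hex(frame.ethertype), frame.payload.source_id)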
Virtual Connect FlexFabric features
Virtual Connect FlexFabric features include:
 Embedded dual-port 10Gb Converged Network Adapter (CNA) with iSCSI/FCoE on ProLiant G7 server blades
 Eight connections on the system board
 Emulex-based CNA
 Flex-10 LAN/Accelerated iSCSI/FCoE
Virtual Connect FlexFabric Module
Virtual Connect Flex-10 technology
Flex-10 technology comprises two components:
 HP VC Flex-10 10Gb Ethernet module
 10Gb Flex-10 server NICs
HP VC Flex-10 10Gb Ethernet module:
 Manages the server FlexNIC connections to the data center network. Each FlexNIC is part of a server profile
 Includes:
   Single-wide form factor
   Full-duplex 240Gb/s bridging fabric, with nonblocking architecture
   Sixteen internal-facing 10GBASE-KR Ethernet ports connect to the system board NIC in each device bay, providing support for up to eight Flex-10 Ethernet ports per server or up to 32 Ethernet ports per server

Note
Because of a hardware limitation, the Broadcom 10Gb devices do not support 9Kb jumbo frames—4Kb is the largest jumbo frame size.
How Flex-10 works
VC Flex-10 device discovery
The operating system discovers up to four PCI functions per Flex-10 port:
 Individual send/receive queue
 Individual driver image
If a Flex-10 network card is used with a non-Flex-10 interconnect, the operating system only sees two 1Gb interfaces.
Flex-10 configuration before boot
For each FlexNIC, the VC profile configures the:
 Bandwidth (from 0.1 to 10Gb/s)
 Link state
 MAC address
Flex-10 NICs mapping
Each Flex-10 network card can be mapped to any Ethernet network defined on the Virtual Connect. They function as completely independent devices.
NIC configuration
Eight FlexNICs share two 10Gb pipes, and you can individually assign bandwidth per FlexNIC from 0.1Gb to 10Gb. The minimum bandwidth is 100Mb/s. You cannot have a FlexNIC without a bandwidth assigned.
Bandwidth and network allocation screen
 Bandwidth is programmed through the VC Manager.
 Every connection gets a minimum of 100Mb.
 Custom and Preferred bandwidth selections are allocated first.
 Connections set to Auto evenly split all remaining bandwidth.
 If Custom selections add up to more bandwidth than the interface supports, those connections get a proportional piece of the pipe (see the sketch below).
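The allocation rules above can be walked through with a short worked example. This is illustrative Python only, not VC Manager logic: Custom/Preferred requests are served first (scaled proportionally if they oversubscribe the 10Gb interface), connections set to Auto split whatever is left evenly, and every connection keeps the 100Mb minimum.

# Worked example of the allocation rules described above; illustrative only.
# Bandwidth figures are in Gb/s on a single 10Gb physical port.
PORT_CAPACITY_GB = 10.0
MINIMUM_GB = 0.1  # every connection gets at least 100Mb

def allocate(requests):
    """requests: dict of connection name -> requested Gb/s, or 'auto'."""
    fixed = {name: bw for name, bw in requests.items() if bw != "auto"}
    auto = [name for name, bw in requests.items() if bw == "auto"]

    total_fixed = sum(fixed.values())
    if total_fixed > PORT_CAPACITY_GB:
        # Oversubscribed: fixed selections get a proportional piece of the pipe
        scale = PORT_CAPACITY_GB / total_fixed
        fixed = {name: bw * scale for name, bw in fixed.items()}
        total_fixed = PORT_CAPACITY_GB

    result = dict(fixed)
    if auto:
        # Auto connections evenly split the remaining bandwidth,
        # never dropping below the 100Mb minimum
        share = max((PORT_CAPACITY_GB - total_fixed) / len(auto), MINIMUM_GB)
        result.update({name: share for name in auto})
    return result

# Hypothetical FlexNIC assignments on one 10Gb physical port
print(allocate({"FlexNIC-a": 4.0, "FlexNIC-b": 2.5, "FlexNIC-c": "auto", "FlexNIC-d": "auto"}))
# FlexNIC-a/b keep their custom values; c and d split the remaining 3.5Gb (1.75Gb each)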
Virtual Connect modules
HP 1/10Gb Virtual Connect Ethernet Module
The Virtual Connect Ethernet Module is a blade interconnect that:
 Simplifies server connections by cleanly separating the server enclosure from the LAN
 Streamlines networks by reducing cables without adding switches to manage
 Allows technicians to change servers in just minutes, not days
HP Virtual Connect offers converged LAN and storage connectivity. Flex-10 networking simplifies data connections and consumes the least amount of power. HP continues to expand this technology and its capabilities across the ProLiant, Integrity, and Storage product lines. Virtual Connect can simplify and converge your server edge connections, integrate into any standards-based networking infrastructure, and reduce complexity while cutting costs.
The HP Virtual Connect modules include:
 HP 1/10Gb VC Ethernet
 HP 1/10Gb-F VC Ethernet
 HP Virtual Connect FlexFabric
 HP Virtual Connect Flex-10 10Gb Ethernet
 HP Virtual Connect 8Gb 20-port Fibre Channel Module
 HP Virtual Connect 8Gb 24-port Fibre Channel Module
Virtual Connect environment with BladeSystem enclosure
The Virtual Connect modules plug directly into the interconnect bays of the enclosure. The modules can be placed side by side for redundancy. Initial implementations include the VC-Enet module and the VC-FC module.

! Important
To install Fibre Channel in a Virtual Connect environment, the enclosure must have at least one Virtual Connect Ethernet module, because the VC Manager software runs on a processor resident on the Ethernet module.

Virtual Connect environment — Three key components
Three key components of VC Environment
HP Virtual Connect technology provides unique capabilities and tangible interconnect value for BladeSystem c-Class customers. It simplifies network infrastructures by reducing physical cabling, saves time and costs associated with systems deployment and operations, provides server workload mobility, and helps IT organizations work smarter. In addition to enabling Flex-10 technology, Virtual Connect also provides the infrastructure foundation for other Enterprise-class management offerings from HP, such as HP Virtual Connect Enterprise Manager and HP Insight Dynamics-VSE.
Virtual Connect Ethernet modules
 Connect selected server Ethernet ports to specific data center networks
 Support aggregation/tagging of uplinks to data center
 Are “LAN-safe” for connection to any data center switch environment (such as Cisco, Nortel, or HP)
Virtual Connect Fibre Channel modules
 Connect enclosure to Brocade, Cisco, McDATA, or QLogic data center Fibre Channel switches
 Display as a set of HBA ports to external Fibre Channel switches
 Selectively aggregate multiple server Fibre Channel HBA ports (QLogic/Emulex) on a Fibre Channel uplink using N_Port ID virtualization (NPIV)
Virtual Connect Manager (embedded)
 Manages server connections to the data center without impacting the LAN or SAN
 Moves/upgrades/changes servers without impacting the LAN or SAN
Virtual Connect FlexFabric does not require VC-FC.
HP BladeSystem 10Gb KR Ethernet
KR Ethernet connection
 KR is the current IEEE 10Gb standard
 One-lane technology
   One transmit pair
   One receive pair
 Available as LAN on motherboard (LOM) and a dual-port mezzanine
 Auto-sensing 1Gb/10Gb
Compatibility notes:
 Not compatible with XAUI-based c-Class 10Gb switch
 Embedded Dual Port NC532i 10Gb Ethernet Multifunction Server Adapter
   Broadcom 57711 chipset
 When running at 1Gb speed, the adapter is compatible with existing 1GbE interconnects:
   HP 1Gb Ethernet switches and pass-thru
   Cisco 1Gb Ethernet switches
   1Gb Virtual Connect modules
Management and deployment tools
One of the advantages of HP BladeSystem over other vendors’ solutions is great
manageability and quick deployment. HP offers multiple management and
deployment tools designed especially for HP BladeSystem.
ProLiant Onboard Administrator
With Gen8 server blades, HP announced a new Integrated Lights-Out (iLO) 4 management card. The features of this card include:
 HP Advanced Error Detection Technology Early Video Progress Indicators
 Early Fault Detection Messaging
 Improved Error Messaging
 Enhanced DIMM SPD Failure Logging
 HP SmartMemory (Gen8 DIMMs) includes special identifier
 System ROM can detect third-party DIMMs
 Active Health System Log Support
 Error Fault Logging without Health Driver
Onboard Administrator modules
c7000 Onboard Administrator with KVM
c3000 enclosure with OA tray marked
Unique to the BladeSystem, the Onboard Administrator is the enclosure management processor, subsystem, and firmware base used to support the BladeSystem enclosures and all the managed devices contained within the enclosure. It provides a secure single point of contact for users performing basic management tasks on server blades or switches within the enclosure. It is fully integrated into all HP system management applications.
The Onboard Administrator module offers web-based and command line interface (CLI) manageability. It has two major functions:
 Driving all management features through the two Inter-Integrated Circuit (I2C) interfaces and the Intelligent Chassis Management Bus (ICMB)
 Aggregating up to 16 iLO ports in a c7000 enclosure and up to eight iLO ports in a c3000 enclosure — simplifying cable management and providing a graphical interface to launch individual server iLO management interfaces
With the Onboard Administrator with KVM for c7000, you can directly access
Onboard Administrator and server video from the VGA connections on the rear
Onboard Administrator.
The rear of each module has an LED (blue UID) that can be enabled (locally and
remotely) and used to identify the enclosure from the back of the rack.
The Onboard Administrator features enclosure-resident management capability and
is required for electronic keying configuration. It performs initial configuration steps
for the enclosure, enables run-time management and configuration of the enclosure
components, and informs users of problems within the enclosure through email,
SNMP, or the Insight Display.
The Onboard Administrator monitors and manages elements of the enclosure such as
shared power, shared cooling, I/O fabric, and iLO.
The Onboard Administrator can be managed locally, remotely, and through HP SIM
tools. The Onboard Administrator also provides local and remote management
capability through the Insight Display and browser access.
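Because the Onboard Administrator also exposes its CLI over SSH, routine checks can be scripted. The following is a minimal sketch that assumes SSH access is enabled on the OA and uses the third-party paramiko library; verify the exact command names and output format against the Onboard Administrator CLI user guide for your firmware version.

# Minimal sketch: run a status command against the Onboard Administrator CLI
# over SSH. Assumes SSH is enabled on the OA and that the command shown is
# supported by your OA firmware (check the OA CLI user guide).
import paramiko

def oa_show(host: str, username: str, password: str, command: str = "SHOW ENCLOSURE STATUS") -> str:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab use only
    client.connect(host, username=username, password=password, look_for_keys=False)
    try:
        _stdin, stdout, _stderr = client.exec_command(command)
        return stdout.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    # Hypothetical OA address and credentials
    print(oa_show("192.0.2.20", "Administrator", "password"))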
Insight Display
Insight Display view
The BladeSystem Insight Display panel is designed for configuring and troubleshooting while standing next to the enclosure in a rack. It provides a quick visual view of enclosure settings and at-a-glance health status. Green indicates that everything in the enclosure is properly configured and running within specification.
Main Menu
From the Insight Display Main Menu you can navigate to the main submenus. For example, if you want to look at the enclosure settings, press the Down button to the next menu item. The Main Menu items include:
 Health Summary
 Enclosure Settings
 Enclosure Info
 Blade or Port Info
 Turn Enclosure UID on
 View User Note
 Chat Mode
 USB Menu
Enclosure Settings Menu
From the Enclosure Settings Menu, you can configure the enclosure, update settings, and make changes directly from the rack. Enclosure settings available from the Insight Display panel include:
 Power settings
 Onboard Administrator IP address
 Enclosure Name
 Rack Name
 Insight Display Lockout PIN#
iLO Management Engine
With Gen8 servers, HP announced a new iLO management processor. Renamed from “Integrated Lights-Out” to “Insight Lifecycle Onboard,” iLO simplifies server setup, engages health monitoring, manages power and thermal control, and promotes remote administration for ProLiant servers. Features include:
 HP Advanced Error Detection Technology Early Video Progress Indicators
 Early Fault Detection Messaging
 Improved Error Messaging
 Enhanced DIMM SPD Failure Logging
 HP SmartMemory (Gen8 DIMMs) will include special identifier
 System ROM can detect non-HP DIMMs
 Active Health System Log Support
 Error Fault Logging without Health Driver
The hardware monitoring and alerting capability is built into the system. It starts working as soon as a power cord and an Ethernet cable are connected to the server.
The iLO management processor is embedded on the system board and ships standard in every ProLiant Gen8 server, including the ProLiant BL, DL, ML, and SL Series. It is the core foundation of the iLO Management Engine, which is a set of embedded management features that support the complete lifecycle of the individual server, from initial deployment through ongoing management to service alerting and remote support. The iLO Management Engine enables you to access, deploy, and manage a server anytime from anywhere with a Smartphone device.
The iLO Management Engine supports a complete separation of system management and data processing, not just on the LAN connections, but also within the system itself. The HP Active Health monitoring system captures critical server diagnostics completely within the iLO Management Engine.
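As a concrete illustration of scripted, agentless access to the iLO Management Engine, the sketch below posts a RIBCL XML request to an iLO over HTTPS to read embedded health data. This is a minimal example that assumes RIBCL scripting is enabled and reachable on the iLO; command names follow the iLO scripting and CLI guide, so verify them against your iLO firmware version before relying on this.

# Minimal sketch: query iLO health via RIBCL XML scripting (assumes RIBCL is
# enabled and reachable over HTTPS; verify command support for your iLO version).
import requests

RIBCL_REQUEST = """<RIBCL VERSION="2.0">
  <LOGIN USER_LOGIN="admin" PASSWORD="password">
    <SERVER_INFO MODE="read">
      <GET_EMBEDDED_HEALTH/>
    </SERVER_INFO>
  </LOGIN>
</RIBCL>"""

def get_embedded_health(ilo_address: str) -> str:
    """Post a RIBCL request to the iLO and return the raw XML response."""
    response = requests.post(
        f"https://{ilo_address}/ribcl",
        data=RIBCL_REQUEST,
        verify=False,          # iLO ships with a self-signed certificate
        timeout=30,
    )
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    print(get_embedded_health("192.0.2.10"))  # hypothetical iLO address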
HP Insight Control
Delivered on DVD media, Insight Control uses an integrated installer to deploy and configure HP Systems Insight Manager (HP SIM) and essential infrastructure management software rapidly and consistently, reducing manual installation procedures and speeding time to production. These solutions deliver complete lifecycle management for HP ProLiant and BladeSystem infrastructure. HP Insight Control brings a single, consistent management environment for rapid deployment of the operating system and hardware configuration.
HP Insight Control also includes full capabilities to migrate complete servers (both physical and virtual) to new servers (both virtual and physical), supporting conversion from physical to virtual and vice versa and conversion between different virtualization environments.
In addition, Insight Control provides proactive health and performance monitoring, power management, performance analysis, lights-out remote management, and virtual machine management for HP ProLiant ML/DL 300-700 series servers and BladeSystem infrastructure.
Insight Control also extends the functionality of Microsoft System Center and VMware vCenter Server by providing seamless integration of the unique ProLiant and BladeSystem manageability features into the Microsoft System Center and VMware vCenter Server management consoles.
HP Insight Control is based on HP SIM as the primary management console.
For customers who have chosen Microsoft System Center as their primary console, we offer HP Insight Control for Microsoft System Center, which is based on HP Insight Control, but adds several extensions to make the ProLiant management information available through the System Center consoles. It also adds monitoring, alerting, proactive virtual machine management, and ProLiant operating system deployment and update capabilities to the System Center consoles.
For customers who have chosen VMware vCenter Server as their primary console, we offer HP Insight Control for VMware vCenter Server, which is based on HP Insight Control, but adds several extensions to make the ProLiant management information available through the VMware vCenter Server console, enabling comprehensive monitoring, remote control, and power optimization directly from the vCenter console.
Features of Insight Control 7.x include:
 Support for the latest ProLiant Gen8 servers
 Data Center Power Control (DCPC) support for Superdome 2
 Power Management for BL Series 800 Integrity server blades
 System Insight Control enhancements
 PolyServe SQL database
 Improved field tools
 Federated central management server (CMS)
 ProLiant Agentless Management Pack
 ProLiant Linux Management Pack
 ProLiant VMware Management Pack
 Server Updates Catalog 2
HP Systems Insight Manager
HP Systems Insight Manager (HP SIM) is the foundation for the HP unified server-storage management strategy. HP SIM is a hardware-level management product that supports multiple operating systems on HP ProLiant, Integrity, and HP 9000 servers; HP Storage MSA, EVA, and XP arrays; and third-party arrays. Through a single management view of Microsoft Windows, HP-UX 11iv1, HP-UX 11iv2, HP-UX 11iv3, Red Hat, and SuSE Linux, HP SIM provides the basic management features of:
 System discovery and identification
 Single-event view
 Inventory data collection
 Reporting
The core HP SIM software uses Web-Based Enterprise Management (WBEM) to deliver the essential capabilities required to manage all HP server platforms. HP SIM can provide systems management with plug-ins for HP clients, storage, power, and printer products.
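Because HP SIM's core data collection is WBEM-based, the same CIM classes can be queried directly when troubleshooting a managed node. The following sketch uses the third-party pywbem library to enumerate a standard DMTF class; the host, credentials, and namespace are hypothetical examples, and the classes actually exposed depend on the WBEM providers installed on the managed system.

# Minimal sketch: query a managed node over WBEM (the protocol HP SIM uses for
# data collection). Assumes the pywbem package is installed and a CIM provider
# is running on the target; namespace and credentials are examples only.
import pywbem

def list_computer_systems(host: str, user: str, password: str):
    conn = pywbem.WBEMConnection(
        f"https://{host}:5989",          # standard WBEM HTTPS port
        creds=(user, password),
        default_namespace="root/cimv2",  # common namespace; may differ per provider
        no_verification=True,            # lab use only; skips certificate checks
    )
    for instance in conn.EnumerateInstances("CIM_ComputerSystem"):
        print(instance.get("Name"))

if __name__ == "__main__":
    list_computer_systems("192.0.2.30", "admin", "password")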
Using HP Integrity Essentials you can choose plug-in applications that deliver
complete lifecycle management for your hardware assets:
 Workload management
 Capacity management
 Virtual machine management
 Partition management
HP Systems Insight Manager can be installed on three different operating systems:
Windows, Linux, and HP-UX. Basic functionality is the same for all versions, but the
Windows version has the greatest scalability and expansion possibilities. HP SIM
can also be easily integrated with other Insight Software components like HP Insight
Server Migration software, HP Insight Control Server Deployment software and
others.
HP SIM updates
HP SIM 7.0 and ProLiant Gen8 server blades introduce new features to the management software.
Updates to the HP SIM management software include:
 Agentless Monitoring and Alerting – Hardware health and inventory available even when the host is off.
 iLO Host Health Polling – HP SIM merges the iLO host health polling, inventory, and alerts to give proper status rollups.
 Licensing Reports – Generate two reports: by system or by product, including additional information such as IP addresses, and total number of licenses and seats.
 Shifting Host Health and Alerting to iLO – Shifting the health and alerting tasks to iLO provides more processing resources for applications.
 Service Pack for ProLiant – SPP is a combination of the ProLiant Support Pack and the firmware maintenance DVD and is available as an ISO for both Windows and Linux.
Learning check
1. Name the tools that you can use to manage a BladeSystem.
.................................................................................................................
.................................................................................................................
.................................................................................................................
.................................................................................................................
.................................................................................................................
2. Which operating systems are supported on ProLiant server blades? (Select two.)
a. Microsoft Windows
b. OpenVMS
c. Linux
d. HP-UX
3. The HP Storage tape blades have a full-height form factor.
 True
 False
4. A customer with a requirement for InfiniBand and more than 2Gb Fibre Channel would be a good candidate for which platform?
a. ProLiant rack-mount servers
b. HP VDI systems
c. HP BladeSystem
d. HP Superdome 2
5. List three key Gen8 technologies.
.................................................................................................................
.................................................................................................................
.................................................................................................................
Module 2
HP BladeSystem Enclosures
Objectives
After completing this module, you should be able to:
 Identify and describe the HP BladeSystem enclosures
 Explain how HP Onboard Administrator modules are used
 Describe the power architecture used in HP BladeSystem systems, including:
   Power modes
   Power supplies
   Power modules
   Power distribution units (PDUs)
 Describe Thermal Logic technology
 Describe the cooling technologies designed for the BladeSystem systems
 Describe the use of a DVD-ROM drive in the enclosures
nting HP BladeS
System Solutions
on
ly
Blad
deSystem enclo
osure fa
amily
HP offerss two BladeS
System enclossures:
Blad
deSystem c70
000 enclosurre — An enteerprise versio
on designed for data cen
nter
appllications
y

Blad
deSystem c30
000 enclosurre — A loweer-cost, smalleer version ta
argeted for
remo
ote sites and small and medium
m
businnesses (SMB
Bs)
de
liv
er

The Blade
eSystem c30
000 enclosurre has a sma
aller rack foo
otprint, spanning 6U
compared to the 10U
U of the c700
00 enclosuree. Seven c30
000 enclosurres per 42U
he maximum number of c3000
c
enclo
osures in a fuully populated rack.
rack is th
The c300
00 enclosure
e is designed
d for a small to mid-size ccompany, brranch office,, or
remote siites that have
e little or no rack space. The c3000 enclosure is the right
choice if::
Two and eight se
erver or stora
age blades p
per enclosure are needed

Less than 100 se
ervers exist in
n the compa ny or organization

Servver blade purrchases are spread
s
out o
over time
Simp
ple power co
onnections, such
s
as connnecting to a U
UPS or wall outlets, are
requ
uired
rT

TT

Choose the HP Blade
eSystem c700
00 for large r and dynam
mic data cen
nter
environm
ments. The c7
7000 enclosu
ure is the rig ht choice if:
Eight server or sttorage blade
es are needeed per enclossure
Fo

2 -2

The server
s
enviro
onment is gro
owing rapid ly, with frequent server p
purchases

Power requireme
ents include rack-level PD
DUs or data center UPSs

The highest levells of availability and red undancy aree required

The server
s
blade
es need multiiple rack-bassed shared storage arrayys
Rev. 12
2.31
BladeSystem enclosure features
The cost advantage of the BladeSystem is driven by reductions in interconnect
components, which is especially important when considering deploying servers in
LAN and storage area network (SAN) environments.
BladeSystem enclosures feature:
 Cable-less server installation
 BladeSystem Insight Display and wizards for first-time setup
 Onboard Administrator for remote management
 Multiple enclosure setup functions
 Choice of power input
   -48VDC, 110VAC, or 220VAC
 Ability to handle higher ambient temperatures
 Enclosure-based CD/DVD drive and 3-inch LCD Insight Display
 Interconnect fabrics of up to 8Gb/s
 Choice of redundant and non-redundant fabrics
 RoHS compliance for device changes in a single unit
BladeSystem enclosure comparison
Both BladeSystem enclosures can hold common critical components such as servers, interconnects, mezzanine cards, storage blades, and fans. Key differences in the BladeSystem enclosures include rack size, redundancy options, and scalability.
Following is a feature comparison of BladeSystem c7000 and c3000 enclosures:
 Height
   c3000 — 6U height
   c7000 — 10U height
 Form factor, when fully populated
   c3000 — Eight half-height blades, four full-height blades, or six half-height and one full-height
   c7000 — Sixteen half-height blades or eight full-height blades
 Orientation
   c3000 — Horizontal blade orientation
   c7000 — Vertical blade orientation
 Power supplies
   c3000 — Six power supplies providing 1200W each
   c7000 — Six power supplies providing 2400W each
 System fans
   c3000 — Six HP Active Cool 100 fans
   c7000 — Ten Active Cool 200 fans
 Interconnect bays
   c3000 — Four interconnect bays
   c7000 — Eight interconnect bays
 Onboard Administrator
   c3000 — Dual Onboard Administrator option
   c7000 — Single or dual Onboard Administrator capability with KVM
 Midplane
   c3000 — Tested up to 6 Gbit
   c7000 — Tested up to 10 Gbit
 Connection
   c3000 — Onboard Administrator serial/USB connections in front
   c7000 — Onboard Administrator serial/USB connections in rear
 KVM support
   c3000 — Enclosure KVM
   c7000 — Onboard Administrator with KVM
BladeSystem c7000 enclosure
2400W power supplies
Following are the features of the c7000 enclosure:
 Increased power output—2400W; supports more blades with fewer power supplies
 High efficiency to save energy; provides 90% efficiency from as low as 10% load
 Low standby power that facilitates reduced power consumption when servers are idle
 Onboard Administrator 2.30 or later
 200 – 240V high line operation only
 Does not interoperate with existing 2250W supplies
A 16-license Insight Control suite SKU ships standard with the c7000 enclosure, preconfigured with:
 Ten fans
 Six 2400W high efficiency power supplies
BladeSystem c3000 enclosure
c3000 Onboard Administrator tray
The c3000 enclosure includes four full-height device bays or eight half-height device bays, accommodating the full array of BladeSystem server, storage, tape, and PCI Expansion blades.
An integrated Insight Display is linked to the Onboard Administrator for local enclosure management.
The c3000 enclosure ships with two enclosure dividers to support half-height devices. To install a full-height device, remove the divider and the corresponding blanks.

Note
If you are using full-height server blades in the enclosure, any empty full-height device bays should be filled with blade blanks. To make a full-height blank, join two half-height blanks together.
BladeSystem c3000 enclosure — Rear view

The rear of the c3000 enclosure offers four interconnect bays. The available bays can support a variety of pass-thru modules and switch technologies, including Ethernet, Fibre Channel, and InfiniBand. The enclosure supports up to three independent I/O fabrics with the ability to combine interconnect bays 3 and 4 for a fully redundant fabric.

The HP InfiniBand switch module is double-wide; two neighboring bays are combined into one bay to support these 20Gb switches.

The enclosure link module links enclosures in a rack. Enclosure links are designed to support only BladeSystem enclosures in the same rack.

The available enclosure KVM module enables local administrators to manage individual servers without accessing the Onboard Administrator or iLO management processors.

Note
The KVM module is an optional component that must be ordered separately.

Power is delivered by single-phase power supplies installed in the BladeSystem c3000 enclosure. Base c3000 enclosures ship with two power supplies. However, up to six power supplies may be installed depending on the AC redundancy level required and the number of devices installed in the enclosure. AC power supplies are auto-switching between 100VAC and 240VAC, providing customers with diverse deployment options.
BladeSystem enclosure management hardware and software

HP Onboard Administrator

Onboard Administrator device view

The Onboard Administrator provides a single point from which to view the entire BladeSystem environment and perform basic management tasks on BladeSystem devices.

The Onboard Administrator can also be used to access the HP Virtual SAS Manager (VSM) application on switches installed in the BladeSystem enclosure. After selecting a switch, you can use the Onboard Administrator to:
• View switch status information
• View other switch information
• Click virtual buttons to:
  - Power off the switch
  - Reset the switch
  - Toggle the Unit Identification (UID) light on or off
• Open the Management Console (VSM)
• Open the Port Mapping window to view detailed port mapping information
Onboard Administrator module components

c7000 Onboard Administrator module

The Onboard Administrator module provides a single point of control for intelligent management of the entire enclosure. It has been designed for both local and remote administration of a BladeSystem enclosure.

Each Onboard Administrator module has a network, USB, and serial port, and some models also have a VGA connector.
• USB port — USB 2.0 Type A connector used for connecting supported USB devices such as DVD drives, USB key drives, or a keyboard or mouse for enclosure KVM use. To connect multiple devices, a USB hub (not included) is required.
• Serial port — Serial RS232 DB-9 connector with PC standard pin-out. It connects a computer with a null-modem serial cable to the Onboard Administrator command line interface (CLI).
• VGA connector — VGA DB-15 connector with PC standard pin-out. To access the KVM menu or Onboard Administrator CLI, connect a VGA monitor or rack KVM monitor for enclosure KVM. This port is available only in the newest Onboard Administrator release.
• Network port — Ethernet 1000BaseT RJ45 connector, which provides Ethernet access to the Onboard Administrator and the iLO processor on each server blade. It also supports interconnect modules with management processors configured to use the enclosure management network. It auto-negotiates 1000/100/10 or can be configured to force 100Mb or 10Mb full duplex.
The uppermost enclosure uplink port functions as a service port that provides access
to all the BladeSystem enclosures in a rack. If no enclosures are linked together, the
service port is the top enclosure uplink port on the enclosure link module. Linking the
enclosures enables the rack technician to access all the enclosures through the open
uplink port.
If you add more BladeSystem enclosures to the rack, you can use the open enclosure uplink port on the top enclosure or the downlink port on the bottom enclosure to link to the new enclosure.
Redundant Onboard Administrator modules
The Onboard Administrator module for the c7000 enclosure is available with or
without KVM support. Firmware for both versions is the same, but the part numbers
are different.
When two Onboard Administrator modules are present in an enclosure, they work in
an active-standby mode, ensuring fully redundant integrated management. Either
module can be the active module. The other becomes the standby module.
If you install two Onboard Administrator modules of the same firmware revision, the
one on the left of the enclosure will be the active one. If two Onboard Administrator
modules installed into the same enclosure have different firmware versions, the
automatic configuration sync is disabled. Both Onboard Administrator modules will
put a clear entry into syslog stating exactly which version is on which Onboard
Administrator and how to upgrade them. However, the different firmware versions do
not affect which module is active or standby. The same rules apply.
Configuration data is constantly replicated from the active Onboard Administrator
module to the standby Onboard Administrator module, regardless of the bay in
which the active module currently resides.
When the active Onboard Administrator module fails, the standby Onboard
Administrator module automatically becomes active. This happens regardless of the
position of the active Onboard Administrator module. This automatic failover occurs
only when the currently active module comes completely offline and the standby
module can no longer communicate with it. In all other cases, the administrator must
initiate the failover by logging into the standby module and promoting it to active.
After the failed Onboard Administrator module is replaced, it automatically becomes
the standby module and receives the configuration information from the active
module. It remains standby until the administrator manually promotes it to the active
module or the active module fails.
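The failover and replacement rules above can be summarized in a short sketch. This is only an illustrative model of the behavior described in this section, not HP firmware logic; the function names and return strings are hypothetical.

```python
# Illustrative sketch of the Onboard Administrator active/standby rules
# described above. Not HP firmware code; names and states are hypothetical.

def next_active(active_alive: bool, active_reachable: bool, manual_promote: bool) -> str:
    """Decide which module should be active after an event.

    Automatic failover happens only when the active module is completely
    offline and the standby can no longer communicate with it; in every
    other case an administrator must promote the standby manually.
    """
    if not active_alive and not active_reachable:
        return "standby promoted automatically"
    if manual_promote:
        return "standby promoted by administrator"
    return "active module keeps the role"


def on_replacement() -> str:
    # A replaced module always comes back as standby and receives the
    # replicated configuration from the current active module.
    return "new module joins as standby and syncs configuration"
```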
Note
You can hot plug (add without powering down the system) Onboard Administrator
modules but they are not hot-swappable (replaceable without powering down the system).
Dual Onboard Administrator tray

c3000 tray for Onboard Administrator modules

An enclosure ships with one Onboard Administrator module and supports up to two Onboard Administrator modules.

The standard Onboard Administrator module is preinstalled in a front-loading tray, which houses the module and the BladeSystem Insight Display. The Onboard Administrator tray:
• Fits one Onboard Administrator module and provides a slot for a second Onboard Administrator
• Supports either single mode or dual/redundant mode Onboard Administrator modules
• Requires a blank module if there is no redundant Onboard Administrator module
• Supports dual Onboard Administrator modules for an enclosure
Onboard Administrator link module

The Onboard Administrator link module is separate from the Onboard Administrator module. It is contained within the Onboard Administrator module sleeve. The rear-loading Onboard Administrator link module contains RJ-45 ports for enclosure up/down links and Onboard Administrator network access.

Components of the Onboard Administrator link module, as shown in the graphic, are:
1. Enclosure down-link port — Connects to the enclosure uplink port on the enclosure below with a CAT5 patch cable.
2. Enclosure up-link port
  - Connects to the enclosure downlink port on the enclosure above with a CAT5 patch cable.
  - On a stand-alone enclosure or the top enclosure in a series of linked enclosures, the top enclosure uplink port functions as a service port and temporarily connects to a PC with a CAT5 patch cable.
3. OA1 Ethernet connection — Connects to the management network using a CAT5 patch cable.
4. OA2 Ethernet connection — Reserved for future enhancements.
HP Insight Display

Insight Display main screen

Insight Display, powered by the Onboard Administrator, provides local management through an LCD display conveniently sited on the front of the system.

Insight Display is a standard component of c3000 and c7000 enclosures. It provides an interface that can be used for initial enclosure configuration, and it is a valuable tool during the troubleshooting process. If a problem occurs, the display changes color and starts to blink to get the attention of an administrator. The Insight Display can even be used to upgrade the Onboard Administrator firmware.

The menus available to an administrator standing in front of the blade enclosure are:
• Enclosure Settings — Enables configuration of the enclosure, including Power Mode, Power Limit, Dynamic Power, IP addresses for Onboard Administrator modules, enclosure name, and rack name. It is also used for connecting a DVD drive to the blades and setting the lockout PIN.
• Enclosure Info — Displays the current enclosure configuration.
• Health Summary — Displays the current condition of the enclosure.
• Blade or Port Info — Presents basic information about the server blade configuration and port mapping.
• Turn Enclosure UID on — Illuminates the enclosure identification LED. When this option is selected, the display background color changes to blue, and a blue LED is visible at the rear of the enclosure.
• Chat Mode — Enables communication between the person in front of the enclosure and the administrator managing the enclosure through the Onboard Administrator.
• USB Menu — Can be used to update Onboard Administrator firmware or to save or restore the Onboard Administrator configuration when using a USB stick plugged into the USB port on an Onboard Administrator module.
• View User Note — Displays six lines of text, each containing a maximum of 16 characters. This screen can be used to display contact information or other important information for users working on-site with the enclosure.
iLO Management Engine

The HP iLO Management Engine is a set of embedded management features that support the complete lifecycle of the individual server, from initial deployment through ongoing management to service alerting and remote support. The iLO Management Engine enables you to access, deploy, and manage a server anytime from anywhere with a smartphone device. It supports a complete separation of system management and data processing, not just on the LAN connections, but also within the system itself.

Through use of key iLO technologies such as remote console with DVR, virtual media, virtual power, and virtual serial port, you can remotely control iLO managed servers as efficiently as if you were actually at the remote site. The iLO firmware innovations enable you to scale management of iLO devices easily through directory services and to provide enhanced remote console performance through Terminal Services.

iLO Management Engine ships standard on all ProLiant Gen8 servers.

Components of the iLO Management Engine include:
• iLO management processor — Is the core foundation of the iLO Management Engine. It is embedded on the system board and ships standard in every ProLiant Generation 8 (Gen8) server blade. HP iLO simplifies server setup, engages health monitoring, manages power and thermal control, and promotes remote administration. Furthermore, iLO enables you to access, deploy, and manage a server anytime from anywhere with a smartphone device.
• Agentless Management — Is the base hardware monitoring and alerting capability built into the system (running on the iLO chipset) and starts working as soon as a power cord and an Ethernet cable are connected to the server.
• Intelligent Provisioning (previously known as SmartStart) — Offers out-of-the-box single-server deployment and configuration without the need for media.
• Embedded Remote Support — Builds on Insight Remote Support, which runs on a stand-alone system or as a plug-in to HP Systems Insight Manager (HP SIM). It provides phone-home capabilities that can either interface directly with the backend (which is ideal for smaller customers, or for remote sites without a permanent connection to the main site), or can use an HP Insight Remote Support host server as an aggregator.
• Active Health System — Provides diagnostics tools and scanners in one bundle.

You also can enable Power Regulator on supported server models from the iLO Standard browser, CLP, and script interfaces. On supported server models, iLO displays the present power consumption in Watts. The present power is a five-minute average that is calculated and displayed through all iLO interfaces.

For more information about the iLO Management Engine, go to:
http://www.hp.com/go/ilo
Agentless Management

HP iLO 4 Agentless Management Console

For customers who want to enrich the hardware management with operating system information and alerting, iLO Management Engine features Agentless Management, an optional application loaded into the operating system that routes the operating system management information and alerts over the management network.

Operating system agents are not required; all SNMP traps and alerting take place from the iLO architecture. Agentless Management provides:
• Increased security and stability, even when systems are not yet powered on
• Detailed information that speeds time to issue diagnosis and resolution
Active Health System

The Active Health System is an essential component of the HP iLO Management Engine. It monitors and records changes in the server hardware and system configuration. It assists in diagnosing problems and delivering rapid resolution when system failures occur. This technology monitors and securely logs more than 1,600 system parameters and 100% of configuration changes for accurate problem resolution. Because Active Health is agentless, it does not impact application performance.

Previously, whenever you had a system issue without an obvious root cause, you would rely on running diagnostic tools to try to isolate the cause. Although these tools often do a good job of providing the necessary information, they can only be used after the fact and often just look at subsystems individually. Circumstances occur where these tools cannot provide the information needed to isolate the root cause.

Active Health System technology:
• Monitors and securely logs more than 1,600 system parameters and 100% of configuration changes for more accurate problem resolution
• Enables you to deploy updates three times faster with 93% less downtime using HP Smart Update Manager (SUM)
• Runs as an agentless system and does not impact application performance
In minutes, customers can securely export an Active Health file to an HP Support
professional to help resolve issues faster and more accurately. When Insight Remote
Support is enabled, HP Support receives this data automatically. With this log, HP
Support can solve even the most elusive, intermittent issues in a minimum amount of
time and with little effort on the customer’s end.
Customers with very tight security requirements can switch off the Active Health
System logging.
Benefits include:
• Faster root-cause analysis and problem resolution
• Always-on proactive diagnostics rather than reactive
• Continuous monitoring for increased stability and shorter downtimes
• Rich configuration history
• Health and service alerts
• Integrated diagnostics tools and scanners
• Easy export and upload to HP Service and Support
For more information on the HP Active Health System, go to:
http://h18013.www1.hp.com/products/servers/management/activehealthsystem/index.html
HP Intelligent Provisioning

HP Intelligent Provisioning enables single-server deployment and configuration without the need for additional media. Previous generation server provisioning and maintenance capability is now embedded in the iLO Management Engine across all ProLiant Gen8 servers.

Intelligent Provisioning is targeted for provisioning and deploying single servers and provides these operating system installation options:
• Recommended/Express Installation
• Assisted/Guided Installation
• Manual Installation

With Intelligent Provisioning, you can choose from numerous options:
• Boot into HP Intelligent Provisioning on the server by pressing F10 at server POST so that you can begin server configuration and maintenance
• Update drivers and systems software by connecting directly to HP.com, and perform firmware updates and install an operating system in the same step
• Roll back firmware from within the HP Intelligent Provisioning maintenance menu
• Install Windows, Linux, and VMware quickly
• Provision a server remotely using iLO
• Remote Support registration

Full system integration and operating system configuration eliminates 45% of steps, allowing you to deploy servers three times faster.
Communication between iLO and server blades
In the BladeSystem architecture, a single enclosure houses multiple servers. A
separate power subsystem provides power to all server blades in that enclosure.
ProLiant server blades use the iLO management processor to send alerts and
management information throughout the server blade infrastructure. However, there is
a strict communication hierarchy among ProLiant server components.
The Onboard Administrator management module communicates with the iLO
processor on each server blade. The Onboard Administrator module provides
independent IP addresses for each server blade. The iLO firmware exclusively controls
any communication from iLO to the Onboard Administrator module. There is no path
from an iLO processor on one server blade to the iLO processor on another blade.
The iLO processor has information only about the presence of other server blades in
the infrastructure and whether there is enough amperage available from the power
subsystem to boot the iLO host server blade.
Note
The iLO on a server blade maintains an independent IP address.

Within BladeSystem enclosures, the server blade iLO network connections are accessed through a single physical port on the rear of the enclosure. This greatly simplifies and reduces cabling.
HP iLO Advanced for HP BladeSystem

HP iLO features comparison

iLO functionality can be enhanced by using iLO Advanced for BladeSystem. This is a simple license key that unlocks new capabilities.

iLO Advanced for BladeSystem features include:
• Shared remote console — Up to four Onboard Administrator/iLO users with remote console privileges in different locations can collaborate using the shared remote console. It is used to troubleshoot, maintain, and administer remote servers. The session leader can allow either view-only or full console control by individual participants. Shared remote console mode is supported from the integrated remote console on clients using Microsoft Internet Explorer browsers.
• Microsoft Terminal Services Pass-Through — Microsoft Terminal Services work as long as the operating system is functioning. With iLO Advanced, a Terminal Services session is routed through the iLO network interface to improve production network security. It automatically switches to Terminal Services when the operating system is loaded and available. When it is not available, iLO Advanced provides its own graphical console.
• Directory Services integration — Onboard Administrator/iLO integrates with enterprise-class directory services to provide secure, scalable, and cost-effective user management. You can integrate Microsoft Active Directory with iLO devices to maintain iLO user accounts. Integrating with a directory services application such as Active Directory allows you to use the Lightweight Directory Access Protocol (LDAP) directory to authenticate and authorize user privileges to multiple iLO devices. With Active Directory, you have the flexibility to integrate with or without a schema extension.
  - A simple installation program is available to install a management console snap-in and extend an existing directory schema to enable directory support for iLO.
  - A directory migration tool is available to automate setup for both methods of integration.
  - Integration also supports LDAP nested groups.
  - You can configure a redundant domain controller when using Active Directory and iLO.
  iLO can use a backup domain controller if the primary domain controller is unavailable. In an Active Directory configuration, there is no need to configure the actual iLO device to allow a backup domain controller. The Microsoft Domain Name System (DNS) server will automatically update the DNS name to reflect domain controller availability.
  You should configure iLO to reference the DNS name of the domain, not the specific IP address of the domain controller. If the primary DC is unavailable, the DNS lookup of the domain will not return that server's IP, so that iLO can connect to the next available domain controller. Alternatively, in the iLO configuration, you can use a comma or a semicolon between the IP addresses for iLO when trying to contact the Active Directory.
• Automatic and on-demand video footage — Onboard Administrator/iLO Console Replay captures and stores for replay the console video during a server's last major fault or boot sequence. Server faults include an ASR, server boot sequence, Linux panic, or Windows blue screen. Additionally, users are able to manually record and save any console video sequence to their client hard drive for replay from the ProLiant Onboard Administrator/iLO Integrated remote console.
• iLO Text Console — Onboard Administrator/iLO text consoles provide server access via a text console, similar to a graphical remote console.
• iLO Video Player — Onboard Administrator/iLO allows you to view automatically captured server video footage or on-demand captured footage within an iLO session or separately through the iLO video player.
• Power Regulator reporting — Both iLO Advanced and iLO Advanced for BladeSystem enable access to power-related data from any of the three iLO interfaces (browser, script, or command line) on supported server models. Available information includes time spent in Power Regulator Dynamic Savings mode and average, peak, and minimum power consumption over 24-hour intervals. Check the server QuickSpecs to verify specific system support for Power Regulator and power monitoring.
• Virtual folders — This feature allows you to mount a local folder on a remote server.
• Multi-factor authentication — Onboard Administrator/iLO provides strong user authentication with two-factor authentication using digital certificates embedded on smartcards or USB flash drives. Using this form of strong authentication, iLO access can be restricted only to IT individuals possessing a certificate-bearing smartcard or flash drive and a PIN.
BladeSystem power and cooling

In the past, better data center performance was the goal, and power and cooling costs were the price paid for performance. As energy costs skyrocket, processor and memory technologies make performance the abundant resource, and power and cooling are at a premium. As server density rises, so do power requirements. As power increases, so does heat output. The inability to power and cool data centers effectively is preventing many companies from achieving their IT goals.

Power and cooling are issues regardless of form factor. However, increased server and processor density have accelerated the demands.

To achieve a controllable balance between power and cooling while boosting data center energy efficiency, significant tradeoffs must be made:
• Larger fans move more air but take more power.
• Smaller fans need higher rpm to move the same amount of air.
• Higher rpm means more noise for a given size fan.
• Physical limits dictate how fast a fan can go.
• More fans require more power and result in more cost.
BladeSystem enclosure design challenges

Apertures in backplanes/signal midplanes of BladeSystem enclosures

Challenges faced by the BladeSystem design engineers included:
• Small apertures in the backplane assembly meant that getting sufficient air from the server blades required high pressure.
• The Xeon processor E5 series in the ProLiant BL460c Gen8 server blade requires up to 30 cubic feet per minute (CFM) to cool and therefore can require high airflow.
• Up to 16 half-height blades per chassis require large air volumes to be moved.

HP Active Cool Fans and Thermal Logic are the solutions to these challenges.
PARSEC architecture

The BladeSystem c7000 enclosure uses parallel, redundant, scalable, enclosure-based cooling (PARSEC) architecture:
• Redundant — Fans located in each of four cooling zones supply direct cooling for server blades in their respective zones and redundant cooling for adjacent zones. Each zone can contain four server blades.
• Scalable — To operate, server blades require a minimum of four fans installed at the rear of the c7000 enclosure. The enclosure supports up to 10 fans so that cooling capacity can scale as needs change.
• Parallel — Fresh, cool air flows over all the blades (in the front of the enclosure) and all the interconnect modules (in the back of the enclosure).
• Enclosure-based — By managing cooling throughout the entire enclosure, zone cooling minimizes the power consumption of the fan subsystem and increases fan efficiency in a single zone if one of the server blades requires more cooling. This saves operating costs and minimizes fan noise. HP recommends using at least eight fans. Using 10 fans optimizes power and cooling.
PARSEC architecture optimizes thermal design to support all customer configurations
from 1 to 16 servers, with one to 10 fans. The BladeSystem enclosure features a
relatively air-tight manifold. The servers seal into the front section when in use; doors
seal off when servers are not in use. The rear section has back flow preventers that
seal when a fan does not rotate or is not installed.
The middle section wraps around the complex power and signal distribution
midplanes to ensure that air is properly metered from the 10 parallel fans to the 16
parallel servers. These are three large snap-together plastic, metal, and gasket
subassemblies.
Cooling is managed by the Thermal Logic technology, which features Active Cool Fans. These fans provide adaptive flow for maximum power efficiency, air movement, and acoustics.
The PARSEC architecture is designed to draw air through the interconnect bays. This
allows the interconnect modules to be smaller and less complex.
The power supplies are designed to be highly efficient and self-cooling. Single- or
three-phase enclosures and N+N or N+1 redundancy yield the best performance per
watt.
BladeSystem c7000 enclosure airflow

Schema of airflow inside a c7000 enclosure

Thermal Logic uses a control algorithm to optimize for any configuration based on the following customer parameters:
• Airflow
• Acoustics
• Power
• Performance

Airflow through the enclosure is managed to ensure that every device gets cool air and does not sit in the hot exhaust air of another device, and to ensure that air only goes where it is needed for cooling. Fresh air is pulled into the interconnect bays through a side slot in the front of the enclosure. Ducts move the air from the front to the rear of the enclosure, where it is then pulled into the interconnect modules and the central plenum, and then exhausted out the rear of the system.
Active Cool Fans

HP Active Cool Fans are an innovative design that can cool 16 blades using as little as 100W of power. The design is based on aircraft technology that generates fan-tip speeds up to 136 mph with high pressure and high airflow while using less power than traditional fan designs.

With 20 patents pending, Active Cool Fans meet a number of data center requirements:
• The most energy-efficient airflow
• Moving enough air to cool just the components that need it
• Enough power to pull cool air through the blades and enclosure
• Half the noise output of equivalent rack-mount servers
• Lower power consumption by using only the number of fans needed to maintain preset cooling thresholds
• Easy scalability to even the most stringent future roadmap requirements
Fan location rules
The c7000 enclosure
The c7000 enclosure ships with four Active Cool 200 Fans and supports up to 10
fans. Install fans in even-numbered groups, based on the total number of server
blades installed in the enclosure:
Four server blades — Install fans in bays 4, 5, 9, and 10.

Six server blades — Install fans in bays 3, 4, 5, 8, 9, and 10.

Eight server blades — Install fans in bays 1, 2, 4, 5, 6, 7, 9, and 10.

Ten server blades — Install fans in all bays.
Important
If the fans are not in these exact locations, the thermal subsystem will be degraded and no newly inserted server will be allowed to power up.

The c3000 enclosure
The c3000 enclosure ships with a minimum of four fans and supports up to six. The
c3000 supports Active Cool 100 Fans. To ensure proper cooling, HP recommends
that you distribute fans based on these fan location rules:

Six-fan configuration — Fans in all six bays support population of all server
bays.
Four-fan configuration — Fans in bays 2, 4, 5, and 6 support a maximum of
four half-height blades or two full-height blades.
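The placement rules above for both enclosures can be captured as a simple lookup. The sketch below is only a study aid built from the bay lists in this section; the Onboard Administrator enforces the real rules in the enclosure.

```python
# Study aid only: fan-bay placement rules as listed in this section.
# The Onboard Administrator enforces the actual rules in the enclosure.

C7000_FAN_BAYS = {
    4:  {4, 5, 9, 10},                      # up to four server blades
    6:  {3, 4, 5, 8, 9, 10},                # up to six server blades
    8:  {1, 2, 4, 5, 6, 7, 9, 10},          # up to eight server blades
    10: set(range(1, 11)),                  # all bays populated
}

C3000_FAN_BAYS = {
    4: {2, 4, 5, 6},                        # max four half-height or two full-height blades
    6: {1, 2, 3, 4, 5, 6},                  # supports all device bays
}

def fans_correctly_placed(enclosure: str, populated_bays: set) -> bool:
    """Return True if the populated fan bays match a supported layout."""
    table = C7000_FAN_BAYS if enclosure == "c7000" else C3000_FAN_BAYS
    return table.get(len(populated_bays)) == populated_bays

# Example: a c7000 with fans in bays 4, 5, 9, and 10 is a valid four-fan layout.
assert fans_correctly_placed("c7000", {4, 5, 9, 10})
```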
Fan population

The c7000 enclosure

Fan location placement (c7000)
• In a six-fan configuration, fan bays 3, 4, 5, 8, 9, and 10 are used to support devices in device bays 1, 2, 3, 4, 9, 10, 11, or 12.
• In a four-fan configuration, fan bays 4, 5, 9, and 10 are used to support a maximum of two devices located in device bays 1, 2, 9, or 10. Only two device bays can be used with four fans.
• In an eight-fan configuration, fan bays 1, 2, 4, 5, 6, 7, 9, and 10 are used to support devices in all device bays.
• In a ten-fan configuration, all fan bays are used to support devices in all device bays.

Important
Install fan blanks in any unused fan bays.
The c3000 enclosure

Fan population (c3000)

Base c3000 enclosures ship with four Active Cool 100 Fans installed, supporting up to four half-height devices or two full-height server blades. Adding two additional fans to the enclosure allows population of eight half-height devices or four full-height server blades.
• A four-fan configuration requires population of fan bays 2, 4, 5, and 6.
• A six-fan configuration enables population of all fan bays.

In a four-fan configuration, the Onboard Administrator prevents blade devices in bays 3, 4, 7, and 8 from powering on and identifies the fan subsystem as degraded. To populate blade devices in these bays, populate c3000 enclosures with six fans.
Fan failure rules
In the event of a fan failure, the Onboard Administrator indicates on the Insight
Display and web GUI whether the fan failure resulted in loss of redundancy. The
health LED of the failed fan illuminates solid amber.
Important
Remove and replace this fan to correct the failure condition. Replacing the failed fans will
result in automatically returning the fan subsystem health to OK.
If the fan subsystem is marked degraded, another fan failure will result in marking the
fan subsystem as failed. In this circumstance the Onboard Administrator probably
cannot prevent a server from overheating.
Caution
Failure to replace the affected fans could result in loss of data or damage to hardware.
In all cases of fan failure, the Onboard Administrator continues to monitor server
temperatures and provides adequate cooling. In extreme cases such as fan failure, or
elevated enclosure or server ambient temperatures, the system resorts to maximum
enclosure fan rpm. When the failed fan is replaced, fan subsystem redundancy is
restored and the fan rpm returns to a controlled rpm.
Fan redundancy rules control system behavior in the event of the loss of a fan:
• If the 10-fan rule (c7000) is in place, the failed fan is in bay 1, 2, 6, or 7, and no blades are powered on in the right half of the enclosure (bays 5 through 8 and 13 through 16):
  - The fan subsystem is still redundant.
  - The failed fan is marked failed.
  - Place the remaining fans to ensure compliance with the six-fan rule.
In the c3000 enclosure, if you have six fans installed, they are automatically 5+1
redundant. If one fan fails, the Onboard Administrator will not prompt you to step
down to the four-fan configuration, because some of the server blades would have to
be powered down. Instead, the Onboard Administrator allows the server to run with
five fans, provided that adequate cooling continues.
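The degradation rules described above can be modeled roughly as follows. This is a simplified illustration for study purposes, not Onboard Administrator code, and the status strings are invented for the example.

```python
# Simplified illustration of the fan subsystem status rules described above;
# not Onboard Administrator code.

def fan_subsystem_status(installed: int, failed: int, required: int) -> str:
    """required is the minimum number of working fans for the configuration
    (for example, a six-fan c3000 layout is 5+1 redundant, so required = 5)."""
    working = installed - failed
    if working > required:
        return "OK (redundant)"
    if working == required:
        return "Degraded (no redundancy left)"
    return "Failed (cooling may be insufficient)"

# A c3000 with six fans and one failure keeps running on five fans:
print(fan_subsystem_status(installed=6, failed=1, required=5))  # Degraded
```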
Fan quantity versus power

The preceding graph represents the number of fans versus power draw. The circled area indicates the point at which 10 fans are more efficient than eight fans for the same airflow delivered.

According to the laws of airflow dynamics, 10 fans will move more CFM of air with less power than six fans or eight fans. In addition, although it might seem to be a contradiction, they will be quieter. Six high-powered fans actually are 3.7dB louder than eight lower-powered fans.

For sounds with similar frequency content, most people consider a 3dB change in sound pressure a 2x difference in sound power. Similarly, people typically perceive a 10dB increase as about twice as loud.
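The decibel arithmetic behind these statements can be worked through with two small helper functions. This is standard acoustics, not HP measurement data, and is included only to make the 3 dB and 3.7 dB comparisons concrete.

```python
# Rough decibel arithmetic behind the statements above (standard acoustics,
# not HP measurement data).

def power_ratio(delta_db: float) -> float:
    """Ratio of sound power for a given dB difference (3 dB is about 2x)."""
    return 10 ** (delta_db / 10)

def perceived_loudness_ratio(delta_db: float) -> float:
    """Approximate perceived loudness ratio (10 dB is heard as about 2x)."""
    return 2 ** (delta_db / 10)

print(round(power_ratio(3.0), 2))               # ~2.0x sound power
print(round(perceived_loudness_ratio(3.7), 2))  # six fans vs eight fans: ~1.3x as loud
```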
Self-sealing BladeSystem enclosure

The c7000 enclosure and the components within it optimize the cooling capacity through unique mechanical designs. Airflow through the enclosure is managed to ensure that every device gets cool air, that devices do not sit in the hot exhaust air of another device, and that air only goes where it is needed for cooling. Fresh air is pulled into the interconnect bays through a slot in the front of the enclosure. Ducts move the air from the front to the rear of the enclosure, where it is then pulled into the interconnects and the central plenum, and then exhausted out the rear of the system.

Fan louvers automatically open when a fan is installed and automatically close when the fan is removed. When a fan is installed into the enclosure, the server blade in the enclosure activates a lever that opens a door on the fan assembly to allow air to flow through the server blade.
Cooling multiple enclosures

Multiple c7000 enclosures cooling requirements

Four c7000 enclosures can operate in a rack if the data center is equipped to deliver sufficient airflow at the front of the rack and no air recirculation occurs over the top or around the sides of the racks.

HP recommends that you run the Power Sizer before installing enclosures to determine the load that the proposed system would place on the cooling and power systems.
Thermal Logic

Thermal Logic is the portfolio of technologies embedded throughout HP servers to produce an energy efficient data center. Thermal Logic reduces energy consumption, reclaims capacity, and extends the life of the data center.

Thermal Logic innovations include:
• Common Slot Power Supplies – Reduce spares with standardized form factors and “right-size” to match capacity. The result: up to 92% efficiency.
• Power Management Tools – Insight Control Suite management software delivers deep insight, precise control, and ongoing optimization to unlock the potential of the infrastructure.
• Intelligent Power Discovery – The industry's first automated, energy-aware network to bring together facilities and IT by combining HP Intelligent PDUs, Platinum common slot power supplies, and Insight Control software.
• Sea of Sensors – Up to 32 sensors adjust fan speeds and power only the slots that are in use. The result: 2.5x more efficient than ProLiant G5 servers and much quieter.
• Dynamic Power Capping – Reclaim trapped power and cooling capacity by safely “capping” server power consumption. The result: triple server capacity.
Power Regulator technologies

Schema of ProLiant Power Regulator operation

HP Power Regulator technologies improve server energy efficiency by giving CPUs full power for applications when they need it and power savings without performance degradation when application activity is reduced. It enables you to reduce power consumption and generate less data center heat, resulting in compounded cost savings. You save first by using less power in racks and second by producing less work for air cooling systems. These factors can save on operational expenses and enable greater density in the data center environment, and do not necessarily result in loss of system performance.

Power Regulator Static Low Power and Dynamic Power Savings modes, as well as operating system-based modes (AMD PowerNow or Intel Demand Based Switching), can be enabled to save on server power and cooling costs. On supported ProLiant servers, Power Regulator allows CPUs to operate at lower frequency and voltage during periods of reduced application activity.

This power management technology enables dynamic or static changes in CPU performance and power states. In dynamic mode, Power Regulator automatically adjusts the server's processor power usage and performance to match CPU application activity. Power Regulator effectively executes automated policy-based power management at the individual server level. In addition, a unique static low power mode allows servers to run continuously in a system's lowest power state.

Power Regulator is an operating-system-independent power management feature of ProLiant servers. It is included on all ProLiant servers (200 series and greater).

Note
For additional information about Power Regulator, visit:
http://h18004.www1.hp.com/products/servers/management/ilo/power-regulator.html
Power Regulator for ProLiant
Power Regulator for ProLiant enables ProLiant servers with policy-based power
management to control CPU power state (CPU frequency and voltage) based on a
static setting or automatically based on application demand.
HP Power Regulator uses processor P-states to regulate server power consumption in
various workload environments.
The Power Regulator feature provides iLO-controlled speed stepping for Intel x86 and
AMD processors. It improves server energy efficiency by giving processors full power
when they need it and reducing power when they do not. This power management
feature allows ProLiant servers with policy-based power management to control
processor power states.
Important
Dynamic Power Savings mode is not available on all processor models. To determine
which processors are supported, consult the Power Regulator website at:
http://www.hp.com/servers/power-regulator
Because Power Regulator resides in the BIOS, it is independent of the operating
system and can be deployed on any supported ProLiant server without waiting for an
operating system upgrade. HP has also made deployment easy by supporting Power
Regulator settings in the HP iLO scripting interface.
The Power Regulator for ProLiant feature enables iLO 4 to dynamically modify
processor frequency and voltage levels based on operating conditions to provide
power savings with minimal effect on performance.
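A minimal sketch of the dynamic-mode idea (selecting a processor P-state to match utilization) is shown below. It is purely illustrative: the thresholds and the P-state table are invented for the example and do not reflect HP's algorithm or any specific processor.

```python
# Purely illustrative sketch of a dynamic P-state policy of the kind Power
# Regulator applies. Thresholds and the P-state table are invented for the
# example and do not reflect HP's implementation.

P_STATES = [            # (name, frequency in GHz) -- hypothetical values
    ("P0", 2.9),        # highest performance
    ("P1", 2.4),
    ("P2", 1.8),
    ("P3", 1.2),        # lowest power
]

def select_pstate(cpu_utilization: float, mode: str = "dynamic") -> str:
    """Pick a P-state. Static modes pin the processor; dynamic mode follows load."""
    if mode == "static_high":
        return "P0"
    if mode == "static_low":
        return "P3"
    # Dynamic Power Savings: drop to deeper P-states as utilization falls.
    if cpu_utilization > 0.75:
        return "P0"
    if cpu_utilization > 0.50:
        return "P1"
    if cpu_utilization > 0.25:
        return "P2"
    return "P3"

print(select_pstate(0.10))   # lightly loaded -> deepest P-state
print(select_pstate(0.90))   # busy -> full performance
```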
Note
In addition to Power Regulator, ProLiant servers also support operating system based
power management using Intel Demand Based Switching and AMD Opteron PowerNow.
Power Regulator for Integrity

Although power monitoring operates independently of the operating system, Power Regulator for Integrity requires a compliant operating system version. Power regulation also requires power-performance state (p-state) capable hardware.

Note
Consult operating system documentation for details on power management support for a given system.

Power Regulator for Integrity operates in four modes:
• Dynamic Power Savings Mode — Allows the system to dynamically change processor p-states when needed based on current operating conditions. The implementation of this mode is operating system specific, so consult your operating system documentation for details.
• Operating System Control Mode — Power Regulator for Integrity configures the server to enable the operating system to control the processor p-states. Use this setting to put the operating system (including operating system-hosted applications) in charge of power management. Moving to or from this state does not require a reboot of Integrity servers.
• Static High Performance Mode — Power Regulator for Integrity sets the processors to the p-state with the highest performance and forces them to stay in that state. This mode ensures maximum performance, but it does not save any resources. This mode is useful for creating a baseline of power consumption data without Power Regulator for Integrity.
• Static Low Power Mode — Power Regulator for Integrity sets the processors to the p-state with the lowest power consumption and forces them to stay in that state. This mode saves the maximum amount of resources, but it might affect the system performance if processor utilization stays at 75% or more.

The HP Power Regulator for Integrity modes are available on supported platforms equipped with Dual-Core Intel Itanium Processor 9100 series 1.6 GHz dual-core parts.

Note
The user must have the Configure iLO 4 Settings privilege to change these settings.
iLO 4 power management

iLO 4 power management enables you to view and control the power state of the server, monitor power usage, monitor the processor, and modify power settings. The Power Management page in the iLO 4 interface has three menu options:
• Server Power — The following options are available:
  - Momentary Press — This button provides behavior identical to pressing the physical power button.
  - Press and Hold — This button is identical to pressing the physical power button for five seconds and then releasing it. This option provides the Advanced Configuration and Power Interface (ACPI)-compatible functionality that is implemented by some operating systems. These operating systems behave differently depending on a short press or long press. The behavior of this option might circumvent any graceful shutdown features of the operating system.
  - Reset — This button initiates a system reset. This option is not available when the server is powered down. The behavior of this option might circumvent any graceful shutdown features of the operating system.
  - Cold Boot — This function immediately removes power from the system, circumventing graceful operating system shutdown features. The system will restart after approximately six seconds. This option is not available when the server is powered down.

Note
Some of the power control options do not gracefully shut down the operating system.
• Power Meter — The Power Meter page displays server power utilization as a graph. This page has two sections:
  - Power Meter Readings
  - Power History
• Power Settings — The iLO Power Settings page allows you to view and control the Power Regulator modes. The Power Management Settings page enables you to view and control the Power Regulator mode of the server. Power Regulator for ProLiant settings are:
  - Static Low Power mode — Sets the processor to minimum power, reducing processor speed and power usage. Guarantees a lower maximum power usage for the system.
  - Static High Performance mode — Processors will run in their maximum power/performance state at all times regardless of the operating system power management policy.
  - Dynamic Power Savings mode — Automatically varies processor speed and power usage based on processor utilization. Enables you to reduce overall power consumption with little or no impact on performance. Does not require operating system support. The server uses only the power it needs. Unfortunately, this can cause system applications to overstate overall server utilization because the measurements include data from throttled-down processors.
  - OS Control mode — Processors will run in their maximum power/performance state at all times unless the operating system enables a power management policy.

Note
Selecting Static High Performance mode usually causes the system to use more power, especially when it is lightly loaded. Most applications benefit from the power savings offered by Dynamic Power Savings mode with little or no impact on performance. Therefore, if choosing Static High Performance mode does not increase performance, HP recommends that you re-enable Dynamic Power Savings mode to reduce power use.

Note
With the exception of the OS Control mode, Power Regulator modes configured through iLO do not require a reboot and are effective immediately. OS Control mode changes become effective on the next reboot.

The Power Capping Settings section displays measured power values and enables you to set a power cap and disable power capping. Measured power values include the server power supply maximum value, the server maximum power, and the server idle power. The power supply maximum power value refers to the maximum amount of power that the server power supply can provide. The server maximum and idle power values are determined by two power tests run by the ROM during POST.

Note
The iLO command line interface (CLI) gives you command line access to the same functions available through the iLO browser-based interface.
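Building on the Power Capping Settings values just described (server idle power, server maximum power, and power supply maximum), a requested cap could be sanity-checked roughly as follows. This is an illustrative sketch only; iLO performs its own validation, and the example values are invented.

```python
# Illustrative sanity check for a requested power cap, using the three measured
# values described above. iLO performs its own validation; this is not its code.

def cap_is_reasonable(cap_watts: int, server_idle: int, server_max: int,
                      supply_max: int) -> bool:
    """A useful cap sits between the measured idle power and the smaller of the
    server maximum and the power supply maximum."""
    upper = min(server_max, supply_max)
    return server_idle < cap_watts <= upper

# Hypothetical measurements, for illustration only:
print(cap_is_reasonable(cap_watts=350, server_idle=180, server_max=420, supply_max=460))
```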
Power efficiency
HP iLO 4 enables you to implement improved power usage using a High Efficiency
Mode (HEM). HEM improves the power efficiency of the system by placing the
secondary power supplies into step-down mode. When the secondary supplies are in
step-down mode, the primary supplies provide all the DC power to the system. The
power supplies are more efficient (more DC output Watts for each Watt of AC input)
at higher power output levels, and the overall power efficiency improves.
When the system begins to draw more than 70% capacity of the maximum power
output of the primary supplies, the secondary supplies return to normal operation (out
of step-down mode). When the power use drops below 60% capacity of the primary
supplies, the secondary supplies return to step-down mode.
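The 70%/60% step-down behavior just described is a simple hysteresis. A minimal sketch follows; it is illustrative only and does not represent the actual power supply firmware.

```python
# Minimal sketch of the 70% / 60% hysteresis described above (illustrative only;
# not the actual power supply or iLO firmware logic).

def secondary_in_step_down(load_fraction: float, currently_stepped_down: bool) -> bool:
    """Return True if the secondary supplies should be in step-down mode.
    load_fraction is demand relative to the primaries' maximum output."""
    if currently_stepped_down and load_fraction > 0.70:
        return False          # leave step-down: primaries are above 70% capacity
    if not currently_stepped_down and load_fraction < 0.60:
        return True           # re-enter step-down: load fell below 60% capacity
    return currently_stepped_down  # otherwise keep the current state

print(secondary_in_step_down(0.75, currently_stepped_down=True))   # False
print(secondary_in_step_down(0.55, currently_stepped_down=False))  # True
```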
HEM enables systems to achieve power consumption equal to the maximum power
output of the primary and the secondary supplies, while maintaining improved
efficiency at lower power usage levels. HEM does not affect power redundancy. If
the primary supplies fail, then the secondary supplies immediately begin supplying
DC power to the system, preventing any downtime.
HEM can only be configured through the ROM-Based Setup Utility (RBSU). These
settings cannot be modified through iLO. The settings for HEM are Enabled or
Disabled (also called Balanced Mode), and Odd or Even supplies as primary. These
settings are visible in the High Efficiency Mode & Standby Power Save Mode section
of the System Information, Power tab. This section displays the following information:
• Whether HEM is enabled or disabled
• Which power supplies are primary (if HEM is enabled)
• Which power supplies do not support HEM
Dynamic Power Saver

The Dynamic Power Saver feature takes advantage of the fact that most power supplies operate inefficiently when lightly loaded and more efficiently when heavily loaded. A typical power supply running at 20% load could have efficiency as low as 60%. However, at 50% load, the power supply could be 90% efficient.

In the graphic, the top example shows the power demand spread inefficiently across six power supplies. The second example demonstrates that with Dynamic Power Saver, the power load is shifted to two power supplies for more efficient operation.

When the Dynamic Power Saver feature is enabled, the total enclosure power consumption is monitored in real time. As a result, automatic adjustments are tied to changes in demand. Power supplies are placed in a standby condition when the power demand from the server enclosure is low. When power demand increases, the standby power supplies instantaneously deliver the required power. This enables the enclosure to operate at optimum efficiency, with no impact on redundancy.

Dynamic Power Saver is supported on the HP 1U power supply and BladeSystem enclosures. It is enabled by an interconnect on the management board. When the power supplies are placed in standby mode, their LEDs flash.
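The load-consolidation idea behind Dynamic Power Saver can be sketched as follows. The efficiency reasoning reuses the 20%/50% example above; the function is only an illustration, not the Onboard Administrator algorithm.

```python
# Illustrative sketch of the Dynamic Power Saver idea: keep only as many supplies
# active as the demand needs so each one runs in its efficient load range.
# This is not the Onboard Administrator algorithm.

SUPPLY_RATING_W = 2400          # c7000 power supply rating used in this module

def supplies_to_keep_active(demand_w: float, installed: int,
                            target_load: float = 0.5) -> int:
    """Aim to run each active supply near target_load (for example 50%,
    where the text above says efficiency can reach about 90%)."""
    needed = max(1, round(demand_w / (SUPPLY_RATING_W * target_load)))
    return min(installed, needed)

# 2400W of demand spread over six supplies loads each one at about 17%;
# consolidating onto two supplies loads each at about 50%.
print(supplies_to_keep_active(demand_w=2400, installed=6))   # 2
```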
Dynamic Power Capping

Dynamic Power Capping enables retrieval of stranded power and optimizes power and cooling capacity in data centers. Dynamic Power Capping safely limits power usage with no performance degradation and without risk to the electrical infrastructure. For enclosures of blades, users set an enclosure-level power cap, and the Onboard Administrator dynamically adjusts individual server power caps based on their specific power requirements. By capping power usage at historical peak power usage instead of significantly higher face-plate, ROM burn, or power calculator default values, IT organizations can fit up to 36% more servers in their existing rack infrastructure.

Benefits of Dynamic Power Capping include:
• Maximizes utilization of data center floor space by fitting more servers or enclosures in each rack
• Reduces costly power and cooling overhead by efficiently using the power and cooling resource budgeted to each rack
• Postpones the need for costly data center expansions or facilities upgrades

Before using Dynamic Power Capping, ensure the enclosure contains redundant Onboard Administrator modules and is in an N+N Redundant power mode.

For more information on Power Capping, refer to:
http://h18013.www1.hp.com/products/servers/management/dynamic-powercapping/support.html?jumpid=reg_R1002_USEN
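As a rough illustration of the enclosure-level capping described above, an enclosure cap could be split across blades in proportion to their recent peak demand. The Onboard Administrator's real algorithm is more sophisticated and is not reproduced in this course; the sketch and its numbers are illustrative only.

```python
# Rough illustration of splitting an enclosure-level cap across blades in
# proportion to their recent peak demand. The Onboard Administrator's actual
# algorithm is more sophisticated; this is only a sketch with invented numbers.

def allocate_caps(enclosure_cap_w: float, peak_demand_w: dict) -> dict:
    total_peak = sum(peak_demand_w.values())
    scale = min(1.0, enclosure_cap_w / total_peak)   # never hand out more than requested
    return {blade: round(peak * scale) for blade, peak in peak_demand_w.items()}

peaks = {"bay1": 380, "bay2": 420, "bay3": 300}
print(allocate_caps(enclosure_cap_w=1000, peak_demand_w=peaks))
# {'bay1': 345, 'bay2': 382, 'bay3': 273}
```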
Power delivery modes

BladeSystem enclosures can be configured in one of three power delivery modes:
• Non-Redundant Power
• Power Supply Redundant
• AC Redundant

Non-Redundant Power

The Non-Redundant Power mode provides no power redundancy; any power supply or AC line failure will cause the system to power off. Total power is the power available from all power supplies installed.
• Six power supplies installed in a BladeSystem c7000 enclosure = 14400W
• Six power supplies installed in a BladeSystem c3000 enclosure = 7200W

This scenario is used to demonstrate simple enclosure setups or in classrooms for training purposes. It is not recommended for a production environment.
2 -49
Implemen
nting HP BladeS
System Solutions
de
liv
er
y
on
ly
Powerr Supply Redundant
Bla
adeSystem encllosures with DC
C redundant co
onfiguration
The most basic powe
er configuratiion has two power supplies. Based o
on the power
supply pllacement rule
es, these pow
wer suppliess would popuulate bays 1 and 4. To
reach po
ower supply redundancy,
r
, you would add anotherr power supp
ply in bay 2.
TT
As long as
a there are not more de
evices in the enclosure than two power supplies ccan
support, the system iss power supp
ply redundannt.
rT
With the Power Supp
ply Redundan
nt configurattion, a minim
mum of two p
power supplies
is require
ed. Up to six
x power supp
plies can be installed in a
an enclosure
e. One powe
er
supply is always rese
erved to provvide redunda
ancy. In the eevent of a single power
supply fa
ailure, the red
dundant pow
wer supply ta
akes over thee load.
Fo
This N+1 Power Mod
de configuration is cost-ssensitive but provides min
nimal
ncy. It is mosst often seleccted by smalll and medium
m-sized businesses that
redundan
purchase
e three or fou
ur power sup
pplies and onne power disstribution unit (PDU) or h
have
the capability to conn
nect only a single
s
line co
ord. It could also be selected by
performance computing a
applications where redundancy is le
ess
customers with high-p
ost.
important than low co
Note
The graphic shows
s
two circu
uits (circuit A annd B) being useed. This is possiible but not
necessary forr the power sup
pply redundant mode. Total po
ower for the c70
000 enclosure is
total power available,
a
less one
o power supp
ply. A 5+1 conffiguration = 120
000W. The
c3000 enclossures can provide up to 6000
0W in a 5+1 co
onfiguration.
2 -50
Rev. 12
2.31
AC Redundant

BladeSystem enclosures with AC redundant configuration

In the N+N AC Redundant power mode, a minimum of two power supplies is required. N power supplies provide power and N provide redundancy, where N can equal 1, 2, or 3.

The Onboard Administrator reserves sufficient power so that any number of power supplies from 1 to 3 can fail, and the enclosure will continue to operate at full performance on the remaining line feed. When correctly wired with redundant AC line feeds, AC Redundant mode ensures that an AC line feed failure will not cause the enclosure to power off.

AC Redundant mode provides full redundancy and is the configuration recommended for large enterprise customers because it ensures full performance with one power line feed.

Total available power is determined by half of the total number of power supplies installed in the enclosure. For example, a c7000 enclosure with six power supplies installed in a 3+3 configuration yields 7200W of total power. Similarly, a c3000 enclosure with six power supplies installed yields 3600W of total available power in an AC Redundant configuration.

When properly connected to two separate circuits, the Onboard Administrator ensures that powered enclosure devices do not exceed half of the total available power, ensuring up-time as the remaining power supplies sustain the enclosure load.
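The arithmetic behind the three power delivery modes can be summarized in a short Python sketch. It assumes the per-supply figures used in this module (2400W per c7000 supply and 1200W per c3000 supply at high line); substitute other wattages as needed.

    # Minimal sketch of the total-available-power rules described above,
    # assuming 2400 W per c7000 supply and 1200 W per c3000 supply.

    def total_available_power(supplies, watts_per_supply, mode):
        """Return usable wattage for 'non-redundant', 'n+1', or 'n+n' modes."""
        if mode == "non-redundant":
            usable = supplies                 # all supplies contribute
        elif mode == "n+1":
            usable = supplies - 1             # one supply held in reserve
        elif mode == "n+n":
            usable = supplies // 2            # half the supplies (lesser side)
        else:
            raise ValueError("unknown power mode")
        return usable * watts_per_supply

    # Six 2400 W supplies in a c7000 enclosure:
    print(total_available_power(6, 2400, "non-redundant"))  # 14400
    print(total_available_power(6, 2400, "n+1"))            # 12000
    print(total_available_power(6, 2400, "n+n"))            # 7200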
HP Intelligent Power Discovery Services

Intelligent Power Discovery Services combine an HP Intelligent Power Distribution Unit (iPDU) and HP Common Slot (CS) Platinum/Platinum Plus power supplies with HP Insight Control software to create an automated, energy-aware network between IT systems and facilities. Intelligent Power Discovery Services with Intelligent PDUs automatically track power usage and document configurations to increase system uptime and reduce the risk of outages.

Intelligent Power Discovery provides automated server discovery on a network through power line communication technology that is embedded in CS Platinum Power Supplies. Power line communication is a feature that allows the power supply to communicate with the iPDU. The communication between the power supply and iPDU helps:
• Automatically discover the server when it is plugged into a power source
• Map the server to the individual outlet on the iPDU

When combined with the HP line of Platinum-level high-efficiency power supplies, the Intelligent PDU actually communicates with the attached servers to collect asset information for the automatic mapping of the power topology inside a rack. This capability greatly reduces the risk of human errors that can cause power outages.

HP Thermal Discovery Services help you reduce energy usage and increase compute capacity. This feature helps you squeeze the most IT out of every bit of data center power and cooling capacity and reduce energy consumption by 10% compared to a ProLiant G6 server.

The automated energy optimization capabilities in the ProLiant Gen8 family are enabled by HP 3D Sea of Sensors technology. Embedded intelligence senses location, power utilization, and thermal demand, providing a high level of visibility and control over the energy efficiency of the data center.

For more information about HP Intelligent Power Discovery, go to:
http://www.hp.com/go/ipd
HP Intelligent PDUs

Rear view of 12 Outlet iPDU

The key element of HP Power Discovery Services is the iPDU, which is a power distribution unit with full remote outlet control, outlet-by-outlet power tracking, and automated documentation of power configuration. HP iPDUs track outlet power usage at 99% accuracy, showing system-by-system power usage and available power. The iPDU records server ID information by outlet and forwards this information to HP Insight Control, saving hours of manual spreadsheet data-entry time and eliminating human wiring and documentation errors.

When combined with the HP line of Platinum-level high-efficiency power supplies, the Intelligent PDU actually communicates with the attached servers to collect asset information for the automatic mapping of the power topology inside a rack. This capability greatly reduces the risk of human errors that can cause power outages.

HP iPDUs provide power to multiple objects from a single source. In a rack, the iPDU distributes power to the servers, storage units, and other peripherals.

Using the popular core-and-stick architecture of the HP modular PDU line, the iPDU monitors power consumption at the core, load segment, stick, and outlet level, with unmatched precision and accuracy. Remote management is built in. This iPDU offers power cycle ability of individual outlets on the Intelligent Extension Bars.

Functions of iPDUs include:
• Helps you track and control power that other PDUs cannot monitor, with 99% accuracy for loads greater than 1 watt
• Gathers information from all monitoring points at 1/2 second intervals to ensure the highest precision
• Measures current draws of less than 100 mW; the iPDU can detect a new server even before it is powered on
• Discovers and maps servers to specific outlets, ensuring correlation between equipment and power data collected, as a function of Intelligent Power Discovery
HP power distribution units

Monitored PDU

HP PDUs provide power to multiple objects from a single source. In a rack, the PDU distributes power to the servers, storage units, and other peripherals.

PDU systems:
• Address issues of power distribution to components within the computer cabinet
• Reduce the number of power cables coming into the cabinet
• Provide a level of power protection through a series of circuit breakers

For more information about the HP power distribution unit portfolio, go to:
http://h18004.www1.hp.com/products/servers/proliantstorage/powerprotection/pdu.html
PDU benefits

Benefits of the modular PDUs from HP include:
• Increased number of outlet receptacles
• Modular design
• Superior cable management
• Flexible 1U/0U rack mounting options
• Easy accessibility to outlets
• Limited three-year warranty

HP 16A to 48A Modular PDUs

HP Modular PDUs have a unique modular architecture designed specifically for data center customers who want to maximize power distribution and space efficiencies in the rack.

Modular PDUs consist of two building blocks: the Control Unit (core) and the Extension Bars (sticks). The Control Unit is 1U/0U, and the Extension Bars mount directly to the frame of the rack in multiple locations.

Available models range from 16A to 48A current ratings, with output connections ranging from four outlets to 28 outlets.

HP Monitored PDUs

The monitored vertical rack-mount power distribution units provide both single- and three-phase monitored power, as well as full-rack power utility ranging from 4.9 kVA to 22 kVA. Available monitored PDUs include:
• Full-rack models with 39 or 78 receptacles and half-rack versions
• Three-phase models with 12 C-19 receptacles
• Single-phase models with 24 C-13 and 3 C-19 receptacles
BladeSystem c7000 PDUs

Available power distribution units for a c7000 enclosure

The PDUs available for the c7000 enclosure are detailed in the preceding table.

Note
A pair of PDUs must be ordered for AC feed redundancy. If AC redundancy is not required, a single PDU may be acceptable.
BladeSystem c3000 PDUs

Available power distribution units for a c3000 enclosure

The PDUs available for the c3000 enclosure are detailed in the preceding table.

Note
A pair of PDUs must be ordered for AC feed redundancy. If AC redundancy is not required, a single PDU may be acceptable.
BladeSystem enclosure power supplies

HP Common Slot Power Supplies

HP Common Slot (CS) Power Supplies share a common electrical and physical design that allows for hot-swap, tool-less installation into HP server and storage solutions. CS power supplies are available in multiple high-efficiency input and output options, allowing users to "right-size" a power supply for specific server/storage configurations and environments. This flexibility helps to minimize power waste, lower overall energy costs, and avoid "trapped" power capacity in the data center.

CS Power Supplies support Intelligent Power Discovery and are available in the following models:
• Common Slot Platinum Plus Power Supplies
  – Are compatible with ProLiant Gen8 servers only
  – Provide up to 94% power efficiency at 50% server utilization level
• Common Slot Platinum Power Supplies
  – Are compatible with ProLiant G6 and G7 servers only
  – Provide up to 94% power efficiency at 50% server utilization level
• Common Slot Gold Power Supplies and Common Slot Silver Power Supplies
  – Are compatible with ProLiant G6, G7, and Gen8 servers
  – Provide up to 92% power efficiency at 50% server utilization level
  – Are a cost-effective option for entry-level servers
Common Slot Platinum Plus Power Supplies

The CS Platinum Plus Power Supply family is ideal for ProLiant Gen8 customers operating mid-to-large data center environments with a focus on reducing power, downtime, and human resource expenses. The CS Platinum Plus Power Supply:
• Enables HP Intelligent Power Discovery — Creates an energy-aware network that helps to reduce data center outages, shrink deployment times from hours to minutes, reclaim stranded power, and maximize IT compute density.
• Provides certified best power efficiency (94%) in the industry — Reduces data center power requirements by up to 60W/server (as compared to ProLiant G6 power estimates). This can save up to $80 annually per server.
• Supports redundant High Efficiency and Load-Balancing modes — Maximizes the power efficiency capabilities of power supplies.
• Provides compatibility — Is compatible with a wide range of ProLiant and Integrity servers, as well as HP Storage solutions. Easily accessible, hot-plug power supplies minimize server downtime as well as the costs associated with maintaining multiple sets of spares. One power supply suits all customer environments, both Class A and B.
• Features multiple output options (460W, 750W, and 1200W) — Enables you to choose the power supply sized appropriately for each server configuration. This flexibility helps to minimize power waste, lower overall energy costs, and avoid trapped power capacity in the data center.

CS Platinum/Platinum Plus power supplies also enable HP Power Discovery Services, which focus on increasing compute density while reducing data center outages.

Note
One CS Platinum Hot Plug Power Supply Kit is required for each server.
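As a rough check of the savings claim above, the following Python sketch converts the quoted 60W per-server reduction into an annual dollar figure. The electricity rate is an assumption (about $0.15 per kWh), not an HP figure.

    # Rough check of the "up to $80 per server per year" claim above.
    # The 60 W per-server reduction comes from the text; the electricity
    # rate is an assumed value.

    watts_saved_per_server = 60
    hours_per_year = 24 * 365                                     # 8760 hours
    kwh_saved = watts_saved_per_server * hours_per_year / 1000    # ~525.6 kWh
    assumed_rate_per_kwh = 0.15                                   # USD, assumed
    annual_savings = kwh_saved * assumed_rate_per_kwh
    print(f"~{kwh_saved:.0f} kWh saved, roughly ${annual_savings:.0f} per server per year")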
CS 1200W -48VDC Power Supply

The CS 1200W -48VDC model is supported on ProLiant ML350p Gen8, DL360 G7, DL380 G7, and DL385 G7 servers. This model supplies the highest power output option available in CS design for 48VDC input and provides 90% efficiency at 50% utilization. It is primarily used for server solutions with shared power architectures.

CS 750W -48VDC Power Supply

The HP CS 750W -48VDC Power Supply provides an option for the following ProLiant servers:
• DL360p Gen8
• DL380p Gen8
• DL385p Gen8
• ML350p Gen8
• SL6500 Gen8

It is the lowest-cost power solution available in CS design for 48VDC input. The CS 750W -48VDC Power Supply offers a higher-efficiency DC power solution with improved power input cabling options:
• Improved power efficiency to 94% at 50% utilization — Reduces power waste and consumption when compared to the previous-generation 1200W -48VDC (90%) power supply option.
• Improved power input connector design — Simpler terminal block design provides users with greater flexibility in cable selection, design, and management.
• Compatible with a wider range of HP ProLiant server solutions — More options and greater flexibility for DC power usage within ProLiant Gen8 servers.
• Uses the HP Common Slot power supply design — Easily accessible hot-plug power supplies that minimize server downtime.

For more information about the HP power supply portfolio, go to:
http://www.hp.com/go/proliant/powersupply
BladeSystem c7000 enclosure power supplies

Power supplies for a c7000 enclosure

The power supplies convert single-phase AC to 12V DC current and feed the power backplane. Moving the power supplies into the enclosure allowed HP to reduce the transmission distance for DC power distribution and use an industry-standard 12V infrastructure for the BladeSystem. Using a 12V infrastructure allowed HP to eliminate several power-related components and improve power efficiency on the server blades and in the infrastructure. The control circuitry was stripped and put on the management board and fans.

The c7000 enclosure supports up to six power supplies depending on whether it is equipped with a three-phase or single-phase power configuration. Additionally, the c7000 enclosure bundled with the HP Insight Management suite provides six HP 2400W high-efficiency hot-plug power supplies.
Key features of the 2400W power supplies include:
• Increased power output—2400W; supports more blades with fewer power supplies
• High efficiency to save energy; provides 90% efficiency from as low as 10% load
• Low standby power that facilitates reduced power consumption when servers are idle
• Uses Onboard Administrator 2.30 or later

Important
The 2400W power supplies do not operate with 2250W power supplies. Therefore, to use the 2400W power supplies with a c7000 enclosure that uses 2250W power supplies, you need to replace all the 2250W power supplies with the 2400W power supplies.
Power modules and cords

Different input power modules for a c7000 enclosure

The c7000 enclosure can be installed in both AC and DC environments:
• Three-phase (3Ø) AC power
• Single-phase (1Ø) AC power
• -48V DC power

Each type of power environment requires a specific power module.

BladeSystem power cords

The BladeSystem is designed to match what the customer already has in the data center. It uses standard power cords:
• IEC-C19 – 16A 208V = 3328VA
• NEMA L15-30p 24A 3Ø 208V = 8646VA
• IEC 309 5-pin 16A 3Ø 230V = 11040VA

In the BladeSystem, one L15-30p line cord can power one enclosure populated with 16 half-height blades.

As a point of comparison, if it had been designed for rack-based power, BladeSystem enclosures would require 60A to 100A three-phase power.
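The VA ratings listed above can be verified with a few lines of Python. The formulas below are an interpretation of the quoted figures: the single-phase and NEMA three-phase cords appear to be rated against the 208V line-to-line voltage, while the IEC 309 figure counts three 230V phases.

    # Quick check of the VA ratings quoted above, under the assumptions stated
    # in the lead-in (sqrt(3) * V_LL * I for the 208 V three-phase cord,
    # 3 * V_phase * I for the IEC 309 cord).
    from math import sqrt

    print(16 * 208)                   # IEC-C19, single phase:       3328 VA
    print(round(sqrt(3) * 208 * 24))  # NEMA L15-30p, 3-phase 208V: ~8646 VA
    print(3 * 230 * 16)               # IEC 309 5-pin, 3 x 230V/16A: 11040 VA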
Single-phase AC power supply placement

Power supply placement for a c7000 enclosure

Install the power supplies based on the total number of supplies needed:
• Two power supplies — Power supplies in bays 1 and 4
• Three power supplies — Power supplies in bays 1, 2, and 4
• Four power supplies — Power supplies in bays 1, 2, 4, and 5
• Five power supplies — Power supplies in bays 1, 2, 3, 4, and 5
• Six power supplies — Power supplies in all bays

Note
The Insight Display panel slides left or right to allow access to power supply bays 3 and 4.

The preceding graphic further defines the power supply placement based on the power redundancy mode.

Note
In single-phase configurations, you can use fewer than six power supplies.

The placement rules are enforced by the Onboard Administrator. When the power supplies are placed incorrectly, the Insight Display shows an error.

Important
Three-phase AC power requires that all six power supplies be installed.
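If you script configuration checks, the single-phase placement rules listed above reduce to a simple lookup table, sketched here in Python. This is illustrative only; the Onboard Administrator remains the authoritative enforcement point.

    # Simple lookup of the c7000 single-phase placement rules listed above
    # (bay numbers per installed power-supply count).
    C7000_PSU_BAYS = {
        2: (1, 4),
        3: (1, 2, 4),
        4: (1, 2, 4, 5),
        5: (1, 2, 3, 4, 5),
        6: (1, 2, 3, 4, 5, 6),
    }

    def psu_bays(count):
        """Return the bays to populate for 'count' supplies (2-6)."""
        try:
            return C7000_PSU_BAYS[count]
        except KeyError:
            raise ValueError("single-phase c7000 placement is defined for 2-6 supplies")

    print(psu_bays(4))   # (1, 2, 4, 5)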
DC power configuration rules

WARNING
Never attempt to install an AC power supply into a DC power module. Doing so could cause damage to both the power module and the power supply.

The product configuration rules are:
• The -48V DC power module can only accept DC power supplies.
• -48V DC hot-plug power supplies are only supported with the -48V DC power module.
• Mixing of AC and DC components within the same system is prohibited.
• Keying on the DC power module and DC power supply prevents incorrect insertions.

Caution
To prevent damage to components in the enclosure, never mix AC and DC power in the same enclosure.
Total available power

Power supplies in a c7000 enclosure

Total power available to the enclosure, assuming 2400W are available from each power supply, depends on the power mode configured for the enclosure.
• If no power redundancy is configured, the total power available is defined as the power available from all supplies installed. Therefore, if six power supplies are installed in an enclosure, 14400W of power will be available to the enclosure.
• If the N+1 power mode is configured, then the total power available is defined as the total power available, less one power supply. Therefore, an enclosure with a 5+1 configuration will receive 12000W of power.
• If the N+N AC Redundant mode is configured, then the total power available is the amount from the A or B side with the lesser number of supplies. Therefore, an enclosure with a 3+3 configuration will receive 7200W of power.

Important
HP strongly recommends that you run the HP BladeSystem Power Sizer to determine the power and cooling requirements of your configuration. Refer to http://www.hp.com/go/bladesystem/powercalculator to download the Power Sizer.

Example
Single-phase power runs on 30A circuits in North America. When you apply the 80% rule (in an NA/JPN environment, you can only pull 80% of the total power available on a circuit), this translates to 24A available. Therefore, you would use a 24A modular PDU, which can only support 4992VA, or two power supplies. With redundant AC feeds, you can support four power supplies per enclosure. Four power supplies can provide 9600W of power to the components.

A full enclosure of 16 blades requires up to 3700W. Four power supplies enable N+N AC redundancy as long as you have redundant AC feeds.

Note
3700W averages 231.25W per blade. If the 3700W figure does not include the Onboard Administrator, fans, and interconnects, you still have overhead of 800W per AC feed to cover the additional need and remain N+N redundant.
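The circuit-sizing arithmetic from the example above can be reproduced with a short Python sketch, assuming 208V single-phase circuits, the North American 80% derating rule, and 2400W power supplies.

    # Sketch of the circuit-sizing arithmetic from the example above,
    # assuming 208 V single-phase circuits and 2400 W per power supply.

    def usable_va(circuit_amps, volts=208, derating=0.80):
        """Continuous capacity of a circuit after the North American 80% rule."""
        return circuit_amps * volts * derating

    circuit_va = usable_va(30)                   # 30 A circuit -> 4992 VA usable
    supplies_per_feed = int(circuit_va // 2400)  # two 2400 W supplies per feed
    print(circuit_va, supplies_per_feed)         # 4992.0, 2

    # With two redundant AC feeds (one PDU each), four supplies fit per enclosure:
    print(2 * supplies_per_feed * 2400)          # 9600 W delivered capacity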
BladeSystem c3000 enclosure power supplies

The c3000 enclosure power supplies are single-phase power supplies that support both low-line and high-line environments. Wattage output per power supply depends on the rated AC input voltage:
• 200VAC to 240VAC input = 1200W DC output
• 120VAC input = 900W DC output
• 100VAC input = 800W DC output

AC power supplies are auto-switching between 100VAC and 240VAC, providing customers with diverse deployment options.

Each AC power supply ships with a standard PDU power cord (C13 to C20). Power supplies may be connected to standard wall outlets; however, proper wall outlet cords must be purchased.

Important
Wall outlet power cords should only be used with low-line (100V to 110V) power sources. If high-line power outlets are required, safety regulations require the use of a PDU or a UPS between the c3000 enclosure's power supplies and wall outlets.

Optional DC power supplies are available. Each 48V DC Common Slot Power Supply can provide 1200W. Up to six total DC power supplies can be used in a c3000 enclosure. DC and AC power supplies cannot be mixed inside one c3000 enclosure.

Caution
Without proper surge protection, connecting directly to a standard wall outlet may cause loss of data or damage to the BladeSystem enclosure.
Power supply placement

c3000 enclosure power supply numbering and placement

The quantity of power supplies is a function of the power redundancy mode versus the quantity, type, and configuration of the devices installed in the enclosure. The tables display the proper location for power supplies in the Power Supply Redundant and AC Redundant power modes. For proper functionality, the AC Redundant power mode requires two AC circuits; one connected to power supplies 1, 2, and 3 and the second connected to power supplies 4, 5, and 6.

Note
There is no Onboard Administrator-enforced rule that dictates the power supply placement based on the number of server blades; however, there is one for the fans. Power supply population is dependent on the power supply redundancy level and the quantity and configuration of server blades and interconnects.
Total available power

Total power available to the enclosure may vary depending on the input AC voltage, the power redundancy mode, and the quantity of power supplies installed. For enclosures connected to 208VAC–240VAC, the maximum power available from six installed power supplies is as follows:
• In Non-Redundant Power mode, total power available from six power supplies is 7200W DC.
• In Power Supply Redundant mode, six power supplies provide a total of 6000W DC.
• In AC Redundant mode, six power supplies provide a total of 3600W DC.

Important
HP strongly recommends that you run the Power Sizer (available from http://www.hp.com/go/bladesystem/powercalculator) or HP Power Advisor to determine the power and cooling requirements of your configuration.
BladeSystem DVD-ROM drive options

Attaching a DVD-ROM drive to the HP BladeSystem enclosure enables local media access to the server blades. Insight Display, iLO, and the Onboard Administrator allow system administrators to connect and disconnect the media device to one or multiple server blades at a time.

This feature enables administrators to:
• Perform operating system installations such as SmartStart installations or imaging tasks
• Install additional software
• Perform critical operating system updates and patches
• Update server platform firmware

The DVD-ROM drive can be attached using:
• The DVD-ROM drive bay in the front of the c3000 enclosure
• The USB port on the c7000 enclosure Onboard Administrator module
• The local I/O cable connection to the individual server blades
• ISO images on a locally attached USB key

The DVD-ROM drive offers local drive access to server blades by using the virtual media scripting capability of iLO. The DVD-ROM drive is connected directly to the server blade's USB and provides significantly improved data throughput compared to iLO virtual media using physical disks or ISO files, especially over long distances.
Learning check

1. List three factors that distinguish an ideal deployment for a c3000 enclosure.
   .................................................................................................................
   .................................................................................................................
   .................................................................................................................

2. An HP c7000 enclosure can use standard wall-outlet power.
   a. True
   b. False

3. What is the difference between the Onboard Administrator in the c3000 and the c7000 enclosures?
   a. The c3000 Onboard Administrator is not a DDR2 module.
   b. The c3000 Onboard Administrator does not have USB ports.
   c. The c3000 Onboard Administrator has the same components, but they are in different locations.
   d. The c3000 does not support a redundant Onboard Administrator.

4. The Onboard Administrator module for the c7000 enclosure is available with KVM support and without KVM, and these two versions require different firmware.
   □ True
   □ False

5. With the Power Supply Redundant configuration in a BladeSystem, a minimum of four power supplies is required.
   □ True
   □ False
6. What are the benefits of using an industry-standard 12V infrastructure for the BladeSystem?
   .................................................................................................................
   .................................................................................................................

7. What BladeSystem challenges are met by Thermal Logic and Active Fans technology?
   .................................................................................................................
   .................................................................................................................
HP BladeSystem Server Blades
Module 3

Objectives

After completing this module, you should be able to describe the HP ProLiant Generation 8 (Gen8) and Integrity server blades that constitute the HP BladeSystem portfolio.
ProLiant Gen8 server blade portfolio

ProLiant BL420c Gen8 server blade

The HP ProLiant BL420c Gen8 server blade is an entry-level blade. The BL420c workload spans from single applications for mid-market solutions to large enterprise requirements.

This server blade features two eight-core Xeon processors with the Intel C600 series chipset. Additional features of the ProLiant BL420c Gen8 server include:
• SAS/SATA/SSD hot-plug drives
• Maximum 2 TB storage configuration
• Twelve memory DIMMs, six per processor
• 1x8 and 1x16 PCIe Gen3 mezzanine slots
• iLO Management Engine
ProLiant BL460c Gen8 server blade

The HP ProLiant BL460c Gen8 server blade offers a balance of performance, scalability, and expandability, making it a standard for data center computing. This server blade features two eight-core Xeon processors with the Intel C600 series chipset. Additional features include:
• Up to 512GB of DDR3 LRDIMMs — With LRDIMMs, a ProLiant BL460c Gen8 server can be configured with up to 512 GB of memory.
• I/O expansion slots — The BL460c Gen8 server supports two I/O expansion mezzanine slots:
  – x16 PCI Express Type A – Supports dual-port mezzanine cards. One port is routed to interconnect module bay 3 and the other to bay 4.
  – x16 PCI Express Type B – Supports dual-port and quad-port mezzanine cards. For dual-port cards, one port is routed to interconnect bay 5 and the other to bay 6. For quad-port cards, one port is routed to interconnect bays 5, 6, 7, and 8.
• Internal storage — The BL460c Gen8 supports a variety of internal storage options, including solid state drives, allowing up to 2 TB of internal storage to be configured. The configuration options are shown in the following table.

Drive type              Drive configuration   Total capacity
Hot plug SFF SAS        2x 1.0TB              2.0TB
Hot plug SFF SATA       2x 1.0TB              2.0TB
Hot plug SFF SAS SSD    2x 800GB              1.6TB
Hot plug SFF SATA SSD   2x 400GB              800GB
ProLiant BL465c Gen8 server blade

The HP ProLiant BL465c Gen8 server blade is an ideal server for virtualization and consolidation. The BL465c Gen8 is the first server blade to achieve more than 2,000 cores per rack by using AMD Opteron 6200 series processors with up to 16 cores each.

Features of the BL465c Gen8 include:
• Smart Array controller with 512 MB flash-backed write cache
• SmartMemory
• SAS and SAS solid-state drives
• iLO Management Engine
Integrity i2 server blade portfolio

Integrity BL860c i2

The Integrity BL860c i2 is a full-height server blade with Itanium 9300 series processors and the Intel 7500 chipset. This server supports up to two processors with two or four processor cores.

Note
The Integrity BL860c i2 only supports identical processors in a two-processor configuration.

The Integrity BL860c i2 supports up to 384 GB of memory using 24 PC3-10600 Registered CAS9 memory modules. These memory modules support error correcting code (ECC), as well as double chip sparing technology. Double chip sparing can detect and correct an error in DRAM bits, practically eliminating the downtime needed to replace failed DIMMs.

Note
The Integrity BL860c i2 requires a minimum of 8 GB of RAM to operate. Double chip sparing technology is not enabled with 2 GB memory modules.
The server features two SFF SAS hot-plug hard drive bays. Hardware RAID is provided by an embedded HP P410i RAID controller, which supports RAID 1 for HP-UX and Linux. Because the SAS controller does not support Microsoft Windows, Windows internal disk mirroring requires a Smart Array controller and cannot use the internal hard drives.

Important
RAID 1 configuration requires two identical hard drives.

The server also features four autosensing 1Gb/10Gb NIC ports through two dual-port NC532 Flex-10 adapters, plus an additional 100Mb NIC dedicated to Integrity iLO management.
Integrity BL870c i2

The Integrity BL870c i2 server blade is a full-height, double-wide form factor server blade that occupies two device bay slots in a BladeSystem enclosure.

The BL870c i2 server blade features the Intel 7500 chipset and:
• Processors — The Integrity BL870c i2 may contain up to four Itanium 9300 quad-core processors, with up to 24 MB of L3 cache. Processor kits include:
  – Quad-core processors
    - Itanium 9320 (1.33GHz/4-core/16MB/155W; up to 1.46 GHz with Turbo) processor
    - Itanium 9340 (1.6GHz/4-core/20MB/185W; up to 1.73 GHz with Turbo) processor
    - Itanium 9350 (1.73GHz/4-core/24MB/185W; up to 1.86 GHz with Turbo) processor
  – Dual-core processor
    - Itanium 9310 (1.6GHz/2-core/10MB/130W) processor

Note
The Integrity BL870c i2 supports two-, three-, or four-processor configurations. Processors must be identical.
• Memory — The Integrity BL870c i2 server blade supports up to 768 GB of memory:
  – Forty-eight PC3-10600 16GB DIMMs
  – High-speed memory bus bandwidth of 4.8GT/s

Note
Memory for the BL870c i2 must be installed in groups of four DIMMs.

• Integrity iLO 3 — Integrity iLO management processors make it simpler, faster, and less costly to remotely manage Integrity servers. Integrity iLO 3 ships with a built-in Advanced Pack License. iLO Advanced features include Virtual Media, LDAP directory services, iLO power measurement, and integration with Insight Power Manager. No additional iLO licensing is needed.

Note
The iLO Management Engine with iLO 4 is not supported on Integrity server blades.

• Storage
  – Up to four SFF SAS hot-plug hard drive bays, providing up to 3.6 TB of internal storage using four 900 GB SAS drives.
  – Two P410i 3Gb SAS controllers provide support for RAID 0, RAID 1, and HBA mode options.

Note
Mixed disk configurations are supported, although not for RAID configurations. RAID 1 configuration requires two identical hard drives.

• NICs — Eight autosensing 1Gb/10Gb NICs through four embedded NC532i dual-port Flex-10 adapters.

Important
Flex-10 capability requires operating system drivers and the use of an HP Virtual Connect Flex-10 10GbE Ethernet module.
• Mezzanine card options — Six additional I/O expansion slots by using mezzanine cards. Supported mezzanine cards include:
  – HP NC553m Dual Port 10Gb FlexFabric Adapter
  – HP NC551m Dual Port 10Gb FlexFabric Adapter
  – HP NC552m 10Gb 2-port Flex-10 Ethernet Adapter
  – HP NC532m Dual Port Flex-10 10GbE BL-c Adapter
  – HP Emulex LPe1205 8Gb FC BL-c HBA (2-port 8Gb Emulex FC HBA)
  – HP QMH 2562 8Gb FC BL-c HBA (2-port 8Gb QLogic FC HBA)
  – HP Smart Array P711m/1G FBWC Controller
  – HP Smart Array P700m/512 Controller
  – HP 4X QDR IB CX-2 Dual Port Mezz HCA for HP BladeSystem
  – HP NC364m 4-port 1GbE BLc Adapter
  – HP NC360m 2-port 1GbE BLc Adapter
Integrity BL890c i2

The Integrity BL890c i2 is a full-height, quadruple-wide form factor server blade that occupies four device bay slots in the BladeSystem enclosure. It features the Intel 7500 chipset and supports Integrity iLO 3. It also features:
• Processors — The Integrity BL890c i2 may contain up to eight Itanium 9300 quad-core processors, with up to 24MB of L3 cache. Processor kits include:
  – Quad-core processors
    - Itanium 9320 (1.33GHz/4-core/16MB/155W; up to 1.46 GHz with Turbo) processor
    - Itanium 9340 (1.6GHz/4-core/20MB/185W; up to 1.73 GHz with Turbo) processor
    - Itanium 9350 (1.73GHz/4-core/24MB/185W; up to 1.86 GHz with Turbo) processor
  – Dual-core processor
    - Itanium 9310 (1.6GHz/2-core/10MB/130W) processor

Note
The Integrity BL890c i2 supports up to eight processors. Processors must be identical.
• Memory — The BL890c i2 server blade supports up to 1.5TB of memory:
  – Ninety-six PC3-10600 16GB DIMMs
  – High-speed memory bus bandwidth of 4.8GT/s

Note
Memory for the BL890c i2 must be installed in groups of four DIMMs.

• Storage — Up to eight SFF SAS hot-plug hard drive bays, providing up to 7.2 TB of internal storage using eight 900GB SAS drives. Four P410i 3Gb SAS controllers provide support for RAID 0, RAID 1, and HBA mode options.

Note
Mixed disk configurations are supported, although not for RAID configurations. RAID 1 configuration requires two identical hard drives.

• NICs — The Integrity BL890c i2 ships with 16 autosensing 1Gb/10Gb NICs via eight embedded NC532i dual-port Flex-10 adapters.

Important
Flex-10 capability requires operating system drivers and the use of a Virtual Connect Flex-10 10GbE Ethernet module.

• Mezzanines — The Integrity BL890c i2 supports 12 additional I/O expansion slots by using mezzanine cards. Supported mezzanine cards include:
  – HP NC552m 10Gb 2-port Flex-10 Ethernet Adapter
  – HP Emulex LPe1205 8Gb FC BL-c HBA (2-port 8Gb Emulex FC HBA)
  – HP QMH 2562 8Gb FC BL-c HBA (2-port 8Gb QLogic FC HBA)
  – HP Smart Array P711m/1G FBWC 6G SAS Controller
  – HP Smart Array P700m/512 Controller
  – HP 4X QDR IB CX-2 Dual Port HCA for HP BladeSystem
  – HP NC364m 4-port 1GbE BL-c Adapter
  – HP NC360m 2-port 1GbE BL-c Adapter
  – HP NC532m Dual Port Flex-10 10GbE BL-c Adapter
  – HP NC551m Dual Port 10Gb FlexFabric Adapter
  – HP NC553m Dual Port 10Gb FlexFabric Adapter

Important
A maximum of eight of these additional adapters are supported with the BL890c i2 server blade.
Learning check

1. The ProLiant BL460c Gen8 supports up to two Intel Xeon processors.
   □ True
   □ False

2. How many processors can be installed in an Integrity BL860c i2 server blade?
   a. 1
   b. 2
   c. 4
   d. 8

3. What is the maximum memory supported in a ProLiant BL460c Gen8 server blade?
   a. 96 GB
   b. 128 GB
   c. 256 GB
   d. 512 GB
HP BladeSystem Storage and Expansion Blades
Module 4

Objectives

After completing this module, you should be able to:
• Describe the features and functions of HP:
  – Storage blades
  – Tape blades
  – Expansion blades
• Describe the features and functions of HP Smart Array controllers
HP BladeSystem storage and expansion blades

HP BladeSystem is built not only on servers, but also on storage and expansion modules. HP offers many storage solutions that increase either storage capacity or storage performance for server blades. A BladeSystem can also consolidate other network equipment, including storage and backup options.

HP storage blades

D2200sb Storage Blade

HP offers storage solutions designed to fit inside the BladeSystem enclosure, as well as external expansion for virtually unlimited storage capacity. HP storage blades offer flexible expansion and work side-by-side with ProLiant and Integrity server blades.

The HP storage portfolio for BladeSystems includes:
• D2200sb Storage Blade
• X3800sb G2 Network Storage Gateway Blade
• X1800sb G2 Network Storage Blade
• IO Accelerator
HP D2200sb Storage Blade

The HP D2200sb Storage Blade delivers direct-attached storage (DAS) for server blades. The enclosure backplane provides a PCIe connection to the adjacent server blade and enables high-performance storage access without additional cables.

The D2200sb storage blade features an onboard Smart Array P410i controller with 1GB flash-backed write cache (FBWC) for increased performance and data protection. Other features include:
• Support for up to 12 hot-plug SFF SAS, SAS/SATA solid state, or SATA Midline hard disk drives in a half-height blade, including support for enterprise 300 GB 15K SAS hard drives
• Internal Smart Array P410i controller with 1 GB FBWC
• Simple configuration and setup with the HP Array Configuration Utility (ACU)
• Compatibility with HP Virtual SAN Appliance software to create a shared storage environment inside a BladeSystem enclosure
• Easy maintenance and troubleshooting with industry-standard management tools including HP System Insight Manager (HP SIM)
• Ability to configure the storage blade for RAID levels 0, 1, 1+0, 5, and 6 (RAID ADG) by using the internal Smart Array P410i controller with 1 GB flash-backed write cache

Note
RAID 6 and RAID 60 require purchase of a Smart Array Advanced Pack (SAAP) license.
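To see how the supported RAID levels trade capacity for protection, the following Python sketch applies standard RAID capacity arithmetic to the D2200sb's twelve-drive configuration. It is illustrative only and is not an HP sizing tool.

    # Standard RAID capacity arithmetic for the levels the D2200sb supports
    # (illustrative only; not an HP sizing tool).

    def usable_capacity(drives, drive_gb, raid_level):
        """Usable capacity in GB for identical drives at a given RAID level."""
        if raid_level == "0":
            return drives * drive_gb
        if raid_level == "1":
            return drive_gb                      # mirrored pair
        if raid_level == "1+0":
            return drives * drive_gb // 2
        if raid_level == "5":
            return (drives - 1) * drive_gb       # one drive of parity
        if raid_level == "6":
            return (drives - 2) * drive_gb       # two drives of parity (ADG)
        raise ValueError("unsupported RAID level")

    # Twelve 300 GB drives in the D2200sb:
    for level in ("0", "1+0", "5", "6"):
        print(level, usable_capacity(12, 300, level))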
HP X1800sb G2 Network Storage Blade

The HP X1800sb G2 Network Storage Blade is a flexible storage server solution for BladeSystem environments. File serving inside the BladeSystem enclosure is available when the X1800sb G2 is paired with the D2200sb storage blade. The X1800sb G2 can also be used as an affordable SAN gateway to provide consolidated file-service access to Fibre Channel, SAS, or iSCSI SANs.

Features include:
• 6 GB (3 x 2 GB) PC3-10600R RDIMMs
• HP Smart Array P410i Controller (RAID 0/1)
• 2 x 146GB SFF SAS 15k hot-plug hard drives with Microsoft Windows Storage Server 2008 R2, Standard x64 Edition pre-installed (in a RAID 1 configuration)
• Integrated NC553i Dual Port FlexFabric 10GbE Converged Network Adapter
• One additional 10/100 NIC dedicated to iLO 3 management
• Two I/O expansion mezzanine slots
• Support for up to two mezzanine cards

Functionality of the X1800sb G2 can be enhanced with optional software such as HP Mirroring Software or Data Protector Express.
2.31
HP BladeSysteem Storage and
d Expansion Blades
de
liv
er
y
on
ly
HP X3
3800sb G2 Networrk Storage
e Gatewayy Blade
The X380
00sb G2 Ne
etwork Storag
ge Gateway Blade is useed to access Fibre Chann
nel,
SAS, or iSCSI SAN sttorage, transslating file da
ata from the server into b
blocks for
o provide co
onsolidated fiile, print, and
d managemeent hosting sservices in a
storage to
cluster-ab
ble package.
TT
Built on the ProLiant BL460c
B
serve
er blade, thee X3800sb G
G2 Network Storage
y Blade is a ready-to-dep
r
on, with Win
ndows Storag
ge
Gateway
loy SAN gatteway solutio
Server 20
008 R2, Ente
erprise x64 Edition
E
pre-innstalled. The X3800sb G
G2 also includ
des
a Microso
oft Cluster Se
erver (MSCS) license and
d Microsoft i SCSI Software Target.
Key featu
ures include:
One
e quad-core Intel Xeon Pro
ocessor E564
40 (2.66 GH
Hz, 80w)

6 GB
B (3 x 2 GB)) PC3-10600
0R RDIMMs

Smart Array P410
0i controller (RAID 0/1)
rT

Two 146GB SFF SAS 15k hott plug hard d
drives with W
Windows Storage Server
2008 R2, Enterp
prise X64 Ediition pre-insta
alled (in a RA
AID 1 config
guration)
Fo

Rev. 12.3
31

Integ
grated NC55
53i Dual Portt FlexFabric 1
10GbE Convverged Netw
work Adaptorr

One
e additional 10/100
1
NIC
C dedicated tto iLO 3

Two I/O expansion mezzanine slots

Supp
port for up to
o two mezzanine cards
4 –5
Implemen
nting HP BladeS
System Solutions
de
liv
er
y
on
ly
Direct Connect SAS Stora
age for HP
P BladeSysstem
TT
Direct Co
onnect SAS Storage
S
for BladeSystem
B
allows customers to build
d local serve
er
storage quickly
q
with zoned
z
storag
ge or low-cosst shared storage within tthe rack. The
e
high-perfo
ormance 3G
Gb/s SAS arcchitecture co
onsists of a Smart Array PP700m
controllerr in each servver and 3Gb
b SAS BL swiitches conneected to an H
HP Modular D
Disk
System (M
MDS) 600.
Fo
rT
By combiining the sim
mplicity and cost
c efficienccy of direct-atttached stora
age with the
flexibility and resourcce utilization of a SAN, server administrators can have a simp
ple
in-rack zo
oned direct attach
a
SAS sttorage solution that is ideeal for growing capacityy
requireme
ents.
4 –6
Rev. 12
2.31
BladeSystem tape blade portfolio

HP Ultrium Tape Blades

The HP Ultrium Tape Blades are ideal for BladeSystem customers who need an integrated data protection solution. These half-height tape blades provide direct-attach data protection for the adjacent server and network backup protection for all data residing within the enclosure. Ultrium Tape Blades offer a complete data protection, disaster recovery, and archiving solution for BladeSystem customers.

Each Ultrium Tape Blade solution ships standard with Data Protector Express Basic backup and recovery software. In addition, each tape blade supports HP One-Button Disaster Recovery (OBDR), which allows quick recovery of the operating system, applications, and data from the latest full backup set. Ultrium Tape Blades are the industry's first tape blades and are developed exclusively for BladeSystem enclosures.

The current BladeSystem tape blade portfolio consists of:
• HP Ultrium 448c Tape Blade — Includes LTO-2 Ultrium tape technology with 400 GB of capacity on a single data cartridge (2:1 compression) and performance up to 173 GB/hr (2:1 compression)
• HP SB1760c Tape Blade — Includes LTO-4 Ultrium tape technology with 1.6 TB of capacity on a single data cartridge (2:1 compression) and performance up to 576 GB/hr (2:1 compression)
• HP SB3000c Tape Blade — Includes LTO-5 Ultrium tape technology with 3 TB of capacity on a single data cartridge (2:1 compression) and performance up to 1 TB/hr (2:1 compression)
BladeSystem tape blades — Feature comparison

Comparing HP BladeSystem tape blades

Ultrium 448c, SB1760c, and SB3000c tape blade features are listed in the preceding table. The main differences are in the recording technology (LTO-2, LTO-4, or LTO-5), compressed capacity on a single data cartridge (400GB, 1.6TB, or 3.0TB at a 2:1 data compression ratio), and sustained transfer rate (173GB/hr, 576GB/hr, or 1TB/hr at the 2:1 data compression ratio).

The maximum configuration per enclosure takes into account the tape blades connected to half-height server blades.

All tape blades provide integrated data protection for the enclosures—direct-attach data protection for the adjacent server blade and network backup protection for all data within the enclosure.

The HP tape blades are electrically connected to the adjacent server blades through a signal midplane that functions as a PCIe bus to link adjacent slots of the enclosure. Therefore, the tape blades will be seen exactly the same as if they were directly connected (by way of SCSI, for example) to that server blade.

For information about the compatibility of these tape blades with BladeSystem server blades, refer to the BladeSystem Compatibility section of the QuickSpecs for the respective tape blade, or visit: http://www.hp.com/go/connect
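Because the capacity and throughput figures above are quoted at 2:1 compression, dividing by the compression ratio gives approximate native figures. The short Python sketch below does exactly that; consult the QuickSpecs for the exact native ratings.

    # Approximate native capacity and throughput, derived from the 2:1
    # compressed figures quoted in this module (illustrative arithmetic only).

    blades = {
        "Ultrium 448c (LTO-2)": {"compressed_gb": 400,  "compressed_gb_hr": 173},
        "SB1760c (LTO-4)":      {"compressed_gb": 1600, "compressed_gb_hr": 576},
        "SB3000c (LTO-5)":      {"compressed_gb": 3000, "compressed_gb_hr": 1000},
    }

    for name, spec in blades.items():
        native_gb = spec["compressed_gb"] / 2
        native_rate = spec["compressed_gb_hr"] / 2
        print(f"{name}: ~{native_gb:.0f} GB native, ~{native_rate:.0f} GB/hr native")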
HP Storage Library and Tape Tools

HP Library and Tape Tools (L&TT) is a free, robust diagnostic tool for all HP tape storage and magneto-optical storage products. Targeted for a wide range of users, it is ideal for customers who want to verify their installation, ensure product reliability, perform their own diagnostics, and achieve faster resolution of tape device issues.

L&TT performs firmware upgrades, verification of device operation, failure analysis, and a range of utility functions. Performance tools assist in troubleshooting bottlenecks, and system configuration checks warn of common host issues. It also provides seamless integration with HP support by generating and emailing test results and support tickets.

HP Support requires the use of L&TT to troubleshoot most device issues, so it is recommended that a support ticket is pulled and the device assessment test is run before calling.

Operating systems currently supported include HP-UX, Windows, Linux, OpenVMS, Solaris, and Mac OS X.

L&TT is available from a link on the CD that ships with the product or as a free download from the HP website: http://www.hp.com/support/tapetools
Features and benefits of L&TT
• Free, easy-to-install, and easy-to-use diagnostic tool
• Downloaded and installed from HP.com (http://www.hp.com/support/tapetools) in less than five minutes
• Intuitive user interface that requires no customer training
• Choice between local installation or running from a remote installation, CD, or memory stick
• Reduced product downtime through preventative maintenance and fast issue diagnosis with corrective actions
• Automated, smart firmware downloads, updates, and notifications
• Comprehensive device analysis and troubleshooting tests
• First-level failure analysis of both the device and system without HP involvement
• Troubleshoot system performance issues through the use of analysis tools
• A direct link to the ITRC web-based troubleshooting content
• Seamless integration with the HP hardware support organization
• Ability to generate and email support tickets to the support center for faster service and support
• An all-inclusive source of device information for the HP support center
  – Drive health, life, usage, utilization, performance
  – Media health, life, usage (Ultrium only)
  – Backup quality (Ultrium only)
• Integration with HP TapeAssure service (http://www.hp.com/go/tapeassure)
PCI Expansion Blades

HP offers an expansion blade to support cards that are not offered in a mezzanine form factor.

The BladeSystem PCI Expansion Blade provides PCI card expansion slots to an adjacent server blade. This blade expansion unit uses the mid-plane to pass standard PCI signals between adjacent enclosure bays, so you can add off-the-shelf PCI-X or PCIe cards.

The PCI Expansion Blade fits into a half-height device bay and is managed by the partner server blade — by its operating system and drivers.

Customers need one PCI Expansion Blade for each server blade that requires PCI card expansion. Any PCI card from third-party manufacturers that works in ProLiant ML and ProLiant DL servers should work in this PCI Expansion Blade.

Note
HP does not offer any warranty or support for third-party PCI products.
HP PCI Expansion Blade — PCI card details

Each PCI expansion blade can hold one or two PCI-X cards (3.3V or universal) or one or two PCIe cards (x1, x4, or x8). It cannot hold one of each type of PCI card; that is, one PCI-X and one PCIe card at the same time.

Installed PCI-X cards must use less than 25W per card. Installed PCIe cards must use less than 75W per PCIe slot, or a single PCIe card can use up to 150W with a special power connector enabled on the PCI expansion blade.

Customers typically install SSL or XML accelerator cards, voice over IP (VoIP) cards, special purpose telecommunication cards, and graphic acceleration cards in the PCI expansion blade.
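The card-count and power rules above can be expressed as a small validation routine. The Python sketch below is an illustration of those rules as stated in this section, not an HP configuration tool.

    # Illustration of the PCI Expansion Blade rules stated above
    # (not an HP validation tool).

    def validate_expansion_blade(cards):
        """cards: list of (card_type, watts, has_special_power) tuples.
        card_type is 'pci-x' or 'pcie'. Returns a list of rule violations."""
        problems = []
        if len(cards) > 2:
            problems.append("at most two cards per expansion blade")
        if len({card_type for card_type, _, _ in cards}) > 1:
            problems.append("PCI-X and PCIe cards cannot be mixed")
        for card_type, watts, special_power in cards:
            if card_type == "pci-x" and watts >= 25:
                problems.append("PCI-X cards must draw less than 25 W")
            elif card_type == "pcie":
                if special_power:
                    if len(cards) > 1 or watts > 150:
                        problems.append("150 W is allowed only for a single PCIe card "
                                        "with the special power connector")
                elif watts >= 75:
                    problems.append("PCIe cards must draw less than 75 W per slot")
        return problems

    print(validate_expansion_blade([("pcie", 60, False), ("pcie", 60, False)]))  # []
    print(validate_expansion_blade([("pci-x", 30, False)]))                      # one violation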
HP IO Accelerator

The HP IO Accelerator is part of a comprehensive solid state storage portfolio. This storage device is targeted for markets and applications requiring high transaction rates and real-time data access that will benefit from application performance enhancement.

Three models are available:
• HP 80GB IO Accelerator for BladeSystem c-Class
• HP 160GB IO Accelerator for BladeSystem c-Class
• HP 320GB IO Accelerator for BladeSystem c-Class

With the IO Accelerator, the amount of free RAM required by the driver depends on the size of the blocks used when writing to the drive. Smaller blocks require more RAM. Guidelines for 80 GB of storage are listed in the following table.

Average block size (bytes)    RAM usage (MB)
8,192                         400
4,096                         800
2,048                         1,500
1,024                         2,900
512                           5,600
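For configurations other than the 80 GB example, the table can be interpolated. The Python sketch below assumes that driver RAM scales inversely with average block size and linearly with card capacity; both are assumptions made for illustration, not HP guidance.

    # Rough interpolation of the table above. Two assumptions (not HP guidance):
    # driver RAM scales inversely with average block size and linearly with
    # formatted capacity.

    RAM_MB_FOR_80GB = {8192: 400, 4096: 800, 2048: 1500, 1024: 2900, 512: 5600}

    def estimate_driver_ram_mb(avg_block_bytes, capacity_gb=80):
        """Estimate driver RAM (MB), using the nearest tabulated block size."""
        nearest = min(RAM_MB_FOR_80GB, key=lambda b: abs(b - avg_block_bytes))
        return RAM_MB_FOR_80GB[nearest] * capacity_gb / 80

    print(estimate_driver_ram_mb(4096))        # 800 MB for an 80 GB card
    print(estimate_driver_ram_mb(4096, 320))   # ~3200 MB for a 320 GB card (assumed scaling)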
Typical use cases include:
• Seismic data processing
• Databases that historically were run in memory or across many disk spindles for performance reasons
• Business intelligence and data mining
• Real-time financial data processing and verification
• Content caching for near-static data for file/web servers
• 3D animation/rendering
• CAD/CAM
• Virtual Desktop Infrastructure (VDI) solutions
• Hypervisor running multiple virtual machines
Solid state technology can be implemented in various ways within a server. The two
most common implementations are as a solid state drive (SSD) (in a SATA or SAS
form factor) or as an I/O card attached to the PCI Express bus.
As an I/O card, the IO Accelerator is not a typical SSD; rather it is attached directly
to the server's PCI Express fabric to offer extremely low latency and high bandwidth.
The card is also designed to offer high I/O operations per second (IOPs) and nearly
symmetric read/write performance. The IO Accelerator uses a dedicated PCI Express
x4 link with nearly 800MB/s of usable bandwidth. Each mezzanine slot in an
enclosure offers at least that amount of bandwidth, so by combining cards, you can
easily scale the storage to match an application's bandwidth needs.
The IO Accelerator's driver and firmware provide a block-storage interface to the
operating system that can easily be used in the place of legacy disk storage. The
storage can be used as a raw disk device, or it can be partitioned and formatted
with standard file systems. You can also combine multiple cards using RAID (up to
three cards with a full-height server blade) for increased reliability, capacity, or
performance in a single server blade.
Smart Array controller portfolio

The HP array controller portfolio consists of several models with differing SAS channels, memory sizes, and performance. All Smart Array products share a common set of configuration, management, and diagnostic tools, including the Array Configuration Utility (ACU), Array Diagnostic Utility (ADU), and HP SIM. These software tools reduce the cost of training for each successive generation of product and take much of the guesswork out of troubleshooting field problems. These tools lower the total cost of ownership by reducing the training and technical expertise necessary to install and maintain HP server storage.

The graphic outlines the enhancements of the Smart Array controllers shipping in the ProLiant Gen8 servers.

More information about Smart Array controllers is available at:
http://h18006.www1.hp.com/products/servers/proliantstorage/arraycontrollers/index.html
Implementing HP BladeSystem Solutions
Standard features of Smart Array controllers
Several features that are common to all Smart Array controllers give them their
reputation for reliability:
 Consistent configuration and management tools — Smart Array products use a standard set of configuration and management tools and utility software that minimize training requirements and simplify maintenance tasks.
 Universal hard drive standards — Form-factor compatibility across many enterprise platforms enables easy upgrades, data migration between systems, and management of spare drives.
 Data compatibility — Complete data compatibility with previous-generation Smart Array controllers allows for easy data migration from server to server. The controller can be upgraded any time better performance, greater capacity, or increased availability is needed. Every successive generation of Smart Array controllers understands the data format of other Smart Array controllers.
 Online spares — You can configure spare drives before a drive failure occurs. If a drive fails, recovery begins with an online spare and data is reconstructed automatically.
 Recovery ROM — Recovery ROM provides a unique redundancy feature that protects from ROM image corruption. A new version of firmware can be flashed to the ROM while the controller maintains the last known working version of the firmware. If the firmware becomes corrupt, the controller reverts to the previous version of firmware and continues operating. This reduces the risk of flashing firmware to the controller.
Note
Although common in most new controllers, Recovery ROM is not a standard feature of all Smart Array controllers.
 Pre-failure alerts and a pre-failure warranty — Failing components can be detected and replaced before a fault occurs.
In addition there is software consistency among all Smart Array family products:

Array Configuration Utility (ACU)

Option ROM Configuration for Arrays (ORCA)

Array Diagnostic Utility (ADU)

HP SIM

HP Intelligent Provisioning
I/O bandwidths in Smart Array controllers
The Smart Storage family of controllers and drives for ProLiant Gen8 servers provides
higher I/O bandwidths with PCIe 3.0. This provides maximum compute and I/O
performance for dense high-performance computing environments.
Some examples of improved I/O bandwidth:
 The ProLiant ML350p server has increased I/O expansion by 50% and increased I/O capacity by 200% with PCIe Gen3, providing more I/O bandwidth to the processor and resulting in lower latency (Gen8 = 40 lanes/processor, G7 = 24 lanes/processor).
 The ProLiant DL380p server has 200% of the I/O capacity with PCIe Gen3, providing more I/O bandwidth to the processor and resulting in lower latency (Gen8 = 40 lanes/processor, G7 = 24 lanes/processor); see the bandwidth sketch after this list.
 The ProLiant SL230 server introduces the higher-performance Intel Socket-R while offering the same density as the SL140 (8 nodes per 4U chassis). Flexible options include single GPU and I/O accelerator support.
 HP 331FLR and 331T adapters feature the next generation of Ethernet integration that reduces power requirements for four ports of 1Gb Ethernet and optimizes I/O slot utilization.
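The lane counts quoted above translate directly into aggregate I/O bandwidth. The Python sketch below is a back-of-the-envelope comparison only; the approximate per-lane rates (about 500MB/s for PCIe 2.0 and 985MB/s for PCIe 3.0, per direction, after encoding overhead) are commonly published figures rather than numbers taken from this guide.

# Rough per-processor PCIe bandwidth comparison between a G7 server
# (PCIe 2.0, 24 lanes per processor) and a Gen8 server (PCIe 3.0,
# 40 lanes per processor). Per-lane rates are approximate one-direction
# figures after encoding overhead.

PCIE2_MBS_PER_LANE = 500    # ~500 MB/s per lane (8b/10b encoding)
PCIE3_MBS_PER_LANE = 985    # ~985 MB/s per lane (128b/130b encoding)

def per_processor_bandwidth(lanes: int, mbs_per_lane: int) -> float:
    """Aggregate one-direction bandwidth in GB/s for a given lane budget."""
    return lanes * mbs_per_lane / 1000.0

g7 = per_processor_bandwidth(24, PCIE2_MBS_PER_LANE)
gen8 = per_processor_bandwidth(40, PCIE3_MBS_PER_LANE)

print(f"G7   (24 x PCIe 2.0 lanes): ~{g7:.1f} GB/s per processor")
print(f"Gen8 (40 x PCIe 3.0 lanes): ~{gen8:.1f} GB/s per processor")
print(f"Improvement: ~{gen8 / g7:.1f}x")

Running the sketch gives roughly 12 GB/s versus 39 GB/s per processor, which illustrates why the Gen8 lane budget supports the higher I/O capacity figures quoted above.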
Other features include:
 Choice of FlexLOM adapter tailored to meet the system workload
 Easily update firmware using HP Service Pack for ProLiant (SPP)
 Optional single GPU or 160GB SLC PCIe IO Accelerator configurations
 Optional dual front hot-plug hard drive configuration
 I/O virtualization support for VMware NetQueue and Microsoft VMQ on the 331FLR, 331T, 530FLR, and 530M adapters
Note
This is important because it meets the performance demands of consolidated virtual workloads.
Smart Array controller classification
To simplify the Smart Array controller product line, HP divides it into three general
categories:
 Entry-level controllers — Entry-level controllers are usually less expensive than high-performance controllers and have smaller memory sizes. If write cache is available, it is provided as an upgrade as opposed to shipping standard with the controller.
 Integrated controllers — Integrated Smart Array controllers are intelligent array controllers for entry-level, hardware-based fault tolerance. These low-cost controllers provide an economical alternative to software-based RAID.
 High-performance controllers — Smart Array controllers in this category generally have write cache as a standard feature, and it is often upgradeable. This group also supports RAID 60 and RAID 6 with the optional SAAP2.

HP Smart Array P822 controller
The HP Smart Array P822 controller supported on ProLiant Gen8 servers can support
two times more total drives internally and externally over previous generations, for up
to 227 drives (108 drives are supported with the Smart Array P812 controller).
Additional features include:
 PCI bus — Full-height, half-length card, PCIe 3.0 x8
 Memory bus speed — DDR3-1333 MHz, 72-bit, with 2 GB FBWC
 Maximum drives — Up to 227
 Management software support — ACU, HP System Management Homepage (SMH), HP SIM, ORCA, SPP Storage
 SAS/SATA connectivity — Two x4 ports mini-SAS internal with expander support; four x4 ports external
 RAID support — RAID 0, 1, 10, 5, 6, 50, and 60
 SAAP — SAAP 2.0 is included standard
HP Smart Array P220 and HP Smart Array P222 controllers
The Smart Array P220 and P222 controllers are entry-level 6 Gb/s array controllers that provide improved performance, greater attach rate, and lower maintenance. The P222 controller is ideal for RAID 0/1, 10, 5, 50, 6, and 60. Additional advanced features are upgradable by using SAAP2. The P222 controller delivers increased server uptime by providing advanced storage functionality, including:
 Online RAID level migration
 FBWC
 Global online spare
 Pre-failure warning

HP Smart Array P420 and P420i controllers

The HP Smart Array P420 and P420i controllers are enterprise-class 6 Gb/s controllers that provide improved performance, internal scalability, and lower maintenance. The P420 controller is ideal for RAID 0/1, 1+0, 5, 50, 6, and 60. Additional advanced features are upgradable by SAAP2. The P420 delivers increased server uptime by providing advanced storage functionality, including online RAID level migration with FBWC, global online spare, and pre-failure warning.
Smart Array P420 and P420i controllers:
 Upgrade seamlessly from past generations and upgrade to next-generation HP high-performance and high-capacity SAS Smart Array controllers
 Deliver high performance and data bandwidth with 6Gb/s SAS technology; retain full compatibility with 3Gb/s SATA technology
 Feature x8 PCI Express Gen 3 host interface technology for high performance and data bandwidth up to 8.5 GB/s maximum bandwidth
 Support up to 27 drives depending on the server implementation
 Can be upgraded from 40-bit 512MB cache to 72-bit 1GB FBWC or 72-bit 2GB FBWC
 Enable array expansion, logical drive extension, RAID migration, and stripe size migration with the addition of the flash-backed cache upgrade

Note
A minimum of 512 MB cache is required to enable RAID 5 and 5+0 support with the Smart Array P420i controller.
Learning check
1. What enables the server blades to partner with storage and expansion blades within the HP BladeSystem enclosures?
.................................................................................................................
.................................................................................................................
.................................................................................................................
.................................................................................................................
2. The HP PCI Expansion Blade can partner with one full-height server blade and two half-height server blades.
 True
 False
3. You can connect a storage blade and a tape blade to a single, full-height server blade.
 True
 False
4. The SB40c storage blade requires a dedicated Smart Array controller. This controller is:
a. Embedded on the system board of the partner server blade
b. Embedded in the SB40c storage blade
c. Embedded on the mezzanine card installed in the partner server blade
d. Embedded in the signal midplane
5. Which combination of PCI cards is not allowed in a PCI Expansion Blade?
a. Two PCI-X cards
b. One PCI-X card
c. One or two PCIe cards
d. One PCI-X card and one PCIe card
Ethernet Connectivity Options for HP BladeSystem
Module 5
Objectives
After completing this module, you should be able to describe the following HP BladeSystem Ethernet interconnect modules for HP ProLiant server blades:
 HP 6120XG Ethernet Blade Switch
 HP 6120G/XG Blade Switch
 Cisco Catalyst Blade Switch 3020
 Cisco Catalyst Blade Switch 3120
 HP GbE2c Layer 2/3 Ethernet Blade Switch
 HP 1:10Gb Ethernet BL-c Switch
 HP 1Gb Ethernet Pass-Thru Module
 HP 10GbE Pass-Thru Module
Available Ethernet interconnect modules
The Ethernet interconnect modules available for the BladeSystem enclosures are:
 HP 6120XG Ethernet Blade Switch (516733-B21) — Designed for the BladeSystem enclosure, the HP 6120XG Blade Switch provides sixteen 10Gb downlinks and eight 10G enhanced small form-factor pluggable transceiver (SFP+) uplinks (including a dual-personality CX4 and SFP+ 10G uplink and two 10Gb cross-connects).
 HP 6120G/XG Blade Switch (498358-B21) — Designed for the BladeSystem enclosure, the HP 6120G/XG Blade Switch provides sixteen 1Gb downlinks, four 1Gb copper uplinks, and two 1Gb SFP uplinks, along with three 10Gb uplinks and a single 10Gb cross-connect.
 Cisco Catalyst Blade Switch 3020 (410916-B21) — Flexible to fit the needs of a variety of customers, the Cisco Catalyst Blade Switch 3020 for BladeSystem provides an integrated switching platform with Cisco resiliency, advanced security, enhanced manageability, and reduced cabling requirements.
 Cisco Catalyst Blade Switch 3120 (451438-B21/451439-B21) — As the next generation in switching technology, the Cisco Catalyst Blade Switch 3120 Series introduces a switch stacking technology that treats individual physical switches within a rack as one logical switch. This innovation simplifies switch operations and management. The Cisco Catalyst Blade Switch 3120 Series is supported by both HP ProLiant and HP Integrity server blades.
 HP GbE2c Layer 2/3 Ethernet Blade Switch (438030-B21) — This HP switch provides Layer 2 switching and Layer 3 routing features. It has 16 internal downlinks, 5 uplinks, and 2 internal cross-connects. Four of the five uplinks can be either copper or fiber using optional SX SFP fiber modules. This switch is supported by ProLiant server blades only.
 HP 1:10Gb Ethernet BL-c Switch (438031-B21) — This easy-to-manage interconnect provides sixteen 1Gb downlinks and four 1Gb uplinks, along with three 10Gb uplinks and a single 10Gb cross-connect. This switch is supported by ProLiant server blades only.
 HP 1Gb Ethernet Pass-Thru Module (406740-B21) — This 16-port Ethernet interconnect provides 1:1 connectivity between the server and the network. A pair of pass-thru modules offers a redundant connection from the servers to the external switches. It is supported by both ProLiant and Integrity server blades.
 HP 10GbE Pass-Thru Module (538113-B21) — The HP 10GbE Pass-Thru Module is designed for BladeSystem customers requiring a nonblocking, one-to-one connection between each server and the network. This pass-thru module provides 16 uplink ports that accept both SFP and SFP+ connectors.
HP 6120XG Ethernet Blade Switch

Designed for the BladeSystem enclosure, the HP 6120XG Blade Switch provides sixteen 10Gb downlinks and eight 10G SFP+ uplinks (including a dual-personality CX4 and SFP+ 10G uplink, and two 10Gb cross-connects). A robust set of industry-standard Layer 2 switching functions, quality of service (QoS) metering, security, and high-availability features round out this extremely capable switch offering.
The 6120XG switch is suited for data centers migrating to next-generation 10Gb high-performance architectures. With the support of dual speeds (1Gb and 10Gb) on the uplinks and Converged Enhanced Ethernet (CEE) hardware capability, the 6120XG provides true future-proofing and investment protection.

The 6120XG blade switch brings consistency and interoperability across existing network investments to help reduce the complexity of network management through resilient core-to-edge connectivity and automated provisioning technologies. With a variety of connection interfaces, the 6120XG switch offers excellent investment protection, flexibility, and scalability, as well as ease of deployment and reduced operational expense.

The 6120XG uses a nonblocking architecture, and it has wire speed performance on all downlinks and all uplinks.

HP 6120XG Ethernet Blade Switch — Front panel
The following table identifies the front panel components of the HP 6120XG Blade Switch.

Item	Description
1	Port 17 (10GBASE-CX4)*
2	Console port (USB 2.0 mini-AB connector)
3	Clear button
4	Port 17 SFP+ (10GbE) slot*†
5	Port 18 SFP+ (10GbE) slot†
6	Port 19 SFP+ (10GbE) slot†
7	Port 20 SFP+ (10GbE) slot†
8	Port 21 SFP+ (10GbE) slot†
9	Port 22 SFP+ (10GbE) slot†
10	Port 23 SFP+ (10GbE) slot*†
11	Port 24 SFP+ (10GbE) slot*†
12	Reset button (recessed)
* Dual-personality port
† Supports 10GBASE-SR SFP+, 10GBASE-LR SFP+, 10GBASE-LRM SFP+, 1000BASE-T SFP, 1000BASE-SX SFP, and 1000BASE-LX SFP optical transceiver modules
Port 17 consists of a CX4 port multiplexed with an SFP+ port. Only one port can be active. The SFP+ port takes precedence—if it contains a module, it is the active port and the CX4 port is inactive.

Ports 23 and 24 are each multiplexed with interswitch link ports on the blade switch backplane. Either the SFP+ port on the front panel or the backplane port can be active, but both cannot be active at the same time. The SFP+ port on the front panel takes precedence—if it contains a module, it is the active port and its corresponding backplane port is inactive.
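The precedence rule for these multiplexed ports can be summarized in a few lines. The Python sketch below simply restates the rule from the text; it is illustrative and is not switch firmware or an HP tool.

# Restates the 6120XG multiplexing rule: the front-panel SFP+ slot wins when
# it is populated; otherwise the alternate path (CX4 for port 17, the
# backplane interswitch link for ports 23 and 24) stays active.

def active_path(port: int, sfp_module_installed: bool) -> str:
    """Return which physical path is active for ports 17, 23, or 24."""
    alternates = {17: "CX4 connector", 23: "backplane ISL port", 24: "backplane ISL port"}
    if port not in alternates:
        return "SFP+ slot (not multiplexed)"
    return "front-panel SFP+ slot" if sfp_module_installed else alternates[port]

print(active_path(17, sfp_module_installed=True))    # front-panel SFP+ slot
print(active_path(17, sfp_module_installed=False))   # CX4 connector
print(active_path(23, sfp_module_installed=False))   # backplane ISL port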
HP 6120G/XG Blade Switch

Designed for the BladeSystem enclosure, the HP 6120G/XG Blade Switch provides sixteen 1Gb downlinks, four 1Gb copper uplinks, and two 1Gb SFP uplinks, along with three 10Gb uplinks and a single 10Gb cross-connect. It also includes a robust set of industry-standard Layer 2 switching functions, QoS metering, security, and high-availability features.

The 6120G/XG blade switch is ideal for data centers in transition, where a mix of 1Gb and 10Gb network connections is required.

The 6120G/XG blade switch provides consistency and interoperability across existing network investments to help reduce the complexity of network management through resilient core-to-edge connectivity and automated provisioning technologies. With a variety of connection interfaces, the 6120G/XG blade switch offers excellent investment protection, flexibility, and scalability, as well as ease of deployment and reduced operational expense.

The 6120G/XG blade switch uses a nonblocking architecture, and it has wire speed performance on all downlinks and all uplinks.
HP 6120G/XG Ethernet Blade Switch — Front panel
The following table identifies the front panel components of the HP 6120G/XG Blade Switch.

Item	Description
1	Port C1 (10GBASE-CX4)
2	Port X1 XFP (10GbE) slot*
3	Port X2 XFP (10GbE) slot*
4	Port S1 SFP (1GbE) slot**
5	Port S2 SFP (1GbE) slot**
6	Console port (USB 2.0 mini-AB connector)
7	Clear button
8	Ports 1–4 (10/100/1000BASE-T)
9	Reset button (recessed)
* Supports 10GBASE-SR XFP and 10GBASE-LR XFP pluggable optical transceiver modules
** Supports 1000BASE-T SFP, 1000BASE-SX SFP, and 1000BASE-LX SFP optical transceiver modules
Managing HP blade switches

Menu interface view

Management interfaces enable you to reconfigure a blade switch and to monitor switch status and performance. HP offers the following interfaces for its blade switches:
 Menu interface — A menu-driven interface offering a subset of switch commands through the built-in VT100/ANSI console.
 Command line interface (CLI) — An interface offering the full set of switch commands through the VT100/ANSI console built into the switch.
 Web browser interface — A switch interface offering status information and a subset of switch commands through a standard web browser such as Netscape Navigator or Microsoft Internet Explorer.
 ProCurve Manager (PCM) — A Windows-based network management solution included in-box with all manageable ProCurve devices. Features include automatic device discovery, network status summary, topology and mapping, and device management.
 ProCurve Manager Plus (PCM+) — A complete Windows-based network management solution that provides both the basic features offered with PCM as well as more advanced management features such as in-depth traffic analysis, group and policy management, configuration management, device software updates, and advanced virtual LAN (VLAN) management.
Cisco Catalyst Blade Switch 3020 features
The Cisco Catalyst Blade Switch 3020 is an integrated Layer 2+ switch that uses
existing network investments to help reduce operational expenses. The key features of
the Cisco Catalyst Blade Switch 3020 are:
 Enhanced performance — Wire speed switching on 16 internal 1Gb ports and on 8 external 10/100/1000BASE-T ports
  Four external 10/100/1000 SFP-based ports, which can be configured instead of the 10/100/1000BASE-T ports, to support Fiber SX SFP modules from Cisco Systems
  One external console port
  One Fast Ethernet connection to the BladeSystem Onboard Administrator
   The Fa0 (port 0) is dedicated to OA management. No data is routed to the Fa0 port.

Note
Ports 23 and 24 are configured by default as external-facing ports, but they can be configured to provide an internal crossover connection to an associated Cisco Catalyst Blade Switch 3020. If the cross-connects are enabled, the external ports 23 and 24 are automatically disabled.

 Improved manageability — Support for CiscoWorks software, which provides multilayer feature configurations such as routing protocols, Access Control Lists (ACLs), and QoS parameters
  Support for an embedded Remote Monitoring (RMON) software agent that provides enhanced traffic management, monitoring, and analysis
  Support for the Internetwork Operating System (IOS) CLI, which is a common user interface and command set included with all Cisco routers and Cisco Catalyst desktop switches
 Enhanced security — Compatible with Cisco Secure Access Control Server (ACS), which enables users to access their security profiles regardless of where they connect on the network
  Support for VLANs
  Support for Cisco Identity-Based Networking Services (IBNS), which prevent unauthorized network access
  ACLs, which provide protection against denial-of-service and other attacks
Catalyst Blade Switch 3020 front bezel

The switch module has 18 LEDs. You can use the switch module LEDs to monitor switch module activity and performance. Graphical representations of the LEDs are visible in the device manager.

Eight LEDs are on the front bezel, including:
 Twelve LEDs for uplink port status
 Four switch system status LEDs
 Two HP specific LEDs to indicate health and UID status

System status LED indicators are as follows:
 Off — The system is not powered on.
 Blinking green — The power-on self-test (POST) is in progress.
 Solid green — The system is operating normally.
 Amber — The system is receiving power but is not functioning properly.

The green status (STAT), duplex (DLX), and speed (SPD) LEDs are used with the Mode button to select the display mode for the port LEDs. You can press the Mode button to cycle through the three display modes. After 30 seconds passes without the Mode button being pressed, status information displays.

Cisco Catalyst Blade Switch 3120 features
The Cisco Catalyst Blade Switch 3120 series includes the Cisco Catalyst Blade Switch 3120G and Cisco Catalyst Blade Switch 3120X models. The Catalyst Blade Switch 3120 series introduces the Cisco stacking technology that eliminates the need to manage multiple switches per rack. Key features are:
 Cisco stacking technology
  Combine up to nine switches into a single logical switch
  Use a single IP address and routing domain
  Enable 64Gb stack bandwidth
  Mix and match any combination of 3120 series switches
 Enhanced performance
  Enable wire speed switching on all sixteen 1Gb downlinks
  Enable wire speed switching on all 1Gb uplinks
  Enable wire speed switching on both 10Gb uplinks (3120X only)
  Use the same IOS interface, Management Information Bases (MIBs), and management tools as the rest of the Cisco Catalyst series
 Improved manageability
  Manage multiple switches as a single logical switch with a single IP address and a single Spanning Tree Protocol (STP) node
  Support CiscoWorks software, which provides multilayer feature configurations such as routing protocols, ACLs, and QoS parameters
  Support the Embedded Events Manager (EEM) and Generic On-line Diagnostics (GOLD)
  Support the Cisco Network Assistant
Catalyst Blade Switch 3120 front bezel

The diagram shows the front bezel components of the Cisco Catalyst Blade Switch 3120. The switch module has 18 LEDs on the face plate:
 Twelve LEDs for uplink port status
 Four switch status LEDs
 Two HP specific LEDs to indicate health and UID status

Note
The preceding diagram displays 13 LEDs. The rest of the LEDs are visible in the device manager.

You can use the switch module LEDs to monitor switch module activity and performance. Graphical representations of the LEDs are visible in the device manager.
HP GbE2c Layer 2/3 Ethernet Blade Switch

The HP GbE2c Layer 2/3 Ethernet Blade Switch provides Layer 2 switching plus the additional capabilities of Layer 3 routing.

Using Layer 3 routing, inter-VLAN routing becomes more scalable and more efficient than equivalent Layer 2 networks that rely on STP alone. IP forwarding enables traffic to be forwarded between VLANs without an external router or Layer 3 switch. This reduces traffic in the core network by making Layer 3 routing decisions within the BladeSystem enclosure. Layer 3 routing also reduces the number of broadcast domains, increasing network performance and efficiency.

The Virtual Router Redundancy Protocol (VRRP) maximizes availability in complex network environments by allowing multiple switches to process traffic in an active-active configuration. All switches in a VRRP group can process traffic simultaneously, ensuring maximum performance and fast, seamless failover.

Additional features of Layer 3 routing include:
 128 IP interfaces
 4096 Address Resolution Protocol (ARP) entries
 Global default route
 Static routing support with 128 routing table entries
 Dynamic routing support with up to 4,000 entries in a routing table
 Routing Information Protocol (RIP) and Open Shortest Path First (OSPF)

The GbE2c Layer 2/3 switch provides 16 internal downlinks and two internal cross-connects in a single low-cost blade switch. It features five uplinks, four of which can be copper or fiber using optional SFP fiber modules.

Note
The HP GbE2c Layer 2/3 Fiber SFP Option Kit (440627-B21) contains two SX SFP fiber modules. Only SFP modules with this part number operate in the Layer 2/3 switch.
GbE2c Layer 2/3 Ethernet Blade Switch front bezel

The front bezel of the GbE2c Layer 2/3 Ethernet Blade Switch features two LEDs (health and UID), one serial port, and five Ethernet ports.

The health LED in the GbE2c Layer 2/3 Ethernet Blade Switch can be in one of three states:
 Off—Not powered up
 Green—Powered up and all ports match
 Amber—A problem has occurred, such as a port mismatch

The five front panel Ethernet ports have two LEDs (speed and link/activity) per port.
HP 1:10Gb Ethernet BL-c Switch

The HP 1:10Gb Ethernet BL-c Switch is designed specifically for the data center transitioning from 1Gb to 10Gb. It enables customers to use an existing 10/100/1000Mb infrastructure to move to 10Gb as the need develops.
 The XFP (10Gb SFP) Multi-Source Agreement (MSA) is a specification for a pluggable, hot-swappable optical interface for 10Gb SONET/SDH, Fibre Channel, Gigabit Ethernet, and other applications.
 10GBASE-CX4, also known by its working group name of 802.3ak, transmits over four lanes in each direction, over copper cabling similar to the variety used in InfiniBand technology. It is designed to work up to a distance of 15 m (49 ft.). This technology has the lowest cost per port of all 10Gb interconnects, but at the expense of range. Each device capable of supporting a 10GbE module uses some MSA to provide the actual module connectivity within the device to the outside connector.

Designed for the BladeSystem enclosure, the 1:10Gb Ethernet BL-c Switch provides more than 34Gb of uplink bandwidth to handle the most demanding applications. It delivers sixteen 1Gb downlinks, four 1Gb uplinks along with three 10Gb uplinks (CX4, XFP), and a 10Gb cross-connect in a single-bay form factor. Performance features include low latency, wire speed performance for Layer 2 and Layer 3 packets, and low power consumption.

Additional features include:
 Industry-standard Ethernet Layer 2 switching and Layer 3 routing functions
 QoS
 Security
 High-availability features

The 1:10Gb Ethernet BL-c Switch reduces cabling and power and cooling requirements compared to stand-alone switches.

It is compatible with all server blades in a BladeSystem c7000 enclosure.
1:10Gb Ethernet BL-c Switch front bezel

The front bezel of the 1:10Gb Ethernet BL-c Switch provides the following two LEDs per port for the front panel Ethernet ports:
 RJ-45 port speed LED
 RJ-45 and 10Gb link/activity LED
HP 1Gb Ethernet Pass-Thru Module

The HP 1Gb Ethernet Pass-Thru Module for BladeSystem is a 16-port Ethernet interconnect that provides a 1:1 nonswitched, nonblocking path between the server and the network. This connectivity is especially useful when nine or more ports are used in an enclosure; however, the actual performance depends on end-to-end connectivity.

The 1Gb Ethernet Pass-Thru Module delivers 16 internal 1Gb downlinks and 16 external 1Gb RJ-45 copper uplinks. Designed to fit into a single I/O bay of the BladeSystem enclosure, the 1Gb Ethernet Pass-Thru module should be installed in pairs to provide redundant uplink paths.

Note
The 1Gb Ethernet Pass-Thru module (PN: 406740-B21) ships as a single unit and should be ordered in quantities of two. Cables are not included.

This Ethernet pass-thru module is designed for customers who want an unmanaged direct connection between each server blade within the enclosure and an external network device such as a switch, router, or hub. There is no need for extra LAN management in the enclosure, and there is a full gigabit pipe between the server and the upstream LAN port. However, the ports do not auto-negotiate speed; the speed on each port is fixed at 1Gb, and all of them must be connected to a 1Gb switch.

Because of the additional cost of cabling and extra ports on director-class switches, the 1Gb Ethernet Pass-Thru Module is an expensive way for a customer to connect to networks. It is targeted toward customers limited to direct 1:1 connections between the server and networks. Pass-thru modules also offer direct pass-through for customers who do not want embedded switching or an extra layer of LAN managed switches; however, HP Virtual Connect is a more cost-effective alternative.

Important
Pass-thru approaches are simple, but adding many cables could lead to reliability problems and risks of human error. No Virtual Connect support is available for pass-thru modules.
HP 10GbE Pass-Thru Module

The HP 10GbE Pass-Thru Module is designed for BladeSystem and HP Integrity Superdome 2 customers requiring a nonblocking, one-to-one connection between each server and the network. This pass-thru module provides 16 uplink ports that accept both SFP and SFP+ connectors.

The HP 10GbE Pass-Thru Module can support 1Gb and 10Gb connections on a port-by-port basis. Optical as well as Direct Attach Copper (DAC) cables are also supported. Both standard Ethernet as well as Converged Enhanced Ethernet (CEE) traffic to an FCoE capable switch is possible when using the appropriate NIC or adapter module. This module supports all NICs and mezzanine adapters including FlexNICs.

This module contains the following ports:
 Sixteen internal 1Gb/10Gb downlinks
 Sixteen 1Gb/10Gb uplinks supporting SFP and SFP+
 Mini USB configuration and management port
HP 10GbE Pass-Thru Module components

Front view of an HP 10GbE Pass-Thru Module

UID LED
 Blue light on—The pass-thru module is activated.
 Blue light off—The pass-thru module is deactivated.
Health LED
 Off—The pass-thru module is powered off.
 Green—The pass-thru module is powered up, and all ports match.
 Amber—An issue exists, such as a port mismatch. For more information, see the HP BladeSystem Enclosure Setup and Installation Guide.
Ethernet port
 Green—Link is 10G.
 Flashing green—10G link activity is detected.
 Amber—Link is 1G.
 Flashing amber—1G link activity is detected.
 Flashing alternately green and amber—A link mismatch condition exists. For more information, see the HP BladeSystem Enclosure Setup and Installation Guide.
Reset button
Mini-USB RS232 management serial port
Ports 1 through 16
 SFP+ ports to support SFP and SFP+ transceiver modules and Direct Attach Cables (DACs)
Learning check
1. List the available interconnect modules supported for ProLiant server blades in a c7000 enclosure.
.................................................................................................................
.................................................................................................................
.................................................................................................................
.................................................................................................................
2. Match each interconnect with its description.
a. Cisco Catalyst Blade Switch 3020
b. GbE2c Layer 2/3 Ethernet Blade Switch
c. 1:10Gb Ethernet Switch
d. 1Gb Ethernet Pass-Thru Module
e. Cisco Catalyst Blade Switch 3120

............ A 16-port Ethernet interconnect that provides 1:1 connectivity between the server and an external switch port
............ An integrated Layer 2+ switch that features 16 internally facing ports
............ A high-performance, affordably priced, low-latency switch with 20 ports (16 downlinks and 4 uplinks)
............ A switch with a full set of Layer 3 routing that uses optional SX SFP fiber modules
............ A switch that provides switch stacking technology, which combines up to nine switches into a single logical switch
3. Name the key differences between the Cisco Catalyst 3120G and 3120X.
.................................................................................................................
.................................................................................................................
.................................................................................................................
4. How many VLAN IDs does the Cisco Catalyst Blade Switch 3120 support?
a. 1,024
b. 1,005
c. 1,000
d. 1,010
5. Which Ethernet module features five uplinks, four of which can be copper or fiber using optional SFP fiber modules?
a. Virtual Connect Flex-10 10Gb module
b. 1/10Gb VC-Enet module
c. GbE2c Layer 2/3 Ethernet Blade Switch
d. 10GbE Pass-Thru Module
Storage Connectivity Options for HP BladeSystems
Module 6
Objectives

After completing this module, you should be able to:
 Describe the HP BladeSystem Fibre Channel interconnect modules available for HP BladeSystems
  Cisco MDS 9124e Fabric Switch
  Brocade 8Gb SAN Switch
 Describe the Serial-Attached SCSI (SAS) switches available for BladeSystems
 Identify 4X InfiniBand Switch Modules available for BladeSystems
 Differentiate the Fibre Channel mezzanine card options available for BladeSystems
Fibre Channel interconnect options

The BladeSystem architecture offers several choices for connecting server blades to Fibre Channel networks. These Fibre Channel interconnect modules are currently available for the BladeSystem:
 Brocade 8Gb SAN Switch — An easy-to-manage embedded Fibre Channel switch with 8Gb/s performance. The Brocade 8Gb SAN Switch hot-plugs into the back of the BladeSystem enclosure. The integrated design frees up rack space, enables shared power and cooling, and reduces cabling and the number of small form factor pluggable (SFP) transceivers. The Brocade 8Gb SAN Switch provides enhanced trunking support and new features in the Power Pack+ option.
 Cisco MDS 9124e Fabric Switch — A Fibre Channel switch that supports link speeds up to 4Gb/s. The Cisco MDS 9124e Fabric Switch can operate in a fabric containing multiple switches or as the only switch in a fabric.
Cisco MDS 9124e Fabric Switch for BladeSystem

The Cisco MDS 9124e Fabric Switch for BladeSystem features 16 logical internal ports (numbered 1 through 16) that connect sequentially to server bays 1 through 16 through the enclosure midplane. Server bay 1 is connected to switch port 1, server bay 2 is connected to switch port 2, and so forth. The external ports are labeled EXT1 through EXT4 (left bank) and EXT5 through EXT8 (right bank).

Up to six zero-footprint switches are supported per enclosure. The hot-swappable switch supports redundant, dual-port Fibre Channel mezzanine cards.
The Cisco MDS 9124e Fabric Switch is available in two port-count options as well as with the option of an upgrade license for lower cost of entry:

Important
These are the same physical switch; available port options are dependent on the license purchased.

 Cisco MDS 9124e 12-port Fabric Switch (PN: AG641A)
  Eight internal 4Gb ports
  Four external 4Gb ports
  Two preinstalled short wavelength small form-factor pluggable (SFP) modules
  Licensing for port activation in eight-port increments (the first eight ports are licensed by default)
 Cisco MDS 9124e 24-port Fabric Switch (PN: AG642A)
  Sixteen internal 4Gb ports
  Eight external 4Gb ports
  Four preinstalled short wavelength SFPs
  Licensing available for port activation in eight-port increments (the first eight ports are licensed by default)
 Cisco MDS 9124e Fabric Switch 12-port Upgrade License to Use (LTU) (PN: T5169A)
  Enables eight additional internal ports and four fabric-facing ports on the 12-port model, for a total of 16 internal ports and eight fabric-facing ports
  Does not include SFPs

Important
The Cisco MDS 9124e Fabric Switch for HP c-Class BladeSystem is compatible with the BladeSystem c7000 enclosure only.
Cisco MDS 9124e Fabric Switch features and components
Features of the Cisco MDS 9124e Fabric Switch include:
 Auto-sensing link speeds (Gb/s) — 4/2/1
 Fabric support — Full fabric
 Aggregate bandwidth (Gb/s, end-to-end) — 192
 PortChannel (Gb/s) — 32Gb/bundle
 Universal ports with self discovery
 Nondisruptive software upgrades
 SAN-OS level 3.1(2) or later

Note
PortChannel includes up to eight ports in one logical bundle.

Standard and optional software
The standard software components are:
 SAN-OS — Delivers advanced storage networking capabilities.
 Cisco Fabric Manager — Provides integrated, comprehensive management of larger storage area network (SAN) environments, enabling you to perform vital tasks such as topology, discovery, fabric configuration and verification, provisioning, monitoring, and fault resolution.

Optional software components are:
 Cisco Fabric Manager Server (FMS) Package — Provides historical performance monitoring for network traffic hot-spot analysis, centralized management services, and advanced application integration.
 Cisco Enterprise Package — Contains a set of advanced traffic engineering and advanced security features recommended for all enterprise SANs. The following additional features are bundled together in the Cisco MDS 9000 Enterprise package:
  Quality of Service (QoS) levels
  Switch/switch and host/switch authentication
  Host to logical unit number (LUN) zoning
  Read-only zoning
  Individual port security
  Virtual SAN (VSAN)-based access control
Cisco MDS 9124e Fabric Switch layout

The preceding graphic shows the Cisco MDS 9124e Fabric Switch layout and components.
Dynamic Ports on Demand

Static mapping configuration

Static mapping describes the relationship between the device bays and the internal switch ports. Specific device bays must be populated to match a corresponding active switch port. This configuration significantly enhances usability for low-touch server customers.

With Dynamic Ports on Demand (DPOD), you can map any device bay to an active port. Ports are allocated on a first-come, first-served basis to any location, including external ports. The number of pre-reserved ports decreases the number of ports from the pool of ports. Removing a server or external port (except a pre-reserved port) expands the available DPOD pool.

For example, if you are licensed for eight internal facing ports, you can put any combination of eight full-height and half-height server blades with Fibre Channel mezzanine cards in any device bay. If you remove one of the blades or mezzanine cards, you have freed one port on the switch and made it available to another server or Fibre Channel mezzanine card. The switch senses a device trying to communicate to an internal port and, if ports are available in the license pool, a port will be activated.

On the Cisco Fabric Switch for BladeSystem, any of the eight internal ports and external ports ext1 through ext4 are licensed by default. A single on-demand port activation license is required to use the remaining eight internal and four external ports.
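The first-come, first-served licensing behavior described above can be modeled in a few lines. The Python sketch below is only a conceptual model of a DPOD license pool, not the switch firmware; the pool size of 12 (eight internal ports plus ext1 through ext4 licensed by default) follows the example in the text.

# Conceptual model of Dynamic Ports on Demand (DPOD): a fixed pool of port
# licenses handed out first-come, first-served as devices appear, and
# returned to the pool when a blade or external connection is removed.

class DpodPool:
    def __init__(self, licensed_ports: int):
        self.licensed_ports = licensed_ports
        self.active = set()              # ports currently holding a license

    def request(self, port: str) -> bool:
        """A device (server bay or external link) asks for a license."""
        if port in self.active:
            return True                  # already licensed
        if len(self.active) < self.licensed_ports:
            self.active.add(port)        # license granted, port activates
            return True
        return False                     # pool exhausted; port stays inactive

    def release(self, port: str) -> None:
        """Removing a blade or external link returns its license to the pool."""
        self.active.discard(port)

pool = DpodPool(licensed_ports=12)       # default: 8 internal + ext1-ext4
for bay in range(1, 9):                  # eight blades with FC mezzanine cards
    pool.request(f"internal-{bay}")
for ext in ("ext1", "ext2", "ext3", "ext4"):
    pool.request(ext)
print(pool.request("internal-9"))        # False - pool exhausted
pool.release("internal-3")               # pull a blade (or its mezzanine card)
print(pool.request("internal-9"))        # True - the freed license is reused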
Brocade SAN switches

Brocade 8Gb SAN switch

Currently, the HP BladeSystem SAN switch portfolio includes the Brocade 8Gb SAN Switch.

The switch hot-plugs into the enclosures, uses power and cooling provided by the enclosures, and features 24 auto-sensing ports (16 internal and 8 external). The switches can be managed locally and remotely using the HP BladeSystem Onboard Administrator and Brocade Fabric OS configuration and management tools.

This switch also supports DPOD, a feature that automatically discovers online ports and assigns an available license to them. This feature enables you to connect server blades to switch ports without regard for the server slot populated; the associated switch ports automatically activate as the server ports are deployed. Ports are activated on a first-come, first-served basis for any combination of locations, including external ports.

The Brocade 8Gb SAN Switch features:
 8Gb performance
 A single trunk group of eight SAN-facing ports for up to 64Gb/s of balanced throughput
 Additional buffer credits and 8G long-wave B-series 10km Fibre Channel SFP+
 Management features in the Power Pack+ bundle
Brocade SAN switch licensing
The Brocade 8Gb SAN Switch integrates the following license options that complement existing HP product lines:
 HP B-Series 8/12c SAN Switch (PN: AJ820A)
  8Gb SAN Switch with 12 ports enabled for any combination (internal and external)
  Two short-wave 8Gb SFPs
  Full fabric connectivity
 HP B-Series 8/24c SAN Switch (PN: AJ821A)
  8Gb SAN Switch with 24 ports enabled (16 internal and 8 external ports)
  Four short-wave 8Gb SFPs
  Full fabric connectivity
 HP B-Series 8/24c SAN Switch Power Pack+ (PN: AJ822A)
  8Gb SAN Switch with 24 ports enabled (16 internal and 8 external ports)
  Four short-wave 8Gb SFPs
  Full fabric connectivity
  Power Pack+ bundle
 HP Brocade 8/12c SAN Switch 12-port Upgrade LTU (PN: T5517A)
Brocade SAN switch software
The standard and optional software for Brocade SAN switches includes:
 Frame filtering — Enables the switch to "view" the first 64 bytes of the Fibre Channel frame and also provides advanced capabilities such as the optional software components Advanced Zoning and Advanced Performance Monitoring (APM)
 Advanced zoning — Enables administrators to organize a physical fabric into logical groups and prevent unauthorized access by devices outside the zone
 Web tools — Enable organizations to monitor and manage single Fibre Channel switches and small SAN fabrics
 Access Gateway — Enables seamless connectivity for Brocade-embedded SAN switches to other supported SAN fabrics and enhances scalability and simplifies manageability
 Dynamic Path Selection — Improves performance by routing data traffic dynamically across multiple links and trunk groups using the most efficient path in the fabric
 Secure Fabric OS — Provides policy-based security protection for more predictable change management, assured configuration integrity, and reduced risk of downtime
  Security methods include digital certificates and digital signatures, multiple levels of password protection, strong password encryption, Public Key Infrastructure (PKI)-based authentication, and 128-bit encryption of the private key used for digital signatures
 Power Pack+ Software Bundle — Includes Adaptive Networking, inter-switch link (ISL) trunking, Advanced Performance Monitoring, Extended Fabrics, and Fabric Watch
  Adaptive Networking Services — Optimizes fabric behavior and ensures ample bandwidth for mission-critical applications; tools include QoS, Ingress Rate Limiting, Traffic Isolation, and Top Talkers
  ISL trunking — Logically groups up to eight E-ports (switch mode) or F-ports (Access Gateway mode) to provide a high-bandwidth trunk between two Brocade or HP B-Series switches
  Extended Fabrics — Increases the scalability, reliability, and performance benefits of Fibre Channel SANs beyond the native 10 km distance specified by the Fibre Channel standard
  Fabric Watch — Enables each switch to monitor the health of the SAN for potential faults and automatically alert network managers to problems before they become failures
  Advanced Performance Monitoring — Enables administrators to monitor application data traffic from a SID (Source ID) to a DID (Destination ID), so they can fine-tune and scale the fabric more efficiently
 Fabric Manager — Manages up to 80 switches across multiple fabrics in real time, helping SAN administrators with SAN configuration, monitoring, dynamic provisioning, and daily management—all from a single seat
SAS storage solutions for BladeSystem servers

HP 3Gb SAS BL Switch

The HP 3Gb SAS BL Switch for HP BladeSystem enclosures is an integral part of HP direct-connect SAS storage, enabling a straightforward, external zoned SAS or shared SAS storage solution. The SAS architecture combines an HP P700m Smart Array controller in each server with 3Gb SAS BL switches connected to either an HP 600 Modular Disk System (MDS600) enclosure for zoned SAS or an HP 2000sa Storage Modular Smart Array (MSA2000sa) for shared SAS storage.

The 3Gb SAS BL Switch enables two external architectures for BladeSystem servers:
 Zoned SAS — Use the HP Virtual SAS Manager (VSM) software of the 3Gb SAS BL Switch to zone groups of physical drives in the MDS600 and assign them to individual server blades. The drives in the zone will appear as local drives to that individual server. A P700m Smart Array controller installed in the server provides RAID functionality for the group of physical drives that have been zoned to that server.
 Shared SAS — The 3Gb SAS BL Switch can also be used to access shared SAS storage provided by the MSA2000sa. A P700m Smart Array Controller installed in each server acts as a pass-through, with RAID functionality for shared SAS storage provided by the MSA2000sa. The MSA2000sa creates a shared storage environment where more than one server blade can access a storage logical unit.
HP Virtual SAS Manager

Example of the VSM Maintain tab

HP Virtual SAS Manager (VSM) is embedded in the 3Gb SAS BL Switch firmware and is the software application used to create hardware-based zone groups to control access to external SAS storage enclosures and tape devices.

VSM enables you to perform the following tasks:
 Enter switch parameters
 Create zone groups
 Assign zone groups to servers (see the sketch after this list)
 Reset the switch
 Update firmware
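Conceptually, a zone group is simply a named set of drive bays that VSM grants to one server blade. The Python sketch below models that idea for illustration only; the group names and bay numbers are made up, and the actual zoning is performed in the VSM interface described above.

# Toy model of zoned SAS: a zone group collects MDS600 drive bays, and each
# group is assigned to one server blade, whose P700m controller then treats
# those drives as local storage.

from dataclasses import dataclass, field
from typing import Dict, Optional, Set

@dataclass
class ZoneGroup:
    name: str
    drive_bays: Set[int] = field(default_factory=set)   # MDS600 bay numbers
    assigned_blade: Optional[int] = None                 # enclosure device bay

zones: Dict[str, ZoneGroup] = {}

def create_zone_group(name: str, drive_bays: Set[int]) -> None:
    # A drive bay should belong to only one zone group.
    for zone in zones.values():
        if zone.drive_bays & drive_bays:
            raise ValueError("drive bay already zoned to another group")
    zones[name] = ZoneGroup(name, set(drive_bays))

def assign_zone_group(name: str, blade_bay: int) -> None:
    zones[name].assigned_blade = blade_bay

create_zone_group("db-data", {1, 2, 3, 4})
create_zone_group("web-logs", {5, 6})
assign_zone_group("db-data", blade_bay=3)    # blade in bay 3 sees bays 1-4 as local drives
assign_zone_group("web-logs", blade_bay=7)
print(zones["db-data"])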
Note
Storage is configured, formatted, and partitioned using software utilities such as the HP Array Configuration Utility (ACU), the HP Storage Management Utility (SMU), and Microsoft Disk Manager. Configuration tools differ for each storage enclosure and operating system environment. For more information, see the QuickSpecs for the storage enclosure.

Note
For more information about HP Virtual SAS Manager, consult the HP Virtual SAS Manager 2.2.4.x User Guide available from the HP website.
4X InfiniBand switch modules

QLogic BLc 4X QDR IB Switch for HP BladeSystem

The 4X InfiniBand Switch modules for BladeSystem are double-wide switch modules based on the Mellanox technology. The 4X InfiniBand Switch module has 16 downlink ports to connect up to 16 server blades in the enclosure.

A subnet manager is required to manage and control an InfiniBand fabric. The subnet manager functionality can be provided by either a rack-mount InfiniBand switch with an embedded fabric manager (also known as an internally managed switch) or by host-based subnet manager software on a server connected to the fabric.

The 4X InfiniBand switch modules available for a BladeSystem environment are:
 HP 4X QDR InfiniBand Switch Module
  Compatible with only the BladeSystem c7000 enclosure
  Includes 16 internal 4X QDR downlink ports
  Based on the Mellanox InfiniScale IV technology
  Supports 16 quad small form-factor pluggable (QSFP) uplink ports for inter-switch links or to connect to external servers
  Supports 40Gb/s (QDR) bandwidth
 HP 4X DDR InfiniBand Gen2 Switch Module
  Compatible with BladeSystem c7000 and c3000 enclosures
  Includes 16 internal 4X DDR downlink ports
  Based on the Mellanox InfiniScale IV technology
  Supports 16 QSFP uplink ports for inter-switch links or to connect to external servers
  Supports 20Gb/s (DDR) bandwidth
 HP 4X DDR InfiniBand Switch Module
  Compatible with BladeSystem c7000 and c3000 enclosures
  Based on the Mellanox InfiniScale III technology
  Supports 8 CX4 uplink ports for inter-switch links or to connect to external servers
  Supports 20Gb/s (DDR) bandwidth
 QLogic BLc 4X QDR IB Switch
  Includes 16 internal 4X QDR downlink ports
  Includes 16 external 4X QDR QSFP uplink ports
  Uses the QLogic TrueScale ASIC architecture
  Designed to cost-effectively link workgroup resources into a cluster or provide an edge switch option for a larger fabric
  Supports an optional management module that includes an embedded subnet manager
  Supports optional InfiniBand Fabric Suite software
  Enables up to a 288-node fabric using only the management capability of the unit

Depending on the mezzanine connectors used for the InfiniBand host channel adapter (HCA), the switch module must be inserted into interconnect bays 3 and 4, 5 and 6, or 7 and 8.
Mezzanine cards and adapters

Similar to the PCI slots and cards used in the ProLiant servers, mezzanine slots and cards in the BladeSystem provide a connection from the server blades to Ethernet, Fibre Channel, and InfiniBand switches.

Mezzanine card and slot options available for BladeSystem

HP NC550m 10Gb 2-port PCIe x8 Flex-10 Ethernet Adapter

Mezzanine cards and slots on server blades are either Type I or Type II. Both types of slots have the same physical size but have different keying, and provide different amounts of power to the mezzanine cards. Type I mezzanine cards draw less power than Type II mezzanine cards.

The type of the mezzanine card determines where it can be installed in the server blade. Type I mezzanine cards can be installed in any Type I or Type II mezzanine slots; Type II mezzanine cards must be installed in Type II mezzanine slots only.
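This slot-compatibility rule is easy to capture as a small lookup. The Python sketch below simply restates the rule from the paragraph above, together with the idea of electronic keying rejecting a mismatch; it is illustrative and not an HP tool.

# Type I cards fit Type I or Type II slots; Type II cards fit Type II slots only.
# Electronic keying by the Onboard Administrator blocks any other combination.

ALLOWED_SLOTS = {
    "Type I card": {"Type I slot", "Type II slot"},
    "Type II card": {"Type II slot"},
}

def keying_check(card: str, slot: str) -> bool:
    """Return True if the mezzanine card may be installed in the slot."""
    return slot in ALLOWED_SLOTS.get(card, set())

print(keying_check("Type I card", "Type II slot"))   # True
print(keying_check("Type II card", "Type I slot"))   # False - mismatch is blocked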
In turn, where you install the mezzanine card determines where you need to install the interconnect modules. The server blade mezzanine positions (mezzanine 1, 2, or 3) connect directly through the signal midplane to the appropriate interconnect bays. The interconnect bays are designed to accept single-wide or double-wide modules.

Note
Both Type I and Type II mezzanine cards use the same 450-pin connector (200 signal/250 gnd) to connect the power and PCIe signals and the connections from the server blades to the interconnect bays.

Multifunction adapters include iSCSI network boot (iSCSI boot), which allows a server to boot from a remote operating system image located on a SAN.
Type I mezzanine cards and slots
Type I cards and slots can be either four-lane (x4) or eight-lane (x8). Type I mezzanine slots:
 Are the lower-positioned slots on the server blade system board
 Accept Type I mezzanine cards only

The Type I mezzanine card is supported by all server blades and typically is used for Gigabit Ethernet and Fibre Channel applications. It can be physically positioned in either Type I or Type II mezzanine slot. Electronic keying by the Onboard Administrator detects any mismatch between the mezzanine card and the switch ports and will not allow the connection if it is misconfigured.

The basic architecture of the Type I mezzanine card has the following specifications:
 PCIe x4 or x8 bus width
 Maximum power is 15W
 3.97" (100.84mm) x 4.46" (113.28mm)

Type II mezzanine cards and slots
Type II cards and slots are eight-lane (x8) only. Type II mezzanine slots:
 Are the higher-positioned slots on the server blade system board
 Accept either Type I or Type II mezzanine cards

The Type II mezzanine card operates only in Type II mezzanine slots and is typically used for high-powered Gigabit applications such as 10Gb Ethernet.

All Type II mezzanine cards support eight lanes of connections to:
 Four 2-lane connections to four single-wide switches
 Two 4-lane connections to a double-wide switch for redundant connections

The basic architecture of the Type II mezzanine card has the following features:
 25W maximum power
 PCIe x8 bus width
 5.32 inches (135.13mm) x 4.46 inches (113.28mm)

As with Type I mezzanine cards, electronic keying by the Onboard Administrator detects a mismatch between the mezzanine card and switch module.
HBAs available

The host bus adapter (HBA) mezzanine cards available for BladeSystems are:
 QLogic QMH2562 8Gb Fibre Channel HBA (PN: 451871-B21)
 Emulex LPe1205-HP 8Gb/s Fibre Channel HBA (PN: 456972-B21)
 Brocade 804 8Gb FC HBA for HP BladeSystem (PN: 590647-B21)

QLogic QMH2562 8Gb Fibre Channel HBA

The QLogic QMH2562 8Gb Fibre Channel HBA is a dual-channel PCIe mezzanine form factor card designed for BladeSystem solutions. It delivers twice the data throughput of the previous-generation 4Gb mezzanine card. It is optimized for virtualization, low power usage, management, security, reliability, availability, and serviceability. It is also backward compatible with 4Gb and 2Gb Fibre Channel speeds and is compatible with all BladeSystem server blades. It is optimized for HP storage devices and is supported by third-party SAN vendors.
This HBA features:
 Advanced embedded support for virtualized environments
  Supports virtualized servers for overall effective server utilization
  Enables multiple logical (virtual) connections to share the same physical ports
  Supports 256 queue pairs for intensive virtualization
  Prevents conflicts between multiple queues through prioritization of queues
 Reduced power consumption
  Saves power with the latest-generation technology
  Reduces overall power consumption by reducing the number of components on each Fibre Channel HBA
  Requires lower airflow so it lowers power consumption
 Multipath support for redundant HBAs and paths, including Linux driver failover
 Optimized reliability, availability, and serviceability (RAS), security, and manageability
Emulex LPe1205-HP 8Gb/s Fibre Channel HBA
The Emulex LPe1205-HP dual-port Fibre Channel HBA provides reliable, high-performance 8Gb/s connectivity. In addition to providing greater bandwidth, the LPe1205-HP HBA also provides features such as data integrity, security, and virtualization, which are all complementary to initiatives important to the enterprise data center.
This HBA combines higher transfer rates, enhanced I/O processing, and extended interrupt management with Emulex Virtual HBA Technology. The dual-channel design is ideal for mission-critical applications that rely on high-availability connectivity. Fibre Channel—Security Protocol (FC-SP) compliance enables protection of proprietary data from unauthorized access.
Features of the Emulex LPe1205-HP HBA include:
 Comprehensive virtualization capabilities with support for N_Port ID Virtualization (NPIV) and Virtual Fabric — Provides support for up to 255 VPorts, which improves server consolidation capabilities and asset utilization
 Superior performance capable of sustaining up to 200,000 I/Os per second per channel — Delivers the performance needed for high-transaction database environments
 PCI Express Bus Gen I (x8), Gen II (x4)
   Uses PCIe 2.0, which provides 5Gb/s lanes (double the 2.5Gb/s data rate of PCIe 1.0)
   Supports the faster bit rate as well as retaining backward compatibility with existing PCIe 1.0 server blades, which enables greater flexibility and reliability for subsequent generations of servers
 Message Signaled Interrupts eXtended (MSI-X) Support for Greater Host CPU Utilization — Streamlines interrupt routing to improve overall server efficiency
 Host to Fabric FC-SP authentication — Provides advanced security, protecting the SAN from potential threats such as WorldWide Name (WWN) spoofing, compromised servers, and so on
Brocade 804 8Gb Fibre Channel Host Bus Adapter
The Brocade 804 8Gb Fibre Channel HBA offers high-performance connectivity, extends fabric features to the server and applications, and integrates seamlessly with management software such as HP Data Center Fabric Manager to provide a complete end-to-end data center solution.
This dual-port Fibre Channel HBA supports 8Gb/s and 4Gb/s connectivity. Features of this HBA include:
 Support for up to 255 VPorts
 Up to 500,000 I/Os per second per channel
 Up to 1600MB/s throughput per port
 Fabric-based boot LUN discovery, which enables simplified deployment of boot-over-SAN environments
 Wide operating system support
 Software tools
   Host Connectivity Manager GUI and command line interface (CLI)
   Multipathing
   Management APIs
HP 4X InfiniBand Mezzanine HCAs
QLogic 4X QDR IB Dual-Port Mezzanine HCA
The 4X InfiniBand Mezzanine HCAs for BladeSystem enclosures include:
 HP 4X QDR IB Dual-Port Mezzanine HCA
   Based on the ConnectX-2 technology from Mellanox or on the TrueScale technology from QLogic
   Designed as a dual-port 4X QDR InfiniBand PCI Express G2 Mezzanine card
   Designed for PCI Express 2.0 x8 connectors on BladeSystem G6 server blades
   Supported on ProLiant BL280c G6, BL2x220c G6, BL460c G6, and BL490c G6 server blades
   Supported with the Voltaire OFED Linux driver stack and WinOF 2.0 on Microsoft Windows HPC Server 2008
 HP 4X DDR IB Dual-port Mezzanine HCA
   Designed as a dual-port 4X DDR InfiniBand PCI Express Mezzanine card
   Based on the Mellanox ConnectX technology
   Supported on HP Integrity BL860c server blades and most ProLiant server blades
   Supported with the Voltaire OFED Linux driver stack and WinOF 2.0 on Windows HPC Server 2008 (ProLiant blades only)
 HP 4X DDR IB Mezzanine HCA
   Designed as a single-port 4X DDR InfiniBand PCI Express Mezzanine card
   Supported on ProLiant and Integrity server blades
HP IB QDR/EN 10Gb 2P 544M Mezzanine Adaptor
The HP IB QDR/EN 10Gb 2P 544M Mezzanine Adaptor delivers low latency and up to 40Gbps (QDR) bandwidth (dual port) for performance-driven server and storage clustering applications in High-Performance Computing (HPC) and enterprise data centers. Key features include:
 Based on the Mellanox ConnectX-3 IB technology
 Capable of dual 10Gb Ethernet ports when connected to a supported Ethernet switch in a c7000 enclosure
 Designed for PCI Express 3.0 x8 connectors on BladeSystem Gen8 server blades
 Can be used in either mezzanine slot of the server blade
Learning check
1. Advanced zoning and frame filtering are standard software with the Brocade 8Gb SAN Switch.
    True
    False
2. Which switch supports Dynamic Ports on Demand?
   a. Cisco MDS 9124e Fabric Switch
   b. Brocade 8Gb SAN Switch
   c. HP 4Gb VC-FC Module
   d. HP 4Gb FC Pass-Thru Module
3. The QLogic QMH2462 4Gb Fibre Channel HBA fits all ProLiant server blades.
    True
    False
4. Which feature enables the Brocade 8Gb SAN Switch to facilitate interoperability with other SAN fabrics and eliminate domain considerations while improving SAN scalability?
   a. ISL Trunking
   b. Frame filtering
   c. Access Gateway mode
   d. Fabric QoS
5. Which HBA provides support for up to 255 VPorts, which improves server consolidation capabilities and asset utilization?
   a. QLogic QMH2562 8Gb Fibre Channel HBA
   b. QLogic QMH2462 4Gb Fibre Channel HBA
   c. Emulex LPe1205-HP 8Gb/s Fibre Channel HBA
   d. Emulex LPe1105-HP 4Gb Fibre Channel HBA
6. List the tasks that HP Virtual SAS Manager enables.
…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
Configuring Ethernet Connectivity Options
Module 7
Objectives
After completing this module, you should be able to explain how to configure:
 An HP GbE2c Layer 2/3 Ethernet Blade Switch
 A Cisco Catalyst Blade Switch 3020 or 3120
 An HP 1:10Gb Ethernet BL-c Switch
 An HP 6120XG or 6120G/XG switch
Configuring an HP GbE2c Layer 2/3 Ethernet Blade Switch
When planning the switch configuration, secure access to the management interface by:
 Creating users with various access levels
 Enabling or disabling access to management interfaces to fit the security policy
 Changing default Simple Network Management Protocol (SNMP) community strings for read-only and read-write access
User, operator, and administrator access rights
The user interface provides multilevel password-protected user accounts. To enable better switch management and user accountability, three levels or classes of user access have been implemented on the switch. Levels of access to the command line interface (CLI), web management functions, and screens increase as needed to perform various switch management tasks. Access classes are:
 User
 Operator
 Administrator
Access to switch functions is controlled through the use of unique user names and passwords. After you are connected to the switch through the local console, telnet, or Secure Shell (SSH) encryption, you are prompted to enter a password.
Note
HP recommends that you change default switch passwords after the initial configuration and as regularly as required under your network security policies. For more information, see “Setting Passwords” in the GbE2c Ethernet Blade Switch for HP c-Class Command Reference Guide available from:
http://www.docs.hp.com
Access-level defaults
The default user names and passwords for each access level are:
 User
   User interaction with the switch is completely passive. The user has no direct responsibility for switch management. He or she can view all switch status information and statistics, but cannot make any configuration changes to the switch.
   The password is user.
 Operator
   Operators can only make temporary changes on the switch. These changes will be lost when the switch is rebooted or reset. Operators have access to the switch management features used for daily switch operations. Because any changes an operator makes are undone by a reset of the switch, operators cannot severely impact switch operation.
   By default, the operator account is disabled and has no password.
 Admin
   Only administrators can make permanent changes to the switch configuration; these changes are persistent across a reboot or reset of the switch. The administrator has complete access to all menus, information, and configuration commands on the switch, including the ability to change both the user and administrator passwords. Because administrators can also make temporary (operator-level) changes, they must be aware of the distinctions between temporary and permanent changes.
   The password is admin.
Accessing the GbE2c Switch
You can access the GbE2c switch remotely through the Ethernet ports or locally through the DB-9 management serial port.
To access the GbE2c switch locally:
1. Connect the switch DB-9 serial connector by using a null modem serial cable to a local client device (such as a laptop computer) with VT100 terminal emulation software.
2. Open a terminal emulation session with the following settings:
    9600 baud rate, 8 data bits
    No parity, 1 stop bit
    No flow control
To access the GbE2c switch remotely:
1. By default, the switch is set up to obtain its IP address from a Bootstrap Protocol (BOOTP) server existing on the attached network. From the BOOTP server, use the interconnect media access control (MAC) address to obtain the switch IP address.
!  Important
   By default, BOOTP is enabled at the factory. To establish a static IP address, you must disable BOOTP.
2. From a computer connected to the same network, use the IP address to access the switch by using a web browser or telnet application, which enables you to access the switch Browser-Based Interface (BBI) or CLI.
Note
The GbE2c switch can obtain its IP address from either BOOTP or DHCP.
To access the switch remotely, you must set an IP address in one of the following ways:
 Management port access — This is the most direct way to access the switch.
 Using a Dynamic Host Configuration Protocol (DHCP) server — When DHCP is enabled, the management interface (interface 256) requests its IP address from a DHCP server. The default value for the /cfg/sys/dhcp command is enabled.
 Manual configuration — If the network does not support DHCP, you must configure the management interface (interface 256) with an IP address.
Logging in through the Onboard Administrator
The HP Onboard Administrator provides a single point of contact for performing basic management tasks on server blades and switches within the enclosure. It enables you to perform initial configuration steps for the enclosure as well as run-time management and enclosure component configuration.
To log in through the Onboard Administrator, follow these steps (an example session follows the note at the end of this procedure):
1. Locate the Ethernet port on the Onboard Administrator module.
2. Connect the Ethernet cable to the Onboard Administrator module and the workstation/server or to the network containing the workstation.
!  Important
   Verify that the interconnect is not being modified from any other connections during the remaining steps.
3. Open a telnet connection by using the IP address set earlier. When the login prompt displays, the connection locates the switch in the network.
4. Enter the password. The default password is admin. If passwords have not been changed from the default value, you are prompted to change them. You can do one of the following:
    Enter new system passwords.
    Press Ctrl+C to bypass the password prompts.
5. Verify that the login was successful. A successful login displays the switch name and user ID to which you are connected.
Note
You can create up to two simultaneous admin sessions and four user sessions.
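As an illustration only, a login from a management workstation might look similar to the following. The IP address is an example value, and the exact password prompt wording varies by switch firmware:

   telnet 192.168.1.100
   Enter password: admin

After a successful login, the switch name and the connected user ID are displayed, as described in step 5.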
Configuring redundant switches
Each GbE2c switch has five external Ethernet ports and 16 internal Gigabit Ethernet ports providing connectivity to the server blades within the enclosure.
In a dual-switch configuration, switches in shared interconnect bays provide switch redundancy by using dedicated crosslinks (ports 17 and 18). In addition, the signal midplane has redundant paths to the network ports on the server blades.
Each pair of switches consolidates up to thirty-two 10/100/1000 Ethernet signals into one to eight gigabit ports (on the back of the system). This design eliminates up to 31 network cables from the back of the server blade enclosure.
Note
On a heavily used system, using a single uplink port for 32 Ethernet signals can cause a traffic bottleneck. For optimum performance, HP recommends that at least one uplink port per switch be used.
Redundant crosslinks
The two switches are connected through redundant 10/100/1000 crosslinks. These two crosslinks provide an aggregate throughput of 2Gb/s for traffic between the switches.
Redundant paths to server bays
Redundant Ethernet signals from each server blade are routed through the enclosure backplane to separate switches within the enclosure. Two Ethernet signals are routed to Switch 1 and two are routed to Switch 2. This configuration provides redundant paths to each server bay; however, specific switch port to server mapping varies, depending on which type of server blade is installed.
Manually configuring a GbE2c Switch
You can configure a GbE2c switch manually using a CLI, a BBI, or an SNMP interface. For more information on how to use these management interfaces to configure the switch, see the GbE2c Ethernet Blade Switch for HP c-Class Command Reference Guide available from:
http://www.hp.com/go/bladesystem/documentation
After a switch is configured, you can back up the configuration to a TFTP server as a text file. You can then download the backup configuration file from the TFTP server to restore the switch back to the original configuration. This restoration could be necessary if:
 The switch configuration becomes corrupted during operation.
 The switch must be replaced because of a hardware failure.
Configuring multiple GbE2c Switches
You can configure multiple switches by using scripted CLI commands through telnet or by downloading a configuration file using a TFTP server.
 Using scripted CLI commands through telnet — The switch CLI enables you to execute customized configuration scripts on multiple switches. You can tailor a configuration script for one of the multiple switches and then deploy that configuration to other switches from a central deployment server.
 Using a configuration file — If you want the base configuration of multiple switches in your network to be the same, you can manually configure one switch, upload the configuration to a TFTP server, and use that configuration as a base configuration template file.
Switch IP addresses are acquired by using BOOTP or DHCP; therefore, each switch has a unique IP address. You can access each switch remotely from a central deployment server and download an individual switch configuration to meet specific network requirements.
Configuring a Cisco Catalyst Blade Switch 3020 or 3120
Cisco Catalyst Blade Switches 3020 and 3120 share the same installation procedures.
Obtaining an IP address
IP addresses can be assigned to two of the switch interfaces:
 The fa0 Ethernet interface — This Layer 3 Ethernet interface is connected to the Onboard Administrator. It is used only for switch management traffic, not for data traffic.
 The VLAN 1 interface — You can manage the switch module from any of its external ports through virtual LAN (VLAN) 1.
Obtaining an IP address for the fa0 interface through the Onboard Administrator
For the switch module to obtain an IP address for the fa0 interface through the Onboard Administrator, these conditions must be met:
 The basic configuration of the Onboard Administrator must be completed, and you must have the user name and password for the Onboard Administrator.
 A DHCP server must be configured on the network segment on which the enclosure resides or the Enclosure-Based IP Addressing (EBIPA) feature must be enabled for the appropriate interconnect bay.
 The c7000 enclosure must be powered on and the Onboard Administrator must be connected to the network.
Note
If the switch receives an IP address through the Onboard Administrator, the VLAN 1 IP address is not assigned.
After you install the switch, it powers on and begins the power-on self-test (POST). You can verify that the POST has completed by confirming that the system and status LEDs remain green.
!  Important
   If the switch module fails the POST, the system LED turns amber. POST errors are usually fatal. Call Cisco Systems immediately if your switch module fails POST.
After you install the switch module in the interconnect bay, the switch automatically obtains an IP address for its fa0 interface through the Onboard Administrator.
Using a console session to assign a VLAN 1 IP address
You must assign the IP address before you can manage the switch. To assign an IP address to the switch, you need the following information:
 IP address
 Subnet mask (IP netmask)
 Default gateway IP address
 Names of the SNMP read and write community strings
 Host name, system contact, and system location
After completing the initial setup, you can configure these optional parameters through the Cisco Express Setup program:
 Local access password
 Telnet access password
 SNMP read and write community strings (if you plan to use a network management program such as CiscoWorks)
When you first set up the switch module, you can use Express Setup to enter the initial IP information. Doing this enables the switch to connect to local routers and the Internet. You can then access the switch through the IP address for further configuration.
Cisco Express Setup
Express Setup enables you to set basic configuration parameters such as the IP address, default gateway, host name, and the system, enable mode (configuration), and telnet passwords. Cisco recommends using TCP/IP to manage your switch. You can use Express Setup to configure your switch to be managed through TCP/IP.
To run Express Setup, you need a PC and an Ethernet (Cat 5) straight-through cable.
Assigning the VLAN 1 IP address
To assign the VLAN 1 IP address:
Mode button location on Cisco switch
1. Verify that no devices are connected to the switch, because during Express Setup, the switch listens for a DHCP server.
2. If your laptop has a static IP address, before you begin, change your laptop settings to temporarily use DHCP.
!  Important
   You must initiate this process immediately after installing the switch module in the server blade. If you miss the opportunity to assign the IP address this way, you will need to remove and then reinstall the switch module.
3. When the switch module powers on, it begins the POST. You can verify that POST has completed by confirming that the system and status LEDs remain green.
4. Press and hold the Mode button until the four LEDs next to the Mode button turn green. This takes approximately three seconds. Release the Mode button.
Note
If you have held the Mode button for more than two minutes and the LEDs have not turned green, obtaining the VLAN 1 IP address through Express Setup is no longer possible and you must remove and then reinstall the switch module. If the LEDs next to the Mode button begin to blink after you press the button, release it. Blinking LEDs mean that the switch module has already been configured and cannot go into Express Setup mode.
5. Connect a CAT-5 Ethernet cable to any Ethernet port on the switch module front panel. Connect the other end to the Ethernet port on the laptop or workstation.
Caution
Do not connect the switch module to any device other than the laptop or workstation being used to configure it.
6. Verify that the port status LEDs on both connected Ethernet ports are green.
7. After the port LEDs turn green, wait at least 30 seconds and launch a web browser on your laptop or workstation.
8. Enter the IP address 10.0.0.1 (or 10.0.1.3 or 10.0.2.3, depending on the firmware version).
9. Continue the configuration by completing the Express Setup fields.
Obtaining an IP address for the fa0 interface through the Onboard Administrator
For the switch to obtain an IP address for the fa0 interface through the Onboard Administrator, the BladeSystem enclosure must be powered on and connected to the network. Then follow these steps:
1. Complete the basic configuration of the Onboard Administrator and have the user name and password for the Onboard Administrator.
2. A DHCP server must be configured on the network segment on which the server blade resides. The Onboard Administrator must be configured to run as a DHCP server, or the EBIPA feature must be enabled for the appropriate interconnect bay.
3. Install the switch in the interconnect bay. After approximately two minutes, the switch automatically obtains an IP address for its fa0 interface through the Onboard Administrator.
4. After you have installed the switch, it powers on. When it powers on, the switch begins the POST, which might take several minutes. Verify that the POST has completed by confirming that the system and status LEDs remain green. If the switch fails the POST, the system LED turns amber. POST errors are usually fatal. Call Cisco Systems immediately if the switch fails the POST.
5. Wait approximately two minutes for the switch to get the software image from its flash memory and begin the autoinstallation.
6. Using a PC, access the Onboard Administrator through a browser window.
7. Open the Interconnect Bay Summary window, where you can find the assigned IP address of the switch fa0 interface in the Management URL column.
8. Click the IP address hyperlink for the switch from the Management URL column to open a new browser window. The Device Manager window for the switch displays.
9. On the left side of the Device Manager GUI, click Configuration > Express Setup. The Express Setup home page displays.
Configuring an HP 1:10Gb Ethernet BL-c Switch
The HP 1:10Gb Ethernet BL-c Switch is a single-wide switch with 10Gb uplinks.
Planning the 1:10Gb Ethernet BL-c switch configuration
1:10Gb Ethernet BL-c Switch
HP recommends that you plan the configuration before you actually configure the switch. When you develop your plan, consider your default settings and assess the particular server environment to determine any requirements.
Default settings are:
 All downlink and uplink ports enabled
 A default VLAN assigned to each port
 The VLAN ID (VID) set to 1
This default configuration enables you to connect the server blade enclosure to the network by using a single uplink cable from any external Ethernet connector.
Switch port mapping
The 1:10Gb Ethernet BL-c Switch does not determine NIC enumeration and mapping of NIC interfaces to switch ports. NIC numbering is determined by:
 Server type
 Operating system
 NICs enabled on the server
Note
Port 18 is reserved for the connection to the Onboard Administrator module. The Onboard Administrator performs the following functions:
 Enables you to perform future firmware upgrades
 Controls all port enabling by matching ports between the server and the interconnect bay
 Verifies that the server NIC option matches the switch bay that is selected and enables all ports for the NICs installed before power up
For detailed port mapping information, see the HP BladeSystem enclosure installation poster or the HP BladeSystem enclosure setup and installation guide on the HP website:
http://www.hp.com/go/bladesystem/documentation
Accessing the 1:10Gb Ethernet BL-c switch
After installing the switch, you can access it remotely or locally. The switch is accessed remotely using the Ethernet ports or locally using the DB-9 management serial port.
To access the switch remotely:
1. Assign an IP address. By default, the switch obtains its IP address from a BOOTP server on the attached network.
2. From the BOOTP server, use the switch MAC address to obtain the switch IP address.
3. Use the IP address to access the switch BBI or CLI, from a computer connected to the same network, using a web browser or telnet application.
To access the switch locally:
1. Connect the switch DB-9 serial connector, using a null modem serial cable to a local client device with VT100 terminal emulation software.
2. Open a VT100 terminal emulation session with these settings: 9600 baud rate, eight data bits, no parity, one stop bit, and no flow control.
User, operator, and administrator access rights
To enable better switch management and user accountability, three levels or classes of user access have been implemented on the switch. Levels of access to the CLI, web management functions, and screens increase as needed to perform various switch management tasks. Conceptually, the access classes are defined as:
 User interaction with the switch is completely passive. Nothing can be changed on the switch. Users can display information that has no security or privacy implications, such as switch statistics and current operational state information.
 Operators can only effect temporary changes on the switch. These changes will be lost when the switch is rebooted or reset. Operators have access to the switch management features used for daily switch operations. Because any changes an operator makes are undone by a reset of the switch, operators cannot severely impact switch operation.
 Administrators are the only ones that can make permanent changes to the switch configuration, which are changes that are persistent across a reboot or reset of the switch. Administrators can access switch functions to configure and troubleshoot problems on the switch. Because administrators can also make temporary (operator-level) changes, they must be aware of the interactions between temporary and permanent changes.
Access to switch functions is controlled through the use of unique user names and passwords. After you connect to the switch through the local console, telnet, or SSH, a password prompt appears. The default user name and password for each access level are listed in the preceding table.
Manually configuring a switch
The switch is configured manually by using a CLI, BBI, or an SNMP interface. After a switch is configured, you have to back up the configuration as a text file to a TFTP server. The backup configuration file is then downloaded from the TFTP server to restore the switch back to the original configuration. This restoration is necessary if one of these conditions applies:
 The switch configuration becomes corrupted during operation.
 The switch must be replaced because of a hardware failure.
Note
See the HP 1:10Gb Ethernet BL-c Switch Command Reference Guide available on the HP website for more information on using these management interfaces to configure the switch.
Configuring multiple switches
Configure multiple switches by using scripted CLI commands through telnet or by downloading a configuration file by using a TFTP server.
Using scripted CLI commands through telnet
The CLI, provided with the switch, executes customized configuration scripts on multiple switches. A configuration script is tailored to one of the multiple switches, and then that configuration can be deployed to other switches from a central deployment server.
Using a configuration file
If you are planning for the base configuration of multiple switches in a network to be the same, manually configure one switch, upload the configuration to a TFTP server, and use that configuration as a base configuration template file. Switch IP addresses are acquired by default using BOOTP; therefore, each switch has a unique IP address. Each switch is remotely accessed from a central deployment server and an individual switch configuration is downloaded to meet specific network requirements.
Note
See the HP 1:10Gb Ethernet BL-c Switch Command Reference Guide on the HP website for additional information on using a TFTP server to upload and download configuration files.
Configuring an HP 6120XG or 6120G/XG Switch
HP 6120XG and 6120G/XG switches are interconnect modules designed for the BladeSystem infrastructure.
Switch IP configuration
Configuring the switch with an IP address expands your ability to manage the switch and use its features. By default, the switch is configured to automatically receive IP addressing on the default VLAN from a DHCP/BOOTP server that has been configured correctly with information to support the switch. However, if you are not using a DHCP/BOOTP server to configure IP addressing, use the menu interface or the CLI to manually configure the initial IP values. After you have network access to a device, you can use the web browser interface to modify the initial IP configuration if needed.
The switch IP address can be assigned by:
 Using the CLI Manager-level prompt
 Using a web browser interface
Using the CLI Manager-level prompt
If you just want to give the switch an IP address so that it can communicate on your network, or if you are not using VLANs, use the Switch Setup screen to quickly configure IP addressing. To do so, either:
 Enter setup at the CLI Manager-level prompt:
   ProCurve# setup
 Select 8. Run Setup in the Main Menu of the menu interface.
Configuring the IP address by using a web browser interface
You can only use the web browser interface to access IP addressing if the switch already has an IP address that is reachable through your network.
1. Click the Configuration tab.
2. Click IP Configuration.
3. If you need further information on using the web browser interface, click [?] to access the web-based help available for the switch.
Accessing a blade switch from the Onboard Administrator
These instructions assume that you have already set up the BladeSystem Onboard Administrator by using the First Time Setup Wizard.
See the HP BladeSystem Onboard Administrator User Guide for details on OA setup. For information on OA command line interface (CLI) commands, see the HP BladeSystem Onboard Administrator Command Line Interface User Guide. Both guides are available at:
http://www.hp.com/go/bladesystem/documentation
To connect to the CLI interface through the Onboard Administrator (an example session follows these steps):
1. Connect a workstation or laptop computer to the serial port on the HP BladeSystem c3000 or c7000 OA module using a null-modem serial cable (RS-232).
2. Using a terminal program such as HyperTerminal or TeraTerm, open a connection to the serial port using connection parameters of 9600, 8, N, 1.
3. Press Enter. OA prompts you for administrator login credentials.
4. Enter a valid user name and password. The OA system prompt displays.
5. Enter the command:
   connect interconnect <bay_number>
   where <bay_number> is the number of the bay containing the blade switch. OA connects you to the initial screen of the blade switch CLI.
6. Press Enter. The blade switch CLI prompt displays. You can now enter blade switch CLI commands.
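A session of this kind might look broadly like the following sketch; bay 1 is an example value, and the OA login and password exchange is omitted because its exact wording varies with OA firmware:

   OA> connect interconnect 1

Pressing Enter then displays the blade switch CLI prompt, as described in step 6.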
Accessing a blade switch through the mini-USB interface (out of band)
The blade switch console supports out-of-band access through direct connection to the mini-USB console port of a Windows computer. To communicate with the blade switch:
1. Download the USB driver to the PC. To find the driver:
2. Go to: http://www.hp.com/#Support
3. Click the Download drivers and software radio button.
4. In the text box, enter 6120XG and then click Go.
5. Click the link for the correct operating system.
6. Download the Utilities package.
7. Install the driver by double-clicking the HPProCurve_USBConsole.msi file. Connect the small end of the supplied USB console cable to the mini-USB port.
8. Connect the standard end of the supplied USB console cable to a workstation or laptop computer. The computer will recognize the presence of a new USB device and will load the driver for it.
9. Using a terminal program such as HyperTerminal or TeraTerm, open a connection to the USB port. (By default, this port will appear as COM4.)
10. Press Enter twice. The blade switch CLI prompt displays. You can now enter blade switch commands.
Accessing a blade switch from the Ethernet interface (in band)
The blade switch console supports in-band access through the data ports using telnet from a PC or UNIX computer on the network, and a VT100 terminal emulator. This method requires the blade switch to have an IP address, subnet mask, and default gateway. The IP address, subnet mask, and default gateway can be supplied by a Dynamic Host Configuration Protocol (DHCP) or Bootp server, or you can manually configure them using the CLI. By default, the blade switch gets its IP address through DHCP or Bootp; see the next section for instructions on manually configuring a static IP address.
To communicate with a blade switch that has an IP address, subnet mask, and default gateway (an example follows these steps):
1. Use a ping command to verify network connectivity between the blade switch and your workstation or laptop computer.
2. Using a terminal program such as HyperTerminal or TeraTerm, open a connection using the IP address, telnet protocol, and port 23 of the blade switch.
3. Press Enter twice. The blade switch CLI prompt displays. You can now issue blade switch commands.
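As a minimal sketch, assuming the blade switch has already been assigned the example address 10.1.1.25, the connectivity check and in-band connection from a command prompt would be:

   ping 10.1.1.25
   telnet 10.1.1.25

The telnet client connects to TCP port 23 by default, which is the port listed in step 2.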
Assigning an IP address to a blade switch
By default, the blade switch tries to acquire an IP address from a DHCP or Bootp server. The IP address for the blade switch can be configured using the CLI, through the Onboard Administrator, or through a mini-USB port on the blade switch.
To set a static IP address manually (a consolidated example session follows these steps):
1. From the operator’s CLI prompt (>) on the blade switch, enter:
   enable
   Supply a user name and password if you are prompted to do so.
2. From the manager’s CLI prompt (#) on the blade switch, enter:
   config
3. Specify the VLAN of the port that attaches to the network. By default, all ports are in VLAN 1.
   vlan <vlan_id>
4. Enter an IP address and subnet mask for the switch. Both the IP address and subnet mask are in the x.x.x.x format.
   ip address <ip_address> <subnet_mask>
5. Enter a default gateway IP address in the x.x.x.x format.
   ip default-gateway <ip_address>
6. Return to the operator or manager prompt by using a series of exit commands.
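Putting the steps together, a minimal session might look similar to the following. The prompts are abbreviated, and the VLAN, addresses, and subnet mask are example values only; substitute values appropriate for your network:

   ProCurve> enable
   ProCurve# config
   ProCurve(config)# vlan 1
   ProCurve(vlan-1)# ip address 10.1.1.20 255.255.255.0
   ProCurve(vlan-1)# exit
   ProCurve(config)# ip default-gateway 10.1.1.1
   ProCurve(config)# exit
   ProCurve# exit

Each command corresponds to steps 1 through 6 above; only the prompt text on your switch may differ.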
IP addressing with multiple VLANs
In the factory-default configuration, the switch has one permanent default VLAN (named DEFAULT_VLAN) that includes all ports on the switch. Thus, when only the default VLAN exists in the switch, if you assign an IP address and subnet mask to the switch, you are actually assigning the IP addressing to the DEFAULT_VLAN.
 In the factory-default configuration, the default VLAN (named DEFAULT_VLAN) is the primary VLAN of the switch. The switch uses the primary VLAN for learning the default gateway address. The switch can also learn other settings from a DHCP or BOOTP server, such as (packet) Time-To-Live (TTL) and TimeP or SNMP settings.
 If multiple VLANs are configured, then each VLAN can have its own IP address. This is because each VLAN operates as a separate broadcast domain and requires a unique IP address and subnet mask. A default gateway (IP) address for the switch is optional, but recommended.
 The IP addressing used in the switch should be compatible with your network. That is, the IP address must be unique and the subnet mask must be appropriate for your IP network.
 If you change the IP address through either telnet access or the web browser interface, the connection to the switch will be lost. You can reconnect by either restarting telnet with the new IP address or entering the new address as the URL in your web browser.
Note
Other VLANs can also use DHCP or BOOTP to acquire IP addressing. However, the gateway, TTL, and TimeP or SNTP values of the switch, which are applied globally and not per-VLAN, will be acquired through the primary VLAN only, unless manually set by using the CLI, Menu, or web browser interface. If these parameters are manually set, they will not be overwritten by alternate values received from a DHCP or BOOTP server.
IP Preserve: Retaining VLAN-1 IP addressing across configuration file downloads
IP Preserve enables you to copy a configuration file to multiple switches while retaining the individual IP address and subnet mask on VLAN 1 in each switch and the gateway IP address assigned to the switch. This enables you to distribute the same configuration file to multiple switches without overwriting their individual IP addresses.
Operating rules for IP Preserve
When ip preserve is entered as the last line in a configuration file stored on a TFTP server (a schematic example follows this list), the following conditions are true:
 If the current IP address for VLAN 1 was not configured by DHCP/BOOTP, IP Preserve retains the current IP address, subnet mask, and IP gateway address of the switch when the switch downloads the file and reboots. The switch adopts all other configuration parameters in the configuration file into the startup-config file.
 If the current IP addressing for VLAN 1 of the switch is from a DHCP server, IP Preserve is suspended. In this case, whatever IP addressing the configuration file specifies is implemented when the switch downloads the file and reboots. If the file includes DHCP/BOOTP as the IP addressing source for VLAN 1, the switch will configure itself accordingly and will use DHCP/BOOTP. If instead, the file includes a dedicated IP address and subnet mask for VLAN 1 and a specific gateway IP address, the switch will implement these settings in the startup-config file.
 The ip preserve statement does not appear in the show config listings. To verify IP Preserve in a configuration file, open the file in a text editor and view the last line.
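Schematically, the configuration file stored on the TFTP server simply ends with the ip preserve statement; the ellipsis stands for the rest of the shared configuration being distributed:

   ...
   ip preserve

Because the statement must be the last line, append it only after the shared configuration has been finalized.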
Learning check
1. On the GbE2c Layer 2/3 Ethernet Blade Switch, the operator account is disabled by default and has no password.
    True
    False
2. List the conditions that must be met for the switch module to obtain an IP address for the fa0 interface through the Onboard Administrator.
.................................................................................................................
.................................................................................................................
.................................................................................................................
.................................................................................................................
3. List the available management interfaces for the HP 6120XG and 6120G/XG switches.
.................................................................................................................
.................................................................................................................
.................................................................................................................
.................................................................................................................
.................................................................................................................
4. List three types of privileges available on the 1:10Gb Ethernet BL-c switch.
.................................................................................................................
.................................................................................................................
.................................................................................................................
Configuring Storage Connectivity Options
Module 8
Objectives
After completing this module, you should be able to explain how to configure the following switches:
 Brocade 8Gb SAN Switch for HP BladeSystem
 Cisco MDS 9124e Fabric Switch for HP BladeSystem
 HP 3Gb SAS BL Switch for HP BladeSystem
Configuring a Brocade 8Gb SAN switch
The Brocade 8Gb SAN Switch for HP BladeSystem is a Fibre Channel switch that supports link speeds of up to 8 Gb/s. The 8Gb SAN switch can operate in a fabric containing multiple switches or as the only switch in a fabric.
Setting the switch Ethernet IP address
To set the Ethernet IP address on the 8Gb SAN switch:
 Verify that the enclosure is powered on
 Verify that the switch is installed
 Choose one of the following methods to set the Ethernet IP address:
   Using Enclosure Bay IP Addressing (EBIPA)
   Using the external Dynamic Host Configuration Protocol (DHCP)
   Setting the IP address manually
Using EBIPA
To set the Ethernet IP address using EBIPA:
1. Open a web browser and connect to the active HP Onboard Administrator.
2. Enable EBIPA for the corresponding interconnect bay.
3. Click Apply to restart the switch.
4. Verify the IP address using a Telnet or Secure Shell (SSH) encryption login to the switch, or by selecting the switch in the Rack Overview window of the Onboard Administrator GUI.
Using external DHCP
To set the Ethernet IP address using the external DHCP:
1. Connect to the active Onboard Administrator through a web browser.
2. Document the DHCP-assigned address by selecting the switch from the Rack Overview window of the Onboard Administrator GUI.
3. Verify the IP address using a telnet or SSH login to the switch, or select the switch in the Rack Overview window.
Setting the IP address manually
To set the IP address manually (a condensed example session follows these steps):
1. Obtain the following items to set the IP address with a serial connection:
    Computer with a terminal application (such as HyperTerminal in a Microsoft Windows environment or TERM in a UNIX environment)
    Null modem serial cable
2. Replace the default IP address (if present) and related information with the information provided by your network administrator. By default, the IP address is set to 10.77.77.77 for switches with revision levels earlier than 0C.
3. Verify that the enclosure is powered on.
4. Identify the active Onboard Administrator in the BladeSystem enclosure.
5. Connect a null modem serial cable from your computer to the serial port of the active Onboard Administrator.
6. Configure the terminal application as follows:
    In a Windows environment, enter:
      Bits per second — 9600
      Databits — 8
      Parity — None
      Stop bits — 1
      Flow control — None
    In a UNIX environment, enter: tip /dev/ttyb -9600
7. Log in to the Onboard Administrator.
8. Press Enter to display the switch console.
9. Identify the interconnect bay number where the switch is installed. At the Onboard Administrator command line, enter:
   connect interconnect x
   where x is the interconnect bay slot where the switch is installed.
   The Onboard Administrator connects its serial line to the switch in the specified interconnect bay. A prompt displays, indicating that the escape character for returning to the Onboard Administrator is Ctrl __ (underscore).
10. Enter the following login credentials:
    User: admin
    Password: password
    Alternatively, follow the onscreen prompts to change your password.
11. At the command line, enter: ipaddrset
12. Enter the remaining IP addressing information, as prompted.
13. Optionally, enter ipaddrshow at the command prompt to verify that the IP address is set correctly.
14. Record the IP addressing information, and store it in a safe place.
15. Enter Exit, and press Enter to log out of the serial console.
16. Disconnect the serial cable.
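Condensed, the console exchange for steps 9 through 13 looks broadly like the following; bay 3 is an example value, the switch prompt shown is typical of Brocade Fabric OS and may differ on your switch, and ipaddrset prompts interactively for the Ethernet IP address, subnet mask, and gateway:

   connect interconnect 3
   login: admin
   password: ********
   switch:admin> ipaddrset
   (answer the prompts with the values supplied by your network administrator)
   switch:admin> ipaddrshow

The login prompt wording is illustrative; use the credentials from step 10 unless they have been changed.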
Configuring the 8Gb SAN switch
The 8Gb SAN switch must be configured to ensure correct operation within a network and fabric.
Note
For instructions about configuring the switch to operate in a fabric containing switches from other vendors, refer to the HP SAN Design Reference Guide available from: http://www.hp.com/go/sandesignguide
Items required for configuration
The following items are required for configuring and connecting the 8Gb SAN switch for use in a network and fabric:
 8Gb SAN switch installed in the enclosure
 IP address and corresponding subnet mask and gateway address recorded during the setting of the IP address
 Ethernet cable
 Small form-factor pluggable (SFP) transceivers and compatible optical cables, as required
 Access to an FTP server for backing up the switch configuration (optional)
Setting the date and time
The date and time are used for logging events. The operation of the 8Gb SAN switch does not depend on the date and time; a switch with an incorrect date and time value will function properly. To set the date and time, use the command line interface (CLI).
Verifying installed licenses
To determine the type of licensing included with your 8Gb SAN switch, enter licenseshow at the command prompt.
Modifying the Fibre Channel domain ID (optional)
If desired, you can modify the Fibre Channel domain ID. The default Fibre Channel domain ID is domain 1. If the 8Gb SAN switch is not powered on until after it is connected to the fabric, and the default Fibre Channel domain ID is already in use, the domain ID for the new switch is automatically reset to a unique value. If the switch is connected to the fabric after it has been powered on and the default domain ID is already in use, the fabric segments.
Enter fabricshow to determine the domain IDs that are currently in use. The maximum number of domains with which the 8Gb SAN switch communicates is determined by the fabric license of the switch.
Disabling and enabling a switch
By default, the switch is enabled after power on and after the diagnostics and switch initialization routines complete. You can disable and re-enable the switch as necessary.
Using DPOD
Dynamic Ports On Demand (DPOD) functionality does not require a predefined assignment of ports. Port assignment is determined by the total number of ports in use as well as the number of purchased ports.
In summary, the DPOD feature simplifies port management by:
 Automatically detecting server ports or cabled ports connected through a host bus adapter (HBA)
 Automatically enabling ports
 Automatically assigning port licenses
To initiate DPOD, use the licensePort command.
For the 8Gb SAN Switch, DPOD works only if the server blade is installed with an HBA present. A server blade that does not have a functioning HBA will not be treated as an active link for the purpose of initial DPOD port assignment.
Backing up the configuration
To back up the switch configuration to an FTP server, enter configupload and follow the prompts. The configupload command copies the switch configuration to the server, making it available for downloading to a replacement switch, if necessary.
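As a sketch, the license check, fabric check, and backup described in this section can be run in a single switch session. The switch:admin> prompt is typical of Brocade Fabric OS and may differ on your switch, and configupload prompts interactively for the FTP server address, user name, path, and password:

   switch:admin> licenseshow
   switch:admin> fabricshow
   switch:admin> configupload
   (follow the prompts to supply the FTP server details)

All three commands are the Fabric OS commands referenced in this section.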
Reset button
Reset button location
The Reset button on the Brocade SAN switches is located to the left of the status LEDs. It is a small, recessed micro switch that is accessed by inserting a pin or similarly sized object in the small hole to push the button.
The Reset button enables you to reboot the switch when the switch is not responding or if you have forgotten the password.
To reboot the switch, press the Reset button for up to five seconds.
Note
The Reset button does not return the switch to factory-default settings.
Management tools
The management tools built into the 8Gb SAN switch can be used to monitor fabric topology, port status, physical status, and other information used for performance analysis and system debugging. When running IP over Fibre Channel, these management tools must be run on both the Fibre Channel host and the switch, and they must be supported by the Fibre Channel host driver.
Configuring a Cisco MDS 9124e Fabric Switch
The Cisco MDS 9124e Fabric Switch for HP BladeSystem is a Fibre Channel switch that supports link speeds of up to 4Gb. The Cisco MDS 9124e Fabric Switch can operate in a fabric containing multiple switches or as the only switch in a fabric.
Setting the IP address
To set the IP address by means of a serial connection, you need:
 A computer with a terminal application (such as HyperTerminal in a Windows environment or TERM in a UNIX environment)
 A null modem serial cable
To set the IP address (an abbreviated example session follows these steps):
1. Verify that the enclosure is powered on.
2. Identify the active Onboard Administrator in the BladeSystem enclosure.
3. Connect a null modem serial cable from the computer to the serial port of the active Onboard Administrator.
4. Configure the terminal application as follows:
    In a Windows environment, enter:
      Baud rate: 9600 bits per second
      8 data bits
      None (No parity)
      1 stop bit
      No flow control
    In a UNIX environment, enter: tip /dev/ttyb -9600
5. Log in to the Onboard Administrator.
6. Identify the interconnect bay number where the switch is installed.
7. Enter the following command:
   OA> connect interconnect x
   where x is the interconnect bay number where the switch is installed.
   If you are using the switch for the first time, the switch setup utility starts automatically. If this is not the first time the switch has been used, enter the setup command at the system prompt.
8. Enter a password for the system administrator. (There is no default password.)
9. Follow the instructions in the switch setup utility to configure the IP address, the netmask, and other parameters for the switch.
10. When you have finished with the switch setup utility, log out and disconnect the serial cable.
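For orientation only, the start of such a session might look similar to the following; bay 3 is an example value, the switch# prompt follows the convention used later in this module, and the exact setup-utility prompts vary by switch software release:

   OA> connect interconnect 3
   switch# setup

The setup command is needed only when the setup utility does not start automatically, as noted after step 7; the utility then walks you through the administrator password, IP address, netmask, and other parameters described in steps 8 and 9.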
Configuring the fabric switch
The Cisco MDS 9124e Fabric Switch must be configured to ensure correct operation within a network and fabric.
Items required for configuration
To configure and connect the Cisco MDS 9124e Fabric Switch for use in a network and fabric, you need:
 Switch installed in a BladeSystem enclosure
 IP address and corresponding subnet mask and gateway address
 Ethernet cable
 SFP transceivers and compatible optical cables, as required
 Access to an FTP server for backing up the switch configuration (optional)
Setting the date and time
The date and time are used for logging events. The operation of the Cisco MDS 9124e Fabric Switch does not depend on the date and time; a switch with an incorrect date and time value will function properly. Use the CLI to set the date and time.
Verifying installed licenses
To determine the type of licensing included with the Cisco MDS 9124e Fabric Switch, enter show license usage at the command prompt using the following syntax:
switch# show license usage
Modifying the Fibre Channel domain ID (optional)
If desired, you can modify the Fibre Channel domain ID. If the Cisco MDS 9124e Fabric Switch is not powered on until after it is connected to the fabric and the default Fibre Channel domain ID is already in use, the domain ID for the new switch is automatically reset to a unique value. If the switch is connected to the fabric after it has been powered on and the default domain ID is already in use, the fabric segments.
Recovering the administrator password
You might need to recover the administrator password on the Cisco MDS 9124e switch if the user does not have another user account on the switch with network-administrator privileges. Refer to the Cisco MDS 9000 Family Fabric Manager Configuration Guide and to the Cisco MDS 9000 Family CLI Configuration Guide for detailed instructions.
Fabric switch management tools
Cisco MDS 9124e Fabric Switch management features table
The management tools built in to the Cisco MDS 9124e Fabric Switch can be used to monitor fabric topology, port status, physical status, and other information used for performance analysis and system debugging. When running IP over Fibre Channel, these management tools must be run on both the Fibre Channel host and the switch, and they must be supported by the Fibre Channel host driver.
You can connect a management station to one switch through Ethernet while managing other switches connected to the first switch through Fibre Channel. To do so, set the Fibre Channel gateway address of each of the other switches to be managed to the Fibre Channel IP address of the first switch.
Configuring an HP 3Gb SAS BL Switch
The HP 3Gb SAS BL Switch is a single-wide interconnect module for HP BladeSystem
enclosures. The 3Gb SAS BL Switch is a key component of HP Direct-Connect
External SAS Storage for HP BladeSystem Solutions, with firmware and hardware
capabilities that enable the connection of external storage and tape devices to
BladeSystem enclosures.
on
ly
Configuration rules for the 3Gb/s SAS Switch
The 3Gb/s SAS Switch is only supported in BladeSystem enclosures (c7000 and c3000). You can install the 3Gb SAS BL Switch in up to four interconnect bays in the c7000 and in up to two interconnect bays in the c3000.
Two SAS switches are required in the same BladeSystem enclosure interconnect bay row for redundancy. Single-switch configurations are supported as nonredundant.
 For the c3000 enclosure, the SAS switch can be placed in interconnect bays 3 and 4 only.
 For the c7000 enclosure, the SAS switch can be placed in interconnect bays 3 and 4, 5 and 6, or 7 and 8 only.
For the c3000 enclosure, using mezzanine slot 1 of a server along with enclosure interconnect bay 2 is not supported. Use the 3Gb/s SAS BL Switch Virtual SAS Manager (VSM) software to configure external SAS storage.
Note
Supported Internet browser versions are Microsoft Internet Explorer 6.0/7.0 and Mozilla Firefox 3.
A half-height server blade must have a P700m controller installed in server mezzanine slot 1 or mezzanine slot 2. A full-height server blade must have a P700m controller installed in server mezzanine slot 1, mezzanine slot 2, or mezzanine slot 3.
A double-density server blade must have a P700m controller installed in server mezzanine slot 2 of each server. Four SAS switches in interconnect bays 5, 6, 7, and 8 must also be included in this configuration.
Configuring the 3Gb SAS BL Switch
Zoning procedures
Key configuration tasks include:
 Enabling or disabling multi-initiator mode.
 Creating the following zone groups:
    Switch-port zone groups — For shared SAS storage enclosures and tape libraries
    Drive-bay zone groups — For zoned SAS storage enclosures
 Capturing the configuration for safekeeping. HP strongly recommends this step, especially in single-domain configurations (available only in the VSM CLI).
 Assigning zone groups to servers.
For firmware versions earlier than 2.0.0.0, no configuration tasks are available. The switch is configured using the VSM application. As shown in the preceding table, configuration (zoning) procedures are the same for shared SAS storage enclosures and tape devices, but differ for zoned SAS storage enclosures.
Accessing the 3Gb SAS BL Switch
The switch is configured and managed through the Onboard Administrator and VSM applications.
To access VSM:
1. Access the Onboard Administrator of the enclosure. (The 3Gb SAS BL Switch is supported on Onboard Administrator 2.40 and later.)
2. In the Onboard Administrator Systems and Devices tree, expand the Interconnect Bays option and select the 3Gb SAS BL Switch.
3. After selecting the SAS switch to manage, click Management Console and wait a few moments for the VSM application to open.
Confirming the firmware version
Firmware version position
Firmware is preinstalled on each switch in the factory, but an updated, alternative, or preferred version might be available. The following types of firmware are available for the 3Gb SAS BL Switch:
 Firmware versions earlier than 2.0.0.0 — Provide single zone support and support connections only to shared SAS storage enclosures such as the HP Storage 2000sa Modular Smart Array (MSA2000sa) and tape devices such as the MSL G3 tape libraries. All server blade bays have access to all storage enclosures connected to the switch. These settings are preconfigured and cannot be altered. To restrict access to the storage, use the features provided with the storage management software.
 Firmware versions 2.0.0.0 and later — Provide multizone support for use with shared SAS storage enclosures such as the MSA2000sa, tape devices such as the MSL G3 tape libraries, and zoned SAS storage enclosures such as the MDS600. Server bays access the storage enclosures through zone groups created in the VSM application embedded in the switch firmware. The zone groups provide user-defined assignment of server bays to one or more desired zone groups.
The currently installed firmware version is displayed in the VSM near the center of the HP Virtual SAS Manager banner. Access the VSM and make note of the installed firmware version on each 3Gb SAS BL Switch. As needed, update firmware on the switches to the desired version. Firmware is installed using the VSM application.
When two 3Gb SAS BL Switches are installed in the same row of an enclosure, ensure that they are running the same firmware version.
Learning check
1. List the tools used to configure the 3Gb SAS switch.
…………………………………………………………………………………………
…………………………………………………………………………………………
2. How do you set the IP address on a Cisco MDS switch?
…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
3. With Dynamic Ports on Demand (DPOD), port assignment is determined by the total number of ports in use as well as the number of purchased ports.
 True
 False
Module 9
Virtual Connect Installation and Configuration
Objectives
After completing this module, you should be able to:
 Describe the HP Virtual Connect portfolio and the basic technology
 Plan and implement a Virtual Connect environment
 Configure a Virtual Connect module
 Manage a Virtual Connect domain
 Explain how to use Virtual Connect modules in a real-world environment
HP Virtual Connect portfolio
Virtual Connect is an industry-standard-based implementation of server-edge I/O virtualization. It puts an abstraction layer between the servers and the external networks so that the LAN and storage area network (SAN) see a pool of servers rather than individual servers.
HP 1/10Gb VC Ethernet
Simplify and make the customer's data center change-ready. The innovative HP 1/10Gb Virtual Connect Ethernet Module for the HP BladeSystem is the simplest, most flexible connection to networks. The Virtual Connect Ethernet Module is a new class of blade interconnect that simplifies server connections by cleanly separating the server enclosure from the LAN, simplifies networks by reducing cables without adding switches to manage, and allows a change in servers in just minutes, not days.
HP 1/10Gb-F VC Ethernet
The HP 1/10Gb Virtual Connect Ethernet Module for the HP BladeSystem is the simplest, most flexible network connection. The Virtual Connect Ethernet Module is a class of blade interconnect that simplifies server connections by cleanly separating the server enclosure from the LAN, simplifies networks by reducing cables without adding switches to manage, and allows changing servers in just minutes, not days.
This model is similar to the HP 1/10Gb VC Ethernet Module, but offers optical uplinks.
HP Virtual Connect Flex-10 10Gb Ethernet
The HP Virtual Connect Flex-10 10Gb Ethernet Module is a class of blade interconnects that simplifies server connections by cleanly separating the server enclosure from the LAN. It simplifies networks by reducing cables without adding switches to manage, allows a server change in just minutes, and tailors network connections and speeds based on application needs.
HP Flex-10 technology significantly reduces infrastructure costs by increasing the number of NICs per connection without adding extra blade I/O modules, and by reducing cabling uplinks to the data center network.
HP Virtual Connect 4Gb Fibre Channel Module
The HP Virtual Connect 4Gb FC Module expands existing Virtual Connect capabilities by allowing up to 128 virtual machines (VMs) running on the same physical server to access separate storage resources. Provisioned storage is associated directly with a specific VM, even if the virtual server is re-allocated within the BladeSystem. Storage management is no longer constrained to a single physical HBA on a server blade. SAN administrators can manage virtual HBAs with the same methods and viewpoint as physical HBAs.
The HP Virtual Connect 4Gb Fibre Channel Module cleanly separates the server enclosure from the SAN, simplifies SAN fabrics by reducing cables without adding switches to the domain, and allows a fast change in servers.
HP Virtual Connect 8Gb 20-port Fibre Channel Module
The HP Virtual Connect 8Gb 20-port FC Module enables up to 128 VMs running on the same physical server to access separate storage resources. Provisioned storage is associated directly with a specific VM, even if the VM is re-allocated within the BladeSystem. Storage management of VMs is no longer limited by the single physical HBA on a server blade: SAN administrators can manage virtual HBAs with the same methods and viewpoint as physical HBAs.
HP Virtual Connect 8Gb 24-port Fibre Channel Module
HP Virtual Connect 8Gb 24-port Fibre Channel Module key features include:
 Two Fibre Channel SFP+ transceivers included with the Virtual Connect Fibre Channel Module
 Eight 2/4/8Gb auto-negotiating Fibre Channel uplinks connected to external SAN switches
 Sixteen 2/4/8Gb auto-negotiating Fibre Channel downlink ports for maximum HBA performance
 HBA aggregation on uplink ports using ANSI T11 standards-based N_Port ID Virtualization (NPIV) technology
 Up to 255 VMs running on the same physical server can access separate storage resources
 Extremely low-latency throughput for switch-like performance
This module is compatible with current releases of ProLiant and Integrity server blades that support the QLogic QMH2462 4Gb FC HBA and QMH2562 8Gb FC HBA, or the Emulex LPe1105-HP 4Gb HBA and LPe1205 8Gb HBA for HP BladeSystem.
HP Virtual Connect FlexFabric modules
FlexFabric connection options
The HP Virtual Connect FlexFabric module is a logical combination of Flex-10 technology with industry-standard VC Fibre Channel technology in a single interconnect module. The VC FlexFabric Module and FlexFabric Adapters converge Ethernet and Fibre Channel traffic within the BladeSystem enclosure and then separate the two at the enclosure edge. Connectivity to both the external Ethernet and native Fibre Channel from the same module allows customers to reduce complexity without disrupting existing LAN and SAN infrastructure, and eliminates the need for separate Fibre Channel modules and adapters.
The internal facing ports (downlinks) on the FlexFabric module can adapt to whatever they are connected to in the servers:
 G6 LOM — 10Gb Ethernet with four FlexNICs
 G7 LOM — 10Gb CEE with three FlexNICs and one FlexHBA, or four FlexNICs if you do not configure a storage connection in the profile
 G1/G5 LOM — 1Gb Ethernet, one NIC only
You can connect the VC FlexFabric uplinks to 1GbE networks using SFP transceivers for an easy transition to 10Gb later.
FlexFabric adapter — Physical functions
FlexFabric LOM overview
Each FlexFabric adapter has two 10Gb physical ports that can be partitioned into four physical functions (PFs):
 PF 1, 3, and 4 on each port are always, and can only be, Ethernet.
 The second PCIe function (PF) can be Ethernet, Fibre Channel over Ethernet (FCoE), or iSCSI. It must have the same configuration between ports 1 and 2 on the same FlexFabric adapter; therefore, port 1 FCoE and port 2 iSCSI cannot be on the same adapter.
Flex-10 adapter mapping with VC Flex-10 modules
FlexFabric LOM and VC Flex-10 module
With the Flex-10 modules, the FlexFabric LOM functions the same as Flex-10 network cards. Four Ethernet ports are available from any LOM.
FlexFabric adapter mapping with VC FlexFabric modules
FlexFabric LOM and VC FlexFabric module
You can extend this technology to combine Fibre Channel and iSCSI data storage connectivity, all in a single interconnect module from a single FlexFabric adapter: one of the physical functions now has a multi-configuration capability and can be utilized as a NIC, FCoE, or iSCSI device, and is recognized as such by the server operating system.
Rx side allocation for FlexFabric
Individual Ethernet, iSCSI, or FCoE function received traffic (Rx) flows are not limited and could consume up to the full line rate of 10Gb. With FCoE, however, Enhanced Transmission Selection flow control management guarantees the minimum bandwidth set by the Virtual Connect Manager. Thus, when there is no congestion, FCoE or LAN bandwidth can exceed the specified data rates for traffic flowing from VC to the FlexFabric adapter. Under congested conditions, the VC module will enforce a fair allocation of bandwidth as determined by the FCoE function rate limit defined in the server profile. The remainder will be set as the aggregate rate limit for the FlexNICs.
On the transmitted traffic (Tx) side, a FlexNIC is limited by the server profile definition and set as the maximum in the network definition.
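As a worked illustration (the numbers are hypothetical): if a server profile assigns a 4Gb rate to the FCoE function on a 10Gb FlexFabric port, both FCoE and LAN traffic may burst above their configured rates while the link is uncongested. Under congestion, the VC module enforces the 4Gb FCoE allocation and limits the FlexNICs on that port to the remaining 6Gb in aggregate, while each FlexNIC's transmit rate is still capped at the maximum defined in its network definition.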
FlexFabric adapter mapping with 10G Pass-Thru modules
FlexFabric LOM and 10Gb Pass-Thru module
When connecting to 10G Pass-Thru Modules, the FlexFabric adapters lose most of their advanced features:
 The PCIe functions have fixed configurations and cannot be easily changed or disabled.
 The only two available configurations are one NIC and one storage function (either FCoE or iSCSI, depending on the server model).
 The only adjustable bandwidth control is between the NIC and FCoE.
 There is no:
    Virtualization of Ethernet MAC addresses or Fibre Channel World Wide Names (WWNs)
    Centralized management of SAN (Fibre Channel or iSCSI) boot parameters
    Integration with BladeSystem Matrix or upper-layer software tools such as HP Infrastructure Orchestration
Planning and implementing Virtual Connect
Before beginning installation, complete the following tasks:
 Determine which Ethernet networks will be connected to or contained within the domain. Most installations have multiple Ethernet networks, each typically mapped to a specific IP subnet. The VC Manager enables definition of up to 64 different Ethernet networks that can be used to provide network connectivity to server blades. Each physical NIC on a server blade can be connected to any one of these Ethernet networks. Virtual Connect Ethernet networks can be completely contained within the domain for server-to-server communication or connected to external networks through rear panel port cable connections (uplinks). For each network, the administrator must use the VC Manager to identify the network by name and to define any external port connections.
 Determine the Ethernet MAC address and Fibre Channel WWN range to be used for the servers within the enclosure. Server and networking administrators should fully understand the selection and use of MAC address ranges before configuring the enclosure.
 Name the fabric that servers will connect to. The setup wizard enables you to specify the Fibre Channel fabrics that will be made available. Each VC-FC module supports a single SAN fabric and is connected to a Fibre Channel switch that has been configured to run in NPIV mode.
 Set the Fibre Channel oversubscription rate using the Virtual Connect setup wizard. Oversubscription degrades Fibre Channel performance and occurs when hosts require more bandwidth than a port can provide. As devices send frames through more switches and hops, other data traffic in the fabric routed through the same interswitch link (ISL) or path can cause oversubscription. It is also referred to as the ratio of potential port bandwidth to available backplane slot bandwidth.
 Determine the Ethernet stacking cable layout, and ensure that the proper cables are ordered. Stacking cables allow any Ethernet NIC from any server to be connected to any of the Ethernet networks defined for the domain.
 Determine which mezzanine cards and interconnect modules will be used and where they will be installed in the enclosure.
 Identify the administrators for the Virtual Connect environment and identify which roles and administrative privileges they require. VC Manager classifies each operation as requiring server, network, domain, or storage privileges. A single user can have any combination of these privileges.
Building a Virtual Connect environment
Typically, a Virtual Connect environment is built in the following manner:
 The lab technician sets up the enclosure by:
    Installing the Virtual Connect modules
    Cabling the stacked modules
    Running the enclosure setup wizard
    Running the Virtual Connect setup wizard
 The LAN administrator defines the Ethernet networks and connections.
 The SAN administrator defines the storage fabrics and connections.
 The network administrator:
    Configures the data center switch so that selected networks are made available to the enclosure
    Documents the network names and VLAN IDs
    Ensures that the appropriate uplink cables are dropped to the rack (for example, using two 10Gb links or a bundle of two 8 x 1Gb links, primary and standby)
 The server administrator:
    Defines the server profiles and connections
    Makes additions, changes, and moves whenever needed
    Confirms that the enclosure is properly installed in the rack
    Configures stacking links
    Obtains the list of network names and VLAN IDs from the network administrator
    Connects the data center network cables to the enclosure
    Uses VC Manager to set up a shared uplink set
    Can define private or dedicated networks
Virtual Connect out-of-the-box steps
The steps to install Virtual Connect are:
1. Install the interconnect modules.
2. Install the stacking links.
Note
Stacking links are used to interconnect VC-Enet modules when more than two modules are installed in a single enclosure. This feature enables all Ethernet NICs on all servers in the Virtual Connect domain to have access to any VC-Enet module uplink port. By using these module-to-module links, a single pair of uplinks can function as the data center network connections for the entire Virtual Connect domain.
3. Cable the Virtual Connect Ethernet uplinks to the data center networks.
4. Connect the data center Fibre Channel fabric links (if applicable).
5. Note the default network settings for the VC-Enet module in bay 1 (from the tear-off tag).
6. Note the default network settings for the Onboard Administrator.
7. Apply power to the enclosures.
8. Use Onboard Administrator for basic setup of the enclosures (enclosure name, passwords, and so forth).
9. Access Virtual Connect (through the Onboard Administrator or the dynamic DNS name from the tear-off tag).
Virtual Connect Ethernet stacking
VC with stacking links
Virtual Connect stacking rules:
 All VC Ethernet modules have at least one internal stacking link through the midplane. The 1/10Gb VC-Ethernet module has two internal stacking links for a total of 20Gb of cable-free stacking.
 Best practice for stacking is to connect each Ethernet module to two different Ethernet modules. In the preceding graphic, every module is connected to two different modules. Each module connects to the adjacent bay using the internal midplane path (the orange lines). Then, either 1Gb or 10Gb cables are used to stack to another module (the blue lines).
 Any port can be used for stacking. Stacking cables are auto-detected.
Virtual Connect Ethernet module stacking
Virtual Connect modules stacking links
In the preceding graphic, stacking links are shown to be both external and internal. The internal 10Gb links are connected by way of the signal midplane inside the enclosure and connect the modules horizontally. The external links are both 10Gb (CX4) and 1Gb (RJ-45) and can extend a server's network connections across multiple VC modules. These external links can also connect to the external infrastructure switches.
Notice that all the modules in the graphic are Ethernet-based, indicating that the VC Fibre Channel modules do not participate in the stacking example.
Using VC-FC modules
HP offers a few Virtual Connect modules for virtualization of the SAN environment.
To configure a VC-FC module, you must use a VC-Ethernet module.
Virtual Connect Fibre Channel WWNs
A Fibre Channel WWN is a 64-bit value used during login to uniquely identify a
Fibre Channel HBA port and get a port ID.
Each server blade Fibre Channel HBA mezzanine card ships with factory-default port
and node WWNs for each Fibre Channel HBA port. Although the hardware ships
with default WWNs, Virtual Connect can assign WWNs that will override the
factory default WWNs while the server remains in that Virtual Connect enclosure.
When configured to assign WWNs, Virtual Connect securely manages the WWNs
by accessing the physical Fibre Channel HBA through the enclosure Onboard
Administrator and the iLO interfaces on the individual server blades.
When assigning WWNs to a Fibre Channel HBA port, Virtual Connect assigns both
a port WWN and a node WWN. Because the port WWN is typically used for
configuring fabric zoning, it is the WWN displayed throughout the Virtual Connect
user interface. The assigned node WWN is always the same as the port WWN
incremented by 1.
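For example, using illustrative values from the HP-defined range shown below: if Virtual Connect assigns the port WWN 50:06:0B:00:00:C2:62:00 to an HBA port, the corresponding node WWN is 50:06:0B:00:00:C2:62:01.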
Configuring Virtual Connect to assign WWNs in server blades maintains a consistent storage identity even when the underlying server hardware is changed. This method allows server blades to be replaced without affecting the external Fibre Channel SAN administration.
The naming convention is as follows:
 The first 4 bits identify the naming authority.
 When the first nibble is either 5 or 6, it is then followed by a 3-byte vendor identifier (IEEE OUI) and 4.5 bytes for a vendor-specified serial number.
 When the first two bytes are either hex 10:00 or 2x:xx (where the xs are vendor-specified), they are then followed by the 3-byte vendor identifier and the 3-byte vendor-specified serial number.
HP has set aside a dedicated range of Fibre Channel WWNs. You can set each Virtual Connect domain to use either WWNs defined by Virtual Connect or the factory-default WWNs.
 50:06:0B:00:00:C2:62:00 to 50:06:0B:00:00:C3:61:FF
 Equals 64K WWNs
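A quick check of the range size: only the last three bytes vary across the range, so it contains 0xC361FF - 0xC26200 + 1 = 0x10000 = 65,536 addresses, that is, 64K WWNs.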
Virtual Connect Fibre Channel port types and logins
Configuration of SAN with VC modules and without VC modules
Key Fibre Channel port types:
 N_Port (End Port)
 F_Port (Fabric Port) — Addressable by the N_Port attached to it with a common well-known address identifier (hex 'FF FF FE')
 FL_Port — An F_Port that contains arbitrated loop functions
 NL_Port (Loop End Port) — An N_Port that contains arbitrated loop support
 E_Port (Expansion Port) — A switch port used for switch-to-switch connections
Fibre Channel logins
N_Port devices must log in to the fabric. Logins enable a node to determine the fabric/topology type and enable assignment of an N_Port Identifier. They also set up buffer-to-buffer credits.
The login sequence is as follows:
1. Link is established.
2. N_Port sends a Fabric Login (FLOGI) frame to the well-known fabric address (FF FF FE).
3. Fabric responds with an Accept (ACC) frame.
Fibre Channel zoning and SSP
Fabric zoning enables a Fibre Channel fabric to be separated into different
segments. It is performed within the switched fabric. Zoning types are by node or by
port.
Selective Storage Presentation (SSP) is implemented in storage targets. SSP enables a
target to show only certain logical unit numbers (LUNs) to certain initiators (typically
World Wide Port Names).
Important
Each switch must have its own domain ID between 1 and 254, with no duplicate IDs in the same fabric.
N_Port_ID virtualization
A VC-FC module functions as an HBA aggregator and uses NPIV, which assigns multiple N_Port_IDs to a single N_Port, thereby enabling multiple distinguishable entities.
NPIV functions within a Fibre Channel HBA and enables unique WWNs and IDs for each virtual machine within a server. A VC-FC module functions as a transparent HBA aggregator device; NPIV enables it to reduce cables in a vendor-neutral fashion.
NPIV is defined by the ANSI T11 Fibre Channel standards:
 Fibre Channel Device Attach (FC-DA) Specification, Section 4.13
 Fibre Channel Link Services (FC-LS) Specification
Fabric login using the HBA aggregator's WWN
 1 — Fabric login using the HBA aggregator WWN (WWN X)
    Establishes the buffer credits for the overall link
    Receives an overall Port ID
 2a to 4a — Server HBA logs in normally using the WWNs
 2b to 4b — Server HBA fabric logins are translated to Fabric Discovery (FDISC)
 5 — Traffic for all four N_Port IDs is carried on the same link
N_Port_ID virtualization
NPIV is independent of both operating systems and device drivers, and the standard QLogic and Emulex Fibre Channel HBAs support NPIV.
NPIV does not interfere with server/SAN compatibility. After the server is logged in, Fibre Channel frames pass through unchanged.
Important
When installing a VC-FC module, you must enable NPIV on the fabric switch that is attached to the VC-FC module uplinks before the server blade HBAs can log in to the fabric.
Configuring Virtual Connect
Virtual Connect connections
Each Virtual Connect Ethernet module has several numbered Ethernet connectors. All of these connectors can be used to connect to data center switches, or they can be used to stack Virtual Connect modules and enclosures as part of a single Virtual Connect domain.
Networks must be defined within the VC Manager so that specific, named networks
can be associated with specific external data center connections. These named
networks can then be used to specify networking connectivity for individual servers.
A single external network can be connected to a single enclosure uplink, or it can
make use of multiple uplinks to provide improved throughput or higher availability. In
addition, multiple external networks can be connected over a single uplink (or set of
uplinks) through the use of VLAN tagging.
The simplest approach to connecting the defined networks to the data center is to
map each network to a specific external port. An external port is defined by the
following:
 Enclosure name
 Interconnect bay containing the Virtual Connect Ethernet module
 Selected port on that module (1-8, X1, X2, . . .)
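For example, the VC Manager CLI typically combines these three elements into a single port reference of the form enclosure:bay:port, such as enc0:1:X1 for uplink port X1 on the module in interconnect bay 1 of the local enclosure (the exact notation can vary by firmware release).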
Virtual Connect logical flow
The Virtual Connect configuration process uses a consistent methodology.
Create a VC domain
One of the first requirements in setting up a VC environment is to establish a VC domain through the web-based VC Manager interface.
A Virtual Connect domain consists of an enclosure and a set of associated modules and server blades that are managed together by a single instance of the VC Manager. The Virtual Connect domain contains specified networks, server profiles, and user accounts that simplify the setup and administration of server connections.
Establishing a Virtual Connect domain enables administrators to upgrade, replace, or move servers within their enclosures without changes being visible to the external LAN/SAN environments.
Virtual Connect multi-enclosure VC domains
Starting with firmware 2.10, a VC domain can contain more than one enclosure.
A multi-enclosure VC domain requires:
 One base enclosure (primary VC Managers in bays 1 and 2). This rule does not have to be followed when FlexFabric modules are used.
 Onboard Administrator and VC modules must be on the same management network.
 All Ethernet modules interconnected.
 All enclosures must have the identical VC-FC configuration (no stacking of Fibre Channel modules).
It supports:
 Up to four c7000 enclosures
 Up to 16 Virtual Connect Ethernet modules
 Up to 16 Virtual Connect Fibre Channel modules
Stacking cable options:
 Fibre cables (SFP+)
 10Gb copper Ethernet cables with CX-4 connectors. (Do not use InfiniBand cables because they are tuned differently.)
 1Gb Ethernet cables
 DAC cables (SFP+)
Note
HP currently limits each domain to 16 Ethernet modules and 16 Fibre Channel modules. If more than 16 are detected, the domain will be degraded with a DOMAIN_OVERPROVISIONED statement.
Ethernet stacking connections
For each Virtual Connect Ethernet network (vNet), Virtual Connect creates a loop-free
tree with the uplink as the root. Each VC-Ethernet hop adds latency between 2 and 4
milliseconds per hop. Extra links can reduce hops and provide additional
redundancy.
Note
Latency is a function of the bridge chip used in the module. Both of the VC 1/10 modules
use the same bridge chip and, therefore, will have identical latency. The Flex-10 module
uses a bridge chip with much lower latency.
Important
A switch that does not understand Link Aggregation Control Protocol (LACP) or Link Layer Discovery Protocol (LLDP) (such as Nortel 8500-series switches) can introduce a loop. If the switch does not support LACP, change the uplink port mode from Auto to Failover.
PortFast
The Spanning Tree PortFast feature was designed for Cisco switch ports connected to
edge devices, such as server NIC ports. This feature allows a Cisco switch port to
bypass the “listening” and “learning” stages of spanning tree and quickly transition
to the “forwarding” stage. By enabling this feature, edge devices are allowed to
immediately begin communicating on the network instead of having to wait for
Spanning Tree to determine whether it needs to block the port to prevent a loop—a
process that can take 30+ seconds with default Spanning Tree timers. Because edge
devices do not present a loop on the network, Spanning Tree is not needed to
prevent loops and can be effectively bypassed by using the PortFast feature. The
benefit of this feature is that server NIC ports can immediately communicate on the
network when plugged in rather than timing out for 30 or more seconds. This
strategy is especially useful for time-sensitive protocols such as PXE and DHCP.
Important
Using features such as PortFast and BPDU Guard enables uplink failover to occur more quickly and offers protection against the possibility of a loop.
Because VC uplinks operate on the network as an edge device (like teamed server
NICs), Spanning Tree is not needed on the directly connected Cisco switch ports.
Thus, PortFast can be enabled on the Cisco switch ports directly connected to VC
uplinks.
Note
The interface command to enable PortFast on a Cisco access port is: spanning-tree
portfast
The interface command to enable PortFast on a Cisco trunk port is: spanning-tree
portfast trunk
BPDU Guard
BPDU Guard is a safety feature for Cisco switch ports that have PortFast enabled. Enabling BPDU Guard allows the switch to monitor for the reception of Bridge Protocol Data Unit (BPDU) frames (spanning tree configuration frames) on the port configured for PortFast. When a BPDU is received on a switch port with PortFast and BPDU Guard enabled, BPDU Guard will cause the switch port to err-disable (shut down). Because ports with PortFast enabled should never be connected to another switch (which transmits BPDUs), BPDU Guard protects PortFast-enabled ports from being connected to other switches. This arrangement prevents:
 Loops caused by bypassing Spanning Tree on that port
 Any device connected to that port from becoming the root bridge
Because Virtual Connect behaves as an edge device on the network, and because VC does not participate in the data center spanning tree (that is, does not transmit BPDUs on VC uplinks), BPDU Guard can be used, if desired, on Cisco switch ports connected to VC uplinks.
Note
The interface command to enable BPDU Guard on a Cisco port is: spanning-tree bpduguard enable.
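Putting the two features together, a Cisco switch port that connects to a VC uplink might be configured as follows. This is an illustrative sketch only; the interface name and trunking choice are assumptions that must match the data center design:
	interface GigabitEthernet1/0/1
	 description Uplink to VC-Enet module (example)
	 switchport mode trunk
	 spanning-tree portfast trunk
	 spanning-tree bpduguard enable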
VC base enclosure
Multiple enclosures stacked together
Each stacked domain consists of a base enclosure and one or more remote enclosures. The base enclosure provides important functionality because it houses the primary and secondary VC Managers. Therefore, all the management operations for the domain come from one enclosure. If that entire enclosure fails and goes offline, the remote enclosures will continue to operate normally, but you will be unable to make administrative changes to the domain because the two management modules will be offline. This situation is analogous to losing both the primary and secondary VC Manager in a single-enclosure domain. The other modules in that domain continue to operate normally, but you will be unable to make changes to the configuration.
Fo
rT
Merging existing dom
mains into on
ne is not sup ported. To crreate a multiiple-enclosure
e or
stacked domain,
d
startt with an unimported encclosure and ccreate a dom
main on the
base encclosure. Havin
ng a base en
nclosure withh a Virtual C
Connect doma
ain in place will
not affectt this; there iss no reason to delete thee domain on the enclosurre you will usse
as your base
b
enclosure. However,, remote encclosures must be unimporrted and anyy
domains deleted on the
t remote enclosures you want to ad
dd to a stackked domain.
Enclosure removal
Before removing enclosures, you need to know the location of the uplinks and which are active. If the active uplinks are in the base enclosure, the following steps should be non-disruptive:
 From the VC Manager's Domain Settings window, delete the enclosure (be sure to remove the right one).
 Unplug the inter-enclosure stacking links.
VC-Fibre Channel configuration
VC-FC configuration with multiple enclosures
VC-FC is not a Fibre Channel switch, so you cannot stack VC-FC modules. If one enclosure in a stack is connected to four Fibre Channel SANs, as shown here, the second enclosure must be connected to the same four SANs, and so on. In other words, all enclosures must have the same configuration.
Enclosure stacking can minimize Ethernet uplink cables, but it will not save Fibre Channel uplink cables beyond a single-enclosure VC domain.
VC-FC does not stack
VC-FC connected to the same SAN
Although VC-FC does not stack, the same number of connections to each SAN is not required. In the example, enclosure 1 and enclosure 2 are both connected to SAN_A, but enclosure 1 has four connections to that SAN, while enclosure 2 has only one connection to that SAN. This setup is acceptable because both enclosures are connected to the same SANs.
Note
The uplink port assignment within the VC Domain is enforced by VC Manager.
Note
A single-enclosure VC Domain that contains multiple VC-FC modules within the SAME chassis does not have this restriction, as long as it is not within a VC Domain stack.
c3000 stacking
c3000 enclosure
You can connect a Flex-10 Module directly to another Flex-10 Module in another c3000 enclosure if the following requirements are met:
 There is a maximum of two enclosures.
 Both enclosures have their own Virtual Connect domain.
 The data center is self-contained with no external uplinks.
Stacking c3000 enclosures as if they were c7000 enclosures is not possible.
Define Ethernet networks
Defining Ethernet networks flow
After the domain has been created, you can define the Ethernet networks. The Network Setup Wizard establishes external Ethernet network connectivity for a BladeSystem enclosure using Virtual Connect. A user account with network privileges is required to perform these operations.
Use this wizard to:
 Identify the MAC addresses to be used on the servers deployed within this Virtual Connect domain
 Set up connections from the BladeSystem enclosure to the external Ethernet networks
These connections can be uplinks dedicated to a specific Ethernet network or shared uplinks that carry multiple Ethernet networks with the use of VLAN tags.
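The same definitions can also be created from the VC Manager CLI. The following sketch defines a dedicated network and a shared uplink set carrying a VLAN-tagged network; all names, port references, and VLAN IDs are illustrative, and parameter syntax varies by VC firmware release:
	->add network Prod_NET_A
	->add uplinkport enc0:1:X1 Network=Prod_NET_A
	->add uplinkset Shared_Uplink_1
	->add uplinkport enc0:1:X2 UplinkSet=Shared_Uplink_1
	->add network Vlan10_NET UplinkSet=Shared_Uplink_1 VLanID=10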
Define Fibre Channel SAN connections
Defining FC SAN connections flow
The Virtual Connect Fibre Channel Setup Wizard configures external Fibre Channel connectivity for a BladeSystem enclosure using Virtual Connect. A user account with storage privileges is required to perform these operations.
Use this wizard to:
 Identify WWNs to be used on the server blades deployed within this Virtual Connect domain
 Define available SAN fabrics
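For reference, a SAN fabric can also be defined from the VC Manager CLI. The fabric name, interconnect bay, and uplink ports below are illustrative, and parameter names can differ slightly between firmware releases:
	->add fabric SAN_A Bay=3 Ports=1,2
	->show fabric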
Create server profiles
The Virtual Connect Manager Server Profile Wizard allows you to quickly set up and configure network/SAN connections for the server blades within your enclosure. With the wizard, you can define a server profile template that identifies the server connectivity to use on server blades within the enclosure. The template can then be used to automatically create and apply server profiles to up to 16 server blades. The individual server profiles can be edited independently.
Before beginning the server profile wizard, do the following:
 Complete the Network Setup Wizard.
 Complete the Fibre Channel Setup Wizard (if applicable).
 Ensure that any blades to be configured using this wizard are powered off.
This wizard walks you through the following tasks:
 Define a server profile template.
 Assign server profiles.
 Name server profiles.
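An equivalent sketch using the VC Manager CLI, for readers who prefer scripting profile creation (the profile, network, and fabric names reuse the illustrative examples above; the target server must be powered off before the profile is assigned, and exact syntax varies by firmware release):
	->add profile WebServer_01
	->add enet-connection WebServer_01 Network=Prod_NET_A PXE=Enabled
	->add fc-connection WebServer_01 Fabric=SAN_A
	->assign profile WebServer_01 enc0:1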
Implementing the server profile
Setting up a profile workflow
Follow these steps to set up the server profile:
1. Configure the server profile using the VC Manager user interface.
2. Insert the server blade.
3. VC Manager detects that a server blade was inserted and reads the field-replaceable unit (FRU) data for each interface.
4. VC Manager writes the server profile information to the server.
5. Power on the server.
6. CPU BIOS and NIC/HBA option ROM software write the profile information to the interface.
7. The successful write is communicated to the VC Manager through the Onboard Administrator.
8. The server boots using the server profile provided.
Important
When a blade is inserted into a bay that has a VC Manager profile assigned, the VC Manager detects the insertion through communications with the Onboard Administrator and must generate profile instructions for that server before the server is allowed to power on. If VC Manager is not communicating with the Onboard Administrator at the time the server is inserted, the Onboard Administrator will continue to deny the server iLO power request until the VC Manager has updated the profile. If a server is not powering on, verify that the VC Manager has established communications with that Onboard Administrator.
Manage data center changes
Now that the VC Domain has been created with Ethernet networks, Fibre Channel SANs, and assigned server profiles, you can:
 Replace a failed server without logging in to VC Manager, because the server profile is assigned to the bay
 Copy a server profile from one bay to another
 Change a server's network or SAN connections while the system is running
 Move a profile for a failed server to a spare server
 Assign a profile to an empty server bay for future growth
n and Configura
ation
de
liv
er
y
on
ly
Virtua
al Conne
ect – Servver profile migration
Virtual Co
onnect can take
t
a serverr profile from
m server A an
nd migrate th
hat profile to a
spare serrver if server A were to fa
ail or go offliine.
The profile contains th
he “personallity” of the seerver, includiing:
Virtu
ual Connect MAC
M
addressses

Virtu
ual Connect Fibre
F
Channe
el WWNs

LAN and SAN assignments
a

Boott parameters
Fo
rT
TT

Server profile migration for a failed server
Server profile migration
After the migration has completed, the spare blade assumes the settings of the failed blade, including the MAC addresses, Fibre Channel WWNs, SAN, and network connections.
In a boot from SAN situation, the spare blade then boots to the logical unit number (LUN) that contains the failed server's operating system. In a local boot situation, the hard drives of the failed server can be brought over to the spare for local booting, provided the hard drives were not the cause of the failover.
Virtual Connect Manager
Virtual Connect Manager homepage
The VC Manager runs embedded on the VC-Ethernet Module in bay 1 or 2 of the base enclosure and is accessible through the Onboard Administrator management interface. The VC Manager connects directly to the active Onboard Administrator module in the enclosure and has the following functions:
 Manages enclosure connectivity
 Defines available LANs and SANs
 Sets up enclosure connections to the LAN or SAN
 Defines and manages server I/O profiles
The VC Manager contains utilities and a Profile Wizard to develop templates to create and assign profiles to multiple servers at one time. The I/O profiles include the physical NIC MAC addresses, Fibre Channel HBA WWNs, and the SAN boot configurations.
The VC Manager profile summary page includes a view of server status, port, and network assignments. You can also edit the profile details, reassign the profile, and examine how HBAs and NICs are connected.
Accessing the Virtual Connect Manager
Accessing VC Manager from Onboard Administrator
Access to the VC Manager is over the same Ethernet connection used to access the enclosure Onboard Administrator and server blade iLO connections. To access the VC Manager for the first time, you can either log in using a web browser to the Onboard Administrator and then select the VC Manager link, or use the dynamic DNS name printed on the tear-off tag for the VC-Ethernet Module in Interconnect bay 1 (enter the DNS name in the browser address text field).
Optionally, you can set up a static IP address for the VC Manager, which will enable you to maintain access to the VC Manager in the event that it fails over to the VC-Ethernet Module in bay 2.
Note
The VC Manager typically runs on the Virtual Connect Ethernet module in bay 1 unless that module is unavailable, causing a failover to the VC Manager running in bay 2. If you cannot connect to the VC Manager in Interconnect bay 1, use the Onboard Administrator to obtain the IP address of the Virtual Connect module in bay 2.
Virtual Connect Manager login page
Log on using the user name (Administrator) and password from the Default Network Settings toe tag for Interconnect bay 1. After you log in for the first time, the Virtual Connect Manager Setup Wizard screen displays.
To set up the Virtual Connect domain and network, follow these steps:
 Log in and run the Domain Setup Wizard.
    Import the enclosure. (The user must provide Onboard Administrator login information to enable enclosure import.)
    Name the Virtual Connect domain.
    Set up a static IP address for the VC Manager (optional).
    Set up local user accounts and privileges.
    Confirm the stacking links provide the needed connectivity and redundancy.
 Launch the Network Setup Wizard.
    Select a MAC address range.
    Confirm the stacking links (if steps 1 and 2 are performed by different administrators).
    Set up the networks.
 Launch the SAN Setup Wizard.
 Launch the Profile Setup Wizard, depending on the firmware release.
After an enclosure is imported into a Virtual Connect domain, server blades that have not been assigned a server profile are isolated from all networks to ensure that only properly configured server blades are attached to data center networks.
A predeployment profile can be defined for each device bay so that the server blade can be powered on and connected to a deployment network. These profiles can later be modified or replaced by another server profile.
Virtual Connect Manager home page
This screen provides access for the management of enclosures, servers, and networking. It also serves as the launch point for the initial setup of VC Manager.
The VC Manager navigation system consists of a tree view on the left side of the page that lists all of the system devices and available actions. The tree view remains visible at all times.
The right side of the page displays details for the selected device or activity, which includes a pull-down menu at the top. To view detailed product information, select About HP VC Manager from the Help pull-down menu.
Note
The Home Page will look slightly different depending on the firmware revision.
Virtual Connect role-based privileges
Virtual Connect supports four levels of role-based access:
 Domain
    Define local users, set passwords, and set roles
    Name the Virtual Connect domain
    Import enclosures
    SNMP configuration, HP SIM configuration
    Update firmware (Virtual Connect Ethernet and Virtual Connect Fibre Channel)
 Networking
    Configure network default settings
    Select the MAC address range to be used by the Virtual Connect domain
    Create/delete/edit networks
    Create/delete/edit shared uplink sets
 Storage (SAN)
    Configure storage-related default settings
    Select the WWN range to be used by the Virtual Connect domain
    Create/delete/edit Fibre Channel SAN fabrics
 Server bay
    Create/edit/delete server Virtual Connect profiles
    Select and use available networks
    Select and use available Fibre Channel fabrics
    Set Fibre Channel SAN boot settings for a server
    Enable/disable Preboot Execution Environment (PXE) on each server NIC
By default, all users have read privileges in all roles (not being in any of the privilege
classes gives read-only access). Each user can have any combination of the four
privileges. The Administrator account is defined by default, and additional local user
accounts can be created.
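As an illustration of combining privileges, a local account could be created from the VC Manager CLI as follows (the user name and password are placeholders, and the parameter names vary by firmware release):
	->add user ntwk-admin Password=MyP@ssw0rd Privileges=network,server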
Note
The VC Manager user account is an internal Onboard Administrator account created and
used by VC Manager to communicate with the Onboard Administrator. This account can
appear in the Onboard Administrator system log and cannot be changed or deleted.
Virtual Connect Manager failover
Redundant pair of VC modules
The VC Manager runs as a high-availability pair on the Virtual Connect Ethernet modules in bay 1 and bay 2. The active VC Manager is usually on bay 1. Redundancy daemons on modules 1 and 2 determine the active manager.
Heartbeats can be maintained over multiple paths:
 Backplane
 Ethernet link
Each time the configuration changes, it is written to local flash memory and checkpointed to the standby module (where it is also written to flash memory). Configurations can also be backed up to a workstation.
A failover will cause a restart of the VC Manager and a restore from the saved configuration, and will require re-login by any web users.
Note
A single static IP address may be configured for the VC Manager.
Note
The c7000/c3000 enclosure configuration and setup is stored on the Onboard Administrator (in flash). This includes the enclosure name, Enclosure Bay IP Address (EBIPA) settings, power mode, SNMP settings, and so on. If you have a second OA, that information is also kept there, so that if you lose or replace an OA, you do not lose your settings. In addition, the OA does keep some VC profile-related signatures. VC Ethernet modules store VC-related configurations such as networks and profiles. Other Ethernet (Cisco, BNT) and Fibre Channel modules store their configurations in their own flash.
Virtual Connect Enterprise Manager
HP Virtual Connect Enterprise Manager (VCEM) is a software application that centralizes connection management and workload mobility for BladeSystem server blades that use Virtual Connect to control access to LANs, SANs, and converged network infrastructures. VCEM helps organizations increase productivity, respond faster to infrastructure and workload changes, and reduce operating costs. VCEM seamlessly integrates with existing Virtual Connect infrastructures, and discovers and aggregates Virtual Connect resources into a central console.
Built on the Virtual Connect architecture integrated into every BladeSystem enclosure, the VCEM central console enables you to programmatically administer LAN and SAN address assignments, perform group-based configuration management, and rapidly deploy, move, and fail over server connections and their workloads for up to 250 Virtual Connect domains (1,000 enclosures and 16,000 servers when used with Virtual Connect Ethernet enclosure stacking).
The central VCEM address repository is an extension of the HP Systems Insight
Manager (HP SIM) database. This repository provides programmatic administration
of MAC addresses and WWNs to establish server connections to LANs and SANs.
The VCEM repository reduces manual management overhead and eliminates the risk
of address conflicts. Within the VCEM repository, you can use the unique HP defined
addresses, create your own custom ranges, and also establish exclusion zones to
protect existing MAC and WWN assignments. The central VCEM repository supports
128K address ranges for MAC and WWN assignments, for a total of 256K network
addresses per VCEM console.
For more information, visit: http://www.hp.com/go/vcem/
VCEM presents its own dedicated homepage to perform the following core tasks:
 Discover and import existing Virtual Connect domains
 Aggregate individual Virtual Connect address names for LAN and SAN connectivity into a centrally administered VCEM address repository
 Create Virtual Connect domain groups
 Assign and unassign Virtual Connect domains to Virtual Connect domain groups
 Define server profiles and link to available LAN and SAN resources
 Assign server profiles to BladeSystem enclosures, enclosure bays, and Virtual Connect domain groups
 Change, reassign, and automatically fail over server profiles to spare servers
 Rapidly install new bare-metal BladeSystem enclosures by assigning them to a Virtual Connect domain group
Additional management tasks are available from VCEM:
 Managing bays — The administrator can power down a server inside a bay, assign a profile, or designate a spare bay.
 Managing MAC and WWN addresses — The administrator can choose between VCEM-defined MAC address ranges or user-defined MAC address ranges.
 Working with Logical Serial Numbers — The administrator can use virtual serial numbers inside server profiles.
 Tracking VCEM job status — The Jobs list provides detailed information about jobs that have occurred and are related to VCEM.
VCEM
M compa
ared with
h VC Manager
onnect Mana
ager is a we
eb console b uilt into the ffirmware of V
Virtual Connect
Virtual Co
Ethernet modules,
m
dessigned to co
onfigure and manage a ssingle Virtuall Connect
domain, up to 64 serrvers. This co
ould be a sinngle enclosurre, or a multi enclosure
c
up
p to four phy
ysically linked
d enclosuress in the same
e or adjacentt
domain containing
racks. This option is id
deal for enviironments wiith up to fourr enclosures tthat have no
o
e
furth
her. VC Mana
ager does no
ot work acro
oss multiple d
domains. It
plans to expand
configure
es and mana
ages only its local domainn.
In contrast, Virtual Connect Enterprise Manager is the primary HP application for managing smaller or larger infrastructures and groups of Virtual Connect domains across the data center.
VCEM also enables you to create domain groups that use a master configuration profile for multiple Virtual Connect domains that connect to the same networks. With VCEM, administrators can move profiles and server workloads between any enclosures that belong to the same or a different domain group, whether in the same rack, across the data center, or even in a different physical location. The domain group functionality in VCEM also simplifies the addition of new and bare-metal enclosures, helping organizations develop more consistent infrastructure configurations as the data center expands.
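As a rough mental model, a domain group pairs one master configuration with any number of member domains. The sketch below is illustrative Python only; the class names and fields are assumptions, not the VCEM data model.

    # Illustrative Python only -- class names and fields are assumptions, not the VCEM data model.
    from dataclasses import dataclass, field

    @dataclass
    class MasterConfiguration:
        networks: list        # shared LAN/SAN definitions used by every domain in the group
        uplink_sets: list

    @dataclass
    class DomainGroup:
        name: str
        master: MasterConfiguration
        domains: list = field(default_factory=list)

        def add_domain(self, domain_name):
            # A domain added to the group (including a new, bare-metal enclosure)
            # inherits the group's master configuration, keeping the infrastructure consistent.
            self.domains.append(domain_name)

    group = DomainGroup("Production",
                        MasterConfiguration(networks=["VLAN10", "VLAN20"],
                                            uplink_sets=["UplinkSet_A"]))
    group.add_domain("Enclosure_Rack1")
    group.add_domain("Enclosure_Rack7")   # members can sit in different racks or locations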
!	Important
If a customer plans to use only the freely included VC Manager instead of purchasing VCEM to manage a Virtual Connect Fibre Channel module, then at least one Virtual Connect Ethernet module must also be installed in the Virtual Connect domain. The reason for this requirement is that the VC Manager software runs only on a Virtual Connect Ethernet module.
VCEM licensing
VCEM is licensed per BladeSystem enclosure, with separate options for c3000 and c7000 enclosures. One VCEM license is required for each enclosure to be managed, in both single-enclosure and multi-enclosure domain configurations. Licenses are non-transferable; full details are contained in the End User License Agreement. Licenses are additive, so multiple licenses can be combined to cover the total number of BladeSystem enclosures you have purchased.
For each purchased license, a license entitlement certificate is delivered. The license entitlement certificate contains the information needed to redeem license activation keys online or via fax. This electronic redemption process enables easy license management and service and support tracking.

For available license options and more licensing information, see the VCEM QuickSpecs at: http://www.hp.com/go/vcem
VCEM includes one year of 24 x 7 HP Software Technical Support and Update Service. This service provides access to HP technical resources for assistance in resolving software implementation or operations problems. The service also provides access to software updates and reference manuals, either in electronic form or on physical media, as they are made available from HP. With this service, customers will benefit from expedited problem resolution plus proactive notification and delivery of software updates.

For more information about 24 x 7 HP Software Technical Support and Update Service, visit: http://www.hp.com/services/insight
Installing VCEM
VCEM can be installed in a variety of configurations: on a physical standalone console, as a plug-in to HP SIM 6.0 or later, or on a virtual machine.
Use the Insight Software DVD to install VCEM. Run the Insight Software Advisor to
test and evaluate the hardware and software configuration before beginning the
installation.
When an upgrade to a new and different central management server (CMS) is performed, or VCEM is moved to a 64-bit CMS, it may be necessary to migrate existing data using the HP SIM data migration tool. If an upgrade to a new version of VCEM on the same CMS is performed, data migration with the HP SIM data migration tool is not necessary.
Note
Installation of VCEM requires Virtual Connect firmware 2.10 or later. For complete hardware and software minimum requirements and other information, see the HP Systems Insight Manager Installation and Configuration Guide for Microsoft Windows, available from the HP website.
Typical environments for VCEM

VCEM is designed to scale as the infrastructure grows, and it simplifies the addition of new and bare-metal enclosures. Small environments with plans to expand should use VCEM from the beginning to get the most benefit. Ideal environments include:
•	Multiple-rack or distributed BladeSystem environments with more than one rack of enclosures and with plans to expand
•	Medium to large BladeSystem environments that use Virtual Connect
•	BladeSystem environments that extend to multiple locations
•	Organizations that require centralized control of server-to-network connectivity
•	Organizations that require rapid server movement between enclosures
Note
For more information about Virtual Connect and Virtual Connect Manager, see the HP Virtual Connect for c-Class BladeSystem User Guide.
VCEM user interfaces

You can access VCEM through either a graphical user interface (GUI) or a command line interface (CLI). The VCEM GUI enables you to:
•	Manage Virtual Connect domains and domain groups
•	Manage server profiles and profile failover
•	Perform central address management (MAC, WWN, serial numbers)

The CLI can be used as an alternative method or when no browser is available. Available operations from the CLI include:
•	Perform profile failover on a specified Virtual Connect domain bay server
•	List details for a specified VCEM job
•	Show CLI usage online help
Using the CLI can be useful in the following scenarios:
•	HP management applications, such as HP SIM or Insight Control tools, can query for information; these tools need to present a complete management view of BladeSystem enclosures and devices. The CLI is also used by the management tools to execute provisioning and configuration tasks on devices in the enclosure.
•	Users can develop tools that use VCEM functions for data collection and for executing provisioning and configuration tasks.

The CLI returns a numeric value that indicates success or a particular error or failure, and also displays an associated error message. A returned value of zero indicates success; values greater than zero indicate an error or failure.
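Because the CLI reports success with a zero return value, it is easy to drive from scripts or from other management tools. The wrapper below is a hypothetical Python sketch; the executable name vcemcli and the sample arguments are assumptions for illustration and should be checked against the VCEM CLI documentation.

    # Hypothetical sketch of calling the VCEM CLI from a script.
    # The executable name "vcemcli" and the sample arguments are assumptions,
    # not documented syntax; only the zero/non-zero return convention is from the text.
    import subprocess
    import sys

    def run_vcem(args):
        """Run a VCEM CLI command and return its numeric exit code (0 = success)."""
        result = subprocess.run(["vcemcli"] + args, capture_output=True, text=True)
        if result.returncode != 0:
            # Values greater than zero indicate an error or failure; the CLI also
            # prints an associated error message, echoed here for troubleshooting.
            print(f"VCEM CLI failed (code {result.returncode}): {result.stderr.strip()}",
                  file=sys.stderr)
        return result.returncode

    # Example: check whether a (hypothetical) job-listing command succeeded
    if run_vcem(["-listjobs"]) == 0:
        print("Job details retrieved")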
VCEM profile failover

Using a spare server with VCEM
System administrators can use Virtual Connect Profile Failover to perform rapid and cost-effective recovery of physical servers within the same Virtual Connect domain group with minimal administrator intervention.

Virtual Connect Profile Failover is a VCEM feature that enables the automated movement of Virtual Connect server profiles and associated network connections to customer-defined spare servers in a Virtual Connect domain group. The manual movement of a Virtual Connect server profile requires the following steps, which Virtual Connect Profile Failover combines into one seamless task:
1.	Power down the original or source server.
2.	Select a new target server.
3.	Move the Virtual Connect server profile to the target server.
4.	Power up the new server.
When selecting a target server from a pool of defined spare systems, Virtual Connect Profile Failover automatically chooses the same server model as the source server. The process can be initiated from the VCEM GUI as a one-button operation or from the CLI. When used with the automatic event handling functionality in HP SIM, Virtual Connect Profile Failover operations can be triggered automatically based on user-defined events.
Preconditions for a Virtual Connect Profile Failover are:
•	Source and designated spare servers must be part of the same Virtual Connect domain.
•	The source and target servers must be configured to boot from SAN.
•	The designated spare servers must be powered off.
•	A spare server must be the same model as the source server.
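The combination of the four manual steps and the preconditions above can be summarized in a short sketch. This is illustrative Python only; the Server type, the helper objects, and every function name are assumptions, not the VCEM API.

    # Illustrative Python only -- the Server type, helper objects, and function names
    # are assumptions, not the VCEM API.
    from dataclasses import dataclass

    @dataclass
    class Server:
        name: str
        model: str
        domain: str
        powered_on: bool
        boots_from_san: bool

    def select_spare(source, spares):
        """Pick a spare that satisfies the documented preconditions."""
        for spare in spares:
            if (spare.domain == source.domain          # same Virtual Connect domain as the source
                    and spare.model == source.model    # same server model as the source
                    and not spare.powered_on           # designated spares must be powered off
                    and spare.boots_from_san           # both servers must boot from SAN
                    and source.boots_from_san):
                return spare
        raise RuntimeError("No eligible spare server found")

    def profile_failover(source, spares, power, profiles):
        """Combine the manual steps into one task: power down, select, move, power up."""
        power.down(source)                     # 1. Power down the original or source server
        target = select_spare(source, spares)  # 2. Select a new target server
        profiles.move(source, target)          # 3. Move the server profile to the target server
        power.up(target)                       # 4. Power up the new server
        return target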
Learning check
1.	List the components of Virtual Connect technology.
	.................................................................................................................
	.................................................................................................................
	.................................................................................................................
	.................................................................................................................
7.	To install Fibre Channel in a Virtual Connect environment, the enclosure must have at least one Virtual Connect Ethernet module because the VC Manager software runs on a processor resident on the Ethernet module.
	□ True
	□ False
8.	Match each item with its correct description.
	a. FL_Port
	b. NL_Port
	c. E_Port
	......... A switch port used for switch-to-switch connections
	......... An F_Port that contains arbitrated loop functions
	......... An N_Port that contains arbitrated loop support
9.	A single user can have any combination of server, network, domain, or storage privileges.
	□ True
	□ False