HDI Hardware Components v1-0
Port names
Network connections
HDI single and HDI cluster use the same server platform (the same as used in Hitachi Content Platform)
Internal storage:
• Redundant array of independent disks (RAID) controller: LSI Logic SAS3108
• Front (present in single only, not in cluster): RAID-6 user data (file systems)
  Serial attached SCSI (SAS) 3.5" drives, 1TB, 2TB or 4TB, x6 or x12
  Or solid-state drives (SSD), 400GB or 800GB (high performance)
• Rear (present in single and cluster): RAID-1 OS
  SAS 2.5" 300GB x2
• Host bus adapter (HBA), present in cluster only:
  Emulex LPe12002 x1 (dual: 8Gb/sec x 2 ports)
Network:
• On-board network interface controller (NIC, Intel X540): 10Gb copper x2
• LAN mezzanine (Intel i350): 1Gb copper x2
• Baseboard management controller (BMC): 1Gb copper x1
• Ethernet card: different options
HDI standalone has 6 or 12 internal disks on the front and 2 internal 300GB disks on the rear (RAID-1 for the OS)
[Front and rear view diagram: front disk slots HDD0–HDD11 and a USB port; the rear disks are used as the OS LU; rear slots HDD12 and HDD13 are not used (single and cluster); two network option slots.]
OS and user data are stored in the internal disk drives of the node:
• System (OS) LU and system (configuration and management) LU: ~300GB
• User data: a RAID group configured using 6 HDDs in RAID-6 (4D+2P) or 12 HDDs in RAID-6 (10D+2P)
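The usable capacity implied by these layouts is simple arithmetic; the sketch below works it out for the drive sizes and counts listed above (raw figures only — formatting overhead is ignored):

```python
# Rough usable capacity for the internal RAID-6 layouts described above.
# RAID-6 reserves two drives' worth of capacity for parity, so a 6-drive
# group is 4D+2P and a 12-drive group is 10D+2P.

def raid6_usable_tb(total_drives: int, drive_tb: float) -> float:
    """Usable TB = (total drives - 2 parity) * drive size."""
    data_drives = total_drives - 2
    return data_drives * drive_tb

print(raid6_usable_tb(6, 4.0))    # 6 x 4TB, 4D+2P  -> 16.0 TB usable
print(raid6_usable_tb(12, 4.0))   # 12 x 4TB, 10D+2P -> 40.0 TB usable
```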
Management and user LUs are provisioned from the Hitachi VSP Gx00:
• LUs can be basic LDEVs from parity groups, or DP-VOL LDEVs (V-VOLs) associated with a Hitachi Dynamic Provisioning (HDP) or Hitachi Dynamic Tiering (HDT) pool
• The remaining LDEVs are used as file system LUs and work space LUs
• No restriction on the capacity or number of LUs other than the limits imposed by the storage system
[Diagram: controllers CTL1 and CTL2 serving virtual disks for user data LUs.]
In VMware, user data is stored on up to 13 virtual disks (Virtual Disk 0 through Virtual Disk 12), each up to a maximum of 62TB in ESX with VMFS-5, or a maximum of 2TB each with VMFS-3. The RAID group is determined by the customer.
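The ceiling these per-disk limits place on total user-data capacity is easy to compute; a small sketch using the figures from the text:

```python
# Upper bound on HDI-VM user-data capacity: up to 13 virtual disks, each
# capped by the VMFS version's per-disk limit (figures from the text above).

VMFS_LIMIT_TB = {"VMFS-3": 2, "VMFS-5": 62}   # max size per virtual disk

def max_user_data_tb(vmfs: str, disks: int = 13) -> int:
    """Total ceiling = number of virtual disks * per-disk VMFS limit."""
    return disks * VMFS_LIMIT_TB[vmfs]

print(max_user_data_tb("VMFS-5"))   # 13 x 62TB -> 806 TB ceiling
print(max_user_data_tb("VMFS-3"))   # 13 x 2TB  -> 26 TB ceiling
```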
[Rear view diagram: HDD12 and HDD13 slots are not used; PCI-X slots for network option cards.]
HDI single: one PCI slot for a 1GbE or 10GbE card, one PCI slot for a 10GbE card. Port assignments:
# Name Use
1 pm0 Maintenance port (1GbE)
2 mng0 Management port (10GbE)
3 xgbe1 10GbE data port
4 BMC Baseboard Management Controller (1GbE)
HDI cluster port assignments:
# Name Use
1 pm0 Maintenance port (1GbE)
2 hb0 Heartbeat port (1GbE)
3 mng0 Management port (10GbE)
4 pm1 Reset port (10GbE)
5 BMC Baseboard Management Controller (1GbE)
2 © Hitachi Vantara Corporation 2018. All rights reserved.
Additional Ethernet Cards: Possible Combinations
HDI does not support mixing 10GbE (copper) and 10GbE (optical) cards
[Table: possible combinations of option cards, listing the number of 1GbE, 10GbE (copper) and 10GbE (optical) cards and the resulting port names: xgbe4/xgbe5 and xgbe8/xgbe9 for the Ethernet options, fc0002/fc0003 for the Fibre Channel option.]
pm0
• Maintenance port: Used to connect a customer engineer (CE) technician's maintenance PC to HDI
• Has only a physical IP assigned (no virtual IP)
• Used in both HDI single and HDI cluster
• Typically used in the following situations:
  In maintenance mode (booted from the installation ISO)
  When the customer does not allow the CE to connect to the customer's network
  When neither the management port nor the front-end ports can be used
• Cannot be used for other purposes (file service, HCP connection and so on)
hb0
• Heartbeat port: Used for the heartbeat between nodes in a cluster
• Directly connects the two nodes in the cluster to each other
• Only used in cluster
mng0
• Management port: Used for management of the HDI configuration (via the physical IP):
  Hitachi File Services Manager (HFSM) GUI connection or direct integrated GUI access
  Secure shell (SSH) login for command line interface (CLI) execution
• Mandatory for every HDI model and configuration (single, VM and cluster)
• Can also be used for file services, HCP connection or any other purpose (via the virtual IP)
• Differences from front-end ports:
  The only port activated on the HDI node during an OS upgrade
  No redundancy: bonding cannot be configured
  No virtual LAN (VLAN)
mng0
• It is important to connect mng0 to a redundant Ethernet switch: in an HDI cluster, 60 seconds without an mng0 connection triggers a resource group failover
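The timing rule above can be sketched as a simple watchdog. This is an illustration of the 60-second rule only, not HDI's actual implementation; `link_up`, `now` and `sleep` are injectable placeholders:

```python
import time

# Illustrative watchdog: if mng0 stays down for 60 consecutive seconds,
# report that a resource group failover is required (per the note above).
# Not HDI's real code; the callbacks are placeholders for testability.

MNG0_TIMEOUT_S = 60

def watch_mng0(link_up, now=time.monotonic, sleep=time.sleep):
    """Poll mng0 once per second; return once it has been down 60s."""
    last_up = now()
    while True:
        if link_up():
            last_up = now()            # link healthy: restart the window
        elif now() - last_up >= MNG0_TIMEOUT_S:
            return "failover-resource-group"
        sleep(1)
```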
pm1
• Special port for the IP switch-less configuration in an HDI cluster
• Only used in cluster
• Connected to the BMC port on the other node in the cluster
• Its IP address is automatically assigned when the BMC address is assigned
BMC
• BMC port: Used for monitoring the other node in an HDI cluster
• Used for the following purposes:
  Getting the OS status of the other node
  Resetting the other node when heartbeat loss is detected (the reset is issued from mng0 (node0) to BMC (node1), or from mng0 (node1) to BMC (node0); the BMC is on the same IP network as mng0). Mandatory in cluster
  Power-on from HFSM
  Maintenance operation using remote KVM (keyboard, video and mouse)
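The cross-node reset path described above can be sketched as follows. All names and the reset call are hypothetical placeholders, not HDI's real interfaces; a real implementation would use an out-of-band mechanism such as an IPMI power cycle:

```python
from dataclasses import dataclass

# Sketch of the reset path: when heartbeats from the peer node stop
# arriving on hb0, the surviving node issues a reset to the peer's BMC
# from its own mng0 (both are on the same IP network, per the text above).

@dataclass
class Peer:
    name: str
    bmc_ip: str
    alive: bool = True

def send_bmc_reset(peer: Peer) -> str:
    # Placeholder for the real out-of-band reset (e.g. IPMI power cycle).
    peer.alive = False
    return f"reset sent to {peer.name} via BMC {peer.bmc_ip}"

def on_heartbeat_loss(peer: Peer) -> str:
    """Called when hb0 heartbeats from the peer node stop arriving."""
    return send_bmc_reset(peer)

print(on_heartbeat_loss(Peer("node1", "192.168.0.11")))
```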
[Connections diagram: administrator connects over the management LAN; data ports xgbe4 and xgbe5; the BMC connection is optional.]
HDI VM – Connections Diagram
[Diagram: the HDI-VM's eth0 serves as mng0; it connects to the front-end LAN through a router/firewall, alongside clients and ADS/DNS/NTP servers.]
1. In HDI single node and in HDI cluster, 2 rear HDDs are used. (True/False)
2. In HDI single node and in HDI cluster, 6 front HDDs are used. (True/False)