Ibm Bto SNV1 Student Guide Book 1 150 225 PDF
[Figure: rear-panel connectors of the SVC 2145-CG8/CF8 node and the 2145 UPS-1U]
1 Fibre Channel ports
2 Power connector
3 Serial connector
4 Ethernet ports
5 Main-power connector
6 Communication port
7 Load-segment 2 receptacle
© Copyright IBM Corporation 2011, 2014
Notes:
Perform the following steps to connect the SAN Volume Controller to the 2145 UPS-1U:
• At the back of the SAN Volume Controller 2145-CG8 or 2145-CF8 node, plug the power cables
of the combined power and serial cable assembly into the power connector (2).
• Place the other end of the power cables into the load-segment 2 receptacles (7) on the 2145
UPS-1U.
• Plug the signal cable into the serial connector (3) located on the SAN Volume Controller
2145-CG8 or 2145-CF8 node.
• Place the other end of the signal cable into the communication port (6) on the 2145 UPS-1U.
• The two UPS units of a node pair should, if possible, not be connected to the same power
source. The UPS is intended to maintain power on the SVC nodes until control data and
cache can be saved to the node’s local disk. Only the SVC node should be plugged into its
UPS.
- 2145-DH8 nodes do not require a UPS because the battery backup units are integrated in
the front of the unit.
SVC 2145-CG8/CF8 UPS and redundant power
switch requirements
[Figure: SVC 2145-CG8/CF8 nodes with 2145 UPS-1U units and redundant power switches]
1 I/O group 0 (2145-CG8s)
2 SAN Volume Controller node A
3 2145 UPS-1U A
4 SAN Volume Controller node B
5 2145 UPS-1U B
6 I/O group 1 (2145-CF8s)
7 SAN Volume Controller node C
8 2145 UPS-1U C
9 SAN Volume Controller node D
10 2145 UPS-1U D
11 Redundant ac-power switch 1
12 Redundant ac-power switch 2
13 Site PDU X (C13 outlets)
14 Site PDU Y (C13 outlets)
Figure 2-14. SVC 2145-CG8/CF8 UPS and redundant power switch requirements SNV13.0
Notes:
The visual illustrates an SVC 2145-CG8 and CF8 configuration with UPS and redundant power
switches. The UPS (2145 UPS-1U) is an integral component of the SVC solution. It maintains
continuous communications with its attached SVC node. The UPS provides a secondary power
source in the event of power failures, power surges and sags, or line noise. When a power outage
occurs the UPS maintains power to allow configuration and cached data to be saved to the SVC
node’s internal disk. The UPS is not used to enable continued operation of the node when power is
lost.
Each UPS includes power (line) cords that connect the UPS to either a rack power distribution unit
(PDU) or to an external power source. Each 2145 UPS-1U has its own built in 10 amp circuit
breaker. From the back of the UPS, plug the UPS main power cable into the power socket of the
rack(13, 14) or, if available, into a redundant power switch (11, 12).
To connect an SVC node to the UPS, plug one end of the power cable into the SVC node power
socket (2) and the other end into an output socket on the UPS (3). The 2145-CF8 and CG8 models
have two power supplies and both must be plugged into the same UPS.
© Copyright IBM Corp. 2011, 2014 Unit 2. SVC planning and cluster initialization 2-17
Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
Student Notebook
A redundant AC-power switch is an optional feature designed to enable the SVC nodes to be more
resilient to power failure (11, 12). Each redundant power switch connects to two separate power
circuits (13, 14). The power switch logically sits between the rack PDU and the SVC UPS.
Each power switch connects up to two UPS/SVC nodes, preferably one UPS/SVC node per I/O
group. In the event of a failure of either of the input circuits, power continues to be provided to the
UPS by the redundant circuit.
Plug the RS232 serial cable of the power cable assembly into the serial socket of the SVC node
(not shown), and plug the other end of the serial cable into the serial connector on the UPS.
To avoid the possibility of the power and signal (serial) cables being connected to different UPS
units, these cables are wrapped together and supplied as a single field replaceable unit. The signal
cables enable the SVC node to read status and identification information from the UPS.
Each SVC node is connected to its own UPS if using the 2145 UPS-1U model. Each SVC node of
an I/O group must be connected to a different UPS if using the 2145 UPS model. Do not connect
any other device to the UPS.
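The UPS assignment rules above (each node on its own 2145 UPS-1U, and the UPS units of a node pair on different power sources where possible) can be expressed as a simple check. This is an illustrative sketch with invented node and UPS identifiers, not an IBM tool:

```python
# Illustrative sketch: validate 2145 UPS-1U assignment rules for a node pair.
# Node names, UPS identifiers, and power-source labels are hypothetical.

def check_ups_assignment(io_group):
    """io_group: list of dicts with keys 'node', 'ups', 'power_source'."""
    problems = []
    ups_ids = [n["ups"] for n in io_group]
    if len(set(ups_ids)) != len(ups_ids):
        problems.append("two nodes share one 2145 UPS-1U")
    sources = {n["power_source"] for n in io_group}
    if len(sources) < 2:
        problems.append("UPS units of the node pair share a power source")
    return problems

pair = [
    {"node": "node_A", "ups": "ups_1", "power_source": "PDU_X"},
    {"node": "node_B", "ups": "ups_2", "power_source": "PDU_Y"},
]
print(check_ups_assignment(pair))  # an empty list means no problems found
```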
Refer to the SAN Volume Controller Infocenter > Planning > Planning for Hardware for
instructions on the hardware installation of the UPS units, the redundant power supplies, and the
SVC nodes.
Example:
[Figure: example port connections — Admin IP network, dual SAN fabrics (Fabric 1/Switch1 and Fabric 2/Switch2), and SVC Nodes 1–4]
Notes:
The visual illustrates the network port connections for FC and IP. Each SVC node requires
connections to:
• Four Fibre Channel (FC) switch ports. A dual fabric is recommended with the node adapter
ports spread evenly across both fabrics.
• One or two (recommended) Ethernet hub/switch connections for cluster management.
• For the 2145-CG8 and 2145-CF8 models, one UPS.
Notes:
This topic discusses the SVC management IP address requirements.
[Figure: SVC management interfaces for a cluster of 2–8 nodes — GUI (web browser over https), CLI (over SSH with key or password), and CIMOM (SMI-S CIM interface to any resource manager)]
Notes:
The SAN Volume Controller simplifies storage management by providing a single image for multiple
controllers and a consistent user interface for provisioning heterogeneous storage. The SVC
provided cluster management interfaces include:
• An embedded SAN Volume Controller Graphical User Interface (GUI) that supports a web
browser connection for configuration management. Each Storwize family member can run the
same software that is based on a common source codebase as IBM SAN Volume Controller
(SVC).
• A Command Line Interface (CLI) accessed using a Secure Shell connection (SSH) with PuTTY.
• An embedded CIMOM that supports SMI-S, which allows any CIM-compliant resource
manager to communicate with and manage the SVC cluster.
To access the cluster for management, there are two user authentication methods available:
• Local authentication: Local users are those managed within the cluster, that is, without using
a remote authentication service. Local users are created with a password to access the SVC
GUI, and/or assigned an SSH key pair (public/private) to access the SVC CLI.
Notes:
The SVC GUI is reached using a web browser at https://<SVC Cluster IP address>. You can view
the system detail by selecting Monitoring > System. Page content is displayed in both graphical
and tabular format.
If the http protocol is specified, it is automatically redirected to the https protocol.
[Figure: secure communications between an SSH client and the SAN Volume Controller pair — the public key of the generated key pair is installed in the cluster; the private key stays with the client]
Notes:
The CLI commands use the Secure Shell (SSH) connection between the SSH client software on
the host system and the SSH server on the SVC cluster. For Windows environments, the Windows
SSH client program PuTTY can be downloaded.
A configured PuTTY session using a generated Secure Shell (SSH) key pair is needed to use the
CLI. The key pair is associated with a given user. The user and its key association are defined by
the superuser.
The public key is stored in the SVC cluster as part of the user definition process. When the client
(for example, a workstation) tries to connect and use the CLI, the private key on the client is used to
authenticate with its public key stored in the SVC cluster.
Beginning with v6.3, the CLI can be accessed using a password instead of an SSH key. However,
when invoking commands from scripts, using the SSH key interface is recommended because it is
more secure.
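When driving the CLI from scripts over the SSH key interface, the connection can be wrapped as in this sketch. The cluster address, user name, and key file path are hypothetical examples, and the command line is only composed here, not executed against a real cluster:

```python
# Sketch: compose an OpenSSH command line for scripted SVC CLI access
# with key authentication. All names and paths below are hypothetical.

def svc_cli_command(cluster_ip, user, keyfile, command):
    """Return the argument list for running one SVC CLI command
    non-interactively over SSH with key authentication."""
    return ["ssh", "-i", keyfile, f"{user}@{cluster_ip}", command]

cmd = svc_cli_command("10.10.1.100", "admin", "/keys/privatekey", "svcinfo lsnode")
print(" ".join(cmd))
```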
[Figure: PuTTY Key Generator — the generated private and public keys]
Notes:
Select SSH2 RSA, leave the Number of bits in a generated key value at 1024, and click Generate.
Move the cursor over the PuTTY Key generator box until the key pair is generated. This procedure
generates random characters used to create a unique key.
[Figure: saved key files in the \Keys folder — PUBLICKEY.PUB (public key) and PRIVATEKEY.PPK (private key)]
Figure 2-21. Save the generated keys (for reference only) SNV13.0
Notes:
Save the generated public key by clicking Save public key. Save the generated private key by
clicking Save private key.
You are prompted for the name and location of the file in which to save each key. The default
location is C:\Support Utils\PuTTY. If another location is chosen, make a record of it for later
reference.
The public key is stored into the cluster as part of user management.
[Figure: PuTTY configuration showing the private key file selection]
Figure 2-22. SVC CLI session parms and private SSH key SNV13.0
Notes:
To use the CLI, the PuTTY program (on any workstation with PuTTY installed) must be set up to
provide the SSH connection to the SVC cluster.
Open the PuTTY program. The SSH private key (which matches its corresponding public key
already stored in the SVC cluster) is identified in the PuTTY Private key file for authentication
box using PuTTY Connection > SSH > Auth.
Click Session in the navigation tree to tailor basic options for the PuTTY session.
Identify the IP address (or DNS name) of the SVC cluster.
Select SSH under Connection type.
In the Load, save or delete a stored session section, type a name to associate with this session
environment definition in the Saved Sessions field, for example, NAVYadmin.
Click Save to save the PuTTY session settings (including the SSH private key) to be used for
subsequent connections to the SVC.
To start a PuTTY CLI session, select Start > Programs > PuTTY from the desktop. When the
PuTTY configuration window is opened, select the saved session name defined previously
(NAVYadmin in this case) and click Load to recall the saved SVC cluster IP address, selected
protocol (SSH option), and the private key location. Click Open at the bottom of the window to
connect to the SVC cluster.
At the SVC CLI login prompt, enter a defined user name (or admin) and press Enter to complete the
connection to the SVC cluster. The private key identified in this PuTTY session is then
authenticated against the public key contained in the cluster.
Notes:
Logging in with a password is similar. Set up the SVC cluster IP address and SSH protocol in a
PuTTY saved session, but do not provide the SSH key file location.
At the CLI login, a prompt appears to request the password for the specified user.
Figure 2-24. Complemented with logically consistent command line syntax SNV13.0
Notes:
Two major command sets are available:
• The svcinfo list command allows the display of a specific set of information about SVC
objects (nodes, MDisks, VDisks, and so forth) or the SVC environment. The command
argument typically begins with ls.
• The svctask action command allows changes to be made to various components or objects
within the SVC cluster.
• Beginning with SVC v6.2.0, the svcinfo and svctask command prefixes are no longer
required.
Commands related to activities that can be performed to SVC objects are categorized with common
prefixes. For example:
• ls: lshost to list all host objects; lsvdisk vdisk0 to list details for a specific VDisk (volume).
• add: addnode to add a node to a cluster; addhostport to add a WWPN to a host object;
addmdisk to add an MDisk to an MDisk group.
• mk: mkmdiskgrp to make or create a managed disk group; mkvdisk to create a VDisk or
volume.
• ch: chmdisk -name to change the name of an MDisk; chvdisk -name to change the name of a
VDisk.
• rm: rmvdisk to remove or delete a VDisk.
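The prefix convention above can be sketched as a lookup table; the entries simply restate the examples just given, and the helper function is illustrative, not part of the SVC CLI:

```python
# Sketch: SVC CLI command-name prefixes and the action each implies.
PREFIX_ACTIONS = {
    "ls":  "list objects or object details",   # lshost, lsvdisk
    "add": "add a component to an object",     # addnode, addhostport, addmdisk
    "mk":  "make (create) an object",          # mkmdiskgrp, mkvdisk
    "ch":  "change an object's attributes",    # chmdisk -name, chvdisk -name
    "rm":  "remove (delete) an object",        # rmvdisk
}

def action_for(command):
    """Return the implied action for a command name, if the prefix is known."""
    for prefix, action in PREFIX_ACTIONS.items():
        if command.startswith(prefix):
            return action
    return "unknown"

print(action_for("mkvdisk"))  # make (create) an object
```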
The following are reserved words. An object name must not start with any of these reserved
words:
• node
• io_grp
• controller
• mdisk
• mdisk_grp
• host
• vdisk
• flash
• fc_const_grp
• rerel
• re_const_grp
Avoid using the underscore "_" as the first character of the name for an object. The underscore is
reserved for internal SVC command processing and should not be used as a prefix for object
names.
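The naming restrictions above (no reserved-word prefix, no leading underscore) can be checked programmatically. This is an illustrative sketch, not an IBM-supplied validator:

```python
# Sketch: validate a candidate SVC object name against the reserved
# words and the leading-underscore restriction described above.
RESERVED = ("node", "io_grp", "controller", "mdisk", "mdisk_grp",
            "host", "vdisk", "flash", "fc_const_grp", "rerel",
            "re_const_grp")

def valid_object_name(name):
    if name.startswith("_"):
        return False        # underscore prefix is reserved for SVC internals
    if name.startswith(RESERVED):
        return False        # name must not start with a reserved word
    return True

print(valid_object_name("WIN_host1"))   # True: "host" is not a prefix here
print(valid_object_name("vdisk7"))      # False: starts with "vdisk"
```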
The SVC CLI provides command line completion for command entry. Enter enough characters until
the command name is unambiguous, then press the Tab key. The rest of the command name is
then filled in automatically. If the entered characters are ambiguous or multiple commands begin
with the same prefix, a list of possible commands is returned when the Tab key is pressed.
All commands are documented in the SVC Information Center > Command-Line Interface.
Notes:
The command syntax can be viewed by entering:
• svcinfo -? (or -h): Shows the complete list of information commands.
• svctask -? (or -h): Shows the complete list of task or action commands.
• svctask commandname -? (or -h): Shows the syntax of the specific command; also
applicable to the svcinfo command set.
• svcinfo commandname -filtervalue: Shows the available filters to reduce the output of
the specific command.
Beginning with v7, the complete details of a given command can be listed with help
commandname or man commandname.
Notes:
SVC v6.1 introduced the Service Assistant (SA), which is a browser-based GUI designed to assist
with service issues. You can access the interface for a node using its Ethernet port 1 service IP
address using either a web browser or a PuTTY SSH session. Only the superuser ID has access to
the Service Assistant interface. You log on with the superuser password (passw0rd by default).
You can use Service Assistant to perform initialization of the cluster, recovery tasks, and other
service-related issues. If your browser keeps bringing you to the normal GUI rather than the
Service Assistant GUI, add /service to the URL.
With the previous CG8 and CF8 models, almost all the functions previously possible through the
node front panel are available from the Ethernet connection, offering the benefits of an
easier-to-use interface that can be invoked remotely from the cluster.
The 2145-DH8 node can only be initialized using the Technician port. The Technician port and the
Service Assistant IP address are not related.
[Figure: node Ethernet ports E1–E3 — 1 GbE ports used for iSCSI and cluster management]
Notes:
The SVC cluster requires the following IP addresses:
• Cluster management IP address: Address used for all normal configuration and service
access to the cluster. There are two management IP ports on each node. Port 1 is required to
be configured as the port for cluster management.
• Service assistant IP address: One address per node. Note that the cluster will operate without
these node service IP addresses but it is highly recommended that each node is assigned an IP
address for service-related actions.
• The following IP addresses are optional:
For increased redundancy, an optional second Ethernet connection is supported for each SVC
node:
• The second IP port of the node can also be configured and used as an alternate address to
manage the cluster.
• iSCSI addresses: Two per node (only if iSCSI is intended to be used).
• In addition, the 10GbE ports of the 2145-CG8 and CF8 models and the 2145-DH8 can be used
for iSCSI.
To ensure system failover operations, Ethernet port 1 on all nodes must be connected to the same
set of subnets. If used, Ethernet port 2 on all nodes must also be connected to the same set of
subnets. However, the subnets for Ethernet port 1 do not have to be the same as Ethernet port 2.
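The port-1/port-2 subnet rule above can be verified with the standard ipaddress module. The addresses and the /24 prefix length below are hypothetical examples:

```python
# Sketch: check that a given Ethernet port on all nodes sits in the
# same subnet, per the failover requirement described above.
import ipaddress

def same_subnet(addresses, prefix_len=24):
    """True if every address falls in one common IPv4 subnet."""
    networks = {ipaddress.ip_interface(f"{a}/{prefix_len}").network
                for a in addresses}
    return len(networks) == 1

port1 = ["10.10.1.10", "10.10.1.20", "10.10.1.30", "10.10.1.40"]
port2 = ["10.10.2.10", "10.10.2.20", "10.10.2.30", "10.10.2.40"]
print(same_subnet(port1), same_subnet(port2))   # True True
print(same_subnet(port1 + ["10.10.2.10"]))      # False: mixed subnets
```

Note that, as the text says, the port-1 subnet does not have to match the port-2 subnet; each port is checked separately.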
[Figure: iSCSI target and management IP addressing example]
Node             iSCSI IP addresses        Mgmt IP addresses
SVC config node  10.10.1.10 / 10.10.2.10   10.10.1.100 / 10.10.2.100
SVC node         10.10.1.20 / 10.10.2.20
SVC node         10.10.1.30 / 10.10.2.30
SVC node         10.10.1.40 / 10.10.2.40
Gateways: 10.10.1.1 (10.10.1.x subnet) and 10.10.2.1 (10.10.2.x subnet) to the rest of the IP network
Notes:
Ever since SVC v5, support for IP network-attached hosts with the iSCSI protocol has been
available using one or both Ethernet ports on each SVC node. SVC enables IP based hosts to
access SVC managed Fibre Channel SAN-attached disk storage.
The 10GbE ports, available as an option with the 2145-CG8 nodes and the 2145-DH8 node, can
also be used for iSCSI traffic.
[Figure: CG8 nodes with 10GbE ports attached to a Converged Enhanced Ethernet (CEE) network and to SAN Fabric 1 and Fabric 2; hosts attach with Converged Network Adapters (CNAs)]
Notes:
Beginning with v6.4.0, the 2145-CG8 models and 2145-DH8 with 10 GbE ports support attachment
to Converged Enhanced Ethernet (CEE) networks using FCoE. A converged switch, such as the
IBM/Brocade Converged Switch B32 or the Cisco Nexus 5010/5020 supports FCoE, Fibre Channel,
Converged Enhanced Ethernet (CEE), and traditional Ethernet protocol connectivity for servers and
storage.
The FCoE support provided by v6.4.0 includes both target and initiator functions, which expands
the SVC host and storage connectivity to include:
• Fibre Channel hosts access to a volume using either FC or FCoE ports.
• FCoE hosts (hosts with Converged Network Adapters (CNAs)) to access a volume using either
FC or FCoE ports.
• SVC access using FC or FCoE ports to an external storage system FC-accessed LUN.
• SVC access using FC or FCoE ports to an external storage system FCoE-accessed LUN.
• SVC to another SVC using any combination of FC or FCoE for Remote Copy operations. For
FCoE, a Fibre Channel Forwarder (FCF) function and a full Fibre Channel ISL are required.
In addition to FCoE, the same 10 GbE ports might also be concurrently used for iSCSI server
connections.
[Figure: SVC cluster with multiple I/O groups, shared cluster state data, the configuration node, and the boss node]
Notes:
So how is communication and management possible?
When the initial node is used to create a cluster, it automatically becomes the configuration node for
the SVC cluster. The configuration node responds to the cluster IP address and provides the
configuration interface to the cluster. All configuration management and services are performed at
the cluster level. If the configuration node fails, another node is chosen to be the configuration node
automatically, and this node takes over the cluster IP address. Thus, configuration access to the
cluster remains unchanged. A cluster can contain up to four I/O groups or eight SVC nodes.
The cluster state holds all configuration and internal cluster data for the cluster. This cluster state
information is held in non-volatile memory of each node. If the main power supply fails, the UPS
units maintain battery power long enough for the cluster state information to be stored on the
internal disk of each node. The read/write cache information is also held in non-volatile memory. If
power fails to a node, the cached data is written to the internal disk.
A node in the cluster serves as the boss node. The boss node ensures synchronization and
controls the updating of the cluster state. When a request is made in a node that results in a change
being made to the cluster state data, that node notifies the boss node of the change. The boss node
then forwards the change to all nodes (including the requesting node), and all the nodes make the
state-change at the same point in time. This ensures that all nodes in the cluster have the same
cluster state data.
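The boss-node update flow described above can be illustrated with a toy model (this is not SVC code): a requesting node notifies the boss of a change, and the boss forwards it to every node, including the requester, so all nodes apply the same state change:

```python
# Toy model of the boss-node state update described above.
class Node:
    def __init__(self, name):
        self.name = name
        self.state = {}          # each node's copy of the cluster state

class Cluster:
    def __init__(self, names):
        self.nodes = [Node(n) for n in names]
        self.boss = self.nodes[0]          # one node serves as boss

    def request_change(self, requester, key, value):
        # The requesting node notifies the boss of the change;
        # the boss forwards it to all nodes, including the requester.
        self._boss_broadcast(key, value)

    def _boss_broadcast(self, key, value):
        for node in self.nodes:
            node.state[key] = value

cl = Cluster(["node1", "node2", "node3", "node4"])
cl.request_change(cl.nodes[2], "config_version", 1)
print(all(n.state == {"config_version": 1} for n in cl.nodes))  # True
```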
Beginning with SVC v4.3.1, cluster time can be obtained from an NTP (Network Time Protocol)
server for time synchronization.
Notes:
This topic discusses the SAN zoning requirements for an SVC clustered system.
[Figure: dual-fabric SAN — a host system and a storage system each attached to Fabric 1 and Fabric 2 for redundancy]
Notes:
An SVC environment requires SAN zoning configuration, which is implemented at the switch level.
SVC is one component of the SAN, which uses switches, switch fabrics, and switch zones to
connect host systems and storage devices. To meet business requirements for high availability,
SAN design practices recommend building of a dual fabric network using two independent fabrics
or SANs.
Switches from different vendors can co-exist in the same configuration. However, you might want to
review the documentation, since switch vendors might have different configuration methods.
Notes:
You will configure the switches into two distinct types of fabric zones: a host zone and a storage
system zone. A host zone consists of the SVC system and hosts. You need to define a zone for
each host in the fabric. If storage systems are to be attached, define a single storage system zone
that consists of all the storage systems and the SVC. The SAN fabric zones allow the SVC nodes
to see each other and the disk subsystems, and allow the hosts to see the SVC. The
host systems cannot directly see or operate LUNs on the disk subsystems that are assigned to the
SVC system. The SVC nodes within an SVC system must be able to see each other and all of the
storage that is assigned to the SVC system.
SVC 7.3 supports 2 Gb, 4 Gb, or 8 Gb FC fabric, depending on the hardware platform and on the
switch where the SVC is connected. In an environment where you have a fabric with multiple-speed
switches, the preferred practice is to connect the SVC and the disk subsystem to the switch
operating at the highest speed.
All SVC nodes in the SVC clustered system are connected to the same SANs, and they present
volumes to the hosts. These volumes are created from storage pools that are composed of MDisks
presented by the disk subsystems.
[Figure: dual-switch SAN (FC SwitchA and FC SwitchB) with LUN masking in the storage system. Note: LUN sharing requires additional software]
Notes:
A host system is generally equipped with two HBAs, requiring one to be attached to each fabric.
Each storage system also attaches to each fabric with one or more adapter ports. A dual fabric is
also highly recommended when integrating the SVC into the SAN infrastructure.
LUN masking is typically implemented in the storage system, and in an analogous manner in the
SVC, to ensure data access integrity across multiple heterogeneous, or homogeneous host
servers. Zoning is deployed, often complementing LUN masking, to ensure resource access
integrity. Issues related to LUN or volume sharing across host servers are not changed by the SVC
implementation. Additional shared access software, such as clustering software, is still required if
sharing is desired.
Another aspect of zoning is to limit the number of paths among ports across the SAN, thus reducing
the number of instances the same LUN is reported to a host operating system.
[Figure: zone types — SVC nodes zone, SVC and storage zones, host zones, non-SVC zones, and a Metro/Global Mirror zone]
Notes:
In a dual fabric environment, the two fabric zones are identical to one another in concept. Zoning
definitions integrating the SVC cluster typically need to be added alongside existing zoning
definitions. Additional zoning definitions include:
• A zone consisting of all SVC nodes for a given cluster.
• Back-end storage zones that contain all SVC node ports and the back-end storage controller
ports for a given controller type.
• Host zones: A single host should not have more than eight paths to an I/O group.
• A zone for intercluster Metro/Global Mirror operations if the feature is licensed. This zone
contains half of the SVC ports of the SVC clusters in partnerships.
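The zoning additions listed above can be sketched as sets of port WWPNs per fabric. All WWPN-like names below are invented placeholders, and the zone names are illustrative, not a required convention:

```python
# Sketch: per-fabric zone definitions for an SVC cluster.
# All port identifiers below are invented placeholders, not real WWPNs.
svc_ports = {"svc_n1_p1", "svc_n1_p3", "svc_n2_p1", "svc_n2_p3"}
storage_ports = {"ds_ctrl_a", "ds_ctrl_b"}
host_ports = {"host1": {"host1_hba1"}, "host2": {"host2_hba1"}}

zones = {
    "svc_nodes": set(svc_ports),              # all SVC node ports
    "svc_storage": svc_ports | storage_ports, # back-end storage zone
}
for host, ports in host_ports.items():        # one zone per host
    zones[f"host_{host}"] = ports | svc_ports

print(sorted(zones))
```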
[Figure: NODE1–NODE4 each attach four ports to the SAN (for example, ports 11–14 for NODE1), split across Fabric 1 and Fabric 2; each SVC node pair adds eight ports to the SAN fabrics, up to four node pairs]
Figure 2-36. Adding SVC Fibre Channel ports to the SAN SNV13.0
Notes:
The SVC can be implemented with up to four I/O groups or four pairs of SVC nodes forming an SVC
cluster. It is highly recommended to attach the SVC nodes to two independent fabrics (or a dual
fabric). An SVC cluster can be attached to up to four fabrics.
[Figure: switch port connection example — FC Switch1 (Fabric 1) and the matching switch in Fabric 2]
Notes:
The visual illustrates a switch port connection example. The eight ports on the switch are used to
connect to a four-node SVC cluster. Each 2145-8F4, 8G4, 8A4, CF8, and CG8 has one FC adapter
with four ports. The port speed is auto-negotiated to 1, 2, or 4 Gb for models 8F4, 8G4, 8A4; and 2,
4, or 8 Gb for models CF8 and CG8.
Identical switch port numbers are used for the second fabric of the dual fabric SAN configuration.
Alternate the SVC port attachments between the two fabrics.
Use the cable connection chart to plan the connections of the SVC nodes and switches in the rack.
Go to the SVC Information Center website and click Physical Configuration Planning from the
launch page for additional reference to complete the cabling details of the SVC cluster.
[Figure: paths between host HBA ports and SVC node ports through the FC switches of each fabric]
Notes:
An SVC cluster with multiple nodes could potentially introduce more paths than necessary between
the host HBA ports and the SVC FC ports. For a given volume (which is owned by an I/O group),
the number of paths from the SVC nodes to a host must not exceed eight. A given host should have
two HBA ports for availability, and no more than four HBA ports.
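The eight-path limit above can be checked with simple arithmetic. The formula below is an illustrative model with hypothetical counts: paths per volume = fabrics × host HBA ports per fabric × zoned SVC ports per node per fabric × nodes in the owning I/O group:

```python
# Sketch: count host-to-volume paths and check the eight-path limit.
def paths_per_volume(hba_ports_per_fabric, svc_ports_zoned_per_fabric,
                     nodes_in_io_group=2, fabrics=2):
    """Paths = fabrics x host HBA ports per fabric x zoned SVC ports
    per node per fabric x nodes in the owning I/O group."""
    return (fabrics * hba_ports_per_fabric *
            svc_ports_zoned_per_fabric * nodes_in_io_group)

# One HBA port per fabric, zoned to two SVC ports per node per fabric:
p = paths_per_volume(1, 2)
print(p, p <= 8)   # 8 True: at the limit
```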
[Figure: rear-panel FC port numbering — SVC documentation numbers the ports 1 (Port 1) through 4 (Port 4); these numbers appear in SVC books, messages, and GUI diagnostics]
Figure 2-39. SVC 2145-CG8 and CF8 models FC I/O ports SNV13.0
Notes:
Counting from left to right on the rear panel of an SVC 2145-CG8 and CF8 models, the four Fibre
Channel ports of each SVC node are numbered 1–4. These port numbers are used in the SVC
documentation, SVC command output, and SVC service tasks.
Notes:
Each SVC node has a WWNN (worldwide node name). Each of the four ports of a node has its own
SVC generated WWPN (worldwide port name). These world wide port names are persistent across
HBA replacements.
The WWPN of each port is generated from the SVC node’s WWNN. The only variation among the
four ports of each node is the lower-order third byte, which has a value of either 1, 2, 3, or 4. This
value is known as the Q value.
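The derivation just described can be sketched in code: the WWNN and the four WWPNs differ only in the third byte from the low-order end, which carries the Q value. The WWNN below is an invented placeholder, and the exact byte encoding is an assumption taken from the text:

```python
# Sketch of the WWPN derivation described above. The WWNN and WWPNs
# differ only in the third byte from the low-order end (the Q value).
# The example WWNN is an invented placeholder.
def wwpns_from_wwnn(wwnn_hex, q_values=(1, 2, 3, 4)):
    """wwnn_hex: 16-hex-digit WWNN string. Returns one WWPN per port."""
    head, tail = wwnn_hex[:-6], wwnn_hex[-4:]   # cut out the third-lowest byte
    return [f"{head}{q:02x}{tail}" for q in q_values]

for p in wwpns_from_wwnn("5005076801008acd"):
    print(p)
```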
Notes:
The 2145-DH8 supports up to 12 FC I/O ports, depending on how many host interface cards are
installed. The visual illustrates the physical Fibre Channel port numbers with host interface cards
in slots 1, 2, and 5. As with any Fibre Channel SAN participant, each SVC engine or node has a unique worldwide node
name (WWNN), and each Fibre Channel port on the adapter cards has a unique worldwide port
name (WWPN). These ports are used to connect the SVC node to the SAN.
[Figure: WWPN Q values for the 2145-8F4, 8G4, CF8, CG8, and DH8 — the ports are not numbered from left to right; the rear-panel order is 4, 3, 1, 2. Port 1 has Q value 4, Port 2 has Q value 3, Port 3 has Q value 1, and Port 4 has Q value 2; the ports alternate between Fabric1 and Fabric2]
Notes:
For availability, the ports of an SVC node should be spread across the two fabrics in a dual fabric
SAN configuration. For consistency and ease of cable management, consider labeling each HBA
port of the SVC back panel with its physical port number as well as the corresponding generated
WWPNs.
In this example, which uses the 2145-CG8 node, the Q value on all nodes follows a 4, 3, 1, 2
sequence (from left to right). This might be counter-intuitive; the rationale is steeped in history.
For compatibility purposes this WWPN numbering scheme is still used for all SVC node models.
Notes:
The 2145-DH8 nodes can be integrated within existing IBM SAN Volume Controller clustered
systems with only a few additional steps regarding the new worldwide name (WWN) structure.
The replacement procedure can be performed nondisruptively. The nodes can be intermixed in
pairs in the existing SVC systems. Consider first upgrading the SAN Volume Controller to the latest
code level. When installing 2145-DH8 nodes into the existing SVC environment with compressed
volumes, all DH8 nodes must have the second processor, 64 GB memory, and at least one
Compression Accelerator card.
One of the important considerations when upgrading the system to DH8 nodes, or when installing
additional I/O groups based on DH8 nodes, is the WWPN range used. The IBM SVC 2145-DH8
uses the new 80c product ID, giving IBM the opportunity to define a new scheme for generating
WWNs. Public WWNs take the form: 500507680c <slot number> <port number> xxxx. Four bits
are used for the slot number and four for the port number, giving 16 public port names per slot,
with 16 bits for the serial number.
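The DH8 public WWPN layout just described can be sketched as a builder function. The slot, port, and serial values below are invented examples:

```python
# Sketch of the 2145-DH8 public WWPN layout described above:
# fixed prefix 500507680c, one hex digit (4 bits) for the slot,
# one hex digit (4 bits) for the port, and four hex digits
# (16 bits) of serial number. Example values are invented.
def dh8_wwpn(slot, port, serial):
    assert 0 <= slot < 16 and 0 <= port < 16 and 0 <= serial < 2**16
    return f"500507680c{slot:x}{port:x}{serial:04x}"

print(dh8_wwpn(1, 2, 0x8ACD))   # 500507680c128acd
```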
The upgrade procedure is nondisruptive because changes to your SAN environment are not
required. The replacement (new) node uses the same worldwide node name (WWNN) as the node
that you are replacing. An alternative to this procedure is to replace nodes disruptively either by
moving volumes to a new I/O group or by rezoning the SAN. The disruptive procedures, however,
will require additional work on the hosts.
Notes:
The visual references a new WWPN naming scheme for the SVC 2145-DH8, which is identifiable
by the 680c string in the WWPN. For example, if you are upgrading from an existing SVC system
such as the 2145-CG8 model to the DH8, the WWPNs would be referenced as 680140, 30, 10,
20, and so on. The new node assumes the WWNN of the CG8 node you are replacing,
thus requiring no changes to host configuration, SAN zoning, or multipath software.
[Slide: fabric zoning by switch domain ID and port number, with LUN masking of LUNs Lw and Ls.]
Notes:
Zoning by switch domain ID and port number is positional; that is, if the cable is moved to another
switch or another port, the zoning definition must be updated. This is sometimes referred to
as port zoning.
Zoning by WWPN provides granularity at the adapter port level. If the cable is moved to another
port or to a different switch in the fabric, the zoning definition is not affected. However, if the adapter
card is replaced and the WWPN changes (this does not apply to the SVC WWPNs), then the
zoning definition must be updated accordingly.
When zoning by switch domain ID, ensure that all switch domain IDs are unique between both
fabrics and that the switch name incorporates the domain ID. Having unique domain IDs makes
troubleshooting much easier in situations where an error message contains the Fibre
Channel ID of the port with a problem. For example, have all domain IDs in the first fabric in the
10s and all domain IDs in the second fabric in the 20s.
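The uniqueness guideline can be checked mechanically. A minimal sketch, with illustrative fabric names and domain ID lists:

```python
def check_domain_ids(fabrics: dict) -> list:
    """Return switch domain IDs that appear in more than one fabric.

    The guideline above asks for unique domain IDs across both fabrics,
    for example fabric 1 using the 10s and fabric 2 using the 20s.
    """
    seen = {}  # domain ID -> set of fabrics it appears in
    for fabric, ids in fabrics.items():
        for did in ids:
            seen.setdefault(did, set()).add(fabric)
    return sorted(did for did, fabs in seen.items() if len(fabs) > 1)

# A clash such as domain 12 appearing in both fabrics would be flagged:
# check_domain_ids({"fabric1": [11, 12], "fabric2": [12, 22]}) returns [12]
```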
[Slide: dual-fabric example — Fabric 1 (switch domain ID 11) and Fabric 2 (switch domain ID 12),
with NODE1 ports 11–14 and NODE2 ports 21–24 cabled four ports per node; labels show switch
port numbers, switch domain IDs, adapter port cables, and WWPN Q values.]
Notes:
The visual uses the following notation:
• SVC ports: SVC node number with a subscript representing the generated WWPN or Q
value for each port within the SVC node.
• Host ports or storage ports: Entity with a subscript representing the HBA port number.
• ID: Switch domain ID.
• Small boxes inside the switch: Ports on the switch.
• Number on top of each small box: Port number of that port on the switch.
[Slide: NODE1 ports 11–14 and NODE2 ports 21–24 alternating across switch ports 1–4 of
Fabric 1 (domain ID 11) and Fabric 2 (domain ID 12).]
Notes:
When attaching SVC ports to a SAN fabric containing core directors and edge switches, it is
preferable to connect the SVC ports to the core directors and to connect the host ports to the edge
switches. Avoid attaching SVC ports to directors or switches with host-optimizing modules.
The SVC ports behave as SCSI targets to host ports and interact with storage ports as SCSI
initiators. As such, proximity to storage ports is preferred. Connect SVC ports and storage ports to
the core director, and connect host ports to the edge switches or host-optimizing blades.
In this example configuration, a pair of nodes, NODE1 and NODE2, are attached to the dual fabric
as an I/O group. The cabling of the SVC ports to the switch adheres to the following
recommendations and objectives:
• Implement two independent fabrics (dual fabric).
• Split the attachment of the ports of the SVC node across both fabrics.
• Illustrate the cabling to facilitate zone definitions coded using either switch domain ID and port
number, or WWPN values.
• Enable the paths from the host with either four-paths or eight-paths to the SVC I/O group to be
distributed across WWPNs of the SVC node ports.
Note that the ports of each SVC node are spread across the two fabrics and that ports alternate
between the two SVC nodes as they are attached to the switch. An additional switch has been
added to each fabric to reflect multi-switched fabric environments.
[Slide: zoning example — NODE1 ports 11–14 and NODE2 ports 21–24 split across Fabric 1
(domain ID 11) and Fabric 2 (domain ID 12).]
Notes:
In the example, there are two sets of zoning definitions, one for each fabric. Each zone includes all
ports from each SVC node cabled to the fabric.
Even though the SVC node ports overlap with the host and storage zones, it is recommended
to have a dedicated SVC node zone to facilitate node-to-node communication without
dependency on other zones.
[Slide: storage zoning example — NODE1 ports 11–14 and NODE2 ports 21–24 across Fabric 1
(domain ID 11) and Fabric 2 (domain ID 12), with VendorX ports F1–F2 and DSxK ports E1–E4.]
Notes:
All SVC nodes must be able to see the same set of storage ports. If two SVC nodes see different
sets of ports on the same storage system, operation is degraded and an error is logged.
Multiple ports or connections from a given storage system can be defined to provide greater data
bandwidth and more availability. To avoid interaction among storage ports of different storage
system types, multiple back-end storage zones can be defined.
For example, one zone contains all the SVC ports and the VendorX port, and another zone
contains all the SVC ports and the DSxK ports. Storage system vendors might have additional
best practice recommendations, such as not mixing ports from different controllers of the same
storage system in the same zone. SVC supports and follows the guidelines provided by the
storage vendors.
Figure 2-50. Storage system zoned with all SVC ports example SNV13.0
Notes:
Verify SAN zoning from the perspective of the SVC by clicking Settings > Network and then
selecting Fibre Channel in the Network filter list. This Fibre Channel view is designed to display SAN
connectivity data as seen by this SVC cluster, that is, the port to port connectivity between the SVC
ports of this cluster with its attaching host ports, storage system ports, and partner SVC node ports.
The example shows the connectivity data between the storage system BLANCDS3K and the SVC
ports of this cluster. The BLANCDS3K has two ports and the 4-node SVC cluster has 16 ports. Both
parties have their ports evenly split between two SAN fabrics.
For ease of reference, the output has been divided into two boxes. One box per DS3K port as
shown under the Remote WWPN column. Each box contains eight entries because the DS3K port
is zoned to see ALL the SVC ports on its fabric. The WWPN values shown in the Local WWPN
column are the specific SVC node ports of the same fabric. The zoning output conforms to the
guideline that, for a given storage system, zone its ports with all the ports of the SVC cluster on that
fabric.
Notes:
This topic discusses the external storage system and LUN assignments for the SVC 2145.
Notes:
Visit the SAN Volume Controller product support website for the latest list of storage systems and
their corresponding supported software and firmware levels.
Refer to the SVC Information Center > Configuration > Configuring and servicing external
storage systems, for detailed descriptions of each supported storage system.
[Slide: supported storage systems — DS3000, DS4000, DS5000, DS6000, DS8000, ESS,
Storwize V7000, XIV, FlashSystem, NetApp N series, HDS, HPQ, EMC, Sun, VendorX, DSxK,
and so on.]
Notes:
When integrating the SAN Volume Controller into an existing SAN fabric, consider using separate
storage adapter ports for SVC I/O traffic versus non-SVC I/O traffic, if possible or practical. Some
storage systems support many adapter ports such that an isolation of SVC-related and non-SVC
traffic can be implemented.
Refer to the SVC Information Center > Configuration > Configuring and servicing external
storage systems for details regarding storage system setup parameters for each brand of storage
system supported.
Notes:
From the perspective of the disk storage system, the SVC is defined as a SCSI host. This SVC host
is a cluster (each node in the cluster has four WWPNs) so an eight-node SVC has a total of 32
WWPNs. Define all of the cluster’s WWPNs to the storage system.
Disk storage systems tend to have different mechanisms or conventions to define hosts. For
example, a DS3/4/5000 uses the construct of a host group to define the SVC cluster with each node
in the SVC cluster identified as a host with four host ports within the host group. LUNs are then
mapped to the host group.
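The host-group construct described above can be pictured as a simple data structure. In this hypothetical sketch, a host group named SVC holds one host per node of a 4-node cluster, each with four illustrative WWPNs, and LUNs are mapped to the group as a whole:

```python
# Hypothetical model of the DS3/4/5000 host-group construct: one host per
# SVC node, four host ports (WWPNs) per host, and LUNs mapped to the group.
# The WWPN strings are illustrative placeholders, not real port names.
svc_host_group = {
    "name": "SVC",
    "hosts": {
        f"SVC_node{n}": [f"50050768014{p:x}{n:04x}" for p in range(1, 5)]
        for n in range(1, 5)
    },
    "mapped_luns": [0, 1, 2, 3, 4, 5, 6, 7],
}

# Every WWPN in the group sees every LUN mapped to the group.
all_wwpns = [w for ports in svc_host_group["hosts"].values() for w in ports]
```

A 4-node cluster thus contributes 16 host ports; an 8-node cluster would contribute 32, matching the WWPN counts quoted above.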
With a DS8000, a port group can be used to collectively identify all the WWPNs of the SVC cluster
and is referred to as a host attachment. A volume group is a named construct that defines a set of
LUNs. The SVC host attachment can then be associated with a volume group to access its allowed
or assigned LUNs.
All storage systems use variations of these approaches to implement LUN masking. Refer to the
SVC Information Center > Configuration > Configuring and servicing external storage
systems for more specific information about the numerous heterogeneous storage systems
supported by the SVC.
The LUNs identified in the visual as Ly and Lz become SVC MDisks after SVC performs device
discovery on the SAN. These LUNs should be large, similar in size, and be assigned to all of the
SVC ports of the cluster. These LUNs must not be accessible by other host ports or other SVC
clusters.
[Slide: Storage system and SVC coexistence: LUN masking. Without SVC, Host1 and Host2
(two WWPNs each) are masked to LUNs L1–Lc and access volumes V1–V4 directly. With SVC
(four WWPNs per node, SVC Node 1 to SVC Node 4), large LUNs Ly and Lz are assigned to the
SVC ports.]
Figure 2-55. Storage system and SVC coexistence: LUN masking SNV13.0
Notes:
LUNs become MDisks to be grouped into storage pools. Create a storage pool by using MDisks
with similar performance and availability characteristics. For ease of management and availability,
do not span the storage pool across storage systems.
The recommendation is to allocate and assign LUNs with large capacities from the storage systems
to the SVC ports. Once under the control of the SVC, these SCSI LUNs (MDisks) provide extents
from which volumes can be derived.
[Slide: hosts access volumes V1–V5 provisioned from SVC storage pools Pool1 and Pool2,
backed by LUNs from a wide array of supported storage systems. Additional multipath drivers
supported: ATTO multipath, AIXPCM, Citrix Xen, Debian, IBM i, Linux, Novell NetWare,
OpenVMS, ProtecTIER, PV Links (HP native), SGI, Sun MPxIO, Tru64, Veritas DMP/DMPDSM,
VMware, and Windows MPIO.]
Notes:
The Subsystem Device Driver (SDD; SDDDSM for Windows MPIO environments, SDDPCM for
AIX MPIO environments) is a standard function of the SVC and provides multipathing support for
host servers accessing SVC-provisioned volumes.
In addition to SDD, many other multipath drivers are supported. Refer to the SVC product
support website for the latest support levels and platforms.
[Slide: SVC access to storage system: one WWNN and up to 16 WWPNs example. Many storage
systems present one WWNN per storage system with multiple WWPNs, surfacing LUNs L0–Lf.
SVC supports up to 16 WWNNs per storage system and up to 16 WWPNs per WWNN, and
attempts to use as many storage ports as available to access LUNs (MDisks). Best practice:
assign LUNs to ALL SVC ports of the SVC cluster.]
Figure 2-57. SVC access to storage system: One WWNN and up to 16 WWPNs example SNV13.0
Notes:
Many storage systems implement one WWNN to represent the storage system itself and one
unique WWPN for each of the ports of the storage system.
SVC supports a maximum of 16 WWNNs per storage system and up to 16 WWPNs per WWNN.
[Slide: LUNs L0, L2, L4, L6 have Preferred node = Node1; LUNs L1, L3, L5, L7 have
Preferred node = Node2.]
• Each WWNN appears as one controller (system) to the SVC cluster.
• LUNs (MDisks) are accessed based on the preferred node of the LUN.
• Automatic failover of MDisks occurs if there are issues with an individual controller.
Best practice: Assign MDisks in multiples of the storage ports zoned with the SVC cluster
(8 WWPNs: 8 MDisks or 16 MDisks).
Figure 2-58. SVC access to storage system: More than one WWNN SNV13.0
Notes:
An example of more than one WWNN used by a disk storage system is an IBM Storwize V7000,
where each controller of the system has its own WWNN. When used to provide LUNs to an SVC
environment, the IBM Storwize V7000 volumes become MDisks for the SVC.
The visual shows a single V7000 with two nodes (or controllers). The best practice
recommendations apply equally to a clustered IBM Storwize V7000.
[Slide: LUNs L0–Lf presented across grouped WWNNs.]
• Each WWNN appears as one controller (system) to the SVC cluster.
• Each LUN must be mapped to the SVC ports using the same LUN ID.
• Automatic failover of MDisks occurs if there are issues with an individual controller port.
Notes:
Some storage systems generate more than 16 WWNNs. In this case, up to 16 WWNNs of the
storage system can be set up as a group. The SVC treats each group of 16 WWNNs as a storage
system.
Deploy LUN masking so that each LUN is assigned to no more than 16 ports of these storage
systems. Refer to the SVC product support website for the latest information, and refer to the SVC
Information Center > Configuration > Configuring and servicing external storage systems for
details regarding storage system setup parameters.
Environments in which a disk storage system uses multiple WWNNs are limited only by the
maximums of 1024 WWPNs and 1024 WWNNs.
Maximum configuration limits can be found on the web by searching with the keywords IBM SVC
maximum configuration limits.
Figure 2-60. DS3K example: Storage system WWNN and WWPNs SNV13.0
Notes:
The visual shows the WWPNs and WWNN of an IBM DS3400 disk storage system.
The profile of this storage system can be displayed by its GUI, DS3000 Storage Manager. Two
different controllers within this DS3400 are displayed (note the Controllers tab). Each controller
has its own unique WWPN value but they share the same WWNN value.
In other words, the DS3400 storage system is identified by just one WWNN and each controller port
within the storage system has its own WWPN. This is also the case with other models of the
DS3000, DS4000, and DS5000 series of storage systems.
Notes:
For a DS3000, DS4000, or DS5000 disk storage system, LUN masking is implemented using the
host group construct.
Continuing with the example storage system, a 4-node SVC cluster is defined as the SVC host
group with four hosts with each representing an SVC node. Each host (node) is defined with four
ports.
The host ports are shown in detail in the Configured Hosts: box. The host type is an IBM TS SAN
VCE (IBM TotalStorage SAN Volume Controller Engine).
Notes:
Once the SVC cluster has been defined to the storage system (host group SVC in this case),
LUNs can be mapped or assigned.
From the Host-to-Logical Drive Mappings view of the DS3400 Storage Manager for this example,
eight LUNs have been mapped to the host group called SVC with their respective LUN numbers.
These LUNs become the MDisks in the SVC.
Notes:
The visual shows an example of the IBM Storwize V7000 disk storage system that has been set up
to provide LUNs to the same 4-node SVC cluster.
The Storwize V7000 uses the same software as the SVC; hence, Command Line Interface (CLI)
output for an lsnode command is shown for the two nodes of a Storwize V7000. Each node has its
own unique WWNN, and each node’s port WWPNs are derived from the node WWNN with a minor
variation in the low-order third byte to denote the port number.
In other words, the Storwize V7000 storage system comprises two nodes, each with its own
WWNN. Within each node there are four ports, each with its own WWPN.
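The derivation of port WWPNs from a node WWNN can be sketched as follows. Which hex digit varies per port is an assumption chosen purely for illustration (the sample WWNN is also hypothetical); consult the actual CLI output for the real values:

```python
def port_wwpns(node_wwnn: str, ports: int = 4) -> list:
    """Sketch: derive port WWPNs by varying one digit of the node WWNN.

    Assumption: the port number replaces the 11th hex digit (index 10);
    the exact position is illustrative, not the documented byte layout.
    """
    return [node_wwnn[:10] + f"{p:x}" + node_wwnn[11:]
            for p in range(1, ports + 1)]

# port_wwpns("500507680200abcd") yields four WWPNs differing only in
# that one digit: ...0210abcd, ...0220abcd, ...0230abcd, ...0240abcd
```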
[Slide: a 4-node SVC cluster and a 2-node Storwize V7000 attached to the SAN fabric; LUNs
NAVYSVC0–NAVYSVC3 (SCSI IDs 0–3) are assigned to the SVC cluster.]
Notes:
The visual shows a perspective of the Storwize V7000 where the 16 WWPNs of a 4-node SVC
cluster have been defined as a host object with the name NAVY_SVC.
The graphic illustrates the 4-node SVC cluster each with four ports connected to the SAN fabric;
also connected to the SAN fabric is a 2-node Storwize V7000.
Four LUNs with SCSI IDs 0-3 from the Storwize V7000 are to be assigned to the SVC host known
as NAVY_SVC. The SVC in turn will use these LUNs as its MDisks surfaced from the two Storwize
V7000 nodes (WWNNs).
Notes:
The Storwize V7000 GUI Hosts > Host Mappings view illustrates the LUN masking of its four
LUNs assigned to the SVC host NAVY_SVC. Note the SCSI IDs, volume IDs, volume names, as
well as the unique identifiers associated with each LUN.
From this storage system, these are the only LUNs accessible by the host known as NAVY_SVC.
Or more precisely the 16 WWPNs defined as the host NAVY_SVC have been permitted to access
these specific four LUNs.
The Storwize V7000 GUI Hosts > Volumes by Host view shows two of the LUNs (volumes) have a
preferred node of node ID 1 and the other two LUNs (volumes) have a preferred node of node ID 2.
The concept of a preferred node is very similar to that of a preferred controller found in most
midrange disk storage systems. A node in the Storwize V7000 is analogous to a storage controller.
Unlike the DS3400, this controller (node) just happens to have its own WWNN.
Notes:
The lssi command output of the DS8000 Command Line Interface (dscli) shows a single WWNN
value for the storage facility image.
This DS8000 has 16 ports or WWPNs but the output of the dscli lsioport command has been
edited so that only four WWPNs are shown. These are the four ports zoned with the SVC cluster
ports using SAN zoning. Note that each port has an ID - such as I0030.
Of course, using only four ports or WWPNs is just for illustrative purposes. In production
environments, more ports of the DS8000 would likely have been zoned for use by an SVC cluster.
Notes:
All 16 ports or WWPNs of the example 4-node SVC cluster must be defined to the attaching storage
system. With the DS8000, a port group is a construct that enables a host with multiple WWPNs to
be managed as a single entity. Each of the 16 WWPNs of the SVC has been associated with port
group 2. A port group is also known as a host attachment. Each WWPN entry, also known as a host
connection, contains a host type of SVC (whereas with the DS3400 the host type is SAN VCE).
Each host attachment can be associated with a volume group. The volume group construct is a LUN
masking vehicle: volumes added to the volume group become visible to the host connections of
the host attachment. In this example, port group 2 is associated with volume group V2.
The IOport all value means that the SVC ports can reach all DS8000 ports (an implementation of
best practices). SAN fabric zoning provides the reduction in the number of paths that the SVC can
actually use to reach the volumes (LUNs) surfaced from the DS8000.
Notes:
The volumes (LUNs) of volume group V2 are shown with the showvolgrp command. The
-lunmap parameter displays the LUN numbers the DS8000 has assigned to the volumes.
The four volumes assigned to the SVC cluster (port group 2) have IDs 1220, 1221, 1240, and 1241.
These volume IDs are reflected in the LUN number as is shown in the showvolgrp -lunmap v2
output.
Volume group v2 has been given a name of NAVY_SVC to reflect the host name from a DS8000
perspective. Examine the LUN numbers. Volume 1220 has a LUN number of 40124020. Strip out
the two 40s to obtain the volume ID.
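The "strip out the two 40s" rule can be expressed as a tiny helper. This is a sketch based only on the example above (LUN number 40124020 for volume 1220), not a general DS8000 specification:

```python
def ds8k_volume_id(lun_number: str) -> str:
    """Recover a DS8000 volume ID from its 8-digit LUN number.

    Per the example above, the LUN number interleaves 40 markers with
    the volume ID digits: 40124020 -> 1220.
    """
    if len(lun_number) != 8 or lun_number[0:2] != "40" or lun_number[4:6] != "40":
        raise ValueError("expected an 8-digit LUN number of the form 40xx40yy")
    return lun_number[2:4] + lun_number[6:8]

# ds8k_volume_id("40124020") returns "1220"
```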
The LUN number is reported to the SVC during SVC SAN device discovery (as is the case with all
SVC to storage systems interactions). SVC uses the LUN number along with the storage system
identifiers (SCSI inquiry data) to maintain the correlation between the LUN in a given storage
system and the MDisk entry the SVC creates to represent the LUN.
Notes:
The LUNs (volumes) surfaced by the disk storage systems become unmanaged MDisks.
Subsequently an administrator can place these MDisks into storage pools for usage.
The top half of this chart continues with the DS8000 example where the MDisk entries of those
LUNs have been renamed to enable easier identification of the storage system and the LUNs within
the storage system.
The storage pool names in this example reflect the storage system and disk device type making it
easier to identify relative performance and perhaps storage tier in an enterprise.
Names of SVC objects can be changed without impacting SVC processing. If installation naming
standards have been modified, then names of SVC objects can be modified accordingly. All SVC
processing is predicated on object IDs, not object names.
Up to 63 characters can be used in an object name.
Power-on sequence:
1. Fibre Channel switches
2. Disk enclosures
3. Storage systems
4. UPSs and SAN Volume Controllers
5. Host systems
The power-off sequence is the reverse of the power-on sequence.
Notes:
As a reminder for power up and power down sequences in a server room, the power-on sequence
is shown in the visual. The power off sequence is the reverse of the arrow. Depending on the
storage system, powering up disk enclosures and storage system might be a single step.
[Slide: the list of supported storage systems, still growing over time — DS3000, DS4000,
DS5000, DS6000, DS8000, ESS, Storwize family, XIV, FlashSystem, NetApp N series, HDS,
HPQ, EMC, Sun, VendorX, DSxK, and so on.]
Notes:
Please consult the SVC product support website for current information, including:
• Supported host platforms and additional solutions such as BladeCenter models and intercluster
distance extenders
• Supported host bus adapters (HBAs)
• Supported Fibre Channel switches
• Supported heterogeneous storage systems
• Supported software
• SVC code level
• SDD/other multipath driver coexistence, native OS multipath drivers
• OS system levels including clustering support
Notes:
This topic discusses SVC cluster initialization using the technician port and the SVC Service
Assistant, and the setup configuration using the SVC GUI.
Notes:
To initialize an SVC 2145-DH8 system, you must connect a personal computer to the technician port
(Ethernet port 4) on the rear of a node canister and run the initialization tool. This port can be
identified by the letter “T”. The technician port is designed to simplify and ease the initial basic
configuration of the SVC system by the local administrator or by service personnel. It eliminates the
need for the LCD front panel present on all previous models. This process requires the user to
be physically at the hardware site in order to create a cluster using one node. The remaining
candidate nodes can then be added using the SVC GUI.
A few moments after the connection is made, the node uses DHCP to configure the IP and DNS
settings of the personal computer. Therefore, make sure that your computer has DHCP
enabled. If you do not have DHCP, configure a static IPv4 address of 192.168.0.2, a mask of
255.255.255.0, a gateway of 192.168.0.1, and DNS of 192.168.0.1. After the Ethernet port of the
personal computer is connected to the technician port, open a supported browser and browse to
the address http://install.
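If DHCP is not available on the connected computer, the static values above can be applied manually. A minimal sketch for a Linux laptop (the interface name eth0 is an assumption; substitute the interface actually cabled to the technician port):

```shell
# Assign the documented static address, netmask, and gateway (iproute2).
ip addr add 192.168.0.2/24 dev eth0
ip route add default via 192.168.0.1
# Point DNS at the node so that http://install resolves.
echo "nameserver 192.168.0.1" > /etc/resolv.conf
```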
Notes:
The browser is automatically directed to the initialization tool welcome wizard panel. Follow the
instructions that are presented by the initialization tool to configure the system with a management
IP address. Select if you are using an IPv4 or IPv6 management IP address and then type in the
address (you can use DHCP or statically assign one). The subnet mask and gateway are listed
by default but can be changed, if required. Click Finish to set the management IP address for the
system. System initialization begins and might take several minutes to complete.
If you experience a problem during the process due to a change in system states, wait 5 to 10
seconds and then either reopen the SSH connection or reload the service assistant.
Notes:
When system initialization is complete, disconnect the cable between the personal computer and
the technician port. The system can now be reached by opening a supported web browser and
pointing it to http://management_IP_address.
Notes:
When initializing the SVC 2145-CG8 and CF8 models, one way to set the service IP address for a
node is through the node’s front panel interface.
An alternative would have been to first define the cluster IP address, create the cluster, and then set
the service IP address for each node of the cluster. SVC 2145-CG8/CF8 supports both IP version 4
as well as IP version 6 addressing.
Service IP addresses are configured at the factory.
[Slide: after cluster creation, the front panel displays Cluster: Cluster_10.6.5.60, indicating that
the cluster IP is set.]
Figure 2-77. Cluster creation using SVC 2145-CG8 front panel SNV13.0
Notes:
As an alternative to assigning each SVC 2145-CG8 or CF8 node its service IP address first,
another option is to pick a node and use its front panel interface to set the cluster IP address and
create the cluster first.
After the cluster has been initially created using one node, the node front panel displays the
default system name (Cluster_10.xx.xx.xx) with the specified cluster IP address. The status of this
node is no longer candidate; it is now an active member of a cluster. You can use the Service
Assistant Tool GUI with the cluster IP address to complete the cluster setup and add the remaining
nodes to the cluster.
[Slide: browse to http://&lt;node service IP&gt; to access the Service Assistant.]
Notes:
Instead of using the SVC 2145-CG8/CF8 front panel interface to create the SVC cluster, another
option is to create the cluster from the Service Assistant interface. You can access the Service
Assistant GUI using the service IP address of the node and the default password (passw0rd). The
Service Assistant interface identifies the panel ID of the node you are currently logged in to. All
nodes are presented in candidate status because they are unconfigured. The first node is selected
by default.
Notes:
Click Manage System in the navigation tree of the Service Assistant to open the System
Information pane. In the system edit boxes, specify the cluster name (also known as the system
name) and the cluster IP network information.
Next, click the Create System button to create the cluster.
The terms SVC cluster and SVC system are used interchangeably. SVC system is favored during
system setup.