
Student Notebook

SVC 2145-CG8/CF8 node cable requirements

2145-CG8, CF8 node:
1 Fibre Channel ports
2 Power connector
3 Serial connector
4 Ethernet ports

2145 UPS-1U:
5 Main-power connector
6 Communication port
7 Load-segment 2 receptacle
© Copyright IBM Corporation 2011, 2014

Figure 2-13. SVC 2145-CG8/CF8 node cable requirements SNV13.0

Notes:
Perform the following steps to connect the SAN Volume Controller to the 2145 UPS-1U:
• At the back of the SAN Volume Controller 2145-CG8 or 2145-CF8 node, plug the power cables
of the combined power and serial cable assembly into the power connector (2).
• Place the other end of the power cables into the load-segment 2 receptacles (7) on the 2145
UPS-1U.
• Plug the signal cable into the serial connector (3) located on the SAN Volume Controller
2145-CG8 or 2145-CF8 node.
• Place the other end of the signal cable into the communication port (6) on the 2145 UPS-1U.
• The two UPS units of a node pair should not be connected to the same power source, if
possible. The UPS is intended to maintain power on the SVC node until control data and
cache can be saved to the node's local disk. Only the SVC node should be plugged into its
UPS.
- 2145-DH8 nodes do not require a UPS because the battery backup units are integrated in the
front of the unit.

2-16 SVC Implementation Workshop © Copyright IBM Corp. 2011, 2014


Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
V10.0
SVC 2145-CG8/CF8 UPS and redundant power
switch requirements
1 I/O group 0 (2145-CG8s)
2 SAN Volume Controller node A
3 2145 UPS-1U A
4 SAN Volume Controller node B
5 2145 UPS-1U B
6 I/O group 1 (2145-CF8s)
7 SAN Volume Controller node C
8 2145 UPS-1U C
9 SAN Volume Controller node D
10 2145 UPS-1U D
11 Redundant ac-power switch 1
12 Redundant ac-power switch 2
13 Site PDU X (C13 outlets)
14 Site PDU Y (C13 outlets)

Figure 2-14. SVC 2145-CG8/CF8 UPS and redundant power switch requirements SNV13.0

Notes:
The visual illustrates an SVC 2145-CG8 and CF8 configuration with UPS and redundant power
switches. The UPS (2145 UPS-1U) is an integral component of the SVC solution. It maintains
continuous communications with its attached SVC node. The UPS provides a secondary power
source in the event of power failures, power surges and sags, or line noise. When a power outage
occurs the UPS maintains power to allow configuration and cached data to be saved to the SVC
node’s internal disk. The UPS is not used to enable continued operation of the node when power is
lost.
Each UPS includes power (line) cords that connect the UPS to either a rack power distribution unit
(PDU) or to an external power source. Each 2145 UPS-1U has its own built-in 10-amp circuit
breaker. From the back of the UPS, plug the UPS main power cable into the power socket of the
rack (13, 14) or, if available, into a redundant power switch (11, 12).
To connect an SVC node to the UPS, plug one end of the power cable into the SVC node power
socket (2) and the other end into an output socket on the UPS (3). The 2145-CF8 and CG8 models
have two power supplies and both must be plugged into the same UPS.

© Copyright IBM Corp. 2011, 2014 Unit 2. SVC planning and cluster initialization 2-17

A redundant AC-power switch is an optional feature designed to enable the SVC nodes to be more
resilient to power failure (11, 12). Each redundant power switch connects to two separate power
circuits (13, 14). The power switch logically sits between the rack PDU and the SVC UPS.
Each power switch connects up to two UPS/SVC nodes, preferably one UPS/SVC node per I/O
group. In the event of a failure of either of the input circuits, power continues to be provided to the
UPS by the redundant circuit.
Plug the RS232 serial cable of the power cable assembly into the serial socket of the SVC node
(not shown), and plug the other end of the serial cable into the serial connector on the UPS.
To avoid the possibility of the power and signal (serial) cables being connected to different UPS
units, these cables are wrapped together and supplied as a single field replaceable unit. The signal
cables enable the SVC node to read status and identification information from the UPS.
Each SVC node is connected to its own UPS if using the 2145 UPS-1U model. Each SVC node of
an I/O group must be connected to a different UPS if using the 2145 UPS model. Do not connect
any other device to the UPS.
Refer to the SAN Volume Controller Infocenter > Planning > Planning for Hardware for
instructions on the hardware installation of the UPS units, the redundant power supplies, and the
SVC nodes.


Network port connections: FC and IP

Example: an Admin IP network and dual FC fabrics (Fabric 1 via Switch1, Fabric 2 via Switch2) connect to SVC Nodes 1-4.

Figure 2-15. Network port connections: FC and IP SNV13.0

Notes:
The visual illustrates the network port connections for the FC and IP. Each SVC node requires
connections to:
• Four Fibre Channel (FC) switch ports. A dual fabric is recommended with the node adapter
ports spread evenly across both fabrics.
• One or two (recommended) Ethernet hub/switch connections for cluster management.
• For the 2145-CG8 and 2145-CF8 models, one UPS.


SVC planning and implementation topics (3 of 6)


• SVC planning and implementation
• SVC physical planning
  – Hardware requirements
  – Cabling requirements
• SVC logical planning
  – SVC management IP
  – SVC SAN zoning
  – Storage systems and LUN assignments
  – SVC cluster initialization

Figure 2-16. SVC planning and implementation topics (3 of 6) SNV13.0

Notes:
This topic discusses the SVC management IP address requirements.


SVC management interfaces


Open industry-standard interfaces to the SVC cluster (2 - 8 nodes) over Ethernet:
• GUI: web browser over https; embedded GUI with best-practices presets
• CLI: over SSH with key or password (*CLI using password with v6.3.0)
• CIMOM: SMI-S CIM interface to any resource manager

Figure 2-17. SVC management interfaces SNV13.0

Notes:
The SAN Volume Controller simplifies storage management by providing a single image for multiple
controllers and a consistent user interface for provisioning heterogeneous storage. The SVC
provided cluster management interfaces include:
• An embedded SAN Volume Controller Graphical User Interface (GUI) that supports a web
browser connection for configuration management. Each Storwize family member can run the
same software that is based on a common source codebase as IBM SAN Volume Controller
(SVC).
• A Command Line Interface (CLI) accessed using a Secure Shell connection (SSH) with PuTTY.
• An embedded CIMOM that supports SMI-S, which allows any CIM-compliant resource
manager to communicate with and manage the SVC cluster.
To access the cluster for management, there are two user authentication methods available:
• Local authentication: Local users are those managed within the cluster, that is, without using
a remote authentication service. Local users are created with a password to access the SVC
GUI, and/or assigned an SSH key pair (public/private) to access the SVC CLI.


• Remote authentication: Remote users are defined and authenticated by a remote
authentication service. The remote authentication service enables integration of SVC with
LDAP (or MS Active Directory) to support single sign-on.


Easy to use cluster management GUI

Auto redirects http traffic to https


Figure 2-18. Easy to use cluster management GUI SNV13.0

Notes:
The SVC GUI is reached using a web browser at https://<SVC Cluster IP address>. You can view
the system detail by selecting Monitoring > System. Page content is displayed in both graphical
and tabular format.
If the http protocol is specified, it is automatically redirected to the https protocol.


CLI SSH keys: Encrypted communications

1 Generate a public/private key pair for the CLI with PuTTYgen.
2 Install the public key in the cluster.
3 PuTTY uses the private key for secure communications with the SAN Volume Controller pairs.

Figure 2-19. CLI SSH keys: Encrypted communications SNV13.0

Notes:
The CLI commands use the Secure Shell (SSH) connection between the SSH client software on
the host system and the SSH server on the SVC cluster. For Windows environments, the Windows
SSH client program PuTTY can be downloaded.
A configured PuTTY session using a generated Secure Shell (SSH) key pair is needed to use the
CLI. The key pair is associated with a given user. The user and its key association are defined using
the superuser.
The public key is stored in the SVC cluster as part of the user definition process. When the client
(for example, a workstation) tries to connect and use the CLI, the private key on the client is used to
authenticate with its public key stored in the SVC cluster.
Beginning with v6.3, the CLI can be accessed using a password instead of an SSH key. However, when
invoking commands from scripts, using the SSH key interface is recommended as it is more secure.
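For scripted access, key-based authentication also works with the OpenSSH client (a PuTTY .ppk key must first be exported to OpenSSH format with PuTTYgen). A minimal sketch that composes the remote invocation; the address, user name, and key file name below are illustrative assumptions, not values from a real cluster:

```python
import subprocess

def build_svc_cli_argv(cluster_ip, user, keyfile, command):
    """Compose an OpenSSH argv that runs one SVC CLI command on the cluster."""
    return ["ssh", "-i", keyfile, f"{user}@{cluster_ip}", command]

argv = build_svc_cli_argv("10.10.1.100", "NAVYadmin", "openssh_private.key",
                          "svcinfo lssystem")
print(argv)
# On a workstation with access to the cluster, the command could be run with:
# subprocess.run(argv, check=True)
```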


PuTTYgen: Key generation (for reference only)


Figure 2-20. PuTTYgen: Key generation (for reference only) SNV13.0

Notes:
Select SSH2 RSA, leave the Number of bits in a generated key value at 1024, and click Generate.
Move the cursor over the PuTTY Key generator box until the key pair is generated. This procedure
generates random characters used to create a unique key.


Save the generated keys (for reference only)

Public key saved as \Keys.PUBLICKEY.PUB; private key saved as \Keys.PRIVATEKEY.PPK (in folder \Keys).

Figure 2-21. Save the generated keys (for reference only) SNV13.0

Notes:
Save the generated public key by clicking Save public key. Save the generated private key by
clicking Save private key.
You are prompted for the name and location of the file in which to place each key. The default
location is C:\Support Utils\PuTTY. If another location is chosen, make a record of it for later reference.
The public key is stored into the cluster as part of user management.


SVC CLI session parms and private SSH key

Identify the private key file for authentication, then log in with a user name (supported with SVC v6.2.0).

Figure 2-22. SVC CLI session parms and private SSH key SNV13.0

Notes:
To use the CLI, the PuTTY program (on any workstation with PuTTY installed) must be set up to
provide the SSH connection to the SVC cluster.
Open the PuTTY program. The SSH private key (which matches its corresponding public key
already stored in the SVC cluster) is identified in the PuTTY Private key file for authentication
box using PuTTY Connection > SSH > Auth.
Click Session in the navigation tree to tailor basic options for the PuTTY session.
Identify the IP address (or DNS name) of the SVC cluster.
Select SSH under Connection type.
In the Load, save or delete a stored session section, type a name to associate with this session
environment definition in the Saved Sessions field, for example, NAVYadmin.
Click Save to save the PuTTY session settings (including the SSH private key) to be used for
subsequent connections to the SVC.
To start a PuTTY CLI session, select Start > Programs > PuTTY from the desktop. When the
PuTTY configuration window is opened, select the saved session name defined previously


(NAVYadmin in this case) and click Load to recall the saved SVC cluster IP address, selected
protocol (SSH option), and the private key location. Click Open at the bottom of the window to
connect to the SVC cluster.
At the SVC CLI login prompt, enter a defined user name (or admin) and press Enter to complete the
connection to the SVC cluster. The private key identified in this PuTTY session is then
authenticated against the public key contained in the cluster.


SVC CLI session login with password

Log in with user name/password (supported with SVC v6.3.0).

Figure 2-23. SVC CLI session login with password SNV13.0

Notes:
Logging in with a password is similar. Set up the SVC cluster IP address and SSH protocol in a
PuTTY saved session, but do not provide an SSH key file location.
At the CLI login, a prompt requests the password for the specified user.


Complemented with logically consistent command line syntax

Command action_argument parameters
• Listing information examples (svcinfo/svctask prefix not needed with v6.2.0):
  – svcinfo lssystem
  – svcinfo lsmdisk
  – svcinfo lsvdisk
  – svcinfo lshost REDAIX1
• Performing tasks examples:
  – svctask mkmdiskgrp -name DS3K1_SATA -ext 512 -mdisk mdisk2:mdisk4:mdisk5
  – svctask chmdisk -name newname oldname
  – svctask rmvdisk -?
*A volume is also referred to as a VDisk. *A storage pool is also referred to as an MDisk group.

Figure 2-24. Complemented with logically consistent command line syntax SNV13.0

Notes:
Two major command sets are available:
• The svcinfo list command allows the display of a specific set of information about SVC
objects (nodes, MDisks, VDisks, and so forth) or the SVC environment. The command
argument typically begins with ls.
• The svctask action command allows changes to be made to various components or objects
within the SVC cluster.
• Beginning with SVC v6.2.0, the svcinfo and svctask command prefixes are no longer
required.
Commands related to activities that can be performed to SVC objects are categorized with common
prefixes. For example:
• ls: lshost to list all host objects; lsvdisk vdisk0 to list details for a specific VDisk (volume).
• add: addnode to add a node to a cluster; addhostport to add a WWPN to a host object;
addmdisk to add an MDisk to an MDisk group.

• mk: mkmdiskgrp to make or create a managed disk group; mkvdisk to create a VDisk or
volume.
• ch: chmdisk -name to change the name of an MDisk; chvdisk -name to change the name of a
VDisk.
• rm: rmvdisk to remove or delete a VDisk.
The following are reserved words. The name of an object must not start with any of these
reserved words:
• node
• io_grp
• controller
• mdisk
• mdisk_grp
• host
• vdisk
• flash
• fc_const_grp
• rerel
• re_const_grp
Avoid using the underscore "_" as the first character of the name for an object. The underscore is
reserved for internal SVC command processing and should not be used as a prefix for object
names.
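These naming rules can be sketched as a small validation function. The reserved-word list is copied verbatim from above; the function itself is an illustration, not part of the SVC CLI:

```python
RESERVED_PREFIXES = (
    "node", "io_grp", "controller", "mdisk", "mdisk_grp",
    "host", "vdisk", "flash", "fc_const_grp", "rerel", "re_const_grp",
)

def is_valid_object_name(name):
    """Reject names that start with a reserved word or an underscore."""
    if name.startswith("_"):
        return False
    return not any(name.startswith(word) for word in RESERVED_PREFIXES)

print(is_valid_object_name("DS3K1_SATA"))   # True: acceptable pool name
print(is_valid_object_name("vdisk99"))      # False: starts with a reserved word
print(is_valid_object_name("_pool1"))       # False: starts with an underscore
```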
The SVC CLI provides command line completion for command entry. Enter enough characters until
the command name is unambiguous, then press the Tab key. The rest of the command name is
then filled in automatically. If the entered characters are ambiguous or multiple commands begin
with the same prefix, a list of possible commands is returned when the Tab key is pressed.
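The completion behavior described above can be sketched as follows; the command list here is a small sample, whereas the real CLI completes against its full command set:

```python
def complete(prefix, commands):
    """Return the unique completion for prefix, or the sorted candidates if ambiguous."""
    matches = sorted(c for c in commands if c.startswith(prefix))
    if len(matches) == 1:
        return matches[0]      # unambiguous: the rest is filled in
    return matches             # ambiguous: list the possibilities

CMDS = ["lssystem", "lsmdisk", "lsvdisk", "lshost", "mkmdiskgrp", "mkvdisk"]
print(complete("lssy", CMDS))  # 'lssystem'
print(complete("mk", CMDS))    # ['mkmdiskgrp', 'mkvdisk']
```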
All commands are documented in the SVC Information Center > Command-Line Interface.


CLI help interface: SVC v7 enhancements


IBM_2145:NAVY_SVC:NAVYadmin> lsnode -h    ("-h" same as "-?")
lsnode
Syntax
>>- lsnode -- | lsnodecanister -- ------------------------------>
>--+-----------------------------------+-- --+----------+-- ---->
'- -filtervalue -- attribute=value -' '- -nohdr -'
>--+-----------------------+-- -- --+-----------------+--------->
'- -delim -- delimiter -' '- -filtervalue? -'
>--+---------------+-------------------------------------------><
+- object_id ---+
'- object_name -'
For more details type 'help lsnode'.

IBM_2145:NAVY_SVC:NAVYadmin> man lsnode    ("man" same as "help")


lsnode (SAN Volume Controller) / lsnodecanister (Storwize V7000)
Use the lsnode/ lsnodecanister command to return a concise list or a
detailed view of nodes or node canisters that are part of the clustered
system (system).
The list report style can be used to obtain two styles of report:
* A list containing concise information about all the nodes or node
canister on a system. Each entry in the list corresponds to a single
node or node canister.
* The detailed information about a single node or node canister.
Syntax
: :
>>- lsnode -- | lsnodecanister -- ------------------------------>

Figure 2-25. CLI help interface: SVC v7 enhancements SNV13.0

Notes:
The command syntax can be viewed by entering:
• svcinfo -? (or -h): Shows the complete list of information commands.
• svctask -? (or -h): Shows the complete list of task or action commands.
• svctask commandname -? (or -h): Shows the syntax of the specific command; also
applicable to the svcinfo command set.
• svcinfo commandname -filtervalue: Shows the available filters to reduce the output of
the specific command.
Beginning with v7, the complete details of a given command can be listed with help
commandname or man commandname.

2-32 SVC Implementation Workshop © Copyright IBM Corp. 2011, 2014


Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
V10.0
Student Notebook

Uempty

Service Assistant IP interface


• Service Assistant is a browser-based GUI
• Access using the node's Service IP address (http://<node SA IP>/service) or a PuTTY SSH session
  – Requires the default superuser password (passw0rd)
• Perform initialization on older 2145 models
  – Technician port required to initialize the cluster for the 2145-DH8 model
• Troubleshoot service-related issues

Figure 2-26. Service Assistant IP interface SNV13.0

Notes:
SVC v6.1 introduced the Service Assistant (SA), which is a browser-based GUI designed to assist
with service issues. You can access the interface for a node using its Ethernet port 1 service IP
address using either a web browser or a PuTTY SSH session. Only the superuser ID has access to
the Service Assistant interface. You log on with the superuser password (default: passw0rd).
You can use Service Assistant to perform cluster initialization, recovery tasks, and other
service-related tasks. If your browser keeps bringing you to the normal GUI rather than the
Service Assistant GUI, add /service to the URL.
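As a trivial sketch, the Service Assistant URL is simply the node's service IP address with /service appended (the address below is made up; recall that http traffic is redirected to https):

```python
def service_assistant_url(service_ip):
    """Build the Service Assistant URL from a node's service IP address."""
    return f"https://{service_ip}/service"

print(service_assistant_url("10.10.1.20"))  # https://10.10.1.20/service
```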
With the previous CG8 and CF8 models, almost all the functions previously possible through the
node front panel are available from the Ethernet connection, offering the benefits of an
easier-to-use interface that can be invoked remotely from the cluster.
The 2145-DH8 node can only be initialized using the Technician port. The Technician port and the
Service Assistant IP address are not related.


SVC management IP addresses


• At least three IPv4 or IPv6 addresses are required for the SVC 2145
– Ethernet port 1: One cluster management IP address
• Owned by configuration node
– Service IP address for each node canister
• Highly recommended
– 1 GbE iSCSI IP (optional)
– Ethernet port 2:
• Alternate cluster management IP (optional)
• 1 GbE iSCSI IP (optional)
• Supports both IPv4 and IPv6 address formats

Figure: node rear Ethernet ports (E1, E2, E3); ports 1-4 provide 1 GbE iSCSI and cluster management.

Figure 2-27. SVC management IP addresses SNV13.0

Notes:
The SVC cluster requires the following IP addresses:
• Cluster management IP address: Address used for all normal configuration and service
access to the cluster. There are two management IP ports on each node. Port 1 is required to
be configured as the port for cluster management.
• Service assistant IP address: One address per node. Note that the cluster will operate without
these node service IP addresses but it is highly recommended that each node is assigned an IP
address for service-related actions.
• The following IP addresses are optional:
For increased redundancy, an optional second Ethernet connection is supported for each SVC
node:
• The second IP port of the node can also be configured and used as an alternate address to
manage the cluster.
• iSCSI addresses: Two per node (only if iSCSI is intended to be used).

• In addition, the 10GbE ports of the 2145-CG8 and CF8 models and the 2145-DH8 can be used
for iSCSI.
To ensure system failover operations, Ethernet port 1 on all nodes must be connected to the same
set of subnets. If used, Ethernet port 2 on all nodes must also be connected to the same set of
subnets. However, the subnets for Ethernet port 1 do not have to be the same as for Ethernet port 2.
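These planning rules can be sketched in a few lines. This is an illustration only: the /24 prefix and the sample addresses are assumptions, and the check covers the single-subnet case of the "same set of subnets" rule:

```python
import ipaddress

def minimum_addresses(nodes):
    """One cluster management IP plus one (recommended) service IP per node."""
    return 1 + nodes

def same_subnet(port_ips, prefix=24):
    """True if a given Ethernet port's IPs across all nodes fall in one subnet."""
    nets = {ipaddress.ip_interface(f"{ip}/{prefix}").network for ip in port_ips}
    return len(nets) == 1

print(minimum_addresses(2))                        # 3, as stated above for two nodes
print(same_subnet(["10.10.1.10", "10.10.1.20"]))   # True
print(same_subnet(["10.10.1.10", "10.10.2.20"]))   # False
```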


Connecting iSCSI hosts to SVC FC network


iSCSI initiators: an iSCSI host and an Admin system on the IP network; the SVC nodes present iSCSI targets.

Node             iSCSI IP addresses         Mgmt IP addresses
SVC Config Node  10.10.1.10 / 10.10.2.10    10.10.1.100 / 10.10.2.100
SVC Node         10.10.1.20 / 10.10.2.20
SVC Node         10.10.1.30 / 10.10.2.30
SVC Node         10.10.1.40 / 10.10.2.40

Gateways 10.10.1.1 (10.10.1.x) and 10.10.2.1 (10.10.2.x) lead to the rest of the IP network (Email, iSNS, host gateway). The SVC nodes also attach to the Fibre Channel network; 10 GbE for iSCSI is available with the 2145-CG8.

Figure 2-28. Connecting iSCSI hosts to SVC FC network SNV13.0

Notes:
Ever since SVC v5, support for IP network-attached hosts with the iSCSI protocol has been
available using one or both Ethernet ports on each SVC node. SVC enables IP based hosts to
access SVC managed Fibre Channel SAN-attached disk storage.
The 10GbE ports, available as an option with the 2145-CG8 nodes and the 2145-DH8 node, can
also be used for iSCSI traffic.


Fibre Channel over Ethernet support


Two 10GbE ports per CG8 node provide both FCoE target and initiator functions.
FCoE interface can be used for:
• FC host access to volume (using FC or FCoE)
• FCoE host access to a volume (using FC or FCoE)
• SVC access to external FC storage LUN (using FC or FCoE)
• SVC access to external FCoE storage LUN (using FC or FCoE)
• Replication between SVC/SWV7K for Remote Copy

Figure: hosts with CNAs attach through a Converged Enhanced Ethernet (CEE) network to the 10GbE CEE ports of a CG8 node; the node's FC ports attach to SAN Fabric 1 and SAN Fabric 2.

Figure 2-29. Fibre Channel over Ethernet support SNV13.0

Notes:
Beginning with v6.4.0, the 2145-CG8 models and 2145-DH8 with 10 GbE ports support attachment
to Converged Enhanced Ethernet (CEE) networks using FCoE. A converged switch, such as the
IBM/Brocade Converged Switch B32 or the Cisco Nexus 5010/5020 supports FCoE, Fibre Channel,
Converged Enhanced Ethernet (CEE), and traditional Ethernet protocol connectivity for servers and
storage.
The FCoE support provided by v6.4.0 includes both target and initiator functions, which expands
the SVC host and storage connectivity to include:
• Fibre Channel hosts access to a volume using either FC or FCoE ports.
• FCoE hosts (hosts with Converged Network Adapters (CNAs)) to access a volume using either
FC or FCoE ports.
• SVC access using FC or FCoE ports to an external storage system FC accessed LUN.
• SVC access using FC or FCoE ports to an external storage system FCoE-accessed LUN.
• SVC to another SVC using any combination of FC or FCoE for Remote Copy operations. For
FCoE, a Fibre Channel Forwarder (FCF) function and a full Fibre Channel ISL are required.
In addition to FCoE, the same 10 GbE ports might also be concurrently used for iSCSI server
connections.


SVC cluster communication and management

Each node holds a copy of the cluster state data (SVC node1 through node4, across I/O Groups 0-3).

Configuration node:
• Owns the cluster IP address (up to two addresses)
• Provides the configuration interface to the cluster

Boss node:
• Controls cluster state updates
• Propagates cluster state data to all nodes

Figure 2-30. SVC cluster communication and management SNV13.0

Notes:
So how is communication and management possible?
When the initial node is used to create a cluster, it automatically becomes the configuration node for
the SVC cluster. The configuration node responds to the cluster IP address and provides the
configuration interface to the cluster. All configuration management and services are performed at
the cluster level. If the configuration node fails, another node is chosen to be the configuration node
automatically, and this node takes over the cluster IP address. Thus, configuration access to the
cluster remains unchanged. A cluster can contain up to four I/O groups or eight SVC nodes.
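The configuration-node failover described above can be sketched as follows; the node names are illustrative, and which survivor the real cluster chooses is internal to SVC:

```python
def choose_config_node(nodes, failed):
    """When the configuration node fails, another node takes over the cluster
    IP address, so configuration access to the cluster is unchanged."""
    survivors = [n for n in nodes if n != failed]
    if not survivors:
        raise RuntimeError("no nodes remain in the cluster")
    return survivors[0]   # which survivor is chosen is arbitrary in this sketch

print(choose_config_node(["node1", "node2", "node3", "node4"], failed="node1"))  # node2
```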
The cluster state holds all configuration and internal cluster data for the cluster. This cluster state
information is held in non-volatile memory of each node. If the main power supply fails, the UPS
units maintain battery power long enough for the cluster state information to be stored on the
internal disk of each node. The read/write cache information is also held in non-volatile memory. If
power fails to a node, the cached data is written to the internal disk.
A node in the cluster serves as the boss node. The boss node ensures synchronization and
controls the updating of the cluster state. When a request is made in a node that results in a change
being made to the cluster state data, that node notifies the boss node of the change. The boss node
then forwards the change to all nodes (including the requesting node), and all the nodes make the

state-change at the same point in time. This ensures that all nodes in the cluster have the same
cluster state data.
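The boss-node update protocol described above can be sketched as follows; real cluster state is of course far richer than a Python dict:

```python
def apply_state_change(copies, change):
    """The boss node forwards the change to all nodes (including the requester),
    so every copy of the cluster state stays identical."""
    for state in copies:
        state.update(change)
    return copies

copies = [{"vdisk_count": 4} for _ in range(4)]   # one cluster-state copy per node
apply_state_change(copies, {"vdisk_count": 5})
print(all(state == {"vdisk_count": 5} for state in copies))  # True
```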
Beginning with SVC v4.3.1, cluster time can be obtained from an NTP (Network Time Protocol)
server for time synchronization.


SVC planning and implementation topics (4 of 6)


• SVC planning and implementation
• SVC physical planning
  – Hardware requirements
  – Cabling requirements
• SVC logical planning
  – SVC management IP
  – SVC SAN zoning
  – Storage systems and LUN assignments
  – SVC cluster initialization

Figure 2-31. SVC planning and implementation topics (4 of 6) SNV13.0

Notes:
This topic discusses the SAN zoning requirements for an SVC clustered system.


SVC SAN zoning

Figure: a host system and a storage system attach to the SVC through two redundant fabrics (Fabric 1 and Fabric 2).

Figure 2-32. SVC SAN zoning SNV13.0

Notes:
An SVC environment requires SAN zoning configuration, which is implemented at the switch level.
SVC is one component of the SAN, which uses switches, switch fabrics, and switch zones to
connect host systems and storage devices. To meet business requirements for high availability,
SAN design practices recommend building of a dual fabric network using two independent fabrics
or SANs.
Switches from different vendors can co-exist in the same configuration. However, you might want to
review the documentation, since switch vendors might have different configuration methods.


Separate SAN zoning requirements

Figure: host zone 1 and host zone 2 connect hosts to the SAN Volume Controller; a storage system zone connects the SVC to the storage system.
• Hosts see only the volumes.
• A single host should not have more than eight paths to an I/O group.
• SVC 7.3 supports 2 Gb, 4 Gb, or 8 Gb FC fabric.
• Connect the SVC and the disk subsystem to the switch operating at the highest speed.

Figure 2-33. Separate SAN zoning requirements SNV13.0

Notes:
You configure the switches into two distinct types of fabric zones: a host zone and a storage
system zone. A host zone consists of the SVC system and hosts; you need to define a zone for
each host in the fabric. If storage systems are to be attached, define a single storage system zone
that consists of all the storage systems and the SVC. The SAN fabric zones allow the SVC
nodes to see each other and the disk subsystems, and the hosts to see the SVC. The
host systems cannot directly see or operate LUNs on the disk subsystems that are assigned to the
SVC system. The SVC nodes within an SVC system must be able to see each other and all of the
storage that is assigned to the SVC system.
SVC 7.3 supports 2 Gb, 4 Gb, or 8 Gb FC fabric, depending on the hardware platform and on the
switch where the SVC is connected. In an environment where you have a fabric with multiple-speed
switches, the preferred practice is to connect the SVC and the disk subsystem to the switch
operating at the highest speed.
All SVC nodes in the SVC clustered system are connected to the same SANs, and they present
volumes to the hosts. These volumes are created from storage pools that are composed of MDisks
presented by the disk subsystems.


FC Zoning and multipathing LUN access control


[Diagram: hosts WinA (SDDDSM, ports w1-w2) and SunA (MPIO, ports s1-s2) zoned through FC SwitchA (Fabric 1) and FC SwitchB (Fabric 2); LUN masking maps LUNs Lw and Ls. How many paths? How many LUNs? Note: LUN sharing requires additional software.]

Figure 2-34. FC Zoning and multipathing LUN access control SNV13.0

Notes:
A host system is generally equipped with two HBAs, requiring one to be attached to each fabric.
Each storage system also attaches to each fabric with one or more adapter ports. A dual fabric is
also highly recommended when integrating the SVC into the SAN infrastructure.
LUN masking is typically implemented in the storage system, and in an analogous manner in the
SVC, to ensure data access integrity across multiple heterogeneous, or homogeneous host
servers. Zoning is deployed, often complementing LUN masking, to ensure resource access
integrity. Issues related to LUN or volume sharing across host servers are not changed by the SVC
implementation. Additional shared access software, such as clustering software, is still required if
sharing is desired.
Another aspect of zoning is to limit the number of paths among ports across the SAN, thus reducing
the number of instances the same LUN is reported to a host operating system.


Zoning definitions are identical


[Diagram: each fabric has a zone configuration (zone set) named SVC_Cluster. In both Fabric 1 and Fabric 2 it contains the SVC nodes zone, the SVC and storage zone, host zones, the Metro/Global Mirror zone, and any non-SVC zones.]

Figure 2-35. Zoning definitions are identical SNV13.0

Notes:
In a dual fabric environment, the two fabric zones are identical to one another in concept. Zoning
definitions integrating the SVC cluster typically need to be added alongside existing zoning
definitions. Additional zoning definitions include:
• A zone consisting of all SVC nodes for a given cluster.
• Back-end storage zones that contain all SVC node ports and the back-end storage controller
ports for a given controller type.
• Host zones: A single host should not have more than eight paths to an I/O group.
• A zone for intercluster Metro/Global Mirror operations if the feature is licensed. This zone
contains half of the SVC ports of the SVC clusters in partnerships.
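As a rough illustration, the per-fabric zone membership described above can be modeled as plain data. This is a minimal sketch, not SVC or switch syntax; all zone names and (domain ID, port) members are hypothetical.

```python
# Model each fabric's zones as named sets of (switch_domain_id, port) members.
# All names and port assignments below are hypothetical.

def make_fabric_zones(svc_ports, storage_ports, host_ports, mirror_ports):
    """Build the zone definitions for one fabric of a dual-fabric SAN."""
    zones = {
        # All SVC node ports on this fabric: node-to-node communication.
        "svc_nodes": set(svc_ports),
        # Back-end storage zone: all SVC ports plus one controller's ports.
        "svc_storage": set(svc_ports) | set(storage_ports),
        # Intercluster Metro/Global Mirror zone (half of the SVC ports).
        "mirror": set(mirror_ports),
    }
    # One zone per host port: the host port plus a subset of SVC ports,
    # so a host never exceeds eight paths to an I/O group.
    for i, host_port in enumerate(host_ports, start=1):
        zones[f"host{i}"] = {host_port} | set(svc_ports[:2])
    return zones

zones = make_fabric_zones(
    svc_ports=[(11, 1), (11, 2), (11, 3), (11, 4)],
    storage_ports=[(11, 5), (11, 6)],
    host_ports=[(21, 0), (21, 1)],
    mirror_ports=[(11, 1), (11, 2)],
)
print(sorted(zones))   # ['host1', 'host2', 'mirror', 'svc_nodes', 'svc_storage']
```

The same function is called once per fabric, which is what keeps the two fabrics' zone sets "identical in concept" as the next figure describes.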


Adding SVC Fibre Channel ports to the SAN

[Diagram: NODE1 (ports 11-14), NODE2 (21-24), NODE3 (31-34), and NODE4 (41-44) attached across Fabric 1 and Fabric 2.]

• Four ports per SVC node; each additional SVC node pair adds four ports per fabric.
• Up to 4 SVC node pairs; each pair adds 8 ports to the SAN fabrics.

Figure 2-36. Adding SVC Fibre Channel ports to the SAN SNV13.0

Notes:
The SVC can be implemented with up to four I/O groups or four pairs of SVC nodes forming an SVC
cluster. It is highly recommended to attach the SVC nodes to two independent fabrics (or a dual
fabric). An SVC cluster can be attached to up to four fabrics.


SVC node pairs: FC cable connection example


[Diagram: switch ports 1-8 on the Fabric 1 and Fabric 2 switches cabled to Node1-Node4 (2145-8G4, 8A4, CF8, CG8).]

• One FC HBA per node, four ports per HBA.
• Suggested port allocation based on the 4-port HBA in each node.

Figure 2-37. SVC node pairs: FC cable connection example SNV13.0

Notes:
The visual illustrates a switch port connection example. The eight ports on the switch are used to
connect to a four-node SVC cluster. Each 2145-8F4, 8G4, 8A4, CF8, and CG8 has one FC adapter
with four ports. The port speed is auto-negotiated to 1, 2, or 4 Gb for models 8F4, 8G4, 8A4; and 2,
4, or 8 Gb for models CF8 and CG8.
Identical switch port numbers are used for the second fabric of the dual fabric SAN configuration.
Alternate the SVC port attachments between the two fabrics.
Use the cable connection chart to plan the connections of the SVC nodes and switches in the rack.
Go to the SVC Information Center website and click Physical Configuration Planning from the
launch page for additional reference to complete the cabling details of the SVC cluster.


Maximum paths supported: No more than eight


[Diagram: hosts WinA (SDDDSM, ports w1-w2) and AIXA (SDDPCM, ports a1-a4) attached through FC switches on Fabric 1 and Fabric 2 to NODE1 (ports 11-14) and NODE2 (ports 21-24), which present volume V1. How many paths?]

• Use zoning to manage the number of paths.

Figure 2-38. Maximum paths supported: No more than eight SNV13.0

Notes:
An SVC cluster with multiple nodes could potentially introduce more paths than necessary between
the host HBA ports and the SVC FC ports. For a given volume (which is owned by an I/O group),
the number of paths from the SVC nodes to a host must not exceed eight. A given host should have
two HBA ports for availability, and no more than four HBA ports.
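The eight-path ceiling comes down to simple multiplication: for a volume owned by an I/O group, each host HBA port contributes one path per SVC node port it is zoned with, on each of the two nodes. A small sketch of that arithmetic (the zoning counts are hypothetical):

```python
# Paths per volume = fabrics x host ports per fabric x SVC ports per node
# zoned with each host port x nodes in the I/O group.

def paths_per_volume(host_ports_per_fabric, svc_ports_per_node_zoned,
                     nodes_per_io_group=2, fabrics=2):
    return (fabrics * host_ports_per_fabric
            * svc_ports_per_node_zoned * nodes_per_io_group)

# One host HBA port per fabric, zoned with both node ports on its fabric:
print(paths_per_volume(1, 2))   # 8 -- at the supported maximum
# Zoning each host port with only one SVC port per node halves the count:
print(paths_per_volume(1, 1))   # 4
```

The second case shows why zoning, not cabling, is the usual tool for staying under the limit when hosts have four HBA ports.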


SVC 2145-CG8 and CF8 models FC I/O ports


[Diagram: back panel of an SVC node (2145-CF8, CG8) with one FC adapter and four WWPNs. Ports 1-4 are numbered left to right, as referenced in the SVC documentation, books, messages, and GUI diagnostics.]

Figure 2-39. SVC 2145-CG8 and CF8 models FC I/O ports SNV13.0

Notes:
Counting from left to right on the rear panel of the SVC 2145-CG8 and CF8 models, the four Fibre
Channel ports of each SVC node are numbered 1-4. These port numbers are used in the SVC
documentation, SVC command output, and SVC service tasks.


SVC node and port details example


IBM_2145:BLANC_SVC:admin> lsnode 2
id 2
name NODE2
UPS_serial_number 100035D052
WWNN 500507680100EE17
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id 1
partner_node_name NODE1
config_node no
UPS_unique_id 20400000C5500142
port_id 500507680140EE17
port_status active
port_speed 8 Gb
port_id 500507680130EE17
port_status active
port_speed 8 Gb
port_id 500507680110EE17
port_status active
port_speed 8 Gb
port_id 500507680120EE17
port_status active
port_speed 8 Gb
hardware CG8
iscsi_name iqn.1986-03.com.ibm:2145.blancsvc.node2
iscsi_alias
failover_active no
failover_name NODE1
failover_iscsi_name iqn.1986-03.com.ibm:2145.blancsvc.node1
failover_iscsi_alias
panel_name 169683
enclosure_id
canister_id
enclosure_serial_number
service_IP_address 10.62.161.112
service_gateway 10.62.161.254
service_subnet_mask 255.255.255.0
service_IP_address_6
service_gateway_6
service_prefix_6
service_IP_mode static
service_IP_mode_6

(Callouts from the slide: the SAN Volume Controller product ID 076801 appears in the WWNN and all WWPNs; the Q value in the four port WWPNs follows the 4, 3, 1, 2 sequence for physical ports 1-4.)

Figure 2-40. SVC node and port details example SNV13.0

Notes:
Each SVC node has a WWNN (worldwide node name). Each of the four ports of a node has its own
SVC-generated WWPN (worldwide port name). These worldwide port names are persistent across
HBA replacements.
The WWPN of each port is generated from the SVC node's WWNN. The only variation among the
four ports of each node is in the lower-order third byte, which has a value of either 1, 2, 3, or 4. This
value is known as the Q value.
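The derivation can be shown with the WWNN from the lsnode output above. This is an illustrative sketch of the pattern (the string slicing is my own, not an SVC tool):

```python
# Derive the four port WWPNs from a CF8/CG8 node WWNN. The only difference
# is the Q digit (1-4) written in place of the first '0' of the WWNN's
# '00' pair at positions 10-11.

def port_wwpns(wwnn):
    """Return {Q value: WWPN} for the four FC ports of one node."""
    prefix, suffix = wwnn[:10], wwnn[12:]       # '5005076801', 'EE17'
    return {q: f"{prefix}{q}0{suffix}" for q in (1, 2, 3, 4)}

wwpns = port_wwpns("500507680100EE17")
print(wwpns[4])   # 500507680140EE17 -- matches a port_id in the lsnode output
print(wwpns[3])   # 500507680130EE17
```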


SVC 2145-DH8 FC I/O ports


• Maximum of 12 Fibre Channel I/O ports
– Slot 1: 4-port FC adapter shipped standard from the factory
– Slots 2 and 5 support optional FC adapter cards
• Or an optional 4-port 10 Gb Ethernet adapter card per node

• 8 Gb Fibre Channel fault LEDs

Figure 2-41. SVC 2145-DH8 FC I/O ports SNV13.0

Notes:
The 2145-DH8 supports up to 12 FC I/O ports, depending on how many host interface cards are
installed. The visual illustrates the physical Fibre Channel port numbers with host interface cards in
slots 1, 2, and 5. As with any Fibre Channel SAN participant, each SVC engine or node has a unique
worldwide node name (WWNN), and each Fibre Channel port on the adapter cards has a unique
worldwide port name (WWPN). These ports are used to connect the SVC node to the SAN.


SVC WWPNs numbering structure


[Diagram: 2145-8F4, 8G4, CF8, CG8, DH8 back panel with ports split between Fabric 1 and Fabric 2. The WWPN Q values are not numbered left to right; the physical ports carry Q values 4, 3, 1, 2.]

Port (SVC documentation)   WWPN Q value
Port 1                     4
Port 2                     3
Port 3                     1
Port 4                     2

Figure 2-42. SVC WWPNs numbering structure SNV13.0

Notes:
For availability, the ports of an SVC node should be spread across the two fabrics in a dual fabric
SAN configuration. For consistency and ease of cable management, consider labeling each HBA
port of the SVC back panel with its physical port number as well as the corresponding generated
WWPNs.
In this example using the 2145-CG8 node, the Q value on all nodes follows a 4, 3, 1, 2 sequence
(from left to right). This might be counterintuitive; its rationale is steeped in history. For
compatibility purposes, this WWPN numbering scheme is still used for all SVC node models.


Adding the 2145-DH8 to an existing cluster


• The existing system software must be at a version that supports the new node
– If a node is being replaced by a 2145-DH8, the system software version must
be v7.3.0 or later
• If the node being replaced is a CG8, CF8, or 8A4 and the replacement node is a
DH8, then the replacement node must have a four-port FC card in slot 1. If the
node being replaced has a second I/O card in addition to the required FC card,
then the replacement node must have the same card in slot 2
• SVC DH8 uses the new 80c product ID, which provides the ability for a new scheme
of WWNNs/WWPNs
• Native WWPNs follow:
– 500507680c <S><P> XXXX
– Where <S> is the PCIe slot number (1-6) and <P> is the port number in that
slot (1-4)
– XXXX is the sequence number of the SVC DH8 assigned at manufacturing,
which might be changed by the user if needed for migration
• The WWNN uses 0 for both the <S> and <P> digits

Figure 2-43. Adding the 2145-DH8 to an existing cluster SNV13.0

Notes:
The 2145-DH8 nodes can be integrated into existing IBM SAN Volume Controller clustered
systems with only a few additional steps in regard to the new worldwide name (WWN) structure.
The replacement procedure can be performed nondisruptively. The nodes can be intermixed in
pairs in the existing SVC systems. Consider first upgrading the SAN Volume Controller to the latest
code level. When installing 2145-DH8 nodes into an existing SVC environment with compressed
volumes, all DH8 nodes must have the second processor, 64 GB memory, and at least one
Compression Accelerator card.
One of the important considerations when upgrading the system to DH8 nodes, or when installing
additional I/O groups based on DH8 nodes, is the use of the WWPN range. The IBM SVC
2145-DH8 uses the new 80c product ID, so IBM had the opportunity to define a new scheme for
generating WWNs. Public WWNs take the form 500507680c <slot number> <port number> xxxx,
with four bits for the slot number and four for the port number (giving 16 public names per slot),
and 16 bits for the serial number.
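The scheme described above can be sketched as follows. The sequence number is hypothetical, and the assumption that the WWNN carries 0 in both the slot and port digits is my reading of the slide, not a confirmed value:

```python
# DH8 WWN scheme: 500507680c <S><P> XXXX, with <S> the PCIe slot (1-6),
# <P> the port in that slot (1-4), and XXXX the manufacturing sequence number.

DH8_PREFIX = "500507680c"

def dh8_wwpn(slot, port, seq):
    assert 1 <= slot <= 6 and 1 <= port <= 4
    return f"{DH8_PREFIX}{slot:x}{port:x}{seq}"

def dh8_wwnn(seq):
    # Assumption: the WWNN uses 0 for both the slot and port digits.
    return f"{DH8_PREFIX}00{seq}"

print(dh8_wwpn(1, 3, "ABCD"))   # 500507680c13ABCD
print(dh8_wwnn("ABCD"))         # 500507680c00ABCD
```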
The upgrade procedure is nondisruptive because changes to your SAN environment are not
required. The replacement (new) node uses the same worldwide node name (WWNN) as the node
that you are replacing. An alternative to this procedure is to replace nodes disruptively, either by

moving volumes to a new I/O group or by rezoning the SAN. The disruptive procedures, however,
will require additional work on the hosts.


SVC 2145-DH8 WWPN naming scheme


Port WWPN
1 500507680c11xxxx
New Scheme for SVC DH8
2 500507680c12xxxx
3 500507680c13xxxx

Old Slot DH8 Slot Port WWPN


1 500507680140xxxx
Upgrading to the
SVC DH8 2 500507680130xxxx
1 1
3 500507680110xxxx
4 500507680120xxxx
1 500507680150xxxx
2 500507680160xxxx
2 3
3 500507680170xxxx
4 500507680180xxxx
© Copyright IBM Corporation 2011, 2014

Figure 2-44. SVC 2145-DH8 WWPN naming scheme SNV13.0

Notes:
The visual references a new WWPN naming scheme for the SVC 2145-DH8, which is identifiable
by the 680c string in the WWPN. For example, if you are upgrading from an existing SVC system
such as the 2145-CG8 model to the DH8, the WWPNs are still referenced as 680140, 30, 10,
20, and so on. The new node assumes the WWNN of the CG8 node you are replacing,
thus requiring no changes in host configuration, SAN zoning, or multipath software.
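The carry-forward in the table of Figure 2-44 can be expressed as a simple lookup; the sequence number here is hypothetical:

```python
# Legacy WWPN digits kept when a CG8 is replaced by a DH8 (per Figure 2-44):
# old slot 1 keeps the 4, 3, 1, 2 Q sequence; old slot 2 (DH8 slot 3)
# uses the 5, 6, 7, 8 digits.
LEGACY_DIGITS = {1: ("40", "30", "10", "20"), 2: ("50", "60", "70", "80")}

def legacy_wwpn(old_slot, port, seq):
    """WWPN presented after the swap, for a given old slot and port 1-4."""
    return "5005076801" + LEGACY_DIGITS[old_slot][port - 1] + seq

print(legacy_wwpn(1, 1, "ABCD"))   # 500507680140ABCD
print(legacy_wwpn(2, 4, "ABCD"))   # 500507680180ABCD
```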


Zone definitions by port number or WWPN


[Diagram: hosts WinA and SunA attached to a fabric of switches (each identified by a switch domain number); LUN masking maps LUNs Lw and Ls. Zoning can be defined by port (domain ID and port number) or by WWPN.]

Figure 2-45. Zone definitions by port number or WWPN SNV13.0

Notes:
Zoning by switch domain ID and port number is positional, that is, if the cable is moved to another
switch or another port, then the zoning definition needs to be updated. This is sometimes referred to
as port zoning.
Zoning by WWPN provides the granularity at the adapter port level. If the cable is moved to another
port or to a different switch in the fabric, the zoning definition is not affected. However, if the adapter
card is replaced, and the WWPN is changed (this does not apply to the SVC WWPNs), then the
zoning definition needs to be updated accordingly.
When zoning by switch domain ID, ensure that all switch domain IDs are unique between both
fabrics and that the switch name incorporates the domain ID. Having a unique domain ID makes
troubleshooting much easier in situations where an error message contains the Fibre Channel ID
of the port with a problem. For example, have all domain IDs in the first fabric starting with 10 and
all domain IDs in the second fabric starting with 20.
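A quick way to sanity-check such a convention is shown below; the IDs are hypothetical and the check is my own sketch, not a switch tool:

```python
# Check that switch domain IDs are unique across both fabrics and follow
# the convention: fabric 1 IDs in the 10s, fabric 2 IDs in the 20s.

def domain_ids_ok(fabric1_ids, fabric2_ids):
    all_ids = fabric1_ids + fabric2_ids
    unique = len(set(all_ids)) == len(all_ids)       # no ID reused anywhere
    ranged = (all(10 <= i <= 19 for i in fabric1_ids)
              and all(20 <= i <= 29 for i in fabric2_ids))
    return unique and ranged

print(domain_ids_ok([10, 11, 12], [20, 21, 22]))   # True
print(domain_ids_ok([10, 11], [11, 20]))           # False -- 11 reused
```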


Adding SVC FC ports to the SAN fabrics

Fabric 1 Fabric 2

6
ID
0 1 2 3 4 5 6 12
ID
11 11 Switch 12
Switch 11 Switch
port#
11 12 13 14 Switch
domain ID
NODE1
Adapter Four ports
port cable per node 21 22 23 24
NODE2
WWPN
Q value

© Copyright IBM Corporation 2011, 2014

Figure 2-46. Adding SVC FC ports to the SAN fabrics SNV13.0

Notes:
The visual shows the following notation:
• SVC ports: SVC node number along with a subscript representing the generated WWPN or Q
value for each port within the SVC node.
- Host ports or storage ports: Entity name with a subscript representing the HBA port number.
• ID: Switch domain ID.
- Small boxes inside the switch: Represent ports on the switch.
- Number on top of the small boxes: Port number of the port on the switch.


Cabling SVC FC ports to the SAN fabrics


Example:

[Diagram: dual-fabric cabling. Fabric 1 contains switches with domain IDs 11 and 21; Fabric 2 contains switches with domain IDs 12 and 22. NODE1 ports 11-14 and NODE2 ports 21-24 are split across the two fabrics, alternating between the nodes at the switch ports.]

Figure 2-47. Cabling SVC FC ports to the SAN fabrics SNV13.0

Notes:
When attaching SVC ports to a SAN fabric containing core directors and edge switches, it is
preferable to connect the SVC ports to the core directors and to connect the host ports to the edge
switches. Avoid attaching SVC ports to directors or switches with host-optimizing modules.
The SVC ports behave as SCSI targets to host ports and interact with storage ports as SCSI
initiators. As such, proximity to storage ports is preferred. Connect SVC ports and storage ports to
the core director and connect host ports to the edge switches or host-optimizing blades.
In this example configuration, a pair of nodes, NODE1 and NODE2, are attached to the dual fabric
as an I/O group. The cabling of the SVC ports to the switch adheres to the following
recommendations and objectives:
• Implement two independent fabrics (dual fabric).
• Split the attachment of the ports of the SVC node across both fabrics.
• Illustrate the cabling to facilitate zone definitions coded using either switch domain ID and port
number, or WWPN values.
• Enable the four or eight paths from the host to the SVC I/O group to be distributed across the
WWPNs of the SVC node ports.


Note that the ports of each SVC node are spread across the two fabrics and that ports alternate
between the two SVC nodes as they are attached to the switch. An additional switch has been
added to each fabric to reflect multi-switched fabric environments.


SVC nodes zone


Fabric 1 nodes zone: [(11,1) (11,2) (11,3) (11,4)]
Fabric 2 nodes zone: [(12,1) (12,2) (12,3) (12,4)]

[Diagram: NODE1 ports 11-14 and NODE2 ports 21-24 cabled across Fabric 1 (switch domain IDs 11 and 21) and Fabric 2 (switch domain IDs 12 and 22).]

Figure 2-48. SVC nodes zone SNV13.0

Notes:
In the example, there are two sets of zoning definitions - one for each fabric. Each zone includes all
ports from each SVC node cabled to the fabric.
Even though the SVC node ports overlap with the host and storage zones, it is recommended to
have a separate SVC nodes zone to facilitate node-to-node communications without dependency
on other zones.


SVC nodes and storage zones


Fabric 1 Nodes-Stgbox1 zone: [(11,1) (11,2) (11,3) (11,4) (11,5)]
Fabric 2 Nodes-Stgbox1 zone: [(12,1) (12,2) (12,3) (12,4) (12,5)]
Fabric 1 Nodes-Stgbox2 zone: [(11,1) (11,2) (11,3) (11,4) (11,6) (11,0)]
Fabric 2 Nodes-Stgbox2 zone: [(12,1) (12,2) (12,3) (12,4) (12,6) (12,0)]

Each zone contains all SVC ports plus the ports of one storage system.

[Diagram: VendorX storage ports E1-E4 and DSxK ports F1-F2 attached, together with NODE1 (ports 11-14) and NODE2 (ports 21-24), across Fabric 1 (domain IDs 11 and 21) and Fabric 2 (domain IDs 12 and 22).]

Figure 2-49. SVC nodes and storage zones SNV13.0

Notes:
All SVC nodes must be able to see the same set of storage ports. If two SVC nodes see a different
set of ports on the same storage system, operation is degraded and the condition is logged as an error.
Multiple ports or connections from a given storage system can be defined to provide greater data
bandwidth and more availability. To avoid interaction among storage ports of different storage
system types, multiple back-end storage zones can be defined.
For example, one zone contains all the SVC ports and the VendorX ports, and another zone
contains all the SVC ports and the DSxK ports. Storage system vendors might have additional
best practice recommendations, such as not mixing ports from different controllers of the same
storage system in the same zone. SVC supports and follows the guidelines provided by the
storage vendors.


Storage system zoned with all SVC ports example


Each storage box WWPN is zoned with
two SVC WWPNs per node per fabric.


Figure 2-50. Storage system zoned with all SVC ports example SNV13.0

Notes:
Verify SAN zoning from the perspective of the SVC by clicking Settings > Network and then select
Fibre Channel in the Network filter list. This Fibre Channel view is designed to display SAN
connectivity data as seen by this SVC cluster, that is, the port to port connectivity between the SVC
ports of this cluster with its attaching host ports, storage system ports, and partner SVC node ports.
The example shows the connectivity data between the storage system BLANCDS3K and the SVC
ports of this cluster. The BLANCDS3K has two ports and the 4-node SVC cluster has 16 ports. Both
parties have their ports evenly split between two SAN fabrics.
For ease of reference, the output has been divided into two boxes. One box per DS3K port as
shown under the Remote WWPN column. Each box contains eight entries because the DS3K port
is zoned to see ALL the SVC ports on its fabric. The WWPN values shown in the Local WWPN
column are the specific SVC node ports of the same fabric. The zoning output conforms to the
guideline that, for a given storage system, zone its ports with all the ports of the SVC cluster on that
fabric.


SVC planning and implementation topics (5 of 6)


• SVC planning and implementation

• SVC physical planning
– Hardware requirement
– Cabling requirement

• SVC logical planning
– SVC management IP
– SVC SAN zoning
– Storage systems and LUN assignments
– SVC cluster initialization

Figure 2-51. SVC planning and implementation topics (5 of 6) SNV13.0

Notes:
This topic discusses the external storage system and LUN assignments for the SVC 2145.


Supported storage systems (aka controllers)


http://www.ibm.com/storage/support/2145


Figure 2-52. Supported storage systems (aka controllers) SNV13.0

Notes:
Visit the SAN Volume Controller product support website for the latest list of storage systems and
their corresponding supported software and firmware levels.
Refer to the SVC Information Center > Configuration > Configuring and servicing external
storage systems, for detailed descriptions of each supported storage system.


Storage system Fibre Channel ports


[Diagram: supported host systems attached through Fabric 1 and Fabric 2 to SVC Node 1 and SVC Node 2, with LUN masking enforced at the storage systems: DS8000, DS6000, ESS, DS5000, DS4000, DS3000, Storwize V7000, XIV, FlashSystem, N series, NetApp, HDS, HPQ, EMC, Sun, VendorX, and so on.]

Best practice: Use dedicated storage ports for SVC traffic if possible.

Figure 2-53. Storage system Fibre Channel ports SNV13.0

Notes:
When integrating the SAN Volume Controller into an existing SAN fabric, consider using separate
storage adapter ports for SVC I/O traffic versus non-SVC I/O traffic, if possible or practical. Some
storage systems support many adapter ports such that an isolation of SVC-related and non-SVC
traffic can be implemented.
Refer to the SVC Information Center > Configuration > Configuring and servicing external
storage systems for details regarding storage system setup parameters for each brand of storage
system supported.


SVC cluster relationship to storage system


[Diagram: LUNs Ly and Lz presented by the storage system to SVC Node 1 through SVC Node 8; each SVC node has four WWPNs.]

• The SVC cluster is a SCSI host to the attaching storage system.
• Best practice: Define ALL SVC ports to the storage system.
• Best practice: Assign each LUN (MDisk) to all SVC ports.
• Best practice: Allocate one LUN (MDisk) per array.

Figure 2-54. SVC cluster relationship to storage system SNV13.0

Notes:
From the perspective of the disk storage system, the SVC is defined as a SCSI host. This SVC host
is a cluster (each node in the cluster has four WWPNs) so an eight-node SVC has a total of 32
WWPNs. Define all of the cluster’s WWPNs to the storage system.
Disk storage systems tend to have different mechanisms or conventions to define hosts. For
example, a DS3/4/5000 uses the construct of a host group to define the SVC cluster with each node
in the SVC cluster identified as a host with four host ports within the host group. LUNs are then
mapped to the host group.
With a DS8000, a port group can be used to collectively identify all the WWPNs of the SVC cluster
and is referred to as a host attachment. A volume group is a named construct that defines a set of
LUNs. The SVC host attachment can then be associated with a volume group to access its allowed
or assigned LUNs.
All storage systems use variations of these approaches to implement LUN masking. Refer to the
SVC Information Center > Configuration > Configuring and servicing external storage
systems for more specific information about the numerous heterogeneous storage systems
supported by the SVC.


The LUNs identified in the visual as Ly and Lz become SVC MDisks after SVC performs device
discovery on the SAN. These LUNs should be large, similar in size, and be assigned to all of the
SVC ports of the cluster. These LUNs must not be accessible by other host ports or other SVC
clusters.

Storage system and SVC coexistence: LUN masking
[Diagram: LUN masking with and without SVC. Without SVC, each host (2 WWPNs) is masked directly to its LUNs (Host1: L1-L4; Host2: L5-L8; ...; Hostn: Ld, Le, L3, L4). With SVC, array-sized LUNs Ly and Lz are masked to the SVC (4 WWPNs per SVC node, SVC Node 1 to SVC Node 4), and hosts access volumes V1-V4.]

Figure 2-55. Storage system and SVC coexistence: LUN masking SNV13.0

Notes:
LUNs become MDisks to be grouped into storage pools. Create a storage pool by using MDisks
with similar performance and availability characteristics. For ease of management and availability,
do not span the storage pool across storage systems.
The recommendation is to allocate and assign LUNs with large capacities from the storage systems
to the SVC ports. These SCSI LUNs or MDisks once under the control of the SVC provide extents
from which volumes can be derived.


Host to SVC access: Supported multipath drivers


[Diagram: Server1 (SVC + non-SVC, SDDDSM), Server2 (SVC only, SDDPCM), Server3 (SVC only, MPIO), and ServerX (non-SVC multipath driver) attached through the SAN to SVC volumes V1-V5 in Pool1 and Pool2, backed by a wide array of supported storage systems.]

Additional multipath drivers supported:
• ATTO multipath
• AIXPCM
• Citrix Xen
• Debian
• IBM i
• Linux
• Novell NetWare
• OpenVMS
• ProtectTier
• PV Links, HP native
• SGI
• Sun MPxIO
• Tru64
• Veritas DMP, DMPDSM
• VMware
• Windows MPIO

Figure 2-56. Host to SVC access: Supported multipath drivers SNV13.0

Notes:
The Subsystem Device Driver (SDD, or SDDDSM for Windows MPIO environments, SDDPCM for
AIX MPIO environments) is a standard function of the SVC and provides multipathing support for
host servers accessing SVC provisioned volumes.
In addition to SDD, a wealth of other multipath drivers are supported. Refer to the SVC product
support website for latest support levels and platforms.

SVC access to storage system: One WWNN and
up to 16 WWPNs example
Example: Many storage systems implement one WWNN per storage system with multiple WWPNs.

SVC supports up to 16 WWNNs per storage system and up to 16 WWPNs per WWNN. SVC
attempts to use as many storage ports as available to access LUNs (MDisks).

[Figure: a single WWNN with 16 WWPNs surfacing LUNs L0-Lf]

Best practice: Assign LUNs to ALL SVC ports of the SVC cluster.
© Copyright IBM Corporation 2011, 2014

Figure 2-57. SVC access to storage system: One WWNN and up to 16 WWPNs example SNV13.0

Notes:
Many storage systems implement one WWNN to represent the storage system itself and one
unique WWPN for each of the ports of the storage system.
SVC supports a maximum of 16 WWNNs per storage system and up to 16 WWPNs per WWNN.


SVC access to storage system: More than one WWNN

Example: IBM Storwize V7000

[Figure: Node1 (controller1, WWNN1) and Node2 (controller2, WWNN2), each with four WWPNs;
LUNs L0, L2, L4, L6 have preferred node = Node1, and LUNs L1, L3, L5, L7 have preferred
node = Node2]

• Each WWNN appears as one controller (system) to SVC cluster
• LUNs (MDisks) are accessed based on the preferred node of the LUN
• Automatic failover of MDisks if issues with individual controller

Best practice: Assign MDisks in multiples of the storage ports
zoned with the SVC cluster (8 WWPNs: 8 MDisks/16 MDisks).
© Copyright IBM Corporation 2011, 2014

Figure 2-58. SVC access to storage system: More than one WWNN SNV13.0

Notes:
An example of more than one WWNN used by a disk storage system is an IBM Storwize V7000,
where each controller of the system has its own WWNN. When used to provide LUNs to an SVC
environment, the IBM Storwize V7000 volumes become MDisks for the SVC.
The visual shows a single V7000 with two nodes (or controllers). The best practice
recommendations apply equally to a clustered IBM Storwize V7000.
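The alternating LUN-to-node layout in the visual can be sketched as below. This is illustrative only: on a real Storwize V7000 the preferred node is a per-volume property, and the `preferred_node` helper is hypothetical, merely restating the diagram's pattern.

```python
# Illustrative only: models the alternating preferred-node layout from the
# visual (L0, L2, L4, L6 on Node1; L1, L3, L5, L7 on Node2).
def preferred_node(lun_index: int, node_count: int = 2) -> int:
    """Return the 1-based node number that 'prefers' this LUN in the diagram."""
    return (lun_index % node_count) + 1

# Build the layout shown in the figure.
layout = {f"L{i:x}": f"Node{preferred_node(i)}" for i in range(8)}
print(layout["L0"], layout["L1"])  # Node1 Node2
```

Spreading LUN ownership evenly across both nodes, as the diagram does, balances the back-end workload between the two controllers.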


SVC access to storage system: Many WWNNs


Examples: Various EMC and HDS models

SVC supports up to 16 WWNNs per storage system. SVC supports a total of 1024 WWNNs
and 1024 WWPNs.

[Figure: many WWNNs, each with its own WWPNs, surfacing LUNs L0-Lf]

• Each WWNN appears as one controller (system) to SVC cluster
• Each LUN must be mapped to SVC ports using same LUN ID
• Automatic failover of MDisks if issues with individual controller port
© Copyright IBM Corporation 2011, 2014

Figure 2-59. SVC access to storage system: Many WWNNs SNV13.0

Notes:
Some storage systems generate more than 16 WWNNs. In this case, up to 16 WWNNs of the
storage system can be set up as a group. The SVC treats each group of 16 WWNNs as a storage
system.
Deploy LUN masking so that each LUN is assigned to no more than 16 ports of these storage
systems. Refer to the SVC product support website for the latest information, and refer to the SVC
Information Center > Configuration > Configuring and servicing external storage systems for
details regarding storage system setup parameters.
The use of multiple WWNNs in certain disk storage systems is limited only by the maximum of
1024 WWPNs and 1024 WWNNs.
Maximum configuration limits can be found on the web by searching for the keywords IBM SVC
maximum configuration limits.
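The grouping rule described above can be sketched in a few lines. The WWNN values below are fictitious and the helper is hypothetical; only the 16-per-group limit comes from the text.

```python
def group_wwnns(wwnns: list[str], group_size: int = 16) -> list[list[str]]:
    """Split a storage system's WWNNs into groups of at most group_size;
    the SVC treats each group as a separate storage system."""
    return [wwnns[i:i + group_size] for i in range(0, len(wwnns), group_size)]

# A storage system surfacing 40 (fictitious) WWNNs becomes three groups.
wwnns = [f"50060482D52CC6{i:02X}" for i in range(40)]
groups = group_wwnns(wwnns)
print(len(groups), [len(g) for g in groups])  # 3 [16, 16, 8]
```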

DS3K example: Storage system WWNN and WWPNs

[Figure: DS3000 Storage Manager profile screens showing each controller's WWPN and the
shared WWNN]

© Copyright IBM Corporation 2011, 2014

Figure 2-60. DS3K example: Storage system WWNN and WWPNs SNV13.0

Notes:
The visual shows the WWPNs and WWNN of an IBM DS3400 disk storage system.
The profile of this storage system can be displayed by its GUI, DS3000 Storage Manager. Two
different controllers within this DS3400 are displayed (note the Controllers tab). Each controller
has its own unique WWPN value but they share the same WWNN value.
In other words, the DS3400 storage system is identified by just one WWNN and each controller port
within the storage system has its own WWPN. This is also the case with other models of the
DS3000, DS4000, and DS5000 series of storage systems.


DS3K example: SVC host group definition

4-node SVC cluster example: all SVC cluster WWPNs defined to storage system

[Figure: DS3000 Storage Manager screen showing the SVC host group definition]

© Copyright IBM Corporation 2011, 2014

Figure 2-61. DS3K example: SVC host group definition SNV13.0

Notes:
For a DS3000, DS4000, or DS5000 disk storage system, LUN masking is implemented using the
host group construct.
Continuing with the example storage system, a 4-node SVC cluster is defined as the SVC host
group with four hosts, each representing an SVC node. Each host (node) is defined with four
ports.
The host ports are shown in detail in the Configured Hosts: box. The host type is an IBM TS SAN
VCE (IBM TotalStorage SAN Volume Controller Engine).


DS3K LUNs assigned to SVC host group

LUNs = SVC MDisks

© Copyright IBM Corporation 2011, 2014

Figure 2-62. DS3K LUNs assigned to SVC host group SNV13.0

Notes:
Once the SVC cluster has been defined to the storage system (Host Group SVC in this case) then
LUNs can be mapped or assigned.
From the Host-to-Logical Drive Mappings view of the DS3400 Storage Manager for this example,
eight LUNs have been mapped to the host group called SVC with their respective LUN numbers.
These LUNs become the MDisks in the SVC.


Storwize V7000 example: WWNNs and WWPNs


IBM2076:NAVY_SWV7K:admin>lsnode 1
id 1
name SWV7Knode1
UPS_serial_number
WWNN 50050768020018A6
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id 2
partner_node_name node2
config_node yes
UPS_unique_id 50050768020018A6
port_id 50050768021018A6
port_status active
port_speed 8 Gb
port_id 50050768022018A6
port_status active
port_speed 8 Gb
port_id 50050768023018A6
port_status active
port_speed 8 Gb
port_id 50050768024018A6
port_status active
port_speed 8 Gb
hardware 100

IBM2076:NAVY_SWV7K:admin>lsnode 2
id 2
name SWV7Knode2
UPS_serial_number
WWNN 50050768020018A7
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id 1
partner_node_name node1
config_node no
UPS_unique_id 50050768020018A7
port_id 50050768021018A7
port_status active
port_speed 8 Gb
port_id 50050768022018A7
port_status active
port_speed 8 Gb
port_id 50050768023018A7
port_status active
port_speed 8 Gb
port_id 50050768024018A7
port_status active
port_speed 8 Gb
hardware 100

© Copyright IBM Corporation 2011, 2014

Figure 2-63. Storwize V7000 example: WWNNs and WWPNs SNV13.0

Notes:
The visual shows an example of the IBM Storwize V7000 disk storage system that has been set up
to provide LUNs to the same 4-node SVC cluster.
The Storwize V7000 uses the same software as the SVC; hence, Command Line Interface (CLI)
output for the lsnode command is shown for the two nodes of a Storwize V7000. Each node has its
own unique WWNN, and each node's port WWPNs are derived from the node WWNN with a minor
variation in the low-order third byte to denote the port number.
In other words, the Storwize V7000 storage system comprises two nodes, each having its own
WWNN. Within each node there are four ports, each with its own WWPN.
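The derivation pattern visible in this lsnode output can be sketched as follows. It reflects only the addresses shown here (the port number placed in the high nibble of the sixth byte); it is not presented as a documented rule for deriving WWPNs in general.

```python
def port_wwpns(wwnn: str, ports: int = 4) -> list[str]:
    """Derive port WWPNs from a node WWNN as seen in the lsnode output:
    the high nibble of the sixth byte (hex digit 11 of 16) carries the
    port number, e.g. 50050768020018A6 -> 50050768021018A6 for port 1."""
    assert len(wwnn) == 16, "expect a 16-hex-digit WWNN"
    return [wwnn[:10] + format(p, "X") + wwnn[11:] for p in range(1, ports + 1)]

print(port_wwpns("50050768020018A6"))
```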


Storwize V7000 example: SVC host definition

All 16 WWPNs of the 4-node SVC cluster (SVC Node 1 to SVC Node 4) defined to this
storage system

[Figure: the four SVC nodes connect to the SAN fabric, as does the 2-node Storwize V7000;
LUNs NAVYSVC0-NAVYSVC3 (SCSI IDs 0-3) are assigned to the SVC cluster]
© Copyright IBM Corporation 2011, 2014

Figure 2-64. Storwize V7000 example: SVC host definition SNV13.0

Notes:
The visual shows a perspective of the Storwize V7000 where the 16 WWPNs of a 4-node SVC
cluster have been defined as a host object with the name NAVY_SVC.
The graphic illustrates the 4-node SVC cluster each with four ports connected to the SAN fabric;
also connected to the SAN fabric is a 2-node Storwize V7000.
Four LUNs with SCSI IDs 0-3 from the Storwize V7000 are to be assigned to the SVC host known
as NAVY_SVC. The SVC in turn will use these LUNs as its MDisks surfaced from the two Storwize
V7000 nodes (WWNNs).


Storwize V7000 LUNs assigned to SVC cluster

LUNs assigned to SVC cluster: NAVY_SVC

© Copyright IBM Corporation 2011, 2014

Figure 2-65. Storwize V7000 LUNs assigned to SVC cluster SNV13.0

Notes:
The Storwize V7000 GUI Hosts > Host Mappings view illustrates the LUN masking of its four
LUNs assigned to the SVC host NAVY_SVC. Note the SCSI IDs, volume IDs, volume names, as
well as the unique identifiers associated with each LUN.
From this storage system, these are the only LUNs accessible by the host known as NAVY_SVC.
Or more precisely the 16 WWPNs defined as the host NAVY_SVC have been permitted to access
these specific four LUNs.
The Storwize V7000 GUI Hosts > Volumes by Host view shows that two of the LUNs (volumes) have
a preferred node of node ID 1 and the other two LUNs (volumes) have a preferred node of node ID 2.
The concept of a preferred node is very similar to that of a preferred controller found in most
midrange disk storage systems. A node in the Storwize V7000 is analogous to a storage controller;
unlike the DS3400, this controller (node) happens to have its own WWNN.


DS8K example: WWNNs and WWPNs


dscli> lssi
Date/Time: July 5, 2013 3:28:40 PM GMT-05:00 IBM DSCLI Version: 6.6.31.18 DS:
Name ID               Storage Unit     Model WWNN             State  ESSNet
============================================================================
MOP5 IBM.2107-75V9721 IBM.2107-75V9720 941   5005076306FFC534 Online Enabled

dscli> lsioport
Date/Time: July 5, 2013 3:58:35 PM GMT-05:00 IBM DSCLI Version: 6.6.31.18
DS: IBM.2107-75V9721
ID    WWPN             State  Type             topo     portgrp
=================================================================
. . .
I0030 5005076306030534 Online Fibre Channel-SW SCSI-FCP 0
I0031 5005076306034534 Online Fibre Channel-SW SCSI-FCP 0
I0231 5005076306134534 Online Fibre Channel-SW SCSI-FCP 0
I0301 5005076306184534 Online Fibre Channel-SW SCSI-FCP 0
. . .

Among the DS8K ports, these four WWPNs are zoned with the example SVC cluster.

© Copyright IBM Corporation 2011, 2014

Figure 2-66. DS8K example: WWNNs and WWPNs SNV13.0

Notes:
The lssi command output of the DS8000 Command Line Interface (dscli) shows a single WWNN
value for the storage facility image.
This DS8000 has 16 ports or WWPNs but the output of the dscli lsioport command has been
edited so that only four WWPNs are shown. These are the four ports zoned with the SVC cluster
ports using SAN zoning. Note that each port has an ID - such as I0030.
Of course, using only four ports or WWPNs is just for illustrative purposes. In production
environments, more ports of the DS8000 would likely have been zoned for use by an SVC cluster.


DS8K example: SVC host definition details

dscli> lshostconnect -portgrp 2
Date/Time: July 5, 2013 3:57:43 PM GMT-05:00 IBM DSCLI Version: 6.6.31.18 DS:
IBM.2107-75V9721
Name        ID   WWPN             HostType Profile               portgrp volgrpID ESSIOport
====================================================================================
NAVYSVCn1p1 0000 500507680110BE40 SVC San Volume Controller 2 V2 all
NAVYSVCn1p2 0001 500507680120BE40 SVC San Volume Controller 2 V2 all
NAVYSVCn1p3 0002 500507680130BE40 SVC San Volume Controller 2 V2 all
NAVYSVCn1p4 0003 500507680140BE40 SVC San Volume Controller 2 V2 all
NAVYSVCn2p1 0004 500507680110BD91 SVC San Volume Controller 2 V2 all
NAVYSVCn2p2 0005 500507680120BD91 SVC San Volume Controller 2 V2 all
NAVYSVCn2p3 0006 500507680130BD91 SVC San Volume Controller 2 V2 all
NAVYSVCn2p4 0007 500507680140BD91 SVC San Volume Controller 2 V2 all
NAVYSVCn3p1 0008 500507680110BDE3 SVC San Volume Controller 2 V2 all
NAVYSVCn3p2 0009 500507680120BDE3 SVC San Volume Controller 2 V2 all
NAVYSVCn3p3 000A 500507680130BDE3 SVC San Volume Controller 2 V2 all
NAVYSVCn3p4 000B 500507680140BDE3 SVC San Volume Controller 2 V2 all
NAVYSVCn4p1 000C 500507680110BDE1 SVC San Volume Controller 2 V2 all
NAVYSVCn4p2 000D 500507680120BDE1 SVC San Volume Controller 2 V2 all
NAVYSVCn4p3 000E 500507680130BDE1 SVC San Volume Controller 2 V2 all
NAVYSVCn4p4 000F 500507680140BDE1 SVC San Volume Controller 2 V2 all

All 16 ports of the 4-node SVC cluster are defined to the DS8K; the LUNs in volume group V2
are assigned to these 16 SVC ports.

© Copyright IBM Corporation 2011, 2014

Figure 2-67. DS8K example: SVC host definition details SNV13.0

Notes:
All 16 ports or WWPNs of the example 4-node SVC cluster must be defined to the attaching storage
system. With the DS8000, a port group is a construct that enables a host with multiple WWPNs to
be managed as a single entity. Each of the 16 WWPNs of the SVC has been associated with port
group 2. A port group is also known as a host attachment. Each WWPN entry, also known as a host
connection, contains a host type of SVC (whereas with the DS3400 the host type is SAN VCE).
Each host attachment can be associated with a volume group. The volume group construct is a LUN
masking vehicle: volumes added to the volume group become visible to the host connections of
the host attachment. In this example, port group 2 is associated with volume group V2.
The IOport all value means that the SVC ports can reach all DS8000 ports (an implementation of
best practices). SAN fabric zoning provides the reduction in the number of paths that the SVC can
actually use to reach the volumes (LUNs) surfaced from the DS8000.


DS8K LUNs assigned to SVC cluster


dscli> showvolgrp -lunmap v2
Date/Time: July 5, 2013 3:45:34 PM GMT-05:00 IBM DSCLI
Version: 6.6.31.18 DS: IBM.2107-75V9721
Name NAVY_SVC
ID V2
Type SCSI Mask
Vols 1220 1221 1240 1241
==============LUN Mapping============
vol  lun
=============
1220 40124020
1221 40124021
1240 40124040
1241 40124041

[Figure: LUNs 1220, 1221, 1240, and 1241 assigned to the SVC cluster through DS8K ports
I0030, I0031, I0231, and I0301]

dscli> lsfbvol 1220-1221 1240-1241
Date/Time: July 5, 2013 3:46:44 PM GMT-05:00 IBM DSCLI Version: 6.6.31.18
DS: IBM.2107-75V9721
Name          ID   accstate datastate configstate deviceMTM datatype extpool cap
==================================================================================
NAVY_SVC11220 1220 Online   Normal    Normal      2107-900  FB 512   P0      100.0
NAVY_SVC11221 1221 Online   Normal    Normal      2107-900  FB 512   P0      100.0
NAVY_SVC11240 1240 Online   Normal    Normal      2107-900  FB 512   P2      100.0
NAVY_SVC11241 1241 Online   Normal    Normal      2107-900  FB 512   P2      100.0

© Copyright IBM Corporation 2011, 2014

Figure 2-68. DS8K LUNs assigned to SVC cluster SNV13.0

Notes:
The volumes (LUNs) of volume group V2 are shown with the showvolgrp command. The
-lunmap parameter displays the LUN numbers the DS8000 has assigned to the volumes.
The four volumes assigned to the SVC cluster (port group 2) have IDs 1220, 1221, 1240, and 1241.
These volume IDs are reflected in the LUN numbers, as shown in the showvolgrp -lunmap v2
output.
Volume group V2 has been given the name NAVY_SVC to reflect the host name from a DS8000
perspective. Examine the LUN numbers: volume 1220 has a LUN number of 40124020. Strip out
the two 40s to obtain the volume ID.
The LUN number is reported to the SVC during SVC SAN device discovery (as is the case with all
SVC to storage systems interactions). SVC uses the LUN number along with the storage system
identifiers (SCSI inquiry data) to maintain the correlation between the LUN in a given storage
system and the MDisk entry the SVC creates to represent the LUN.
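The volume-ID-to-LUN-number relationship described above can be expressed as a pair of helper functions. These are hypothetical and assume only the 40xx40xx pattern visible in this example's output.

```python
def lun_for_volume(volume_id: str) -> str:
    """Build the DS8000 LUN number seen in showvolgrp -lunmap output:
    '40' + LSS byte + '40' + volume byte, e.g. 1220 -> 40124020."""
    assert len(volume_id) == 4, "expect a 4-hex-digit volume ID"
    return "40" + volume_id[:2] + "40" + volume_id[2:]

def volume_for_lun(lun: str) -> str:
    """Recover the volume ID by stripping out the two 40s."""
    assert len(lun) == 8 and lun[0:2] == lun[4:6] == "40"
    return lun[2:4] + lun[6:8]

print(lun_for_volume("1220"), volume_for_lun("40124041"))  # 40124020 1241
```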


MDisks and pools: Naming convention examples


Examples:

Storage pool DS8KP1_FC15K_Pool:
• LUN 40124020 (SVC1220, x TB): mdisk1 renamed to DS8KP1_1220
• LUN 40124041 (SVC1241, x TB): mdisk2 renamed to DS8KP1_1241

Storage pool DS3Kdev_SATA_Pool:
• LUN 0 (arraysata1, xx GB): mdisk4 renamed to DS3Kdev_sata1
• LUN 1 (arraysata2, xx GB): mdisk3 renamed to DS3Kdev_sata2

© Copyright IBM Corporation 2011, 2014

Figure 2-69. MDisks and pools: Naming convention examples SNV13.0

Notes:
The LUNs (volumes) surfaced by the disk storage systems become unmanaged MDisks.
Subsequently an administrator can place these MDisks into storage pools for usage.
The top half of this chart continues with the DS8000 example where the MDisk entries of those
LUNs have been renamed to enable easier identification of the storage system and the LUNs within
the storage system.
The storage pool names in this example reflect the storage system and disk device type making it
easier to identify relative performance and perhaps storage tier in an enterprise.
Names of SVC objects can be changed without impacting SVC processing. If installation naming
standards are modified, the names of SVC objects can be changed accordingly. All SVC
processing is predicated on object IDs, not object names.
Up to 63 characters can be used in an object name.
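A naming convention like the one above can be enforced with a small helper. The prefix-plus-LUN scheme is this example's convention, not an SVC requirement; only the 63-character limit comes from the text.

```python
def mdisk_name(system_prefix: str, lun_label: str) -> str:
    """Compose an MDisk name such as DS8KP1_1220 or DS3Kdev_sata1."""
    name = f"{system_prefix}_{lun_label}"
    if len(name) > 63:  # SVC object names are limited to 63 characters
        raise ValueError(f"name too long for an SVC object: {name!r}")
    return name

print(mdisk_name("DS8KP1", "1220"))    # DS8KP1_1220
print(mdisk_name("DS3Kdev", "sata1"))  # DS3Kdev_sata1
```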


Power-on sequence

1. Fibre Channel switches
2. Disk enclosures
3. Storage systems
4. UPSs and SAN Volume Controllers
5. Host systems

* The power-off sequence is the reverse of the arrow.

© Copyright IBM Corporation 2011, 2014

Figure 2-70. Power-on sequence SNV13.0

Notes:
As a reminder of the power-up and power-down sequences in a server room, the power-on
sequence is shown in the visual. The power-off sequence is the reverse of the arrow. Depending
on the storage system, powering up the disk enclosures and the storage system might be a single step.
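The rule that power-off is the reverse of power-on can be captured in a few lines; the stage names simply restate the visual.

```python
# Power-on order from the visual; power-off is the same list reversed.
POWER_ON = [
    "Fibre Channel switches",
    "Disk enclosures",
    "Storage systems",
    "UPSs / SAN Volume Controllers",
    "Host systems",
]
POWER_OFF = list(reversed(POWER_ON))
print(POWER_OFF[0])  # Host systems
```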


SVC supported environments


[Figure: host platforms (Windows, AIX, Linux, Sun, HP, VMware, NetWare, SGI IRIX, Citrix Xen,
Tru64, Apple, BladeCenter, VIOS, and more) attach through two fabrics of supported switches
(Brocade, Cisco, McDATA, QLogic, Juniper Networks) to SVC Node 1 and SVC Node 2, while
iSCSI hosts use the SVC IP ports; supported storage systems include the DS3000, DS4000,
DS5000, DS6000, DS8000, ESS, Storwize family, XIV, N series, FlashSystem, NetApp, HDS,
HPQ, EMC, Sun, and more, with the list still growing over time]
© Copyright IBM Corporation 2011, 2014

Figure 2-71. SVC supported environments SNV13.0

Notes:
Please consult the SVC product support website for current information, including:
• Supported host platforms and additional solutions such as BladeCenter models and intercluster
distance extenders
• Supported host bus adapters (HBAs)
• Supported Fibre Channel switches
• Supported heterogeneous storage systems
• Supported software
• SVC code level
• SDD/other multipath driver coexistence, native OS multipath drivers
• OS system levels including clustering support


SVC planning and implementation topics (6 of 6)


• SVC planning and implementation
• SVC physical planning
  – Hardware requirements
  – Cabling requirements
• SVC logical planning
  – SVC management IP
  – FC connectivity
  – iSCSI connectivity
  – SVC SAN zoning
  – Storage systems and LUN assignments
  – SVC cluster initialization
    • Technician port
    • SVC Service Assistant tool
    • SVC GUI

© Copyright IBM Corporation 2011, 2014

Figure 2-72. SVC planning and implementation topics (6 of 6) SNV13.0

Notes:
The topics discusses the SVC clustered initiation using the Technical port and the SVC Service
Assistant to and the setup configuration using the SVC GUI.


Initialize SVC 2145-DH8 using Technician port


• Initialization must be directly configured using the Technician port
• Connect a cable to the Technician port
  – The port runs a dedicated DHCP server
• Configure an Ethernet port on the personal computer to enable DHCP
  – If DHCP cannot be used, configure:
    • Static IPv4: 192.168.0.2
    • Subnet mask: 255.255.255.0
    • Gateway: 192.168.0.1
    • DNS: 192.168.0.1
• Connect an Ethernet cable between the ports (personal computer port and Technician port)
• Open a supported browser and browse to address http://install
  – The browser is automatically directed to the initialization tool

The Technician port is marked with a T (Ethernet port 4).

© Copyright IBM Corporation 2011, 2014

Figure 2-73. Initialize SVC 2145-DH8 using Technician port SNV13.0

Notes:
To initialize an SVC 2145-DH8 system you must connect a personal computer to the technician port
(Ethernet port4) on the rear of a node canister and run the initialization tool. This port can be
identified by the letter “T”. The technician port is designed to simplify and ease the initial basic
configuration of the SVC system by the local administrator or by service personnel. It eliminates the
need for the LCD front panel as presented on all previous models. This process requires the user to
be physically at the hardware site in order to create a cluster using one node. The remaining
candidate nodes can then be added using the SVC GUI.
A few moments after the connection is made the node uses DHCP to configure IP and DNS settings
of the personal computer. Therefore, you need to make sure that your computer has DHCP
enabled. If you do not have DHCP then configure static IPv4 address 192.168.0.2, mask to
255.255.255.0, gateway to 192.168.0.1, and DNS to 192.168.0.1. After the Ethernet port of the
personal computer is connected to the technician port, open a supported browser and browse to
address http://install.


Welcome configuration wizard

• Specify the desired cluster management IP address and click Finish to initialize the cluster.

Once the IP information is provided, the system initializes.

© Copyright IBM Corporation 2011, 2014

Figure 2-74. Welcome configuration wizard SNV13.0

Notes:
The browser is automatically directed to the initialization tool welcome wizard panel. Follow the
instructions that are presented by the initialization tool to configure the system with a management
IP address. Select if you are using an IPv4 or IPv6 management IP address and then type in the
address (you can use DHCP or statically assign one). The subnet mask and gateway will be a listed
by default, but can be changed, if required. Click Finish to set the management IP address for the
system. System initialization begins and might take several minutes to complete.
If you experience a problem during the process due to a change in system states, wait 5 to 10
seconds and then either reopen the SSH connection or reload the service assistant.


SVC initialization complete


• Disconnect the cable between the personal computer and the
Technician port
• The system can now be reached by opening a supported web
browser and pointing it to http://management_IP_address.

© Copyright IBM Corporation 2011, 2014

Figure 2-75. SVC initialization complete SNV13.0

Notes:
When system initialization is complete, disconnect the cable between the personal computer and
the technician port. The system can now be reached by opening a supported web browser and
pointing it to http://management_IP_address.


Set SVC 2145-CG8 Service Assistant IP address

Service IP4?

Perform for each new node from factory*

*Alternative methods to set SA IP available

© Copyright IBM Corporation 2011, 2014

Figure 2-76. Set SVC 2145-CG8 Service Assistant IP address SNV13.0

Notes:
When initializing the SVC 2145-CG8 and CF8 models, one way to set the service IP address for a
node is through the node’s front panel interface.
An alternative would have been to first define the cluster IP address, create the cluster, and then set
the service IP address for each node of the cluster. SVC 2145-CG8/CF8 supports both IP version 4
as well as IP version 6 addressing.
Service IP addresses are configured from factory.


Cluster creation using SVC 2145-CG8 front panel


New Cluster IP4?

Cluster:
Cluster_10.6.5.60

Cluster IP set
© Copyright IBM Corporation 2011, 2014

Figure 2-77. Cluster creation using SVC 2145-CG8 front panel SNV13.0

Notes:
As an alternative to assigning each SVC 2145-CG8 or CF8 node its service IP address first,
you can pick a node and use its front panel interface to set the cluster IP address and
create the cluster.
After the cluster has been initially created using one node the node front panel will display the
default system name (Cluster_10.xx.xx.xx) with the specified cluster IP address. The status of this
node is no longer candidate. It is now an active member of a cluster. You can use the Service
Assistant Tool GUI using the cluster IP address to complete the cluster setup and add the remaining
nodes into the cluster.


Create cluster using Service Assistant interface

http://node SA IP

© Copyright IBM Corporation 2011, 2014

Figure 2-78. Create cluster using Service Assistant interface SNV13.0

Notes:
Instead of using the SVC 2145-CG8/CF8 front panel interface, the SVC cluster can also be
created from the Service Assistant interface. You can access the Service
Assistant GUI using the service IP address of the node and the default passw0rd. The Service
Assistant interface of the node you are currently logged in to will identify the node panel ID. All
nodes will be presented in candidate status, as they are unconfigured. The first node is selected by
default.


Service Assistant: Manage System

(Bypasses front panel)

SVC system IP

© Copyright IBM Corporation 2011, 2014

Figure 2-79. Service Assistant: Manage System SNV13.0

Notes:
Click Manage System in the navigation tree of the Service Assistant to open the System
Information pane. Within the system edit boxes specify the cluster name (also known as system
name) and the cluster IP network information.
Next, click the Create System button to create the cluster.
The terms SVC cluster and SVC system are used interchangeably. SVC system is favored during
system setup.

