
Hints and Tips for implementing Storwize and IBM FlashSystems in an IBM i environment

Version 3: July 2015

Alison Pate
IBM Advanced Technical Sales Support
patea@us.ibm.com

Jana Jamsek
IBM Advanced Technical Skills, Europe
jana.jamsek@si.ibm.com


Table of Contents

Introduction
IBM i external storage options
IBM i Storage Management
    Single level storage and object-oriented architecture
    Translation from 520-byte blocks to 512-byte blocks
Virtual I/O Server (VIOS) Support
    VIOS vSCSI support
        Requirements
        Implementation Considerations
    NPIV Support
        Requirements for VIOS_NPIV connection
        Implementation considerations
Native Support of Storwize V7000
    Requirements for native connection
    Implementation Considerations
        Direct Connection of Storwize V7000 to IBM i (without a switch)
Sizing for performance
Storwize V7000 Configuration options
Host Attachment
Multipath
    Description of IBM i Multipath
    Insight into Multipath with Native and VIOS NPIV connection
    Insight into Multipath with VIOS VSCSI connection
Zoning SAN switches
Boot from SAN
IBM i mirroring for V7000 LUNs
Thin Provisioning
Real-time Compression
Solid State Drives (SSD)
Data layout
    LUN compared to Disk arm
    LUN Size
    Adding LUNs to ASP
    Disk unit Serial number, type, model and resource name
    Identify which V7000 LUN is which disk unit in IBM i
Software
Performance Monitoring
Copy Services Considerations
IBM FlashSystem
IBM FlashSystems and IBM i
Requirements for connecting IBM FlashSystems to IBM i
    Minimum Requirements for connecting FlashSystem 900 and FlashSystem 840 to IBM i
    Minimum Requirements for connecting FlashSystem V840 and V9000 to IBM i
Performance estimation tools for FlashSystem with IBM i
    FLiP tool
    iDoctor
Implementation guidelines for FlashSystems with IBM i
    Creating LUNs on “background” FlashSystem of V840 / V9000
    Disk pools in V840 / V9000
    Number of paths to IBM i
    Size of LUNs
    Capacity and number of LUNs per port in FC adapter
    Maximum number of 16Gb adapters per EMX0 expansion drawer
    Zoning the switches to connect FlashSystem V840 and V9000
    Creating LUNs in FlashSystems 840 and 900 for IBM i
    Zoning the switches to connect FlashSystems 840 or 900
Further References

Introduction
Midrange and large IBM i customers are extensively implementing IBM Storwize systems
as the external storage for their IBM i workloads. The Storwize family of storage
systems not only provides a variety of disk drives, RAID levels and connection types for
an IBM i installation, it also offers options for flexible and well-managed high
availability and disaster recovery solutions for IBM i.

This document provides Hints and Tips for implementing the Storwize V7000 with IBM
i. The Storwize software is consistent across the entire Storwize family including the
SAN Volume Controller (SVC), V7000, V5000 and V3700 and therefore the content is
applicable to all the products in the family.

The July 2015 version of this document also includes recommendations for implementing
the IBM FlashSystem V840 or V9000 with IBM i, as well as FlashSystem models
virtualized behind a Storwize or SVC system.

IBM i external storage options


More than ever before we have a choice of storage solutions for IBM i.

Details of supported configurations and software levels are provided by the System
Storage Interoperation Center: http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes

Sue Baker maintains a useful reference of supported servers, adapters, and storage
systems on Techdocs for IBMers and Business Partners:

IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4563
Business Partners: https://www-304.ibm.com/partnerworld/wps/servlet/ContentHandler/tech_PRS4563

IBM i Storage Management


Many computer systems require you to take responsibility for how information is stored
and retrieved from the disk units, along with providing the management environment to
balance disk utilization, enable disk protection and maintain balanced data spread for
optimum performance.

Single level storage and object-oriented architecture


When you create a new file in a UNIX system, you must tell the system where to put the
file and how big to make it. You must balance files across different disk units to provide
good system performance. If you discover later that a file needs to be larger, you need to
copy it to a location on disk that has enough space for the new, larger file. You may need
to move files between disk units to maintain system performance.


The IBM i server is different in that it takes responsibility for managing the information
in auxiliary storage pools (also called disk pools or ASPs).

When you create a file, you estimate how many records it should have. You do not assign
it to a storage location; instead, the system places the file in the best location that ensures
the best performance. In fact, it normally spreads the data in the file across multiple disk
units. When you add more records to the file, the system automatically assigns additional
space on one or more disk units. Therefore it makes sense to use disk copy functions to
operate on either the entire disk space or the iASP. Power HA supports only an iASP-
based copy.

IBM i uses a single-level storage, object-oriented architecture. It sees all disk space and
the main memory as one storage area and uses the same set of virtual addresses to cover
both main memory and disk space. Paging of the objects in this virtual address space is
performed in 4 KB pages. However, data is usually blocked and transferred to storage
devices in blocks larger than 4 KB. Blocking of transferred data is based on many
factors, for example, Expert Cache usage.

Translation from 520-byte blocks to 512-byte blocks


IBM i disks have a block size of 520 bytes. Most fixed block (FB) storage devices are
formatted with a block size of 512 bytes, so a translation or mapping is required to attach
them to IBM i. (The DS8000 supports IBM i with a native disk format of 520 bytes.)

IBM i performs the following change to the data layout to support 512-byte blocks
(sectors) on external storage: for every page (8 x 520-byte sectors) it uses an additional
9th sector; it stores the 8-byte headers of the 520-byte sectors in the 9th sector, and
thereby changes the previous 8 x 520-byte blocks into 9 x 512-byte blocks. The data that
was previously stored in 8 sectors is now spread across 9 sectors, so the required disk
capacity on V7000 is 9/8 of the IBM i usable capacity. Conversely, the usable capacity in
IBM i is 8/9 of the allocated capacity in V7000.

Therefore, when attaching a Storwize V7000 to IBM i, whether through vSCSI, NPIV or
native attachment, this 520:512-byte block mapping means that you will have a capacity
‘overhead’: only 8/9 of the effective capacity is usable.

The impact of this translation to IBM i disk performance is negligible.
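The 9/8 arithmetic can be sanity-checked with a quick calculation; the following shell sketch uses an illustrative 100 GB of IBM i usable capacity:

    # Illustrative only: 8 x 520-byte sectors become 9 x 512-byte sectors,
    # so the V7000 must provide 9/8 of the IBM i usable capacity.
    usable_gb=100
    echo "scale=1; $usable_gb * 9 / 8" | bc    # 112.5 GB allocated on the V7000
    echo "scale=1; 112.5 * 8 / 9" | bc         # 100.0 GB usable again in IBM i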

Virtual I/O Server (VIOS) Support


The Virtual I/O Server is part of the IBM PowerVM editions hardware feature on IBM
Power Systems. The Virtual I/O Server technology facilitates the consolidation of
network and disk I/O resources and minimizes the number of required physical adapters
in the IBM Power Systems server. It is a special-purpose partition which provides virtual
I/O resources to its client partitions. The Virtual I/O Server actually owns the physical
resources that are shared with clients. A physical adapter assigned to the VIOS partition
can be used by one or more other partitions.


The Virtual I/O Server can provide virtualized storage devices, storage adapters and
network adapters to client partitions running an AIX, IBM i, or Linux operating
environment. The core I/O virtualization capabilities of the Virtual I/O Server are shown
below:
 Virtual SCSI
 Virtual Fibre Channel using NPIV (N_Port ID Virtualization)
 Virtual Ethernet bridge using Shared Ethernet Adapter (SEA)

VIOS vSCSI support


Requirements
Following are the minimum requirements for connecting V7000 to IBM i with VIOS vSCSI:
Hardware:
POWER6 or higher
Minimum software and microcode levels:
IBM i level V6.1.1
IBM i 7.1 is required for PowerHA support
VIOS level 2.2
V7000 level 6.1.x

Implementation Considerations
The storage virtualization capabilities provided by PowerVM and the Virtual I/O Server
are supported by the Storwize V7000 series as VSCSI backing devices in the Virtual I/O
Server. Remember that if you use VSCSI devices, only 8/9 of the configured LUN
capacity will be usable for IBM i data (8x520B -> 9x512B translation). Storwize V7000
LUNs are surfaced as generic 6B22 devices to IBM i.

LUN size is defined on IBM i in decimal gigabytes (GB) and on the V7000 in binary
gigabytes (GiB). A decimal kilobyte has 1000 bytes and a binary kilobyte has 1024
bytes, so fewer binary gigabytes are needed to make up the same quantity. Remember that
the minimum allocation is going to be your grain size, recommended at 1 GiB, so a 140 GB
LUN will require 131 GiB defined on the V7000.
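The 140 GB example can be verified with a quick calculation (shell sketch, numbers illustrative only):

    # 140 decimal GB = 140 * 10^9 bytes; one GiB = 2^30 bytes.
    echo "scale=3; 140 * 10^9 / 2^30" | bc    # 130.385 GiB
    # With a 1 GiB grain, the allocation rounds up to 131 GiB on the V7000.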

When using VIOS to virtualize storage make sure that you have 2 VIOS servers to
provide alternate pathing in the event of a failure.

Multi-pathing across two Virtual I/O Servers is supported with IBM i 6.1.1 or later. More
VIOS servers may be needed to support multiple IBM i partitions. Make sure that you size
the VIOS servers and IO adapters to support the anticipated throughput.

Optionally, it is recommended to implement AIX-level multipath from the VIOS server,
using either SDDPCM or the base multipath I/O driver (MPIO), to provide multiple paths
to the disks from each VIOS. You should configure alternate paths to the disk from each
VIOS, zoned to provide access to each node canister.

FC adapter attributes
Specify the following attributes for each SCSI I/O Controller Protocol Device (fscsi)
device that connects a V7000 LUN for IBM i.
 The attribute fc_err_recov should be set to fast_fail.
 The attribute dyntrk should be set to yes.
These two attribute values control how the AIX FC adapter driver and AIX disk driver
handle certain types of fabric-related errors. Without them, such errors are handled
differently and can cause unnecessary retries.

We recommend setting the same attribute values with either VIOS VSCSI connection or
VIOS_NPIV connection.
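On the VIOS command line these attributes can be set with chdev; a minimal sketch, assuming the device name is fscsi0 (check your device names with lsdev):

    # Set fast fail and dynamic tracking on each fscsi device carrying IBM i LUNs.
    # The -perm flag writes the change to the device database only; it takes
    # effect at the next device reconfiguration or VIOS restart.
    chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm
    lsdev -dev fscsi0 -attr    # verify fc_err_recov and dyntrk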

Disk device attributes


If the V7000 is connected through VIOS VSCSI, specify the following attributes for each
hdisk device that represents a V7000 LUN connected to IBM i.
 The attribute reserve_policy should be set to no_reserve.
 The attribute queue_depth should be set to 32.
 The attribute algorithm should be set as follows:
• If the SDDPCM driver is used in VIOS, the attribute algorithm should be
set to load_balance
• If AIX PCM is used, the attribute algorithm should be set to round_robin
Setting reserve_policy to no_reserve is required in each VIOS if multipath with two or
more VIOS is implemented, to remove the SCSI reservation on the hdisk device. The
specified values of the other attributes are recommended for performance reasons.
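Again on the VIOS command line, a hedged sketch, assuming the LUN is hdisk2 and SDDPCM is installed (use algorithm=round_robin instead with the base AIX PCM):

    # Remove the SCSI reservation so a second VIOS can access the same LUN,
    # raise the queue depth, and select the multipath algorithm.
    chdev -dev hdisk2 -attr reserve_policy=no_reserve queue_depth=32 algorithm=load_balance -perm
    lsdev -dev hdisk2 -attr    # verify reserve_policy, queue_depth and algorithm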

NPIV Support
N_Port ID Virtualization (NPIV) virtualizes the Fibre Channel adapters on the Power
server, allowing the same physical IOA port to be shared by multiple LPARs. NPIV
requires VIOS to support the virtualization; LUNs are mapped directly to the IBM i
partition through a virtual Fibre Channel adapter.

Requirements for VIOS_NPIV connection


Following are the minimum requirements for NPIV connection of V7000 to IBM i:
Hardware:
POWER6 or higher
8-Gb or 16-Gb adapters in VIOS
NPIV enabled switches
Minimum software and microcode levels
IBM i V7.1 TR6
VIOS 2.2.2.1
V7000 6.4.1.4
PowerHA group PTF SF99706 level 3
Note: PowerHA group PTF SF99706 level 4 is required for managing PowerHA with
V7000 in GUI, and for LUN level switching with V7000.

NPIV connection requires NPIV-enabled SAN switches to attach the Storwize V7000 to
the VIOS server, and IBM i 7.1 TR6 is required for NPIV support of Storwize V7000.


Implementation considerations
NPIV attachment of Storwize V7000 provides full support for Power HA and LUN level
switching.

The default qdepth for IBM i LUNs is 16 (also known as SNUM).

LUNs are presented to the host only from the owning IO group's ports.
NPIV can support up to 64 active and 64 passive LUNs per virtual path, although 32 is
recommended as a guideline for optimum performance. Remember that the physical path
must still be sized to provide adequate throughput.

It is possible to combine NPIV and native connections for a host connection, but it is not
possible to combine NPIV and vSCSI connections.

Rules for VIOS_NPIV mapping


Following are the rules for mapping server virtual FC adapters to the ports in VIOS when
implementing an NPIV connection; a sketch of the VIOS-side mapping command follows
the list.
 Map at most one virtual FC adapter from a given IBM i LPAR to a port in VIOS.
 You can map up to 64 virtual FC adapters, each from a different IBM i LPAR, to
the same port in VIOS.
 When implementing solutions with IASP, use different FC adapters for Sysbas
than for IASP, and map them to different ports in VIOS.
 You can use the same port in VIOS for both NPIV mapping and connection with
VIOS VSCSI.
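At the VIOS level, each server virtual FC adapter is tied to a physical port with vfcmap; a minimal sketch, assuming vfchost0 is the server adapter created for the IBM i LPAR and fcs0 is the chosen NPIV-capable port:

    # Map the virtual FC server adapter for the IBM i LPAR to a physical port.
    vfcmap -vadapter vfchost0 -fcp fcs0
    lsmap -all -npiv    # confirm the mapping
    lsnports            # list NPIV-capable ports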

Native Support of Storwize V7000


Support is now available for a native connection of Storwize V7000 to IBM i, either with
or without a switch. PowerHA and LUN-level switching are supported.

Requirements for native connection


Following are the requirements for native connection of V7000 to IBM i:
Hardware:
POWER7
Minimum software and microcode levels
IBM i V7.1 TR6 and PTFs MF56600, MF56753, MF57854
or IBM i V7.1 TR6 Resave 710-H
V7000 code 6.4.1.4
Fabric attach:
FC EN0A/EN0B - 16Gb adapters
FC 5735/5273 - 8Gb adapters
FC 5774/5276 - 4Gb adapters
Direct attach:
FC 5774/5276 - 4Gb adapters connected to 8Gb adapters in SVC / Storwize
FC EN0A/EN0B - 16Gb adapters connected to 16Gb adapters in SVC / Storwize

Implementation Considerations

 2145 device type
– Flexible LUN sizes supported
 Boot from SAN supported
 Default qdepth (SNUM) of 16
 V7000 Compression is supported by IBM i

It is possible to combine NPIV and native connections for a host connection, but it is not
possible to combine NPIV and vSCSI connections. Migrating from vSCSI to a native
connection requires that the system be powered off.

Direct Connection of Storwize V7000 to IBM i (without a switch)


 No switch required
– One port in IBM i to one port in Storwize V7000/V3700
– Good for smaller systems with few LPARs
 4Gb connection only supported
– FC 5774/5276
 No NPIV support
 Storwize V7000, SVC, V3700
– SVC code 6.4.1.4 or later required
 POWER7 only
 Minimum level of IBM i 7.1 TR6 plus PTFs
– MF56600, MF56753, MF57854

Sizing for performance


It’s important to size a storage subsystem based on I/O activity rather than capacity
requirements alone. This is particularly true of an IBM i environment because of the
sensitivity to I/O performance. IBM has excellent tools for modeling the expected
performance of your workload and configuration. We provide some guidelines and
general words of wisdom in this paper; however, these provide a starting point only for
sizing with the appropriate tools.

The LUN size is flexible; choose the LUN size that gives you enough LUNs for good
performance according to Disk Magic. A good recommended size to start modeling is
80GB. Typically IBM i LUN sizes on Storwize are 100-200GB.

It is equally important to ensure that the sizing requirements for your SAN configuration
also take into account the additional resources required when enabling Copy Services.


Use Disk Magic to model the overheads of replication (Global Mirror and Metro Mirror),
particularly if you are planning to enable Metro Mirror.

A Bandwidth Sizing should be conducted for Global Mirror and Metro Mirror.

Note that Disk Magic does not support modeling FlashCopy (aka Point-in-Time Copy
functions), so make sure you do not size the system to maximum recommended
utilizations if you want to also exploit FlashCopy snapshots for backups. It is not
recommended to use large nearline drives as FlashCopy targets; a better option is to share
a pool of larger disks between the FlashCopy source and target LUNs to avoid creating a
bottleneck.

If you are evaluating an Easy Tier configuration use Disk Magic to determine an
optimum hardware combination. For best performance avoid using high capacity, slower
nearline drives for performance sensitive IBM i environments. A FlashSystem tier 0 with
a Nearline tier 1 is likely to result in unpredictable performance for a production
environment.

You will need to collect IBM i performance data. Generally, you will collect a week's
worth of performance data for each system/LPAR and send the resulting reports for the
sizing.

Each set of reports should include print files for the following:

System Report - Disk Utilization (Required)
Component Report - Disk Activity (Required)
Resource Interval Report - Disk Utilization Detail (Required)
System Report - Storage Pool Utilization (Optional)

Send the report print files as indicated below (send reports as .txt file format type). If you
are collecting from more than one IBM i or LPAR, the reports need to be for the same
time period for each system/lpar, if possible.

Storwize V7000 Configuration options


Different hardware and RAID options are available for the Storwize V7000 and can be
validated by Disk Magic. You should configure the RAID level and array width
according to the solution that you modeled in Disk Magic. As always, it’s recommended
to follow the default configuration options as recommended in the Storwize V7000 GUI
configuration wizard.

The default configuration option in the GUI is RAID 5 with a default array width of 7+P
for SAS HDDs, RAID 6 for Nearline HDDs with a default array width of 10+P+Q and
RAID 1 with a default array width of 2 for SSDs.

The recommendation is to create a dedicated storage pool for IBM i with enough
managed disks, backed by a sufficient number of spindles, to handle the expected IBM i
workload. Modeling with Disk Magic using actual customer performance data should be
performed to size the storage system properly.

Host Attachment
IBM i will log into a Storwize V7000 node only once from an IO adapter port on the IBM
i LPAR.

Multiple paths between the switch and the Storwize V7000 provide some level of
redundancy: if the path in use (active) fails, IBM i will automatically start using the other
path. However there is no way to force an IBM i partition to use a specific port and if
multiple partitions are all configured to use multiple paths between the switch and the
Storwize V7000 the result is typically that all partitions will use the same port on the
Storwize V7000. The recommended option is to provide multipath support by using two
VIOS partitions each with a path to the Storwize V7000.

The same connection considerations apply when connecting using the Native Connection
option without VIOS.
Best practices guidelines:

 Isolate host connections from remote copy connections (Metro Mirror or Global
Mirror) where possible.
 Isolate other host connections from IBM i host connections on a host port basis.
 Always have symmetric pathing by connection type (i.e., use the same number of
paths on all host adapters used by each connection type)


 Size the number of host adapters needed based on expected aggregate maximum
bandwidth and maximum IOPS (use Disk Magic or other common sizing methods
based on actual or expected workload).
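On the Storwize CLI, an IBM i host connection is defined with mkhost; a hedged sketch with placeholder names and WWPNs only:

    # Define the IBM i LPAR as a host object (WWPNs are placeholders).
    svctask mkhost -name IBMI_PROD -fcwwpn 21000024FF3A0001:21000024FF3A0002
    # Map a volume to the host; a SCSI ID is assigned automatically.
    svctask mkvdiskhostmap -host IBMI_PROD IBMI_SYSBAS_001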

Multipath
Multipath provides greater resiliency for SAN attached storage. The IBM i supports up to
8 paths to each LUN. In addition to the availability considerations, lab performance
testing has shown that 2 or more paths provide performance improvements when
compared to a single path. Typically 2 paths to a LUN is the ideal balance of price and
performance. The Disk Magic tool supports only multipathing over 2 paths.

You might want to consider more than 2 paths for workloads where there is high wait
time, or where high IO rates are expected to LUNs.

Description of IBM i Multipath


Multipath for a LUN is achieved by connecting the LUN to two or more ports that
belong to different adapters in the IBM i partition. With native connection, the ports for
Multipath must be in different physical adapters in IBM i. With VIOS_NPIV, the virtual
Fibre Channel adapters for Multipath must be assigned to different VIOS. With VIOS
vSCSI connection, the virtual SCSI adapters for Multipath must be assigned to different
VIOS.

The following pictures show a high-level view of Multipath for the different connection
types, while the detailed view of all paths is presented further in this section.

IBM i Multipath provides resiliency in case the hardware for one of the paths fails. It also
provides a performance improvement, since Multipath balances the IO load among the
paths in round-robin mode.


Every LUN in Storwize V7000 uses one V7000 node as its preferred node: the IO traffic
to and from that LUN normally goes through the preferred node; if that node fails, the IO
is transferred to the remaining node. With IBM i Multipath, all the paths to a LUN
through the preferred node are active and the paths through the non-preferred node are
passive. Multipath employs load balancing among the paths to a LUN that go through
the node which is preferred for that LUN.

Insight into Multipath with Native and VIOS NPIV connection

In native and VIOS_NPIV connection, Multipath is achieved by assigning the same LUN
to multiple physical or virtual FC adapters in IBM i. More precisely: the same LUN is
assigned to multiple WWPNs, each from one port in a physical or virtual FC adapter,
with each virtual FC adapter assigned to a different VIOS. For clarity, we limit the
discussion here to multipath with two WWPNs. With the recommended switch zoning, 4
paths are established from a LUN to the IBM i: two of the paths go through adapter 1 (in
NPIV, also through VIOS 1) and two of the paths go through adapter 2 (in NPIV, also
through VIOS 2); of the two paths that go through each adapter, one goes through the
preferred node and one goes through the non-preferred node. Therefore two of the 4
paths are active, each of them going through a different adapter, and a different VIOS if
NPIV is used; two of the paths are passive, each of them going through a different
adapter, and a different VIOS if NPIV is used. IBM i Multipathing uses a round-robin
algorithm to balance the IO among the paths that are active.

Picture 1 presents the detailed view of paths in Multipath with a natively or VIOS_NPIV
connected V7000. The solid lines refer to active paths, and the dotted lines refer to
passive paths. Red lines present one switch zone and green lines present the other switch
zone. The screenshot below presents the IBM i view of paths to a LUN connected with
VIOS_NPIV. As can be seen in the screenshot, two active and two passive paths are
established to each LUN.


Insight into Multipath with VIOS VSCSI connection

In this type of connection the LUN in V7000 is assigned to multiple VIOS, and IBM i
establishes one path to the LUN through each VIOS. For clarity, we limit the discussion
here to multipath with two VIOS. The LUN reports as an hdisk device in each VIOS.
The IO from each VIOS (hdisk device) to the LUN uses all the paths that are established
from that VIOS to the V7000. Multipath across these paths, as well as load balancing
and IO through the preferred node, is handled by the VIOS multipath driver. The two
hdisks that represent the LUN, one in each VIOS, are mapped to IBM i through different
virtual SCSI adapters. Each of them reports in IBM i as a different path to the same LUN
(disk unit). IBM i establishes Multipath to the LUN using both paths; both paths are
active and round-robin load balancing is used for the IO traffic.

Picture 2 presents the detailed view of paths in Multipath with a VIOS VSCSI connected
V7000. IBM i uses two paths to the same LUN, each path through one VIOS to the
relevant hdisk connected with a virtual SCSI adapter. Both paths are active and the IBM i
load balancing algorithm is used for IO traffic. Each VIOS has 8 connections to V7000;
therefore 8 paths are established from each VIOS to the LUN. The IO traffic through
these paths is handled by the VIOS multipath driver.
The screenshots show both paths for the LUN in IBM i, and the paths in each VIOS for
the LUN.


Zoning SAN switches

With native connection and connection with VIOS_NPIV, we recommend zoning the
switches so that one WWPN of one IBM i port is in a zone with two ports of the V7000,
one port from each node canister. This ensures resiliency for the IO to and from a LUN
assigned to that WWPN: if the preferred node for that LUN fails, the IO will continue
through the non-preferred node.

Note: A port in a physical or virtual FC adapter in IBM i has two WWPNs. For connecting
external storage we use the first WWPN, while the second WWPN is used for Live
Partition Mobility (LPM). Therefore it is a good idea to zone both WWPNs if you plan
to use LPM; otherwise, zone just the first WWPN.

When connection with VIOS virtual SCSI is used, we recommend zoning one physical
port in VIOS with all available ports in V7000, or with as many ports as possible, to allow
load balancing, keeping in mind that a maximum of 8 paths are available from VIOS to
V7000. The V7000 ports zoned with one VIOS port should be evenly spread between the
V7000 node canisters.

Examples of zoning for VIOS vSCSI connection


Example 1: We use one port in VIOS, and we zone it with 8 ports in V7000, 4 ports
from each canister. This way we use all available ports, spread evenly between the
canisters, and we don't exceed 8 paths from VIOS to V7000.

Example 2: We use two adapters in VIOS, and we zone one port from each adapter with
4 V7000 ports, 2 ports from each canister. This way the paths are balanced between the
VIOS ports and between the V7000 node canisters, and we don't exceed the 8 paths from
VIOS to V7000.
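On a Brocade fabric, Example 2 could be scripted along these lines; a sketch with placeholder aliases and WWPNs only:

    # One VIOS port zoned with 4 V7000 ports, 2 from each node canister.
    alicreate "VIOS1_P0", "c0:50:76:aa:bb:cc:00:02"
    alicreate "V7K_PORTS", "50:05:07:68:02:10:00:01; 50:05:07:68:02:20:00:01; 50:05:07:68:02:10:00:02; 50:05:07:68:02:20:00:02"
    zonecreate "Z_VIOS1_P0_V7K", "VIOS1_P0; V7K_PORTS"
    cfgadd "PROD_CFG", "Z_VIOS1_P0_V7K"
    cfgenable "PROD_CFG"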


Following pictures show recommended switch zoning in different ways of connection

Boot from SAN


All connection options: Native, VIOS_NPIV, and VIOS Virtual SCSI support Boot from
SAN. LoadSource resides on a V7000 LUN which is connected the same way as the
other LUNs; there aren't any special requirements for LoadSource connection.

When installing the IBM i operating system with disk capacity on V7000, the installation
prompts you to select one of the available V7000 LUNs for the LoadSource.

When migrating from internal disk drives or from another storage system to V7000, we
can use IBM i ASP balancing to migrate all the disk capacity except the LoadSource.
After the non-LoadSource data is migrated to V7000 with ASP balancing, we migrate
the LoadSource by copying it from the previous disk unit to the LUN in V7000. The
V7000 LUN must be of equal or greater size than the disk unit previously used for the
LoadSource. This migration approach can be used with all V7000 connection types:
native, VIOS_NPIV and VIOS vSCSI.

IBM i mirroring for V7000 LUNs


Some customers prefer to use IBM i mirroring for resiliency. For example, they use IBM
i mirroring between two V7000 systems, each connected with one VIOS. When starting
IBM i mirroring with a VIOS-connected V7000, you should add the LUNs to the mirrored
ASP in steps: first add the LUNs from two virtual adapters, each adapter connecting
one to-be-mirrored half of the LUNs. After mirroring is started for those LUNs, add the
LUNs from two new virtual adapters, each adapter connecting one to-be-mirrored half,
and so on. This way you ensure that mirroring is started between the two V7000 systems
and not among the LUNs in the same V7000.

Thin Provisioning
IBM i can take advantage of thin provisioning as it is transparent to the server. However,
first, you need to provide adequate HDDs to sustain the required performance regardless
of whether the capacity is actually used. Second, while IBM i 7.1 and later do not
pre-format LUNs, so initial allocations can be thin provisioned, there is no space
reclamation, so the effectiveness of thin provisioning may decline over time.

You still need to ensure that you have sufficient disks configured to maintain
performance of the IBM i workload.

Thin provisioning may be more applicable to test or development environments. You
may also consider the use of thin provisioned FlashCopies in a Global Mirror Change
Volumes environment.

Real-time Compression
IBM i can take advantage of Real-time Compression (RtC) in either SVC, Storwize or
FlashSystems V840 and V9000, as it is transparent to the server. Real-time Compression
allows the use of less physical space on disk than is presented to the IBM i host; capacity
needed on the storage system is reduced due to both Compression and Thin provisioning.
On the other hand, RtC typically has a latency impact on IO service times, as well as a
throughput impact.

With RtC we distinguish among the following types of capacity:

Virtual capacity - capacity that is available to a host
Real capacity (allocated capacity) - capacity allocated on the disk
Physical capacity - available capacity on disk
Compressed size (used capacity) - capacity that the data is actually using; a subset of real capacity
Uncompressed size - what the used capacity would be if the data were not compressed

The capacity savings provided by RtC are as follows:

Compression savings: capacity savings from compression (Uncompressed size -
Compressed size)
Thin provisioning savings: savings due to thin provisioning (Virtual capacity - capacity
without compression)
Total capacity savings: Compression savings + Thin provisioning savings

The compression ratio is the ratio between Compressed size and Uncompressed size
(Compression ratio = Compressed size / Uncompressed size).
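As a worked example with assumed numbers: if a host is presented 2048 GB of virtual capacity, writes 1000 GB of data, and that data compresses to 400 GB:

    # Illustrative arithmetic only.
    echo "scale=2; 400 / 1000" | bc    # compression ratio = 0.40
    echo "1000 - 400" | bc             # compression savings = 600 GB
    echo "2048 - 1000" | bc            # thin provisioning savings = 1048 GB
    echo "600 + 1048" | bc             # total capacity savings = 1648 GB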


We recommend careful planning when considering RtC with IBM i. The Comprestimator
tool, which helps to estimate the needed capacity on a storage system with RtC, is
presently not available for IBM i. However, Comprestimator is available in VIOS, so
customers running IBM i LPARs with VIOS can take advantage of it.

Our current experience with IBM i and RtC in Storwize and FlashSystems shows
compression ratios of about 0.3, meaning we typically use about 3 times less capacity
than would be needed for uncompressed data. If you cannot run Comprestimator for your
IBM i workload, we recommend assuming a conservative compression ratio of about 0.5
(a 2-times reduction).

Performance impact of RtC in SVC and Storwize can be estimated by Disk Magic
modelling.

You can download the Comprestimator tool from:


http://www-304.ibm.com/support/customercare/sas/f/comprestimator/home.html

Solid State Drives (SSD)


Solid-state storage technology can have the following benefits:

• Significantly improved performance for hard-to-tune, I/O-bound applications, with no
code changes required
• Reduced floor space; can be filled near 100% without performance degradation
• Greater IOPS
• Faster access times
• Reduced energy use

Exploitation of SSDs with the Storwize V7000 is through Easy Tier. Even if you don’t
plan to install SSDs you can still use Easy Tier to evaluate your workload and provide
information on the benefit you might gain by adding SSDs in the future.

When using Easy Tier automated management it’s important to allow Easy Tier some
‘space’ to move data. You should not allocate 100% of the pool capacity but leave some
capacity unallocated to allow Easy Tier migrations. As a minimum leave one extent free
per tier in each storage pool, however for optimum exploitation of future functions plan
to leave 10 extents free total per pool.

There is also an option to create a disk pool of SSDs in V7000 and create an IBM i ASP
that uses disk capacity from the SSD pool. The applications running in that ASP will
experience a performance boost.

IBM i data relocation methods such as ASP balancing and Media preference are not
available for use with SSDs in V7000.


Data layout
Selecting an appropriate data layout strategy depends on your primary objectives:

Spreading workloads across all components maximizes the utilization of the hardware
components. This includes spreading workloads across all the available resources.
However, it is always possible when sharing resources that performance problems may
arise due to contention on these resources.

To protect critical workloads, you should isolate them, minimizing the chance that
non-critical workloads can impact their performance.

A storage pool is a collection of managed disks from which volumes are created and
presented to the IBM i system as LUNs. The primary property of a storage pool is the
extent size which by default is 1GB with Storwize V7000 release 7.1 and 256MB in
earlier versions. This extent size is the smallest unit of allocation from the pool.

When you add managed disks to a pool they should have similar performance
characteristics:
– Same RAID level
– Roughly the same number of drives per array
– Same drive type (SAS, NL_SAS, SSD except if using Easy Tier)
This is because data from each volume will be spread across all MDisks in the pool, so
the volume will perform approximately at the speed of the slowest MDisk in the pool
– The exception to this rule is that if using Easy Tier you can have 2
different tiers of storage in the same pool – but the MDisks within the tiers
should still have the same performance characteristics

Isolation of workloads is most easily accomplished where each ASP or LPAR has its own
managed storage pool. This ensures that you can place data where you intend. I/O
activity should be balanced between the two nodes or controllers on the Storwize V7000.

Make sure that you isolate critical workloads: we strongly recommend placing only IBM i
LUNs in any storage pool used by IBM i (rather than mixing with non-IBM i). If you mix
production and development workloads in storage pools, make sure that the customer
understands that this may impact production performance.

LUN compared to Disk arm


The V7000 LUN connected to IBM i reports in IBM i as a disk unit. IBM i storage
management applies its management and performance functions as if the LUN were a
disk arm. In fact, the LUN is typically spread across multiple physical disk arms in the
V7000 disk pool; the LUN uses some capacity from each disk arm. All disk arms in the
disk pool are shared among all the LUNs that are defined in that disk pool. The following
picture shows an example of a V7000 disk pool with three disk arrays of V7000 internal
disk arms (mdisks) and a LUN created in the disk pool, with the LUN using an extent
from each disk array in turn.

LUN Size
LUNs can be configured up to 2000GB. The number of the LUNs defined is typically
related to the wait time component of the response time. If there are insufficient LUNs,
wait time typically increases. The sizing process determines the correct number of LUNs
required to access the needed capacity while meeting performance objectives.

The number of LUNs drives the requirement for more FC adapters on the IBM i due to
the addressing restrictions of IBM i if you are using native attachment. Remember that
each path to a LUN will count towards the maximum addressable LUNs on each IBM i
IOA.

For any ASP, define all the LUNs to be the same size. 80GB is the recommended
minimum LUN size. A minimum of 6 LUNs for each ASP or LPAR is recommended. Be
very cautious about significantly reducing the number of LUNs in an IBM i environment.

It is recommended that load source devices be created with at least 80GB. A smaller
number of larger LUNs will reduce the number of IO ports required on both the IBM i
and the Storwize V7000. Remember that in an iASP environment, you may exploit larger
LUNs in the iASPs, but SYSBAS may require more, smaller LUNs to maintain
performance.

Disk Magic does not always accurately predict the effective capacity of the ranks
depending on the DDM size selected and the number of spares assigned. The IBM tool
Capacity Magic can be used to verify capacity and space utilization plans.
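A hedged sketch of creating a set of equally sized LUNs on the Storwize CLI (the pool name, count and size are placeholders); remember that IBM i will see only 8/9 of each LUN's capacity:

    # Create eight 80 GB volumes for one ASP, all the same size.
    for i in 1 2 3 4 5 6 7 8; do
        svctask mkvdisk -mdiskgrp IBMI_POOL -iogrp 0 -size 80 -unit gb -name IBMI_ASP1_LUN$i
    done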

Adding LUNs to ASP


Adding a LUN to an ASP generates I/O activity on the rank as the LUN is formatted. If
there is production work sharing the same rank you may see a performance impact. For
this reason it is recommended that you schedule adding LUNs to ASPs outside peak
intervals.


Disk unit Serial number, type, model and resource name

Each of the IBM i disk units representing V7000 LUNs has a unique serial number. With
a natively or NPIV connected V7000, the type of the disk unit is 2145, while with VIOS
VSCSI connection the type is 6B22. The model of the disk units in either connection is
050. A resource name starting with DMP indicates that the disk unit is connected in
Multipath; if it is connected with a single path, the resource name starts with DD. The
pictures below show IBM i disk units with native or NPIV connection, and with VIOS
VSCSI connection.

Picture Disk units in native or NPIV connection

Picture Disk units in VIOS_VSCSI connection

Identify which V7000 LUN is which disk unit in IBM i


There are many instances when we want to identify which IBM i disk unit is which LUN
in V7000. For example: we need to identify which LUNs are the disk units in a particular
IBM i Auxiliary Storage Pool (ASP) to migrate that ASP to another disk pool in V7000.

For natively or VIOS_NPIV connected V7000 use the following way to identify the
LUNs:
a. In IBM i Dedicated Service Tools (DST) or System Service Tools (SST) look for
the Serial number of a disk unit. In the picture Disk units in native or NPIV
connection we see Serial numbers Y11C490001DC, Y11C490001DA, etc.
b. The last 6 characters of the Serial number are the last 6 characters of the LUN IDs
in V7000. The picture below shows the corresponding LUN id of the disk unit
with serial number Y11C490001DA.


c. The first 6 characters of the disk unit serial number are a hash of the V7000 cluster
ID.
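The match can be confirmed from the Storwize CLI; a sketch, assuming SSH access to the cluster and using the example serial suffix 0001DA (the concise lsvdisk view includes the vdisk_UID column):

    # Find the volume whose UID ends with the suffix seen in the IBM i serial number.
    ssh superuser@v7000 lsvdisk | grep -i 0001DA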

For VIOS VSCSI connected V7000 use the following steps to identify which disk unit is
which LUN in V7000:

In IBM i, issue the command WRKHDWRSC TYPE(*STG).

Work with the disk unit you are interested in by using option 9 at the storage controller
to which the disk unit is connected, as shown below:

At the disk use option 7 for Resource details:


Note the LUN number (in hexadecimal) shown in the location field:

Scroll down the screen to see the Controller number (decimal). Note that the LUN
number is the sum of the System board number (decimal) and the Controller number. In
our example: board number 128 + controller number 2 = 130 = hex 82.
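The decimal-to-hex step can be checked quickly in any shell:

    printf '%X\n' $((128 + 2))    # prints 82, matching LUN L82 seen in VIOS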

Log in to the VIOS command line and enter the command lsmap -all. Look for the LUN
ID that corresponds to the LUN number in IBM i (in our example, L82). Note the
physical location value in hex for the SAN LUN number; in our example this is L1. This
is shown below.


The SAN LUN number is in hexadecimal and corresponds to the SCSI number of the
LUN in the Storwize V7000. In our example, the LUN with SCSI ID 1 is the LUN with
the name Sysbas_6, as shown below:

Note: you may also use the VIOS command
lsdev -dev vtscsi_name -vpd
with the virtual SCSI name of the LUN you are tracking. In our example the name is
vtscsi2. The Location value (in our example L2) matches the controller number at this
disk unit in IBM i. Output of this VIOS command is shown below:


Software
It is essential that you ensure that you have all up to date software levels installed. There
are fixes that provide performance enhancements, correct performance reporting, and
support for new functions. As always, call the support center before installation to verify
that you are current with fixes for the hardware that you are installing. It is also
important to maintain current software levels to make sure that you get the benefit from
new fixes that are developed.

When updating storage subsystem LIC it is also important to check whether there are any
server software updates required. Details of supported configurations and software levels
are provided by the System Storage Interoperation Center: http://www-
03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_ove
r=yes

Performance Monitoring
Once your storage subsystem is installed it is essential that you continue to monitor the
performance of the subsystem. IBM i Performance Tools reports provide information on
I/O rates, and on response times to the server. This allows you to track trends in
increased workload and changes in response time. You should review these trends to
ensure that your storage subsystem continues to meet your performance and capacity
requirements. Make sure that you are current on fixes to ensure that Performance Tools
reports are reporting your external storage correctly.

If you have multiple servers attached to a storage subsystem, particularly if you have
other platforms attached in addition to IBM i, it is essential that you have a performance
tool that enables you to monitor the performance from the storage subsystem perspective.

IBM TPC provides a comprehensive tool for managing the performance of Storwize
V7000s. You should collect data from all attached storage subsystems at 15-minute
intervals; in the event of a performance problem IBM will ask for this data, and without
it resolution of any problem may be prolonged.

There is a simple performance management reporting interface available through the
Storwize V7000 GUI. This provides a subset of the performance metrics available from
TPC.

Copy Services Considerations


The Storwize V7000 has 2 options for Global Mirror: the classic Global Mirror, and the
Change Volumes enhancement, which provides a flexible and configurable RPO,
allowing GM to be maintained during peak periods of bandwidth constraint.


If you are not planning to use the Change Volumes option of Global Mirror it is essential
that you size the bandwidth to accommodate the peaks or else risk impact to production
performance.

There is currently a limit of 256 Global Mirror with Change Volumes relationships per
system. Storwize does not support a single LUN being in multiple replication
relationships. Therefore, if you are planning on using HyperSwap, those same LUNs
cannot also be in a Global Mirror relationship.

The current zoning guidelines for mirroring installations advise that a maximum of two
ports on each SVC node/Storwize V7000 node canister be used for mirroring. The
remaining two ports on the node/canister should not have any visibility to any other
cluster. If you have been experiencing performance issues when mirroring is in operation,
implementing zoning in this fashion might help to alleviate this situation.

Consulting services are available from the IBM STG Lab Services to assist in the
planning and implementation of Storwize V7000 Copy Services in an IBM i
environment:
http://www-03.ibm.com/systems/services/labservices/platforms/labservices_i.html


IBM FlashSystem
IBM FlashSystem leverages solid-state storage with FlashCore technology, including the
use of Field Programmable Gate Arrays (FPGAs) and two-dimensional Flash RAID.
These advanced architectural concepts provide, among other benefits:
 FPGA-enabled access to data on flash chips, eliminating the need for CPU and
operating system in the data path
 Two levels of storage protection, with Variable Stripe RAID across chips in a flash
module, and RAID-5 protection across flash modules
FlashCore technology thus enables FlashSystems to deliver extreme performance and
advanced flash management.

The IBM FlashSystem family includes various models. The models with the first
character V in their name combine FlashCore technology with proven SVC virtualization
capabilities: in addition to superior FlashSystem performance, they deliver functions such
as Real-time Compression, FlashCopy, Remote Copy, Thin provisioning, data migration,
and virtualization of external storage.

IBM FlashSystems and IBM i


The following FlashSystem models are supported for IBM i connection:
IBM FlashSystem 900
IBM FlashSystem 840
IBM FlashSystem V9000
IBM FlashSystem V840

Note: the other models of the FlashSystem family can connect to IBM i when virtualized
with IBM SAN Volume Controller (SVC).

FlashSystems V9000 and V840 connect to IBM i in either native direct connection,
native connection with SAN fabric, with VIOS_NPIV, or with VIOS virtual SCSI.

FlashSystem 900 and FlashSystem 840 connect to IBM i in either native direct
connection, native connection with SAN fabric, or with VIOS_NPIV.

Requirements for connecting IBM FlashSystems to IBM i


Minimum Requirements for connecting FlashSystem 900 and
FlashSystem 840 to IBM i

POWER systems:
POWER7 with firmware level FW780 or later
POWER8 with firmware level FW810 or later


Note: Some POWER models do not support the required firmware levels. Connecting
FlashSystem 840 or FlashSystem 900 to an IBM i LPAR on such POWER models is not
supported.

IBM i level:
Release 7.2 Technology Refresh 2 or later

VIOS level for NPIV connection:
2.2.3.4 or later

FC adapters:
2-port 8Gb adapters feature numbers 5735 and 5273
4-port 8Gb adapter feature number 5729
2-port 16Gb adapters feature numbers EN0A and EN0B

FCoE adapters for NPIV connection:
4-port adapters with two 10 Gb SFP+ optical ports and two 1 Gb Ethernet ports, feature
numbers EN0H and EN0J
4-port adapters with two 10 Gb Long Range optical ports and two 1 Gb Ethernet ports,
feature numbers EN0M and EN0N
4-port adapters with two 10 Gb FCoE copper ports and two 1 Gb Ethernet ports, feature
numbers EN0K and EN0L

Supported switches:
Brocade and CISCO based switches

Native direct connection without SAN fabric requires 16Gb or 8Gb adapters.

Minimum Requirements for connecting FlashSystem V840 and V9000 to IBM i

Requirements for Native connection

POWER systems:
POWER7, POWER8

IBM i level:
In POWER7: IBM i V7.1 TR7 or later
In POWER8: IBM i V7.1 TR8 or later

FC adapters for attach with SAN fabric:
2-port 4Gb adapters feature numbers 5774 and 5276
2-port 8Gb adapters feature numbers 5735 and 5273
4-port 8Gb adapter feature number 5729
2-port 16Gb adapters feature numbers EN0A and EN0B


FC adapters for direct attach:
2-port 4Gb adapters feature numbers 5774 and 5276
16Gb adapters EN0A and EN0B connected to 16Gb adapters in V840 or V9000

Requirements for VIOS_NPIV connection

POWER systems:
POWER7, POWER8

IBM i level:
In POWER7: IBM i V7.1 TR6 or later
In POWER8: IBM i V7.1 TR8 or later

FC adapters:
2-port 8Gb adapters feature numbers 5735 and 5273
4-port 8Gb adapter feature number 5729
2-port 16Gb adapters feature numbers EN0A and EN0B

Please note: NPIV connection of FlashSystem V840 is supported also on POWER6 with
IBM i V7.1 TR6 or later.

Requirements for VIOS_VSCSI connection

POWER systems:
POWER7, POWER8

IBM i level:
In POWER7: IBM i V6.1 or later
In POWER8: IBM i V7.1 TR8 or later

Please note: VIOS_VSCSI connection of FlashSystem V840 is supported also on
POWER6 with IBM i V6.1 or later.

Performance estimation tools for FlashSystem with IBM i


FLiP tool
The FLiP tool, instructions on how to use it, and a presentation on the functions it
provides can be found at the following websites:
IBMers - http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS5291
Business Partners - https://www-304.ibm.com/partnerworld/wps/servlet/mem/ContentHandler/tech_PRS5291

WP102305 Copyright IBM Corporation - July 22nd 2015 30


Hints and tips for implementing Storwize and IBM FlashSystems in a IBM i environment

The tool estimates the duration of IBM i jobs running on FlashSystem. It also compares
the present job duration to estimated duration on FlashSystem.

The tool requires IBM i Collection Services data. A query provided as part of the tool
extracts the relevant values from the IBM i data and places them in a spreadsheet, from
which we copy and paste the data into the FLiP Performance Spreadsheet. In this
spreadsheet we specify several values describing the workload characteristics and the
capacity on FlashSystem; we can also select only the jobs in which we are interested.
FLiP then generates a presentation with the results.

iDoctor
iDoctor is an IBM i performance investigation tool. It can be downloaded from the
following web page:
https://www-912.ibm.com/i_dir/idoctor.nsf
Note that some iDoctor components require a license key.

iDoctor can be used for an in-depth investigation of which parts of the IBM i data and
workload can be expected to see performance improvement with FlashSystem. Its
components Collection Services Investigator and Job Watcher, used with IBM i
Collection Services data, provide different types of information, such as workload
characteristics, which jobs experience the highest number of reads, and which objects
experience the highest read service time, that help to evaluate the use of FlashSystem
for the customer's workload.

Refer to the document FlashSystems Guide for IBM i Performance at the following
websites for guidelines on how to use both the FLiP tool and iDoctor to evaluate
FlashSystem for IBM i:
IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD106347
Business Partners: https://www-304.ibm.com/partnerworld/wps/servlet/mem/ContentHandler/tech_TD106347

Implementation guidelines for FlashSystems with IBM i


Creating LUNs in the "background" FlashSystem of V840 / V9000
On V840 create 16 LUNs of the same size. The guideline of 16 LUNs is based on the
idea that the I/O load is distributed equally over all controller ports; therefore the 16
LUNs should be the same size. You can assign 100% of the FlashSystem capacity to
the 16 LUNs.
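
For illustration, the arithmetic behind this guideline can be sketched in Python as
follows; the usable capacity value is an assumed example, not a recommendation:

# Minimal sketch: split an assumed usable FlashSystem capacity into 16
# equally sized LUNs (mdisks). Replace usable_capacity_gb with the real
# usable capacity of your system.
usable_capacity_gb = 20480   # assumed example: ~20 TB usable
num_luns = 16                # guideline: 16 LUNs of the same size

lun_size_gb = usable_capacity_gb / num_luns
print(f"Create {num_luns} LUNs of {lun_size_gb:.0f} GB each")  # 1280 GB each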

In V9000, one mdisk comprising all flash modules in one FlashSystem is created
automatically at system setup.

Disk pools in V840 / V9000


In V840 create one disk pool containing the 16 FlashSystem LUNs (mdisks). The 16
LUNs will then be used in one pool.


In V9000, create one disk pool containing the mdisk from one FlashSystem.
If multiple FlashSystems are implemented in V9000, you have the choice to create
multiple disk pools, one from each FlashSystem, or to create one disk pool from all
FlashSystems. For example, with many IBM i LPARs it may be a good idea to set up
multiple disk pools, each from one FlashSystem, to keep the other LPARs running in
case one FlashSystem fails. With one IBM i LPAR, it might be a good idea to create
one disk pool from all FlashSystems to provide the best performance.

All the data will be distributed over all flash modules regardless of the pool definition
on the controller side, so there is no need to split the production and FlashCopy data
into different pools. If there are multiple IBM i LPARs, we recommend sharing the
disk pool between them for best performance; however, if one disk pool is created from
each FlashSystem, you may consider separating the pools among the IBM i LPARs for
resiliency. If FlashCopy is used, we recommend sharing the disk pool between
production and FlashCopy LUNs.

Number of paths to IBM i


Connect the LUNs to IBM i with 4 active paths to provide sufficient bandwidth for the
increased I/O rate that is possible with flash technology. However, for smaller
configurations you may consider implementing 2 active paths per LUN without
experiencing a performance bottleneck.

Size of LUNs
We recommend creating LUNs in the size range of 70 GB – 150 GB on FlashSystem to
connect to IBM i. The guidelines listed in the section LUN size apply to FlashSystem too.

Capacity and number of LUNs per port in FC adapter


As our most recent measurements of 16 Gb adapters with IBM i, SVC, and FlashSystem
show, a port in a 16 Gb adapter can handle about 70000 IO/sec at 16 KB blocksize.
Applying 70% utilization gives a recommended maximum throughput of 49000 IO/sec.

One port in a 16 Gb adapter can sustain a maximum bandwidth of about 1300 MB/sec at
16 KB blocksize. Applying 70% utilization gives 910 MB/sec.

From these measurements we can roughly estimate the maximum capacity per port for
good performance, assuming an IBM i Access Density of 4 IO/sec per GB on
FlashSystems: divide the maximum IO/sec at 70% utilization by the Access Density,
then apply 40% LUN utilization. (49000 IO/sec / 4 IO/sec/GB) * 0.4 = 4900 GB
Note: In Multipath, calculate 2 * 4900 GB = 9800 GB per 2 active ports, or
4 * 4900 GB = 19600 GB per 4 active ports.

After deciding the LUN size, divide the maximum capacity per port by the LUN size to
get the number of LUNs per port. Note the rule of a maximum of 64 LUNs per port.
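
This rule of thumb can be expressed as a short Python sketch. The measured values
come from the text above; the 100 GB LUN size is an assumed example from the
70 GB - 150 GB guideline:

# Rule-of-thumb sizing for one port of a 16 Gb FC adapter (values from the text).
max_iops_per_port = 70000     # measured ~70000 IO/sec at 16 KB blocksize
port_utilization = 0.70       # plan for 70% port utilization
access_density = 4            # assumed IBM i Access Density, IO/sec per GB
lun_utilization = 0.40        # plan for 40% LUN utilization
lun_size_gb = 100             # assumed example LUN size

usable_iops = max_iops_per_port * port_utilization                  # 49000 IO/sec
max_capacity_gb = (usable_iops / access_density) * lun_utilization  # 4900 GB

# Number of LUNs per port, capped by the rule of maximum 64 LUNs per port.
luns_per_port = min(int(max_capacity_gb // lun_size_gb), 64)
print(f"Max capacity per port: {max_capacity_gb:.0f} GB")
print(f"LUNs per port at {lun_size_gb} GB: {luns_per_port}")
# With Multipath, multiply the capacity by the number of active ports (2 or 4).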


Maximum number of 16 Gb adapters per EMX0 expansion drawer


Most recent measurements indicate that an EMX0 drawer can sustain about 22 GB/sec
of sequential throughput at 1 MB blocksize, or about 15 GB/sec of sequential throughput
at smaller blocksizes. For one port in a 16 Gb adapter we estimate 1300 MB/sec at 16
KB blocksize. Therefore, the rule-of-thumb calculation for the number of adapters in the
drawer is as follows:
15 * 1024 MB/sec / (2 * 1300 MB/sec) = 15360 MB/sec / 2600 MB/sec = approximately
6 adapters per drawer
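
A minimal sketch of the same rule of thumb, using the measured values quoted above:

# Rule of thumb: how many 2-port 16 Gb adapters fit in one EMX0 drawer
# before the drawer's sequential bandwidth becomes the limit.
drawer_bandwidth_mb = 15 * 1024   # ~15 GB/sec at smaller blocksizes
port_bandwidth_mb = 1300          # ~1300 MB/sec per 16 Gb port at 16 KB blocksize
ports_per_adapter = 2

adapters = drawer_bandwidth_mb / (ports_per_adapter * port_bandwidth_mb)
print(f"~{adapters:.1f} adapters per drawer")   # approximately 6 (5.9)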

Zoning the switches to connect FlashSystem V840 and V9000


Zone the switches so that one port in an IBM i adapter is in a zone with 2 ports from the
V840 / V9000, one port from each node canister. In Multipath with 2 paths, the two
IBM i ports that have the set of LUNs assigned should belong to different adapters. In
Multipath with 4 paths, the ports may belong to 2 adapters.

When applicable: if 2 IO groups in the SVC cluster are used, zone ports from half of the
IBM i adapters with one IO group, and ports from the other half with the other IO group.
In Multipath, assign a LUN to two ports from different adapters, each port zoned with the
caching IO group of the LUN. In some cases, such as preparing for LUN migration, you
may consider zoning each IBM i port with both IO groups. IBM i will establish paths to
a LUN through the caching IO group of the LUN.
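
As an illustration of this zoning pattern, the following sketch builds one zone per IBM i
port, pairing it with one port from each node canister. All WWPNs are invented
placeholders, not real values:

# Sketch of the zoning pattern described above: each IBM i port is zoned
# with two V840/V9000 ports, one from each node canister.
# All WWPNs are invented placeholders; the two IBM i ports are assumed
# to be on different adapters, per the 2-path guideline.
ibmi_ports = ["10:00:00:00:c9:aa:00:01", "10:00:00:00:c9:bb:00:01"]
node1_port = "50:05:07:68:0b:00:00:01"
node2_port = "50:05:07:68:0b:00:00:11"

zones = {}
for i, host_port in enumerate(ibmi_ports, start=1):
    # One zone per IBM i port: the host port plus one port per node canister.
    zones[f"IBMi_zone_{i}"] = [host_port, node1_port, node2_port]

for name, members in zones.items():
    print(name, members)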

Creating LUNs in FlashSystems 840 and 900 for IBM i


When attaching FlashSystem to IBM i without SVC, keep in mind that IBM i attachment
supports only LUNs with 4 KB sectors. Therefore, make sure to define the LUNs for
IBM i with 4 KB (4096 byte) sectors. The exact capacity of a LUN on FlashSystem is
reported to IBM i; there is no capacity loss.

Note: With a natively attached FlashSystem, IBM i does not perform the block
translation that adds a 9th sector to every 8 sectors, which is the case with attached
SVC, Storwize, or V840 / V9000.
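
To illustrate the difference, the following minimal sketch compares the capacity visible
to IBM i for a 512-byte sector LUN (with the 8-to-9 sector translation) and a 4 KB
sector LUN on FlashSystem; the 100 GB LUN size is an assumed example:

# Illustration of the 8-to-9 sector block translation mentioned above.
# With 512-byte sector LUNs (SVC/Storwize/V840/V9000), IBM i stores every
# 8 sectors of data plus a 9th sector, so only 8/9 of the configured
# capacity is usable. With 4 KB sector LUNs on a natively attached
# FlashSystem 840/900 there is no such translation.
lun_size_gb = 100                   # assumed example LUN size

usable_512 = lun_size_gb * 8 / 9    # ~88.9 GB usable by IBM i
usable_4k = lun_size_gb             # full capacity usable by IBM i
print(f"512-byte sectors: {usable_512:.1f} GB usable of {lun_size_gb} GB")
print(f"4096-byte sectors: {usable_4k:.1f} GB usable of {lun_size_gb} GB")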

Zoning the switches to connect FlashSystems 840 or 900


Zone the switches so that one or more ports in IBM i are zoned with one port in the
FlashSystem.

When using Multipath with 2 paths, zone one port in IBM i with a port in one
FlashSystem node, and zone a port in another IBM i adapter with a port in the other
FlashSystem node. Map the LUNs to both IBM i ports to achieve Multipath; IBM i will
establish two active paths for each LUN, one through each node.


Further References
For further detailed information on implementing Storwize V7000 in an IBM i
environment, refer to the following resources. The Redbooks publications can be
downloaded from www.redbooks.ibm.com

PowerHA references:

• PowerHA Website
– www.ibm.com/systems/power/software/availability/
• Lab Services
– http://www-03.ibm.com/systems/services/labservices
• PowerHA SystemMirror for IBM i Cookbook
– http://www.redbooks.ibm.com/abstracts/sg247994.html?Open
• Implementing PowerHA for IBM i
– http://www.redbooks.ibm.com/abstracts/sg247405.html?Open
• IBM System Storage Copy Services and IBM i: A Guide to Planning and
Implementation
– http://www.redbooks.ibm.com/abstracts/sg247103.html?Open
• Is your ISV solution registered as ready for PowerHA?
– http://www-304.ibm.com/isv/tech/validation/power/index.html

Storwize V7000 references:

• Introducing the Storwize V7000
– http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4391
• External storage solutions for IBM i
– http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4605
• PowerHA options for IBM i
– http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4021
• Simple Configuration Example for Storwize V7000 FlashCopy and PowerHA
SystemMirror for i
– http://www.redbooks.ibm.com/abstracts/redp4923.html?Open

VIOS references:
• IBM i Virtualization and Open Storage
– http://www-03.ibm.com/systems/resources/systems_i_Virtualization_Open_Storage.pdf
• IBM PowerVM Best Practices
– http://www.redbooks.ibm.com/abstracts/sg248062.html?Open
• IBM PowerVM Virtualization Introduction and Configuration
– http://www.redbooks.ibm.com/abstracts/sg247940.html
• IBM PowerVM Virtualization Managing and Monitoring
– http://www.redbooks.ibm.com/abstracts/sg247590.html
• IBM i and Midrange storage redbook
– http://www.redbooks.ibm.com/abstracts/sg247668.html?Open
• Fibre Channel (FC) adapters supported by VIOS
– http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html
• Disk Zoning White paper
– http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101914

Acknowledgements

Thanks to Lamar Reavis, Byron Grossnickle, and William Wiegand (Storage ATS), Sue
Baker (Power ATS), Kris Whitney (Rochester Development) and Selwyn Dickey and
Brandon Rao (STG Lab Services).
