Hints and Tips For Implementing Storwize V7000 V3.1 30 - July
Alison Pate
IBM Advanced Technical Sales Support
patea@us.ibm.com
Jana Jamsek
IBM Advanced Technical Skills, Europe
jana.jamsek@si.ibm.com
Table of Contents
Introduction
IBM i external storage options
IBM i Storage Management
   Single level storage and object oriented architecture
   Translation from 520 byte blocks to 512 byte blocks
Virtual I/O Server (VIOS) Support
   VIOS vSCSI support
      Requirements
      Implementation Considerations
   NPIV Support
      Requirements for VIOS_NPIV connection
      Implementation considerations
Native Support of Storwize V7000
   Requirements for native connection
   Implementation Considerations
   Direct Connection of Storwize V7000 to IBM i (without a switch)
Sizing for performance
Storwize V7000 Configuration options
Host Attachment
Multipath
   Description of IBM i Multipath
   Insight to Multipath with Native and VIOS NPIV connection
   Insight to Multipath with VIOS VSCSI connection
Zoning SAN switches
Boot from SAN
IBM i mirroring for V7000 LUNs
Thin Provisioning
Real-time Compression
Solid State Drives (SSD)
Data layout
   LUN compared to Disk arm
   LUN Size
   Adding LUNs to ASP
   Disk unit Serial number, type, model and resource name
   Identify which V7000 LUN is which disk unit in IBM i
Software
Performance Monitoring
Copy Services Considerations
IBM FlashSystem
IBM FlashSystems and IBM i
Requirements for connecting IBM FlashSystems to IBM i
   Minimum Requirements for connecting FlashSystem 900 and FlashSystem 840 to IBM i
   Minimum Requirements for connecting FlashSystem V840 and V9000 to IBM i
Performance estimation tools for FlashSystem with IBM i
   FLiP tool
   iDoctor
Implementation guidelines for FlashSystems with IBM i
   Creating LUNs on “background” FlashSystem of V840 / V9000
   Disk pools in V840 / V9000
   Number of paths to IBM i
   Size of LUNs
   Capacity and number of LUNs per port in FC adapter
   Maximum number of 16Gb adapters per EMX0 expansion drawer
   Zoning the switches to connect FlashSystem V840 and V9000
   Creating LUNs in FlashSystems 840 and 900 for IBM i
   Zoning the switches to connect FlashSystems 840 or 900
Further References
Introduction
Midrange and large IBM i customers are increasingly implementing IBM Storwize systems
as the external storage for their IBM i workloads. The Storwize family of storage
systems not only provides a variety of disk drives, RAID levels, and connection types for an
IBM i installation; it also offers options for flexible and well-managed high
availability and disaster recovery solutions for IBM i.
This document provides Hints and Tips for implementing the Storwize V7000 with IBM
i. The Storwize software is consistent across the entire Storwize family including the
SAN Volume Controller (SVC), V7000, V5000 and V3700 and therefore the content is
applicable to all the products in the family.
The July 2015 version of this document also includes recommendations for implementing
the IBM FlashSystem V840 or V9000 with IBM i, as well as FlashSystem models
virtualized behind a Storwize or SVC system.
Details of supported configurations and software levels are provided by the System Storage Interoperation Center: http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes
Sue Baker maintains a useful reference of supported servers, adapters, and storage systems on Techdocs for IBMers and Business Partners:
IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4563
Business Partners: https://www-304.ibm.com/partnerworld/wps/servlet/ContentHandler/tech_PRS4563
The IBM i server is different in that it takes responsibility for managing the information
in auxiliary storage pools (also called disk pools or ASPs).
When you create a file, you estimate how many records it should have. You do not assign
it to a storage location; instead, the system places the file in the best location that ensures
the best performance. In fact, it normally spreads the data in the file across multiple disk
units. When you add more records to the file, the system automatically assigns additional
space on one or more disk units. Therefore it makes sense to use disk copy functions to
operate on either the entire disk space or the iASP. PowerHA supports only an iASP-based copy.
IBM i uses a single level storage, object oriented architecture. It sees all disk space and
the main memory as one storage area and uses the same set of virtual addresses to cover
both main memory and disk space. Paging of the objects in this virtual address space is
performed in 4 KB pages. However, data is usually blocked and transferred to storage
devices in blocks larger than 4 KB. Blocking of transferred data is based on many
factors, for example, Expert Cache usage.
IBM i changes the data layout as follows to support the 512-byte blocks (sectors) of
external storage: for every page (8 * 520-byte sectors) it uses an additional ninth
sector; it stores the 8-byte headers of the 520-byte sectors in the ninth sector, and thereby
changes the original 8 * 520-byte blocks into 9 * 512-byte blocks. The data that was
previously stored in 8 sectors is now spread across 9 sectors, so the required disk
capacity on the V7000 is 9/8 of the IBM i usable capacity; conversely, the usable capacity
in IBM i is 8/9 of the capacity allocated in the V7000.
Therefore, when attaching a Storwize V7000 to IBM i, whether through vSCSI, NPIV, or
native attachment, this 520:512-byte block mapping means that you have a capacity
‘overhead’: only 8/9 of the effective capacity is usable.
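The 9/8 capacity relationship can be sketched with two small helpers (the function names are ours, purely illustrative):

```python
# Sketch of the 520:512-byte sector translation overhead.
# For every 8 x 520-byte sectors, IBM i consumes 9 x 512-byte sectors on the
# V7000, so allocated capacity is 9/8 of usable (and usable is 8/9 of allocated).

def v7000_allocation_for_usable(usable_gb: float) -> float:
    """Capacity to allocate on the V7000 for a given IBM i usable capacity."""
    return usable_gb * 9 / 8

def ibmi_usable_for_allocation(allocated_gb: float) -> float:
    """IBM i usable capacity for a given allocated V7000 capacity."""
    return allocated_gb * 8 / 9
```

For example, an ASP that needs 80 GB of usable IBM i capacity requires 90 GB allocated on the V7000.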
The Virtual I/O Server can provide virtualized storage devices, storage adapters and
network adapters to client partitions running an AIX, IBM i, or Linux operating
environment. The core I/O virtualization capabilities of the Virtual I/O server are shown
below:
- Virtual SCSI
- Virtual Fibre Channel using NPIV (N_Port ID Virtualization)
- Virtual Ethernet bridge using Shared Ethernet Adapter (SEA)
Implementation Considerations
The storage virtualization capabilities of PowerVM and the Virtual I/O Server are
supported by the Storwize V7000 series as VSCSI backing devices in the Virtual I/O
Server. Remember that if you use VSCSI devices, only 8/9 of the configured LUN capacity
is usable for IBM i data (8x520B -> 9x512B translation). Storwize V7000
LUNs are surfaced as generic 6B22 devices to IBM i.
LUN size is defined on IBM i in decimal gigabytes (GB) and on the V7000 in binary
gigabytes (GiB). A decimal kilobyte has 1000 bytes and a binary kilobyte has 1024
bytes, so a given quantity corresponds to fewer binary gigabytes than decimal gigabytes.
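The unit mismatch can be illustrated with simple conversions (helper names are ours):

```python
# Decimal GB (10**9 bytes) versus binary GiB (2**30 bytes).

def gib_to_gb(gib: float) -> float:
    """Convert binary gigabytes to decimal gigabytes."""
    return gib * 2**30 / 10**9

def gb_to_gib(gb: float) -> float:
    """Convert decimal gigabytes to binary gigabytes."""
    return gb * 10**9 / 2**30
```

For example, an 80 GB LUN as seen by IBM i corresponds to roughly 74.5 GiB on the V7000.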
When using VIOS to virtualize storage make sure that you have 2 VIOS servers to
provide alternate pathing in the event of a failure.
Multi-pathing across two Virtual I/O Servers is supported with IBM i 6.1.1 or later. More
VIOS servers may be needed to support multiple IBM i partitions. Make sure that you size
the VIOS servers and IO adapters to support the anticipated throughput.
FC adapter attributes
Specify the following attributes for each SCSI I/O Controller Protocol Device (fscsi)
device that connects a V7000 LUN for IBM i.
The attribute fc_err_recov should be set to fast_fail.
The attribute dyntrk should be set to yes.
These two attribute values control how the AIX FC adapter driver and the AIX disk
driver handle certain types of fabric-related errors. Without them, such errors are
handled differently and can cause unnecessary retries.
We recommend setting the same attribute values for both VIOS VSCSI and
VIOS_NPIV connections.
NPIV Support
N_Port ID Virtualization (NPIV) virtualizes the Fibre Channel adapters on the Power server,
allowing the same physical IOA port to be shared by multiple LPARs. NPIV requires
VIOS to support the virtualization, and LUNs are mapped directly to the IBM i partition
through a virtual Fibre Channel adapter.
IBM i 7.1 TR6 is required for NPIV support of Storwize V7000. You will also need
NPIV capable switches to attach the Storwize V7000 to the VIOS server.
Implementation considerations
NPIV attachment of Storwize V7000 provides full support for Power HA and LUN level
switching.
LUNs are only presented to the host from the owning IO group port.
NPIV can support up to 64 active and 64 passive LUNs per virtual path, although 32 is
recommended as a guideline for optimum performance. Remember that the physical path
must still be sized to provide adequate throughput.
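The per-path guideline translates into a simple sizing rule of thumb (this helper is our own sketch, not an IBM tool):

```python
import math

def virtual_paths_needed(total_luns: int, luns_per_path: int = 32) -> int:
    """Virtual FC paths needed if LUNs per path are kept at the recommended
    guideline of 32 rather than the 64-active maximum."""
    return math.ceil(total_luns / luns_per_path)
```

For example, 100 LUNs at the 32-per-path guideline call for 4 virtual paths; the physical adapters behind them must still be sized for the aggregate throughput.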
It is possible to combine NPIV and native connections for a host connection, but it is not
possible to combine NPIV and vSCSI connections.
Implementation Considerations
It is possible to combine NPIV and native connections for a host connection, but it is not
possible to combine NPIV and vSCSI connections. Migrating from vSCSI to a native
connection requires that the system be powered off.
The LUN size is flexible; choose the LUN size that gives you enough LUNs for good
performance according to Disk Magic. A good recommended size to start modeling is
80GB. Typically IBM i LUN sizes on Storwize are 100-200GB.
It is equally important to ensure that the sizing requirements for your SAN configuration
also take into account the additional resources required when enabling Copy Services.
Use Disk Magic to model the overheads of replication (Global Mirror and Metro Mirror),
particularly if you are planning to enable Metro Mirror.
A Bandwidth Sizing should be conducted for Global Mirror and Metro Mirror.
Note that Disk Magic does not support modeling FlashCopy (that is, Point-in-Time Copy
functions), so make sure you do not size the system to maximum recommended
utilizations if you also want to exploit FlashCopy snapshots for backups. It is not
recommended to use large nearline drives as FlashCopy targets; a better option is to share
a pool of larger disks between the FlashCopy source and target LUNs to avoid creating a
bottleneck.
If you are evaluating an Easy Tier configuration use Disk Magic to determine an
optimum hardware combination. For best performance avoid using high capacity, slower
nearline drives for performance sensitive IBM i environments. A FlashSystem tier 0 with
a Nearline tier 1 is likely to result in unpredictable performance for a production
environment.
You will need to collect IBM i performance data. Generally, you will collect performance
data for a week’s worth of performance for each system/lpar and send the resulting
reports for the sizing.
Each set of reports should include print files for the following:
Send the report print files as indicated below (send reports as .txt file format type). If you
are collecting from more than one IBM i or LPAR, the reports need to be for the same
time period for each system/lpar, if possible.
The default configuration option in the GUI is RAID 5 with a default array width of 7+P
for SAS HDDs, RAID 6 for Nearline HDDs with a default array width of 10+P+Q and
RAID 1 with a default array width of 2 for SSDs.
The recommendation is to create a dedicated storage pool for IBM i with enough
managed disks backed by a sufficient number of spindles to handle the expected IBM i
workload. Modeling with Disk Magic using actual customer performance data should be
performed to size the storage system properly.
Host Attachment
IBM i will log into a Storwize V7000 node only once from an IO adapter port on the IBM
i LPAR.
Multiple paths between the switch and the Storwize V7000 provide some level of
redundancy: if the path in use (active) fails, IBM i will automatically start using the other
path. However there is no way to force an IBM i partition to use a specific port and if
multiple partitions are all configured to use multiple paths between the switch and the
Storwize V7000 the result is typically that all partitions will use the same port on the
Storwize V7000. The recommended option is to provide multipath support by using two
VIOS partitions each with a path to the Storwize V7000.
The same connection considerations apply when connecting using the Native Connection
option without VIOS.
Best practices guidelines:
- Isolate host connections from remote copy connections (Metro Mirror or Global Mirror) where possible.
- Isolate other host connections from IBM i host connections on a host port basis.
- Always have symmetric pathing by connection type (that is, use the same number of paths on all host adapters used by each connection type).
- Size the number of host adapters needed based on expected aggregate maximum bandwidth and maximum IOPS (use Disk Magic or other common sizing methods based on actual or expected workload).
Multipath
Multipath provides greater resiliency for SAN attached storage. The IBM i supports up to
8 paths to each LUN. In addition to the availability considerations, lab performance
testing has shown that 2 or more paths provide performance improvements when
compared to a single path. Typically 2 paths to a LUN is the ideal balance of price and
performance. The Disk Magic tool supports only multipathing over 2 paths.
You might want to consider more than 2 paths for workloads where there is high wait
time, or where high IO rates are expected to LUNs.
IBM i Multipath provides resiliency if the hardware for one of the paths fails. It also
improves performance, because Multipath balances IO in round-robin mode among the
paths.
Every LUN in the Storwize V7000 uses one V7000 node as its preferred node: the IO traffic
to and from that LUN normally goes through the preferred node, and if that node fails the
IO is transferred to the remaining node. With IBM i Multipath, all the paths to a LUN
through the preferred node are active and the paths through the non-preferred node are
passive. Multipath balances the load among the paths to a LUN that go through the node
which is preferred for that LUN.
In native and VIOS_NPIV connections, Multipath is achieved by assigning the same LUN
to multiple physical or virtual FC adapters in IBM i. More precisely, the same LUN is
assigned to multiple WWPNs, each from one port in a physical or virtual FC adapter,
with each virtual FC adapter assigned to a different VIOS. For clarity, we limit the
discussion here to multipath with two WWPNs. With the recommended switch zoning,
4 paths are established from a LUN to IBM i: two of the paths go through adapter 1 (in
NPIV, also through VIOS 1) and two through adapter 2 (in NPIV, also through VIOS 2).
Of the two paths through each adapter, one goes through the preferred node and one
through the non-preferred node. Therefore, two of the 4 paths are active, each through a
different adapter (and a different VIOS if NPIV is used), and two are passive, likewise
each through a different adapter. IBM i Multipathing uses a round-robin algorithm to
balance IO among the active paths.
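The four-path layout described above can be enumerated programmatically (adapter and node names are illustrative):

```python
def enumerate_paths(adapters, nodes, preferred_node):
    """Build the (adapter, node) paths for one LUN and split them into
    active paths (through the preferred node) and passive paths."""
    paths = [(a, n) for a in adapters for n in nodes]
    active = [p for p in paths if p[1] == preferred_node]
    passive = [p for p in paths if p[1] != preferred_node]
    return active, passive

# Two IBM i adapters, each zoned to both V7000 nodes -> 4 paths per LUN.
active, passive = enumerate_paths(["adapter1", "adapter2"],
                                  ["node1", "node2"], "node1")
```

Running this yields two active paths (one per adapter, both through the preferred node) and two passive paths, matching the screen-shot of two active and two passive paths per LUN.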
Picture 1 presents the detailed view of paths in Multipath with natively or VIOS_NPIV
connected V7000. The solid lines refer to active paths, and the dotted lines refer to
passive paths. Red lines present one switch zone and green lines present the other switch
zone. The screen-shot below presents the IBM i view of paths to the LUN connected in
VIOS_NPIV. As can be seen on the screen-shot, two active and two passive paths are
established to each LUN.
Picture 2 presents the detailed view of paths in Multipath with a VIOS VSCSI connected
V7000. IBM i uses two paths to the same LUN, each path through one VIOS to the
relevant hdisk connected with a virtual SCSI adapter. Both paths are active, and the IBM i
load balancing algorithm is used for IO traffic. Each VIOS has 8 connections to the V7000;
therefore 8 paths are established from each VIOS to the LUN. The IO traffic through
these paths is handled by the VIOS multipath driver.
The screen-shots show both paths for the LUN in IBM i, and the paths in each VIOS for
the LUN.
Note: A port in a physical or virtual FC adapter in IBM i has two WWPNs. For connecting
external storage we use the first WWPN; the second WWPN is used for Live
Partition Mobility (LPM). Therefore, zone both WWPNs if you plan to use LPM;
otherwise, zone just the first WWPN.
When a connection with VIOS virtual SCSI is used, we recommend zoning one physical
port in VIOS with all available ports in the V7000, or with as many ports as possible to
allow load balancing, keeping in mind that a maximum of 8 paths are available from
VIOS to the V7000. V7000 ports zoned with one VIOS port should be evenly spread
between the V7000 node canisters.
Example 2: We use two adapters in VIOS and zone one port from each adapter with 4
V7000 ports, 2 ports from each canister. This way the V7000 ports are balanced between
the VIOS ports and between the V7000 node canisters, and we do not exceed the 8-path
limit from VIOS to the V7000.
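The example zoning can be sketched and checked programmatically (the port names below are hypothetical, invented for illustration):

```python
# Hypothetical zoning: one port from each of two VIOS FC adapters, each zoned
# to 4 V7000 ports, 2 per node canister, at most 8 paths per VIOS in total.
zoning = {
    "vios_fcs0": ["canister1_p1", "canister1_p2", "canister2_p1", "canister2_p2"],
    "vios_fcs1": ["canister1_p3", "canister1_p4", "canister2_p3", "canister2_p4"],
}

def check_zoning(zoning, max_paths=8):
    """Verify the 8-path limit and the even spread across node canisters."""
    total_paths = sum(len(ports) for ports in zoning.values())
    assert total_paths <= max_paths, "exceeds 8 paths from VIOS to V7000"
    for vios_port, ports in zoning.items():
        c1 = sum(p.startswith("canister1") for p in ports)
        c2 = sum(p.startswith("canister2") for p in ports)
        assert c1 == c2, f"{vios_port} not balanced across canisters"
    return total_paths
```

For the example zoning this returns 8 paths, evenly balanced across the two canisters.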
When installing the IBM i operating system with disk capacity on the V7000, the
installation prompts you to select one of the available V7000 LUNs for the LoadSource.
When migrating from internal disk drives or from another storage system to the V7000,
we can use IBM i ASP balancing to migrate all the disk capacity except the LoadSource.
After the non-LoadSource data is migrated to the V7000 with ASP balancing, we migrate
the LoadSource by copying it from the previous disk unit to the LUN in the V7000. The
V7000 LUN must be of equal or greater size than the disk unit previously used for the
LoadSource. This migration method can be used with all V7000 connection types: native,
VIOS_NPIV, and VIOS vSCSI.
IBM i mirroring for V7000 LUNs
When using IBM i mirroring between LUNs on two V7000 systems, present the LUNs
from each V7000 through separate adapters; this way you ensure that the mirroring is
started between the two V7000 systems and not among the LUNs in the same V7000.
Thin Provisioning
IBM i can take advantage of thin provisioning, as it is transparent to the server. However,
first, you need to provide adequate HDDs to sustain the required performance
regardless of whether the capacity is actually used. Second, while IBM i 7.1 and
later do not pre-format LUNs, so initial allocations can be thin provisioned, there is
no space reclamation, and thus the effectiveness of thin provisioning may decline over
time.
You still need to ensure that you have sufficient disks configured to maintain
performance of the IBM i workload.
Real-time Compression
IBM i can take advantage of Real-time Compression (RtC) in either SVC, Storwize or
FlashSystems V840 and V9000, as it is transparent to the server. Real-time Compression
allows the use of less physical space on disk than is presented to the IBM i host; capacity
needed on the storage system is reduced due to both Compression and Thin provisioning.
On the other hand, RtC typically has a latency impact on IO service times, as well as a
throughput impact.
Compression ratio is the ratio between Compressed size and Uncompressed size
(Compression ratio = Compressed size/Uncompressed size).
We recommend careful planning when considering RtC with IBM i. The Comprestimator
tool, which helps to estimate the capacity needed on a storage system with RtC, is
presently not available for IBM i. However, Comprestimator is available in VIOS;
therefore customers running IBM i LPARs with VIOS can take advantage of it.
Our current experience with IBM i and RtC in Storwize and FlashSystems shows
compression ratios of about 0.3; compressed data typically uses about one third of the
capacity needed for uncompressed data. If you cannot run Comprestimator for your IBM i
workload, we recommend planning with a conservative compression ratio of about 0.5,
that is, roughly a 2:1 reduction.
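Using the definition above (compression ratio = compressed size / uncompressed size), the physical capacity estimate can be sketched as follows (the helper is our own illustration):

```python
def physical_capacity_needed(uncompressed_gb: float,
                             compression_ratio: float) -> float:
    """Estimate physical capacity with RtC.
    compression_ratio = compressed size / uncompressed size:
    0.3 matches our observed IBM i results, 0.5 is the conservative
    planning value when Comprestimator cannot be run."""
    return uncompressed_gb * compression_ratio
```

For example, 3000 GB of uncompressed IBM i data needs about 900 GB at the observed 0.3 ratio, or 1500 GB with the conservative 0.5 planning ratio.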
Performance impact of RtC in SVC and Storwize can be estimated by Disk Magic
modelling.
Exploitation of SSDs with the Storwize V7000 is through Easy Tier. Even if you don’t
plan to install SSDs you can still use Easy Tier to evaluate your workload and provide
information on the benefit you might gain by adding SSDs in the future.
When using Easy Tier automated management it’s important to allow Easy Tier some
‘space’ to move data. You should not allocate 100% of the pool capacity but leave some
capacity unallocated to allow Easy Tier migrations. As a minimum leave one extent free
per tier in each storage pool, however for optimum exploitation of future functions plan
to leave 10 extents free total per pool.
There is also an option to create a disk pool of SSDs in the V7000 and create an IBM i ASP
that uses disk capacity from the SSD pool. The applications running in that ASP will
experience a performance boost.
IBM i data relocation methods such as ASP balancing and Media preference are not
available to use with SSD in V7000.
Data layout
Selecting an appropriate data layout strategy depends on your primary objectives:
Spreading workloads across all components maximizes the utilization of the hardware
components. This includes spreading workloads across all the available resources.
However, it is always possible when sharing resources that performance problems may
arise due to contention on these resources.
To protect critical workloads, you should isolate them, minimizing the chance that non-
critical workloads impact their performance.
A storage pool is a collection of managed disks from which volumes are created and
presented to the IBM i system as LUNs. The primary property of a storage pool is the
extent size which by default is 1GB with Storwize V7000 release 7.1 and 256MB in
earlier versions. This extent size is the smallest unit of allocation from the pool.
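Since the extent is the smallest unit of allocation, extent accounting for a volume can be sketched like this (the helper is ours):

```python
import math

def extents_for_volume(volume_gib: float, extent_size_mib: int) -> int:
    """Number of pool extents a fully provisioned volume consumes,
    for the default extent sizes of 1 GiB (release 7.1) or 256 MiB (earlier)."""
    return math.ceil(volume_gib * 1024 / extent_size_mib)
```

For example, a 100 GiB volume consumes 100 extents with the 1 GiB default but 400 extents with the older 256 MiB default.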
When you add managed disks to a pool they should have similar performance
characteristics:
– Same RAID level
– Roughly the same number of drives per array
– Same drive type (SAS, NL_SAS, SSD except if using Easy Tier)
This is because data from each volume is spread across all MDisks in the pool, so
the volume performs approximately at the speed of the slowest MDisk in the pool.
– The exception to this rule is that if using Easy Tier you can have 2
different tiers of storage in the same pool – but the MDisks within the tiers
should still have the same performance characteristics
Isolation of workloads is most easily accomplished where each ASP or LPAR has its own
managed storage pool. This ensures that you can place data where you intend. I/O
activity should be balanced between the two nodes or controllers on the Storwize V7000.
Make sure that you isolate critical workloads. We strongly recommend placing only IBM i
LUNs in any given storage pool (rather than mixing them with non-IBM i LUNs). If you
mix production and development workloads in storage pools, make sure that the customer
understands that this may impact production performance.
A LUN created in the disk pool is spread across the internal disk arms (MDisks), using an
extent from each disk array in turn.
LUN Size
LUNs can be configured up to 2000GB. The number of the LUNs defined is typically
related to the wait time component of the response time. If there are insufficient LUNs,
wait time typically increases. The sizing process determines the correct number of LUNs
required to access the needed capacity while meeting performance objectives.
With native attachment, the number of LUNs drives the requirement for more FC adapters
on the IBM i because of IBM i addressing restrictions. Remember that each path to a LUN
counts towards the maximum number of addressable LUNs on each IBM i IOA.
For any ASP define all the LUNs to be the same size. 80GB is the recommended
minimum LUN size. A minimum of 6 LUNs for each ASP or LPAR is recommended. Be
very cautious about significantly reducing the number of LUNs in an IBM i environment.
It is recommended that load source devices be created at least 80GB. A smaller number
of larger LUNs will reduce the number of IO ports required on both the IBM i and the
Storwize V7000. Remember that in an iASP environment, you may exploit larger LUNs
in the iASPs, but SYSBAS may require more, smaller LUNs to maintain performance.
Disk Magic does not always accurately predict the effective capacity of the ranks
depending on the DDM size selected and the number of spares assigned. The IBM tool
Capacity Magic can be used to verify capacity and space utilization plans.
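The LUN-count guidance above (equal sizes, 80GB recommended minimum, at least 6 LUNs per ASP) can be sketched as a rule of thumb (the helper is our own, not an IBM sizing tool, and final counts should come from Disk Magic):

```python
import math

def luns_for_asp(required_capacity_gb: float, lun_size_gb: float = 80,
                 min_luns: int = 6) -> int:
    """LUN count for an ASP: enough equally sized LUNs to cover the capacity,
    but never fewer than the recommended minimum of 6."""
    return max(min_luns, math.ceil(required_capacity_gb / lun_size_gb))
```

For example, 1000 GB at 80 GB per LUN yields 13 LUNs, while a small 300 GB ASP still gets the 6-LUN minimum.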
For natively or VIOS_NPIV connected V7000 use the following way to identify the
LUNs:
a. In IBM i Dedicated Service Tools (DST) or System Service Tools (SST), look for
the serial number of a disk unit. In the picture "Disk units in native or NPIV
connection" we see serial numbers Y11C490001DC, Y11C490001DA, and so on.
b. The last 6 characters of the serial number are the last 6 characters of the LUN ID
in the V7000. The picture below shows the corresponding LUN ID of the disk unit
with serial number Y11C490001DA.
c. The first 6 characters of the disk unit serial number are a hash of the V7000 cluster
ID.
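The suffix-matching rule from step b can be expressed as a small helper (the LUN IDs below are hypothetical; only the last 6 characters follow the documented rule):

```python
def match_disk_to_lun(ibmi_serial: str, v7000_lun_ids: list) -> list:
    """Return the V7000 LUN IDs whose last 6 characters match the last 6
    characters of the IBM i disk unit serial number."""
    suffix = ibmi_serial[-6:].upper()
    return [lun for lun in v7000_lun_ids if lun.upper().endswith(suffix)]

# Hypothetical V7000 LUN IDs, ending in the suffixes seen in the pictures.
lun_ids = ["6005076802AB0001DA", "6005076802AB0001DC"]
```

For the disk unit with serial number Y11C490001DA, the match is the LUN ID ending in 0001DA.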
For VIOS VSCSI connected V7000 use the following steps to identify which disk unit is
which LUN in V7000:
Note the LUN number (in hexadecimal) shown in the location field:
Scroll down the screen to see the Controller number (decimal). Note that the LUN number
is the sum of the System board number (decimal) and the Controller number. In our
example: board number 128 + controller number 2 = 130 = hex 82.
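The decimal-to-hex arithmetic in this step can be captured in one line (the helper name is ours):

```python
def ibmi_lun_number_hex(board_number: int, controller_number: int) -> str:
    """IBM i LUN number: system board number plus controller number, in hex."""
    return format(board_number + controller_number, "X")
```

For the example above, board 128 plus controller 2 gives hex 82, which is the L82 value seen in the lsmap output.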
Log in to the VIOS command line and enter the command lsmap -all. Look for the LUN
ID that corresponds to the LUN number in IBM i (in our example, L82). Note the Physical
location value in hex for the SAN LUN number; in our example this is L1. This is shown
below.
The SAN LUN number is in hexadecimal and corresponds to the SCSI number of the
LUN in the Storwize V7000. In our example, the LUN with SCSI ID 1 is the LUN named
Sysbas_6, as shown below:
Software
It is essential that you ensure that you have all up to date software levels installed. There
are fixes that provide performance enhancements, correct performance reporting, and
support for new functions. As always, call the support center before installation to verify
that you are current with fixes for the hardware that you are installing. It is also
important to maintain current software levels to make sure that you get the benefit from
new fixes that are developed.
When updating storage subsystem LIC it is also important to check whether there are any
server software updates required. Details of supported configurations and software levels
are provided by the System Storage Interoperation Center: http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes
Performance Monitoring
Once your storage subsystem is installed it is essential that you continue to monitor the
performance of the subsystem. IBM i Performance Tools reports provide information on
I/O rates, and on response times to the server. This allows you to track trends in
increased workload and changes in response time. You should review these trends to
ensure that your storage subsystem continues to meet your performance and capacity
requirements. Make sure that you are current on fixes to ensure that Performance Tools
reports are reporting your external storage correctly.
If you have multiple servers attached to a storage subsystem, particularly if you have
other platforms attached in addition to IBM i, it is essential that you have a performance
tool that enables you to monitor the performance from the storage subsystem perspective.
IBM TPC provides a comprehensive tool for managing the performance of Storwize
V7000 systems. You should collect data from all attached storage subsystems at 15-minute
intervals; in the event of a performance problem IBM will ask for this data, and without it
the resolution of any problem may be prolonged.
If you are not planning to use the Change Volumes option of Global Mirror it is essential
that you size the bandwidth to accommodate the peaks or else risk impact to production
performance.
There is currently a limit of 256 Global Mirror with Change Volumes relationships per
system. Storwize does not support a single LUN being in multiple replication
relationships. Therefore, if you are planning on using HyperSwap, those same LUNs
cannot also be in a Global Mirror relationship.
The current zoning guidelines for mirroring installations advise that a maximum of two
ports on each SVC node/Storwize V7000 node canister be used for mirroring. The
remaining two ports on the node/canister should not have any visibility to any other
cluster. If you have been experiencing performance issues when mirroring is in operation,
implementing zoning in this fashion might help to alleviate this situation.
Consulting services are available from the IBM STG Lab Services to assist in the
planning and implementation of Storwize V7000 Copy Services in an IBM i
environment:
http://www-03.ibm.com/systems/services/labservices/platforms/labservices_i.html
IBM FlashSystem
IBM FlashSystem leverages solid-state storage with FlashCore technology, including the
use of field-programmable gate arrays (FPGAs) and two-dimensional Flash RAID. Among
other benefits, these advanced architectural concepts provide:
- FPGA-enabled access to data on the flash chips, eliminating the need for a CPU and
operating system in the data path
- Two levels of storage protection, with Variable Stripe RAID across the chips in a flash
module and RAID-5 protection across flash modules
FlashCore technology thus enables FlashSystem to deliver extreme performance and
advanced flash management.
The IBM FlashSystem family includes various models. The models with the first character V
in their name combine FlashCore technology with proven SVC virtualization capabilities: in
addition to superior FlashSystem performance, they deliver functions such as Real-time
Compression, FlashCopy, Remote Copy, thin provisioning, data migration, and
virtualization of external storage.
Note: the other models of the FlashSystem family can connect to IBM i when virtualized
with IBM SAN Volume Controller (SVC).
FlashSystem V9000 and V840 connect to IBM i with either a native direct connection, a
native connection through SAN fabric, VIOS_NPIV, or VIOS virtual SCSI.
FlashSystem 900 and FlashSystem 840 connect to IBM i with either a native direct
connection, a native connection through SAN fabric, or VIOS_NPIV.
POWER systems:
POWER7 with firmware level FW780 or later
POWER8 with firmware level FW810 or later
Note: Some POWER models do not support the required firmware levels. Connecting
FlashSystem 840 or FlashSystem 900 to an IBM i LPAR on such a POWER model is not
supported.
IBM i level:
Release 7.2 Technology Refresh 2 or later
FC adapters:
2-port 8Gb adapters feature numbers 5735 and 5273
4-port 8Gb adapter feature number 5729
2-port 16Gb adapters feature numbers EN0A and EN0B
Supported switches:
Brocade and CISCO based switches
Native direct connection without SAN fabric requires 16 Gb or 8 Gb adapters
POWER systems:
POWER7, POWER8
IBM i level:
In POWER7: IBM i V7.1 TR7 or later
In POWER8: IBM i V7.1 TR8 or later
POWER systems:
POWER7, POWER8
IBM i level:
In POWER7: IBM i V7.1 TR6 or later
In POWER8: IBM i V7.1 TR8 or later
FC adapters:
2-port 8Gb adapters feature numbers 5735 and 5273
4-port 8Gb adapter feature number 5729
2-port 16Gb adapters feature numbers EN0A and EN0B
Please note: NPIV connection of FlashSystem V840 is also supported on POWER6 with
IBM i V7.1 TR6 or later.
POWER systems:
POWER7, POWER8
IBM i level:
In POWER7: IBM i V6.1 or later
In POWER8: IBM i V7.1 TR8 or later
The tool estimates the duration of IBM i jobs running on FlashSystem. It also compares
the present job duration to the estimated duration on FlashSystem.
The tool requires IBM i Collection Services data. The query that is provided as part of
the tool extracts the relevant values from the IBM i data and places them in a spreadsheet,
from which we copy and paste the data into the FLiP Performance Spreadsheet. In this
spreadsheet we specify several values regarding the workload characteristics and capacity
on FlashSystem; we can also select only the jobs in which we are interested. FLiP then
generates a presentation with the results.
iDoctor
iDoctor is an IBM i performance investigation tool; it can be downloaded from the
following web page:
https://www-912.ibm.com/i_dir/idoctor.nsf
Note that some iDoctor components require a license key.
iDoctor can be used for in-depth investigation of which parts of the IBM i data and
workload can be expected to gain performance improvement with FlashSystem. Its
components Collection Services Investigator and Job Watcher, used with IBM i Collection
Services data, provide different types of information, such as workload characteristics,
which jobs experience the highest number of reads, and which objects experience the
highest read service time, that help to evaluate the use of FlashSystem for the customer's
workload.
Refer to the document FlashSystems Guide for IBM i Performance on the following
web sites for guidelines on how to use both the FLiP tool and iDoctor to evaluate
FlashSystem for IBM i.
IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD106347
Business Partners: https://www-304.ibm.com/partnerworld/wps/servlet/mem/ContentHandler/tech_TD106347
In V9000, one mdisk spanning all the flash modules in one FlashSystem is automatically
created at system setup.
In V9000, a disk pool is then created containing the mdisk from one FlashSystem.
If multiple FlashSystems are implemented in a V9000, you have the choice of creating
multiple disk pools, one from each FlashSystem, or creating one disk pool from all the
FlashSystems. For example, with many IBM i LPARs it may be a good idea to set up
multiple disk pools, each from one FlashSystem, to keep the other LPARs running if one
FlashSystem fails. With one IBM i LPAR, it might be a good idea to create one disk pool
from all the FlashSystems to provide the best performance.
All the data will be distributed over all the flash cards regardless of the pool definition on
the controller side, so there is no need to split the production and FlashCopy data into
different pools. If there are multiple IBM i LPARs, we recommend sharing the disk pool
between them for best performance; however, if one disk pool is created per
FlashSystem, you may consider separating the pools among the IBM i LPARs for resiliency.
If FlashCopy is used, we recommend sharing the disk pool between production and
FlashCopy LUNs.
Size of LUNs
We recommend creating LUNs in the size range of 70 GB – 150 GB on FlashSystem to
connect to IBM i. The guidelines listed in the section LUN size apply to FlashSystem too.
One port in a 16 Gb adapter can sustain a maximum bandwidth of about 1300 MB/sec at
16 KB blocksize. Applying 70% utilization gives 910 MB/sec.
From these measurements we can roughly estimate the maximal capacity per port for
good performance by using the following calculation, assuming an IBM i Access
Density of 4 on FlashSystems: divide the maximum IO/sec at 70% utilization by the
Access Density, then apply 40% LUN utilization: (49000 IO/sec / 4 IO/sec/GB) * 0.4 =
4900 GB
Note: In multipath, calculate 2 * 4900 GB = 9800 GB per 2 active ports, or 4 * 4900 GB
= 19600 GB per 4 active ports.
After deciding on the size of the LUNs, divide the maximal capacity per port by the LUN
size to get the number of LUNs per port. Note the rule of a maximum of 64 LUNs per port.
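The sizing arithmetic above can be sketched as follows; this is a rough estimate using only the figures given in the text (49000 IO/sec per port at 70% utilization, Access Density of 4 IO/sec/GB, 40% LUN utilization, at most 64 LUNs per port), and the function names are invented for the example:

```python
# Sketch of the capacity-per-port estimate from the text. The constants
# come from the text; the helper names are hypothetical.

MAX_IOPS_AT_70_PCT = 49000  # IO/sec one port sustains at 70% utilization
ACCESS_DENSITY = 4          # assumed IBM i Access Density, IO/sec per GB
LUN_UTILIZATION = 0.4       # 40% LUN utilization
MAX_LUNS_PER_PORT = 64      # rule from the text

def capacity_per_port_gb() -> float:
    """Maximal capacity per active port for good performance."""
    return (MAX_IOPS_AT_70_PCT / ACCESS_DENSITY) * LUN_UTILIZATION

def luns_per_port(lun_size_gb: float) -> int:
    """Number of LUNs per port for a chosen LUN size, capped at 64."""
    return min(MAX_LUNS_PER_PORT, int(capacity_per_port_gb() // lun_size_gb))

print(capacity_per_port_gb())        # 4900.0 GB per active port
print(2 * capacity_per_port_gb())    # 9800.0 GB per 2 active ports
print(luns_per_port(100))            # 49 LUNs of 100 GB per port
```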
When applicable: If 2 IO groups in the SVC cluster are used, zone ports from half of the
IBM i adapters with one IO group, and ports from the other half with the other IO group.
In multipath, assign a LUN to two ports from different adapters, each port zoned with the
caching IO group of the LUN. In some cases, such as preparing for LUN migration, you
may consider zoning each IBM i port with both IO groups. IBM i will establish paths to
a LUN through the caching IO group of the LUN.
Note: With attached FlashSystem, IBM i does not perform the block translation that adds a
9th sector to every 8 sectors, as is the case with attached SVC, Storwize, or V840 / V9000.
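The capacity effect of that translation can be illustrated as follows; this is a sketch assuming the 8-to-9 sector mapping described above (eight 520-byte IBM i sectors stored in nine 512-byte sectors), and the function name is invented:

```python
# Illustration of the block translation's capacity overhead. With SVC,
# Storwize, or V840/V9000, each group of eight 520-byte IBM i sectors is
# stored in nine 512-byte sectors, so only 8/9 of the configured LUN
# capacity is usable. With directly attached FlashSystem there is no
# such translation. The helper name is hypothetical.

def usable_capacity_gb(lun_size_gb: float, block_translation: bool) -> float:
    """Capacity usable by IBM i for a LUN of the given configured size."""
    return lun_size_gb * (8 / 9) if block_translation else lun_size_gb

print(usable_capacity_gb(100, True))   # SVC/Storwize: about 88.9 GB usable
print(usable_capacity_gb(100, False))  # FlashSystem: full 100 GB usable
```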
When using multipath with 2 paths, zone one port in IBM i with a port in one
FlashSystem node, and zone a port in another IBM i adapter with a port in the other
FlashSystem node. Map the LUNs to both IBM i ports to achieve multipath. IBM i will
establish two active paths for each LUN mapped to a different IO group.
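The two-path zoning just described can be sketched as a simple mapping; all port and node names below are invented for illustration:

```python
# Hypothetical sketch of the 2-path zoning described above. Each IBM i
# port (in a different adapter) is zoned with a port in a different
# FlashSystem node; mapping every LUN to both IBM i ports gives IBM i
# two active paths per LUN. All names are made up for the example.

zoning = {
    "ibmi_adapter1_port0": "flashsystem_nodeA_port0",  # path 1
    "ibmi_adapter2_port0": "flashsystem_nodeB_port0",  # path 2
}

def paths_for_lun(lun: str) -> list:
    """Return (lun, ibmi_port, flash_port) triples, one per active path."""
    return [(lun, ibmi, flash) for ibmi, flash in zoning.items()]

print(len(paths_for_lun("Sysbas_6")))  # 2 active paths for the LUN
```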
Further References
For further detailed information on implementing Storwize V7000 in an IBM i
environment refer to the following redbooks. These can be downloaded from
www.redbooks.ibm.com
Power HA references:
PowerHA Website
– www.ibm.com/systems/power/software/availability/
Lab Services
– http://www-03.ibm.com/systems/services/labservices
PowerHA SystemMirror for IBM i Cookbook
– http://www.redbooks.ibm.com/abstracts/sg247994.html?Open
Implementing PowerHA for IBM i
– http://www.redbooks.ibm.com/abstracts/sg247405.html?Open
IBM System Storage Copy Services and IBM i: A Guide to Planning and
Implementation
– http://www.redbooks.ibm.com/abstracts/sg247103.html?Open
Is your ISV solution registered as ready for PowerHA?
– http://www-304.ibm.com/isv/tech/validation/power/index.html
VIOS references:
IBM i Virtualization and Open Storage
– http://www-03.ibm.com/systems/resources/systems_i_Virtualization_Open_Storage.pdf
IBM PowerVM Best Practices
– http://www.redbooks.ibm.com/abstracts/sg248062.html?Open
IBM PowerVM Virtualization Introduction and Configuration
– http://www.redbooks.ibm.com/abstracts/sg247940.html
IBM PowerVM Virtualization Managing and Monitoring
– http://www.redbooks.ibm.com/abstracts/sg247668.html?Open
Fibre Channel (FC) adapters supported by VIOS
– http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/da
tasheet.html
Disk Zoning White paper
– http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101914
Acknowledgements
Thanks to Lamar Reavis, Byron Grossnickle, and William Wiegand (Storage ATS), Sue
Baker (Power ATS), Kris Whitney (Rochester Development) and Selwyn Dickey and
Brandon Rao (STG Lab Services).