
Technical white paper


DATA MIGRATION FROM IBM XIV STORAGE TO HPE 3PAR STORAGE
Best practices
CONTENTS
Executive summary ........................................................ 3
  Assumptions ............................................................ 3
  Terminology and definitions ............................................ 4
IBM XIV Storage Systems .................................................. 4
HPE 3PAR StoreServ Management Console .................................... 5
HPE 3PAR Online Import overview .......................................... 5
Installing the HPE 3PAR Online Import software ........................... 6
Features ................................................................. 6
  Peer links ............................................................. 6
  Volumes ................................................................ 6
  Consistency groups ..................................................... 6
  Data reduction ......................................................... 7
  Encryption ............................................................. 7
Requirements ............................................................. 7
Data migration phases .................................................... 8
  Premigration phase ..................................................... 8
  Admit phase ............................................................ 12
  Import phase ........................................................... 14
  Post-migration phase ................................................... 15
Migration workflow in OIU ................................................ 15
Migration workflow in the SSMC ........................................... 16
Best practices ........................................................... 21
  Migration preparation .................................................. 21
  Host and volumes ....................................................... 25
  Migrations ............................................................. 27
  Peer links and Peer volumes ............................................ 29
  Managing Peer link throughput .......................................... 29
  Monitoring and reporting ............................................... 29
  Post-migration ......................................................... 34
Licensing ................................................................ 35
Delivery model ........................................................... 35
Troubleshooting .......................................................... 35
  Rollback ............................................................... 35
  Information to collect before contacting HPE Support ................... 36

EXECUTIVE SUMMARY
Many of the challenges in IT today are a direct result of how data centers were set up back in the 1970s. Although the hardware and
software have changed, many IT departments have held on to the model of one application per physical server with local storage. The
consolidation wave starting around the year 2000 brought virtualization and shared disks on SAN-based storage arrays. This step reduced
the number of physical components in the data center but increased the complexity of managing them. With reduced operational budgets
and headcount, the typical IT department focuses on the stability of its mission-critical applications, leaving little or no time to explore and
implement major shifts in infrastructure and mode of operation.

With a technical and financial lifetime of three to five years for data center equipment, the migration to newer generations of technology is
inevitable. This migration is also desirable from a cost perspective as the expense of maintaining older equipment grows every year.
Consequently, even medium-sized companies go through migrations year-round.

Every major initiative for optimizing data center performance, decreasing total cost of ownership (TCO), increasing return on investment
(ROI), or maximizing productivity involves data migration. For most companies, data has incalculable value and its loss could potentially ruin
the business. Migrating an organization’s data to a new storage technology is complex, costly, time-consuming, and technically risky.

Complicating the process further, business application owners demand proven data migration procedures with no or minimal downtime,
minimal migration duration, and validated rollback procedures. Only the largest organizations have headcount dedicated to the data
migration process, which could result in inexperienced or insufficient staff executing the migration activities at small and medium-sized
companies.

Hewlett Packard Enterprise (HPE) has a solution to overcome these data migration challenges. HPE 3PAR Online Import orchestrates block
data migration in a simple and concise way from supported source systems to an HPE 3PAR array. This software is available in the stand-
alone HPE Online Import Utility (OIU) or as the Online Import version integrated into HPE StoreServ Management Console (SSMC). OIU is a
command line based console used to stage and execute the migration. OIU was released in 2014 and is available for a range of Dell EMC,
Hitachi, and IBM storage systems. Online Import by SSMC leverages predefined workflows to enable less-experienced administrators to
move volumes from a source system other than an HPE 3PAR array to an HPE 3PAR array with a few mouse clicks.

Data can be migrated in an online, minimally disruptive, and offline mode, depending on the operating system of the host to which the
volumes under migration are exported. The data being migrated is selected by volume or host. In an online or minimally disruptive
migration, the data is available for access while the transfer is ongoing. Rollback occurs in minutes with no data loss.
This paper expands on topics discussed in the HPE 3PAR SSMC 3.6 User Guide and the HPE 3PAR Peer Motion and HPE 3PAR Online
Import User Guide for migrating hosts and volumes from an IBM XIV storage system to an HPE 3PAR array. It covers additional concepts,
features, monitoring, and reporting tools, and troubleshooting. It also covers best practices recommended by HPE for data migration.

Assumptions
An IBM XIV Gen3 running Storage System software 11.6.2.a was used as the source system in the data migration for this paper. Screenshots of the
IBM XIV in this paper show GUI and CLI version 4.8.0.6. The destination system was an HPE 3PAR system running HPE 3PAR OS 3.3.1
MU3. The HPE 3PAR Online Import functionality used in the paper is embedded in HPE 3PAR SSMC 3.6. The OIU version used is 2.3. It is
expected that you have read and understood the information in the HPE 3PAR SSMC 3.6 User Guide and the HPE 3PAR Peer Motion and
HPE 3PAR Online Import User Guide.
In this paper, the term “Online Import” refers to both the SSMC and the OIU version of the software unless specified to cover only one of
them.

Terminology and definitions


The following terms, acronyms, and abbreviations are used in this white paper.
TABLE 1. Terms, acronyms, and definitions

Abbreviation Definition

API Application program interface

ASIC Application-specific integrated circuit: An implementation of an algorithm in silicon hardware

CNA Converged network adapter: An adapter that carries both network and storage traffic

Consistency Group A concept of grouping volumes for migration by Online Import on HPE 3PAR arrays

CPG Common provisioning group: A template to create HPE 3PAR volumes

Destination system The storage system that receives the data that is migrated

DIF Data integrity field: An industry-standard approach to protecting data integrity from corruption in computer data storage

HBA Host bus adapter: An expansion card in a server or storage system that implements connectivity to the outside network

Host The server with volumes under migration from the source to the destination system

Link The logical interconnection between the source and the destination systems under migration

LUN Logical unit number: A number used to identify a device addressed by a host or a storage array

MDM Minimally disruptive migration: One of the three migration types in HPE 3PAR Online Import

PGR Persistent group reservations: A way to coordinate access to shared volumes between multiple hosts

REST Representational State Transfer

SED Self-encrypting disk: A hard drive with a circuit built into the disk drive controller to encrypt and decrypt all data stored on the media

SMI-S Storage Management Initiative Specification: A standardized tool for storage management maintained by the Storage Networking Industry
Association (SNIA)

Source system The storage system that contains the data to be migrated

SSMC HPE 3PAR StoreServ Management Console: A GUI for managing HPE 3PAR arrays

SAN Storage area network: A high-speed network that interconnects storage systems with servers

SFP Small form-factor pluggable: A transceiver serving as an endpoint for data communication over Fibre Channel or Fibre Channel over Ethernet (FCoE)

VSS Volume Shadow Copy Service: A Microsoft® framework for taking snapshots

WWN World Wide Name: A 64-bit identifier that uniquely defines an HBA interface

XCLI The command line interface of an IBM XIV storage system

Zoning operation A process that creates, modifies, or deletes a logical connection in the SAN between a storage system and a server

IBM XIV STORAGE SYSTEMS


The IBM XIV Storage System is the IBM offering for block storage in the midrange arena. The IBM XIV design incorporates between six and
15 data modules that are classic servers equipped with locally attached large form factor (LFF) disk drives and run a proprietary operating
system. Each module is self-contained and interlinks to others through Ethernet (XIV Gen2) or InfiniBand (XIV Gen3) connections. With 12
self-encrypting SATA (Gen2) or SAS (Gen3) 7200 rpm disks of 4 TB or 6 TB per module, the array scales to 180 disks for a usable capacity
of 970 TB based on a 2x compression ratio and a mirrored disk setup. Up to 12 TB of flash cache on Gen3 systems can accelerate the
performance of the SAS disks.

Stored data is divided into 1 MB partitions that are distributed automatically across all disks in the system. This practice removes disk hot
spots, guaranteeing even disk utilization. Connectivity to hosts runs over 4 Gbps (Gen2) or 8 Gbps (Gen3) Fibre Channel ports (24
maximum), or 1 Gbps (22 maximum) or 10 Gbps (12 maximum) iSCSI ports.

Volumes on an IBM XIV system can be replicated synchronously or asynchronously between sites. Three-site mirroring is possible on the
IBM XIV Gen3 system. It allows a three-way replication scheme using one synchronous and one asynchronous connection in a concurrent
topology.
A GUI, CLI (IBM XIV XCLI), REST API, and SMI-S interface are available to manage the system. Volumes are created inside pools of dedicated
disk space. Pools can be grown and shrunk. The breakthrough ease of management of the system stems from the inflexibility of the disk
protection that is fixed to RAID 6.

Disk blocks are 512 bytes on the IBM XIV Gen2 and 520 bytes on the IBM XIV Gen3 system. Volumes on IBM XIV arrays can be created in
GB, GiB, and disk blocks. Volumes consume capacity in 17 GB (16 GiB) increments. Although you can create a volume with an arbitrary size by
specifying a block count, it still consumes disk slices of 17 GB. For example, a volume created at 100 GB occupies six 17 GB slices, or 102 GB of pool capacity. Volumes can be grown but not reduced in size.

HPE 3PAR STORESERV MANAGEMENT CONSOLE


The SSMC is the intuitive administration and reporting console that offers converged management of file and block storage on HPE 3PAR
storage systems. It offers a modern and consistent look and feel and uses the latest API and UI technologies. The SSMC manages up to 32
HPE 3PAR systems. It is implemented as a service installed on a secured Microsoft Windows® or Linux® server. Accessible through a web-
based UI, operations in the SSMC execute as individual tasks, resulting in an improved user experience and better responsiveness. All the
information needed on HPE 3PAR systems is obtained at a glance from a customizable dashboard, eliminating the need for add-on software
tools or diagnostics and troubleshooting that require professional services. Assessing the status of storage across an entire data center is
accomplished in seconds and collecting configuration and health information on any resource is only a few mouse clicks away.

HPE 3PAR System Reporter is fully integrated into the SSMC and offers real-time and historical reporting templates, scheduled reports, and
threshold alerts. Customized reports can be composed and saved with a few mouse clicks.

HPE 3PAR ONLINE IMPORT OVERVIEW


The Online Import functionality in the SSMC orchestrates the unidirectional migration of block data from selected IBM XIV to HPE 3PAR
storage systems. The data transfer path is array-to-array; host-based mirroring or an external hardware appliance is not used. Online Import
is a pull-type migration method where the destination array autonomically requests the data from the source IBM XIV system. The data
transfers from source to destination arrays over Peer links, which are the dedicated dual-path Fibre Channel interconnects between the
source and destination array. The same concept is used in the OIU for Dell EMC and Hitachi systems and in HPE 3PAR Peer Motion.

OIU offers a set of 21 intuitively named commands to add one or more source and destination storage systems to the utility and to define
and execute a migration between any two of them. Hosts and volumes for migration are selected by enumerating them, although implicitly
added hosts and volumes can complement the explicit ones. The OIU CLI commands are not case-sensitive; storage objects should be
entered in the case they appear on the arrays.

The SSMC offers a wizard interface for Online Import. In the first step of a migration, the IBM XIV system is added as a migration source to
one or more HPE 3PAR systems embedded in a storage federation. Next, from the list of hosts or volumes to migrate, the destination
HPE 3PAR array and the details for the landing volumes are selected. Objects that appear in the SSMC are case-sensitive.
Online Import creates an import task for every volume under migration. Up to nine volumes transfer simultaneously. The import tasks for
volumes beyond nine are pushed to a queue in the standby state and start execution automatically when one of the nine running tasks
completes. The actual data migration is managed by the destination HPE 3PAR system independent of OIU or the SSMC. The destination
HPE 3PAR system automatically schedules the volume import tasks that are waiting in the queue for execution. Online Import acts as a
stateless engine that controls the migration from the source to the destination storage systems. It can be closed and reopened during any
phase of a migration without adverse effects. Reopening the tool shows the current state of the migration.

Supported migration types are online, minimally disruptive, and offline. Host I/O is not interrupted during an online migration. During a
minimally disruptive migration, the I/O disruption occurs during the reboot of the host for some necessary rezoning work and possibly for
the change of the multi-pathing stack; during the actual data transfer, the host serves I/O. Throughout an offline migration, the volumes
under migration remain unpresented during the entire migration operation. Four IBM XIV source to HPE 3PAR destination array
configurations are supported: 1:1, N:1, 1:N, and N:N. Concurrent migrations from multiple hosts can be set up and executed from within the
SSMC or OIU instance.

During the actual migration of a volume, write I/Os by applications on the hosts are written to the source volume in the IBM XIV system. If
the target disk block for a write I/O was already migrated to an HPE 3PAR array, the I/O is written to the HPE 3PAR array as well. These
double writes provide resilience for the source volume in the case the migration does not complete successfully. This I/O mirroring stops
when migration for the volume is complete. Read I/O is taken from the destination HPE 3PAR array if the disk block was already migrated,
and from the source IBM XIV array if not yet migrated. This scheme saves read traffic over the Peer links.

Online Import manages the required changes in pathing, masking, and unmasking of the migrating volumes on the source IBM XIV array by
sending SMI-S commands to the SMI-S provider integrated into the IBM XIV system.

INSTALLING THE HPE 3PAR ONLINE IMPORT SOFTWARE


The OIU software is free and downloadable from HPE Software Depot. An HPE Passport ID is required to download it. Check HPE Single
Point of Connectivity Knowledge (SPOCK) and the HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide for more information.
Always be sure to use the latest version of OIU.

The Online Import by SSMC functionality is integrated in the SSMC; no additional software needs to be installed. You can download the
SSMC for free from HPE Software Depot. A free HPE Passport ID is required for this download. Consult the HPE 3PAR SSMC 3.6 User Guide
for more information on using OIU within the SSMC. Make sure to use the latest version of the SSMC.

FEATURES
Online Import orchestrates the migration of data from an IBM XIV source storage system to a destination HPE 3PAR storage system.

Peer links
Online Import uses in-array technology for data migration—no hardware appliance or external server software is deployed. The source and
destination arrays are logically interconnected by two Peer links over Fibre Channel SAN switches; direct Fibre Channel interconnection is
not supported. The Peer link endpoints on the IBM XIV system are standard host ports that can be shared with other hosts. On the
HPE 3PAR array, the Peer links end in the Peer ports that are dedicated to their function. The nominal throughput of SFP modules in use as
endpoints for the Peer links can be different on the source and destination systems.

Volumes
The size of a volume on an IBM XIV system is either limited by the soft limit of the storage pool it is created in or by the maximum volume
size of 161 TiB. With Online Import, volumes of up to 16 TiB can be migrated to an HPE 3PAR array on HPE 3PAR OS 3.2.2 or earlier. On
systems using HPE 3PAR OS 3.3.1, the size limit is 64 TiB for full and thin volumes and 16 TiB for compressed, deduped, and compressed
and deduped volumes.

Although snapshots can be migrated with their base volumes by using Online Import, the parent/child relationship is destroyed on the 3PAR
destination system, and the snapshot content is unreadable to HPE 3PAR OS. Volumes on the IBM XIV system that are engaged in any type
of replication cannot be migrated using Online Import. Volumes that are members of a pool for replication are not eligible for migration. The
provisioning type of the migrating volumes on the source is unimportant for selecting a landing volume type. Deduped or compressed
volumes can be a target for Online Import if they are supported on the destination HPE 3PAR system.

Consistency groups
Smaller volumes are transferred in less time than larger volumes. Because of this, it is possible, for example during a database migration, that
the small database control, archive, and log files are completely migrated to the destination array while migration of the large database
record files is ongoing. Volumes that are migrated completely get application writes only to the HPE 3PAR array but volumes under
migration still get writes to the source volume and possibly to the destination volume. This means migrated volumes become stale on the
source IBM XIV within seconds.

If the migration fails due to a hardware or software error in this situation, the smaller database files are stale on the source system but the
database records are current as a result of the ongoing double writes. Rolling back by restarting the database from its volumes on the source
system using the stale control, archive, and log files is suboptimal. Copying them manually from the destination back over to the source
system before restarting the database is required.

To remove the need for this copy operation, Online Import implements the concept of consistency groups. Application I/O issued during a
migration to volumes that are members of a consistency group continues to be mirrored to the source array until all members of the
consistency groups are migrated to the destination array. This way all volumes of a database under migration, for example, stay current on
the source system during the entire duration of the migration, making rollback in the case of an incomplete migration straightforward and
simple. Online Import by SSMC supports the use of one consistency group that holds all volumes under migration. OIU supports more than
one consistency group.

NOTE
HPE 3PAR consistency groups bear no relationship to the concept with the same name used on the IBM XIV to group local volumes that are
in replication.

Data reduction
Data reduction technologies such as thin provisioning, deduplication, and compression offer efficiency benefits for primary storage that can
significantly reduce capital and operational costs. Thin provisioning and compression are supported on select IBM XIV systems. The actual
implementation differs from the way data reduction is realized on an HPE 3PAR array, but the important point is that data leaves the
storage systems undeduped and uncompressed. As a result, volumes with any combination of compaction can transparently migrate from an
IBM XIV system to an HPE 3PAR array. Note that the compaction ratio on the HPE 3PAR array can differ from the ratio on the IBM XIV.

Encryption
IBM XIV Gen3 systems only support SED drives that offer data-at-rest encryption. All HPE 3PAR storage systems support classic and SED
drives. On both platforms, an SED drive contains an ASIC that handles the encryption and decryption of the data entering and leaving the
disk. These operations are not executed in the IBM XIV firmware or HPE 3PAR OS, avoiding a performance impact on the storage
controllers.

Encryption of plain text data happens when the data reaches the hard drives in the storage system. Decryption takes place when a host
requests the data, resulting in plain text leaving the storage system. The net effect of this process is that data is unencrypted outside of the
storage system—the data-at-rest encryption only occurs on the hard drives inside the system. This means Online Import is transparently
compatible with an IBM XIV source system with encryption and an HPE 3PAR destination system with or without encryption enabled.

Migration traffic over the Peer links is not encrypted unless external encrypting devices are in the data path between the source and
destination array.

REQUIREMENTS
The hardware environment to execute an Online Import migration from an IBM XIV source system to an HPE 3PAR system must contain:
• A supported model of the IBM XIV system with a supported firmware version; consult HPE SPOCK for more details.
• One or more volumes on the IBM XIV system. The volumes can be presented to a server running a supported host operating system with
a supported multipathing solution or they can be unpresented. The volumes under migration should not be a snapshot, a pool volume, or
in replication with a second IBM XIV system.
• A supported model of the HPE 3PAR array with a supported HPE 3PAR OS version. For Online Import by SSMC, the HPE 3PAR system
must be capable of entering into a storage federation. Consult HPE SPOCK for more details. Any currently supported HPE 3PAR system
can be the destination of an Online Import migration.
• For Online Import Utility, a server running a supported operating system, consult HPE SPOCK for more details.
• For Online Import by SSMC, a server running either a supported version of VMware ESXi™ or Microsoft Hyper-V for the HPE 3PAR SSMC
3.6 virtual appliance.
• Two host ports on the IBM XIV system, possibly shared with other hosts.
• Two unused host ports on partner controllers on the HPE 3PAR array to serve as Peer ports in the Peer link setup.
• A valid Online Import or Peer Motion license on the destination HPE 3PAR array.
• A single or dual Fibre Channel fabric to create the Peer links: a dual-zone SAN interconnection between the source IBM XIV and the
destination HPE 3PAR array. Exactly two Peer links are supported.

The host to which the IBM XIV LUNs are exported stays the same throughout the migration, meaning its operating system, Fibre Channel
HBA or CNA brand, and their firmware version must be supported by the destination HPE 3PAR system. This should be verified during the
planning phase of the OIU migration.
For Online Import by SSMC, the destination HPE 3PAR storage system must be added to the SSMC and configured as part of a storage
federation. The IBM XIV system must be added as a migration source to the federation of the destination HPE 3PAR. An OIU-based
migration can be executed while the source and destination are configured for SSMC based migrations.

The OIU software is made up of a client portion and a server portion that can be installed on the same Windows server or on different ones. The OIU
client communicates with the OIU server over a REST API. If the OIU server runs on a system different from the OIU client, TCP
ports 2370 and 2371 must be open between them.
The OIU server sends SMI-S commands over TCP port 5988 or 5989 to the Common Information Model (CIM) agent built into the IBM XIV
operating system. The commands complete actions such as presenting and unpresenting LUNs and creating and modifying host groups on the IBM XIV
source array. One of the two ports must be open between the OIU server and the IBM XIV system. The CIM agent is enabled by default. A
watchdog process on the IBM XIV system monitors the CIM agent to make sure that it is always running. There are no options to enable,
disable, or restart the CIM agent and its subprocesses through the XCLI or IBM XIV Storage Management GUI.
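
As a quick premigration check, you can verify from the OIU server that the required ports are reachable. The following PowerShell commands are an illustrative sketch that assumes a Windows OIU server; the addresses are placeholders for the management IP of your IBM XIV system and the OIU server.

PS C:\> Test-NetConnection -ComputerName <XIV management IP> -Port 5989     # SMI-S/CIM over HTTPS
PS C:\> Test-NetConnection -ComputerName <OIU server IP> -Port 2370         # OIU client-to-server REST traffic

A TcpTestSucceeded value of True indicates the port is open. Repeat the test for port 5988 if you use unencrypted CIM traffic and for port 2371.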

Fibre Channel and FCoE are supported by Online Import as connectivity for hosts under migration. In an FCoE configuration, a switch for
converting FCoE to Fibre Channel for array connectivity is required for migrating the host’s LUNs to an HPE 3PAR system. You can use
Online Import to migrate a volume presented from the IBM XIV to the host over a single path.

Online Import is not agnostic regarding the operating system of the host from which the volumes are migrated. Consult the HPE SPOCK
website for the list of host operating systems and cluster software supported with Online Import for HPE 3PAR SSMC. Online Import is
agnostic regarding the applications executed on the volumes that are under migration. For clustered hosts, the clustering software must be
supported. Consult the HPE SPOCK website for the list.
Online Import is designed for migrations with inter-array distances within a typical data center, but it operates with a source and destination
over any distance. There is no maximum supported geographical distance or latency between federation members and their migration
sources. Although the Peer links are dedicated to migration data transfer, routing this traffic over shared WAN links between distant sites
means the transfer must compete with other contenders for bandwidth, potentially slowing down the migration. As a result, available
bandwidth in WAN setups becomes more of an issue than latency. HPE recommends studying the bandwidth usage of an inter-site WAN
Fibre Channel link over time to find periods of low traffic and low latency before starting an Online Import migration. The host from which the
LUNs were migrated accesses the destination HPE 3PAR array over the WAN link, which might create performance issues for the
applications it executes, especially if they were located on SSDs. Physically moving the application host to the same data center as the
destination HPE 3PAR array helps alleviate this problem.

DATA MIGRATION PHASES


Data migration using Online Import takes four phases: the premigration phase, the admit phase, the import phase, and the post-migration
phase. Each phase is described in this section for OIU and Online Import by SSMC.

NOTE
OIU requires a good understanding of the migration mechanism in use because the tool does not inform you if a SAN zone change, a
multipathing reconfiguration, a rescan, or a short host shut down is needed. Online Import by SSMC provides guidance in the GUI when an
action is required. The HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide contains more information on the actions needed
for each type of migration.

Premigration phase
Follow the actions described in this section during the premigration phase. You can execute these steps a few days ahead of the actual data
transfer.

Zone the host to the HPE 3PAR storage system


For online and minimally disruptive migrations, zone the HBA WWNs of the migrating hosts to the destination HPE 3PAR storage system.
This connectivity ensures the host can access the volumes on the source array through the destination HPE 3PAR array during the data import
phase. For offline migrations, this zoning can be done now or any time later.

Set up the Peer links


For Online Import, the IBM XIV system must be connected to one or more HPE 3PAR systems in a storage federation by exactly two Peer
links. These paths are created on a single or redundant pair (preferred) of Fibre Channel fabrics between the source IBM XIV and a
destination HPE 3PAR system. Direct Fibre Channel connectivity between the source and the destination systems is not supported. Peer
links over FCoE and iSCSI, either directly or over switches, are unsupported as well.

On the source IBM XIV system, the Peer links start out from two Fibre Channel ports in host mode, selected on different modules. The host
ports selected are preferably not used for connectivity to other hosts. On the destination HPE 3PAR system, two unused Fibre Channel host
ports must be configured in Peer connection mode with the point connection type. These ports are dedicated to the Peer links; they cannot
serve host I/O. The physical Fibre Channel ports for the Peer links on the destination HPE 3PAR array must be on an HBA located in partner
controller nodes (nodes 0/1, 2/3, 4/5, or 6/7). An existing set of Peer ports with or without virtual Peer ports must be reused; configuring
more than two Peer ports on a destination HPE 3PAR system is unsupported. No virtual (NPIV) Peer ports are needed on the destination
HPE 3PAR array, but Online Import is compatible with their presence. No dedicated HBA is required for the Peer ports on the destination
array.
OIU requires the Peer ports to be present on the destination HPE 3PAR array and the SAN zones for the Peer links to be created and
enabled before you issue the createmigration command. Use the controlport config peer command to convert a host port to a Peer port on
the destination HPE 3PAR array. Alternatively, you can use the SSMC to create the Peer ports with a few mouse clicks.
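
The following HPE 3PAR CLI sequence is a sketch of converting two host ports into Peer ports; ports 0:2:1 and 1:2:1 match the example used later in this section. Taking the port offline first and resetting it afterward reflects common practice for port mode changes and is an assumption here; confirm the exact procedure and options in the HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide for your HPE 3PAR OS version.

cli% controlport offline 0:2:1
cli% controlport config peer -ct point 0:2:1
cli% controlport rst 0:2:1
cli% controlport offline 1:2:1
cli% controlport config peer -ct point 1:2:1
cli% controlport rst 1:2:1

Verify the result with showport; both ports should report the peer connection mode before you proceed with zoning.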

Peer link connectivity between the two storage systems is established by creating two SAN zones. These SAN zones are set up with an
appropriate SAN switch management tool. Each zone comprises an IBM XIV host port and an HPE 3PAR Peer port located in the same
fabric. Note that after a host port becomes a Peer port, the WWN of the host port changes from xx:xx:x0:xx:xx:xx:xx:xx to
xx:xx:x2:xx:xx:xx:xx:xx. You must use this new WWN when performing SAN zoning for the Peer links.
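
As an illustration only, the two Peer link zones could be created as follows on a Brocade fabric; Cisco MDS switches use a different syntax. The zone and configuration names are hypothetical, and the WWNs are the sample values shown in the CLI output below.

switch:admin> zonecreate "XIV_3PAR_peer_A", "50:01:73:80:02:41:01:62; 20:21:02:02:ac:00:5f:8d"
switch:admin> cfgadd "PROD_CFG", "XIV_3PAR_peer_A"
switch:admin> cfgenable "PROD_CFG"

Create the second zone in the other fabric in the same way (for example, XIV_3PAR_peer_B with 50:01:73:80:02:41:01:60 and 21:21:02:02:ac:00:5f:8d).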

You can use the following CLI commands on the destination HPE 3PAR array to test the status of the Peer ports and Peer links:
showpeer
showtarget
showportdev ns n:s:p (with n:s:p the node:slot:port identification for each of the Peer ports)

The following sample output shows these commands used for Peer ports on 0:2:1 and 1:2:1:
cli% showpeer

----Name/WWN---- Type Vendor ------WWN------- Port

5001738002410000 - Non-InServ IBM 5001738002410160 1:2:1

5001738002410162 0:2:1

cli% showtarget

Port ----Node_WWN---- ----Port_WWN---- ------Description------

1:2:1 5001738002410000 5001738002410160 reported_as_scsi_target

0:2:1 5001738002410000 5001738002410162 reported_as_scsi_target



cli% showportdev ns 0:2:1

PtId LpID Hadr ----Node_WWN---- ----Port_WWN---- ftrs svpm bbct flen -----vp_WWN----- -SNN-

0xa2700 0x1f 0x00 5001738002410000 5001738002410162 0x0000 0x0000 0x0000 0x0000 20210202AC005F8D n/a

0xa2300 0x01 0x00 2FF70202AC005F8D 20210202AC005F8D 0x8800 0x0032 n/a 0x0800 20210202AC005F8D n/a

cli% showportdev ns 1:2:1

PtId LpID Hadr ----Node_WWN---- ----Port_WWN---- ftrs svpm bbct flen -----vp_WWN----- -SNN-

0xb2700 0x1f 0x00 5001738002410000 5001738002410160 0x0000 0x0000 0x0000 0x0000 21210202AC005F8D n/a

0xb2300 0x01 0x00 2FF70202AC005F8D 21210202AC005F8D 0x8800 0x0032 n/a 0x0800 21210202AC005F8D n/a

In this sample output, 500173800241016x are Fibre Channel port WWNs on the IBM XIV source system. WWNs 2x210202AC005F8D are
for Peer ports 0:2:1 and 1:2:1 on the destination HPE 3PAR system.

The Peer ports and the SAN zones for the Peer links should stay in place until the data transfer has finished. You can unzone the Peer links
and convert the Peer ports back to host ports after completing all migrations to the destination HPE 3PAR array.
Configure the IBM XIV in an OIU-based migration
In an OIU-based migration, the IBM XIV must first be added as a source array. The OIU addsource command with its appropriate
options adds the array to the OIU environment. After adding the HPE 3PAR array by using the adddestination command, issue the
showconnection command to verify the operational status of the Peer links between the two storage systems. Figure 1 shows the successful
execution of these commands. Consult the HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide for the various options for the
commands.

FIGURE 1. Adding a source IBM XIV and HPE 3PAR destination storage system to OIU and testing the Peer link connectivity

Online Import Utility supports adding up to four source IBM XIV arrays and up to four HPE 3PAR arrays.

Configure the IBM XIV as a migration source for an SSMC-based migration


When using the SSMC for migration, you must add the IBM XIV system as a migration source to a storage federation that holds the intended
destination HPE 3PAR array. A single-system federation is viable. You can complete the steps to create the Peer ports and interlink the IBM
XIV with the HPE 3PAR array in a few mouse clicks. The procedure outlined here assumes the destination HPE 3PAR array is not yet a
member of a storage federation.

Figure 2 shows an SSMC federation setup with two HPE 3PAR systems and an IBM XIV migration source connected to one of them.

FIGURE 2. An IBM XIV system connected to a two-system HPE 3PAR storage federation

The line between the IBM XIV and the HPE 3PAR array in Figure 2 does not mean correct SAN zoning is present between the two systems.
It only points to the logical connection between the two systems in the framework of a federation. The HPE 3PAR StoreServ Management
Console 3.6 User Guide describes in detail the steps to create a storage federation and add an IBM XIV storage system as a migration source
to an HPE 3PAR storage system in the federation. The WWNs that go in the SAN zones are listed on the SSMC page for adding the IBM XIV
to the federation. You can create the logical connection between the systems without the SAN zones in place. In this situation, the Health
state of the federation setup shows missing SAN links. After you create the SAN zones for the Peer links, the Health state changes to Normal
within seconds. Alternatively, you can configure the Peer ports and enable the SAN zones upfront before starting the premigration work in
the SSMC. The SSMC recognizes and uses the Peer ports present; the proposed host ports on the XIV might still need adjustment.

You can connect up to eight IBM XIV migration sources to a single-system federation. Each migration source can connect to up to four
HPE 3PAR systems, which must all be in the same federation. HPE does not support using a second instance of the SSMC to connect the
same IBM XIV as a migration source to a different storage federation.

Set up multipathing
You need to install and configure the multipathing solution for HPE 3PAR on the host before starting the Online Import operation. All
modern host operating systems include a native multipathing solution. Storage array vendors often complement these solutions with
software that delivers additional array-specific features. The IBM Storage Host Attachment Kit (HAK) is a free software package for
registered users that helps a host’s administrator perform various host-side tasks on volumes presented from an IBM XIV array. The HAK
discovers storage systems and volumes presented to the host, runs diagnostics, rescans for new volumes, and applies best practice
multipath connectivity on the host. Versions of the HAK exist for all host operating systems supported by Online Import except HP-UX 11iv3
and VMware® operating systems, for which IBM relies on the multipathing software native to the operating system.

The OIU and Online Import by SSMC do not support the presence of an IBM HAK instance during volume migration. You must detach the
IBM XIV array from the host and uninstall the HAK before initiating the migration. When migrating a boot-from-SAN volume, ensure that the
host can boot after the HAK is removed.

HPE 3PAR integrates with multipathing that is built into the supported host operating systems. Exceptions are IBM AIX®, which uses a
proprietary HPE 3PAR Object Data Manager (ODM), and Windows 2003, which uses an HPE 3PAR MPIO package. The installation and
configuration of Microsoft Multipath I/O (MPIO) on Windows requires a reboot of the host. Addition of a new multipathing context for
VMware, Linux, or UNIX® systems is handled gracefully and does not require a reboot. You can find details on configuring multipathing
software in the HPE 3PAR implementation guides for the operating system in use.
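
As an illustration, on a Windows Server host the native MPIO can be enabled and the HPE 3PAR device identifier claimed with PowerShell before the migration. The vendor and product strings below are the values commonly used for HPE 3PAR volumes; treat them as an assumption and take the authoritative settings and the reboot requirement from the HPE 3PAR Windows implementation guide.

PS C:\> Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO
PS C:\> New-MSDSMSupportedHW -VendorId "3PARdata" -ProductId "VV"
PS C:\> Restart-Computer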

A volume’s SCSI Device Inquiry string changes when the volume is migrated from an IBM XIV to an HPE 3PAR system. For a Windows host,
this change requires a reboot, which forces Online Import for this operating system to be the minimally disruptive migration (MDM) or offline
type. This reboot can be included in the one required when adding the HPE 3PAR SCSI Device Inquiry string to MPIO. For operating systems
that support online migration, the change of the SCSI Device Inquiry string is handled without I/O interruption.

Checking host health


If you select a minimally disruptive or offline migration, you must shut down the host that has volumes under migration before the actual
data transfer can begin. The subsequent boot activates the changes made to the Fibre Channel zoning and MPIO. A failure of this reboot is
not necessarily caused by these changes. Other problems, such as an inaccessible boot LUN, an unknown hardware fault, an interrupt
conflict, or issues with a recent update of the operating system or device drivers, may be the root cause of the failure. HPE recommends
executing a reboot of the host under migration before any changes are made for an Online Import migration. A clean reboot confirms the
healthy state of the host and its applications, making it ready to undergo the migration.

Admit phase
The second stage in the data migration is the admit phase. During the admit phase, the Online Import subsystem performs the following
tasks in order:
1. Validate the host or volumes selected for the migration. Migrating only a subset of the volumes presented to a host is not possible.
2. Apply the implicit addition algorithm to include additional hosts or volumes to the migration when needed.
3. Create one or two hosts of the default type on the IBM XIV system. These so-called Peer hosts represent the destination HPE 3PAR
array on the IBM XIV array. For an IBM XIV Gen2 array, Online Import creates one Peer host on the IBM XIV array by the name of the
Node WWN of the destination HPE 3PAR system. The WWNs for this Peer host are the two WWNs of the Peer ports on the destination
HPE 3PAR array.
For an IBM XIV Gen3 array, two Peer hosts are created, each with one WWN. The name of these hosts is “hostxxxx” with “xxxx” a unique
four-digit decimal number. Peer hosts are dedicated to a specific destination HPE 3PAR array and are reused at every migration to this
HPE 3PAR array. Peer hosts for multiple HPE 3PAR arrays can co-exist on one IBM XIV. Refer to Figure 3 for the Peer host created for
the IBM XIV Gen2 array and Figure 4 for the Peer hosts created for the IBM XIV Gen3 array.
4. Export the volumes under migration to the Peer hosts on the IBM XIV array.
5. Create the Peer volumes on the destination HPE 3PAR array.
6. For an online migration, export the Peer volumes to the host under migration.

NOTE
The Peer volumes in Step 5 are not the same as the identically named Peer volumes used in replication on the IBM XIV.

When you select one or more volumes for migration, the implicit addition algorithm built into OIU and the SSMC automatically adds all
volumes for migration that are presented to the same host or one in its cluster. All clustered hosts are added to the migration as well. When
you select a host as the migration object, the same algorithm adds all volumes presented to that host as well as all hosts in the cluster it
belongs to and all other volumes presented to these hosts. This association is based on additional exports for an explicitly specified volume
that Online Import discovers on the source IBM XIV array. You cannot deselect any volume added by the algorithm.

For offline migration, only volumes (no hosts) can be specified for migration. All volumes intended for migration must be listed explicitly. The
implicit addition algorithm will not launch. The volumes selected must be unexported before the start of the admit phase and remain
unexported during the data import. This means the applications using the migrating volumes must be stopped during the entire migration.
There is no need to shut down the host to which the volumes were exported right before the start of the offline migration.

FIGURE 3. The first two hosts in the list are Peer hosts, created on an IBM XIV Gen2 array for an HPE 3PAR array

FIGURE 4. Hosts with name “hostxxxx” in the list are Peer hosts, created on an IBM XIV Gen3 array for two HPE 3PAR arrays

For each volume under migration, Online Import creates a Peer volume on the destination HPE 3PAR array. This volume has the same name,
size, LUN number, and WWN ID as its parent volume on the IBM XIV source system. Its provisioning type is Peer and it is created with RAID
0 as a protection level. You can verify this in the SSMC and HPE 3PAR CLI at the end of the admit phase. RAID 0 protection is acceptable
because no HPE 3PAR logical disks are behind them in the admit phase; as a result, the Peer volumes do not contain any data. During this
phase, application I/O keeps flowing directly from the host to the source IBM XIV.
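
The following HPE 3PAR CLI commands sketch this verification; the volume name is a placeholder. If the filter options differ on your HPE 3PAR OS version, a plain showvv also lists the provisioning type in the Prov column.

cli% showvv -p -prov peer
cli% showvv -d <volume_name>

The first command lists all volumes with the peer provisioning type; the detailed output of the second shows the VV WWN, which should match the WWN of the source volume on the IBM XIV.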

For an online and minimally disruptive migration, OIU creates the host definition on the HPE 3PAR array if it does not yet exist. The
hostname and WWNs on the IBM XIV are automatically passed to a createhost command executed in background on the destination HPE
3PAR array. Its persona value is taken from the -persona option in the createmigration command. Because the host was already zoned to the
HPE 3PAR array following best practices, the host appears with its WWNs and HPE 3PAR ports in the showhost output. When supplying the
-vvset and -hostset options to createmigration, OIU automatically creates a VVset and a host set with the specified name. When the host
already exists on the destination HPE 3PAR array, it is moved into the specified host set. The destination CPG in -destcpg must exist on the
HPE 3PAR; it is not created by OIU.

By default OIU creates a host in the default domain on the destination HPE 3PAR array. The option -domain for the createmigration
command creates the host in the specified domain. You can use the options -vvset and -hostset with the -domain command; the specified
VVset and host set will be created in the domain if they do not exist.

Online Import by SSMC does not create the migrating host or host set on the destination HPE 3PAR system. You must create the host with
the correct WWNs and persona value in the SSMC or from the CLI before starting the migration. Alternatively, you can copy the parameters
for the hosts from the Import configuration option in the Actions menu on the Federations Configuration page in the SSMC. Online Import by
SSMC can only migrate volumes to the default domain on the destination HPE 3PAR array, even when their host was already created in that
domain. For a new host, the workaround is to migrate the volumes from the IBM XIV array to a newly created CPG on the HPE 3PAR array
and use the movetodomain CLI command to transfer the host with its CPG and volumes to the intended domain. If the host already exists on
the non-default domain, you can migrate the volumes offline to a temporary CPG, transfer the CPG and its volumes to the host’s domain,
tune the volumes to the final CPG in the domain, remove the temporary CPG, and export the migrated volumes to the host. Alternatively,
you can take the existing host to the default domain, execute the migration (online, MDM, offline), and then return the host, its CPGs, and its
volumes to the intended domain.
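
As an example of the host definition that must exist before an SSMC-based migration, the following HPE 3PAR CLI commands create and verify a host. The host name, WWNs, and persona value are illustrative; choose the persona that matches the host operating system according to the HPE 3PAR implementation guide (for instance, persona 11 for VMware ESXi).

cli% createhost -persona 11 esx-host01 10000000C9AABB01 10000000C9AABB02
cli% showhost esx-host01

The WWNs must be the ones the host uses toward the IBM XIV array so that the exports created during the migration reach the correct initiators.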

For offline migration, no host definition is created by OIU or Online Import by SSMC. For OIU, the -vvset, -hostset, and -domain options can
be used with an offline migration.

For an online migration, the Peer volumes are exported to the migrating host near the end of the admit phase. Host operating systems that
support online migrations handle the necessary multipathing reconfiguration and path changes without disruption. When this
reconfiguration is complete, you need to execute a SCSI bus rescan on the host or hosts under migration. For Online Import by SSMC, the
SSMC informs you of this rescan requirement. This rescan discovers the paths from the host to the destination HPE 3PAR array, causing half
of the host I/O to flow through the destination HPE 3PAR array and over the Peer links to the source volumes on the IBM XIV array.
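
The rescan command depends on the host operating system. The following examples are indicative only; consult the relevant HPE 3PAR implementation guide for the recommended procedure on your platform.
• Linux (sg3_utils package): rescan-scsi-bus.sh
• VMware ESXi: esxcli storage core adapter rescan --all
• Windows: rescan in diskpart, or Rescan Disks in Disk Management
After the rescan, the multipathing software on the host should show paths to both the IBM XIV and the HPE 3PAR array for each volume under migration.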

In the case of an MDM, you need to install and configure multipathing for the HPE 3PAR on the host if it is not yet present after completion
of the admit phase. The required reboot for the changes in multipathing is integrated into the host shutdown to activate the SAN zone and
SCSI Inquiry string changes. For Online Import by SSMC, the SSMC informs you of this reboot requirement. When ready for the import, stop
all applications on the hosts and unzone them from the source IBM XIV array. The export of the Peer volumes to the host takes place at the
start of the import phase.

For offline migration, OIU stops at the end of the admit phase, and the import must be started explicitly. Online Import by SSMC does not pause; it
proceeds directly through the import phase.

A second admit job can be submitted while a first one is executed and while others are in the preparation completed state.

The situation created at the end of the admit phase is 100% reversible; every object and export created by Online Import can be undone
manually without interfering with the applications that use the volumes intended for migration. The section on rollback in this white paper
describes the steps to revert to the original situation.

Import phase
During the import phase, Online Import performs the following tasks in order:
1. Place a SCSI reservation on the volumes under migration on the IBM XIV.
2. For MDM only, export the Peer volumes to the host under migration.
3. Unexport the volumes under migration from the host on the IBM XIV array.
4. Start the data import from the IBM XIV to the HPE 3PAR array.
5. Remove the SCSI reservation on the volumes after the migration finishes.
6. Unexport the migrated volumes from the Peer hosts on the IBM XIV array.
For an MDM, you can power on the host after the individual import tasks for volume migration have been created on the destination HPE 3PAR
array. You can look up the ID of the import tasks in the SSMC or by using the showtask -active -type import_vv CLI command.
After the reboot of the host, you can start and validate the applications while the migration is ongoing.
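As a brief illustration, you can follow the active import tasks and the details of a single task from the HPE 3PAR CLI as shown below; the task ID is a placeholder taken from the output of the first command:

cli% showtask -active -type import_vv
cli% showtask -d <task_ID>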

The import phase ends when all volumes in the migration definition have been transferred. Multiple import jobs can execute simultaneously
along with ongoing admit jobs.

You can view the SCSI reservation OIU placed on every migrating volume on the IBM XIV array from any host that has the IBM XIV XCLI
software installed. The command to use is xcli reservation_list [vol=<vol_name>]. Figure 5 shows the output of this command for three
volumes under migration.
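For example, to list the reservation on a single migrating volume (the volume name is illustrative and the XCLI session is assumed to be connected to the source array):

xcli reservation_list vol=vol_2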

FIGURE 5. Viewing the SCSI reservations on the volumes under migration from the IBM XIV array

You should not release these reservations manually. After the successful migration of the volume to an HPE 3PAR system, its reservation is
cleared automatically.

Post-migration phase
At the completion of a migration, details in OIU about the status and the list of migrated volumes of all previous migrations remain available
with the showmigration and showmigrationdetails commands until you delete them with the removemigration command. The migration
details for Online Import by SSMC are available on the Peer Motions page in the Federations menu on the SSMC. You can remove the
migration details from this page or the Activity page. The configuration of the migration source attached to the federation can remain in
place indefinitely on the SSMC or it can be removed after the migration. The Peer hosts on the IBM XIV are not removed by Online Import at
the end of a migration—you can do that manually. To recover disk space on the IBM XIV, migrated volumes can be deleted right after the
migration or any time later.
If you plan no more migrations from the source IBM XIV system, complete these steps for the cleanup:
1. Remove migration details:
a. For Online Import by SSMC, remove all migration entries from the Activity page or the Peer Motions menu on the Federations page
on the SSMC.
b. For OIU, remove the migration details from OIU with the removemigration command. For unsuccessful migrations, you must execute
the command twice.
2. Remove Peer link connectivity:
a. For Online Import by SSMC:
I. Remove the Peer hosts on the source IBM XIV system using the XIV GUI or the XCLI.
II. Remove the IBM XIV array as a migration source from the federation.
III. Remove the SAN zones for the Peer links between the IBM XIV and HPE 3PAR array.
b. For OIU:
I. Remove the Peer hosts on the source IBM XIV system using the XIV GUI or the XCLI.
II. Remove the source and destination arrays from OIU.
III. Remove the SAN zones for the Peer links between the IBM XIV and HPE 3PAR arrays.
3. Remove all migrated volumes from the IBM XIV array.
4. Reconfigure the Peer ports into host ports on the HPE 3PAR array.
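For step 4, a hedged sketch of the HPE 3PAR CLI sequence that returns a Peer port to a host port is shown below; the port location 1:2:1 is a placeholder, and you should confirm the exact sequence for your HPE 3PAR OS version before executing it:

cli% controlport offline 1:2:1
cli% controlport config host -ct point 1:2:1
cli% controlport rst 1:2:1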

MIGRATION WORKFLOW IN OIU


The first step in an OIU migration is to add the source and destination storage system and verify the presence and status of the Peer links.
This is part of the premigration phase discussed here. You can add multiple source and destination systems for N:M migrations.
In the second step, you define a migration’s details by executing the createmigration command. Its mandatory options include the migration
type (online, MDM, offline), the list of hosts or volumes to be migrated, the volume’s landing CPG and provisioning type on the HPE 3PAR
array, and the persona value for the host under migration. The persona value is a set of characteristics for the host on the HPE 3PAR array
and defines the extent to which the host deviates from the default Fibre Channel standard behavior. Optional parameters for the
createmigration command include defining one or more volume consistency groups and a priority that deviates from the default medium
priority. The createmigration step can take a few minutes to complete. Use the showmigration and showmigrationdetails commands to
monitor the process.

OIU supports concurrent migrations of the types N:1, 1:N, and N:M from the same OIU console. You can execute multiple createmigration
commands in parallel and while one or more import operations are ongoing. To migrate more than one host in a single migration definition,
use the option -srchost host1,host2,… in the createmigration command. Importing volumes concurrently from an IBM XIV array, a Dell EMC
system, and a Hitachi system to one or multiple HPE 3PAR arrays is supported. A Peer Motion, an Online Import, and a federation-based migration
can take place at the same time to the same HPE 3PAR array. Note that the limit of nine ongoing import tasks on an HPE 3PAR array still
holds. Submitted import tasks beyond nine are queued and executed on a first-come, first-served basis unless their priority is higher than that of
other queued tasks, in which case they take precedence.

For the online migration, you can import a subset of the admitted volumes by using the -subsetvolmap option for startmigration. The volumes
in this subset must be selected explicitly and are separated by commas, as in {"vol1","vol4","vol5"}.
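A hedged sketch of selecting such a subset in the OIU console follows; the migration ID comes from the output of showmigration, the volume names are illustrative, and the -migrationid option shown for startmigration is an assumption modeled on the showmigrationdetails syntax used elsewhere in this paper:

>showmigration
>showmigrationdetails -migrationid 1567523679968
>startmigration -migrationid 1567523679968 -subsetvolmap {"vol1","vol4","vol5"}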

MIGRATION WORKFLOW IN THE SSMC


It is assumed that the IBM XIV array is already configured as a migration source to one or more federated HPE 3PAR systems in the SSMC.
The SSMC uses wizards to facilitate the migration of volumes from an IBM XIV array to an HPE 3PAR array. These wizards present a series of
screens to collect information about the intended source and destination system, the hosts or volumes under migration, and the landing
details of the volumes on the destination HPE 3PAR array, among others. Two premigration wizards are available in the Actions menu on
the Federations page in the SSMC. They are highlighted in Figure 6.

FIGURE 6. The wizards to import host details from the IBM XIV array and to refresh the host and LUN cache for the IBM XIV on the HPE 3PAR array are shown in gray

The SSMC does not automatically create the host or hosts to which the volumes under migration are exported on the IBM XIV system. The
Import configuration wizard in the SSMC shown in Figure 6 conveniently imports host definitions from the IBM XIV array.

In the admit phase, Online Import in the SSMC operates from a cached list of host and volume names. HPE recommends updating this cache
content before starting the migration. For this update, select the Refresh external systems option in the Actions menu on the right in
Figure 6 and follow the wizard. The completion time for this refresh depends on the number of objects on the source system and its level of
activity. Table 2 shows the refresh times for volumes on an IBM XIV Gen2 and Gen3 array without load.

TABLE 2. SSMC refresh times for volumes on IBM XIV Gen2 and Gen3 arrays

Number of volumes on the IBM XIV array    Refresh time, IBM XIV Gen2 array (seconds)    Refresh time, IBM XIV Gen3 array (seconds)
2                                         1                                             1
10                                        2                                             1
50                                        7                                             1
100                                       13                                            1
200                                       24                                            1
400                                       50                                            2
800                                       99                                            4
1600                                      204                                           10
3200                                      426                                           27
4000                                      557                                           39

FIGURE 7. Completion times for cache refresh for an IBM XIV array as shown in the SSMC for Online Import

Figure 7 presents a graphical representation of Table 2. It shows that the time for a refresh is nearly linear with the number of volumes
present in the array.

You can verify the progress of the cache refresh on the Activity page in the SSMC. Wait for the cache refresh to complete and then select the
objects to be migrated from the list of hosts or volumes, as shown in Figure 8. You do not need to wait till the cache refresh has finished—
you can select the Add Objects radio button shown in Figure 8 and manually enter the names of objects to migrate. For an online and
minimally disruptive migration, all related objects are added to the migration definition following the implicit addition algorithm. This way, the
migration can be started before the refresh cycle is complete.

FIGURE 8. Selecting objects for migration from the SSMC cache by host name (left) and by volume name (right)

After you have selected the migration objects, the wizard takes you to the page for defining the migration details. From the drop-down
menus, you can select the destination HPE 3PAR array, the landing CPG, the provisioning type (thin by default), the migration type (online,
MDM, offline), and more. You can also place all volumes in a consistency group (none by default), choose whether the target volumes
should be compressed (no by default, and only if compression is supported by the destination array), and change the import task priority from
its default of medium. You can select only a single consistency group, and all volumes in the migration will be part of the group. You can specify
a host set and a vvset to group the migrating objects on the HPE 3PAR array. Note that you need to specify either none or both.
Clicking the Start button on the bottom right of the page initiates the admit phase. Any missing mandatory values are highlighted in red and
must be added. Help is available as popup text to the right of each item that needs to be completed. The Overview pane on the Federation
page then shows in the Peer Motion Summary section the status of the migration preparation (Figures 9 and 10).

FIGURE 9. The admit phase is ongoing

FIGURE 10. Pause at the end of the admit phase



Clicking the link in the Peer Motion Summary shown in Figure 10 reveals the reason for the pause. For an online migration, a rescan on the
host is needed. For a minimally disruptive migration, a host shutdown is required during which zone changes must be executed. Figure 11
shows the Peer Motion screen at the end of the admit phase for a minimally disruptive migration. The Progress section on the right shows
that the Peer Motion operation is paused because of a required host shutdown.

FIGURE 11. Screen with the host shutdown request in a minimally disruptive migration

Clicking Actions → Delete in Figure 11 would remove the migration definition from the SSMC, effectively canceling the migration. Cleanup
upon cancellation involves removing the Peer volumes that were created on the destination HPE 3PAR array at the end of the admit phase
and unpresenting the migrating volumes from the Peer hosts on the IBM XIV array (refer to Figures 3 and 4).
After executing the required shutdown (for MDM) or host rescan (in case the migration was online), click Actions → Resume in Figure 11
to start the actual data transfer. You can view details about the progress for the entire transfer and per volume in the Peer Motions →
Virtual volumes pane, as shown in Figure 12.

FIGURE 12. Progress of the entire migration (left pane) and per volume (right pane)

Every volume is migrated by an individual HPE 3PAR task. The task ID can be retrieved from the HPE 3PAR CLI by using the command
showtask -active -type import_vv.

The LUN ID of a volume on the IBM XIV system is maintained when it is migrated to an HPE 3PAR array by using Online Import. A conflict
results if the LUN ID of a migrating volume is already allocated for the host on the HPE 3PAR array. Online Import by SSMC handles LUN ID
conflicts automatically and silently by assigning the next available ID to the LUN, starting from zero. The change is recorded in the SSMC
federation.log file:
2020-05-04 11:35:41.254+0200 INFO c.t.i.a.i.u.TPDMigrationWorkflow - Conflicting LUN: LUNtest for the host: valerian with lunID: 1
2020-05-04 11:35:41.254+0200 INFO c.t.i.a.i.u.TPDMigrationWorkflow - [LUNtestPM&MDM] Execute createVLUN({"name":"LUNtest","arrayUID":null,"forget":null,"keepalua":false,"count":1,"lunID":"0+","portLpid":null,"hostName":"valerian","override":false,"noVCN":true})

Figure 13 is an SSMC screenshot showing that volume LUNtest, which had LUN ID 1 on the IBM XIV array, was assigned LUN ID 4 because
IDs 0 to 3 were already in use on the host.

FIGURE 13. Migration details for volume LUNtest showing the change in LUN ID in the lower right

In OIU, you can use the -autoresolve false option to make the createmigration operation abort if the LUN ID of a volume selected for migration
is already in use for that particular host on the HPE 3PAR array.

Extensive parallelism is possible for Online Import by SSMC. Any number of migrations can be in the admit or import phase. Only nine import
tasks execute at a time; the others are queued and executed on a first-come, first-served basis unless their priority is higher than that of other
queued tasks, in which case they take precedence.

BEST PRACTICES
This section lists techniques and methods for Online Import that will lead to a reliable outcome for migrations from IBM XIV to HPE 3PAR
arrays.

Migration preparation
Online Import does not support the presence of the IBM XIV Host Attachment Kit (HAK) multipathing software on a host under migration for
any host operating system. The volumes exported to the host must be managed by the multipathing software native to the host operating
system. Consequently, you must remove the HAK before starting an Online Import migration.
The number of paths from the host to the destination HPE 3PAR array can be different from the number to the source IBM XIV system. As
an example, a volume presented to the host over a single path to the IBM XIV storage can be migrated using Online Import to an HPE 3PAR
array connecting the host by two paths. Peer Motion migration traffic always flows over both Peer links.

Before issuing the createmigration command in OIU or starting the Peer Motion wizard in the SSMC, verify that the CPG intended for the
migrating volumes exists on the destination HPE 3PAR storage system. Online Import does not create the CPG if it is not present on the
destination HPE 3PAR array.

Some types of host clustering software use SCSI-3 Persistent Group Reservations (PGR) to coordinate access to shared volumes. A reservation governs the path between an
HBA in the host and a volume on the storage array. The presence of this type of reservation causes the admit phase to fail. You must release
the reservations on the clustered disks before starting the Online Import operation. For this reason, you should stop applications using
shared disks, and then stop the cluster service on each member node of the cluster.
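As a hedged example for a node of a Microsoft Windows Server failover cluster (the service name assumes the native Failover Clustering feature; other cluster products use their own stop commands):

net stop clussvc
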
The destination HPE 3PAR array must have the free space to accommodate the migrating volumes. The space for the volumes is allocated
on the destination HPE 3PAR during their import phase after they were unpresented from the host on the IBM XIV system. For volumes
landing in full provisioning on the HPE 3PAR array, the required space is immediately and entirely allocated. For thin, thin-deduped, and thin
compressed volumes, space is allocated as needed during the migration. The provisioning type of the volumes on the IBM XIV system does
not matter in this space allocation operation on the HPE 3PAR system.
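You can verify the available capacity on the destination array before starting the import with HPE 3PAR CLI commands such as the following; the CPG name is illustrative:

cli% showsys -space
cli% showspace -cpg FC_r6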

IBM XIV thin provisioning pools carry a Pool Hard Size and Pool Soft Size capacity limit. The first limit is the maximum physical capacity that
can be used by all volumes and snapshots in the pool. The Pool Soft Size is the maximum logical capacity that can be assigned to all
volumes and snapshots in the pool. During the admit phase of the migration and the import of the volumes under migration, application
writes to thin volumes continue to consume space from the thin pool in which the volume was created. If either thin pool limit is reached,
application writes will fail, causing host disruption. Although the volumes experiencing the write failure are not locked for reads, the situation
is suboptimal for Online Import. Take care that thin pools on the IBM XIV array do not fill up while the migration is ongoing. To reduce this
risk, you can execute the migration during a period of low application write traffic, halt the application temporarily, execute the migration
offline, or increase the thin pool size upfront to arrange a safe buffer of free space. A thin pool can be decreased in size on the IBM XIV after
the migration.

Online Import identifies the migrating host by its WWNs, not by name. This means the name of the migrating host created manually on the
destination HPE 3PAR array does not need to match its name on the IBM XIV array. HPE recommends using identical names for the host on
the IBM XIV and the HPE 3PAR arrays to avoid confusion during and after the migration.

Selecting an exported and an unexported volume inside a single migration definition in Online Import is unsupported. OIU shows this error
message in the output of the showmigration command:
>showmigration
MIGRATIONID    TYPE  SOURCE_NAME       DESTINATION_NAME  START_TIME                     END_TIME  STATUS(PROGRESS)(MESSAGE)
1567523679968  MDM   IBM.2810-7833618  aperol            Tue Sep 03 17:14:39 CEST 2019  -NA-      preparationfailed(-NA-)(OIUERRPREP1024: Migration with mixmode volumes (presented and unpresented volumes) is not supported.;)

Under this condition, the SSMC will fail the Peer Motion migration with the message shown in Figure 14.

FIGURE 14. Error message when configuring an exported and unexported volume in one migration definition

Clicking the Details link shown in Figure 14 or looking for the Task Details on the Activity page for the failed Peer Motion (shown in Figure
15) does not reveal the root cause of the problem.

FIGURE 15. Error message in Task Details entry in the Activity log when configuring an exported and unexported volume in one migration definition

The error message in the federation.log file of the SSMC discloses the issue. The last two lines indicate the cause and the solution for the
problem.
2019-09-03 17:21:22,804 ERROR TPDOiuManager:37 - Unable to fetch source volume details for array
UID:1300577 Trace #9f93b333942b527f490e368a48d38176#
com.tpd.inform.oiu.utils.OIUException: Cannot migrate both exported and unexported volumes
simultaneously. Migrate exported volumes separately from unexported volumes.

Online Import retains the name of the volume migrating from the IBM XIV array on the destination HPE 3PAR array, within the constraints
imposed by HPE 3PAR OS. A volume name on an IBM XIV system can have up to 63 characters. If the volume name is longer than 31
characters, the Peer volume and hence the final name of the migrated volume on the HPE 3PAR array is automatically truncated to its first
31 characters by OIU. The following entries for this truncation are written to the OIU log file:
services.SubmittedMigrationMonitorTask - Converting the Volume name:
abcdefghijklmnopqrstuvwxyz0123456789 to make it 3par compatible name...
2019-09-03 18:03:58,411 WARN com.hp.oiu.services.SubmittedMigrationMonitorTask -
Name:abcdefghijklmnopqrstuvwxyz0123456789is truncated to:abcdefghijklmnopqrstuvwxyz01234 to make it
3par compatible name

Online Import by SSMC intercepts an illegally long name when you click Start on the migration definition page. The error message and
proposed remedy are shown in the Activity page.

FIGURE 16. Failure message when migrating a volume with a name longer than 31 characters to an HPE 3PAR array

On the IBM XIV system, the tilde (~) and the space character can be used in the volume name but are unsupported by HPE 3PAR OS. When
present in a volume name on the IBM XIV system, the tilde and space characters are converted by Online Import into an underscore (_) for
the name of the Peer volume on the HPE 3PAR system. This conversion is executed silently; neither OIU nor Online Import by SSMC hints at
the name change in the GUI. OIU logs the conversion for volume aaa~bbb ccc as:
2019-09-03 18:35:09,196 INFO com.hp.oiu.services.SubmittedMigrationMonitorTask - Converting the
Volume name: aaa~bbb ccc to make it 3 par compatible name...

However, the new name for it is not shown until the entry for admitvv:
2019-09-03 18:35:45,321 INFO com.tpd.inform.comm.tcl.socket.TPDCliSocket - SENDING (aperol):
{admitvv aaa_bbb_ccc:00173800621E2EE1}

You can see the conversion of the tilde and blank character to an underscore. Online Import by SSMC logs the volume name conversion in
the SSMC federation.log as:
2019-09-13 20:09:51.799+0200 WARN c.t.i.a.d.TPDOiuManager - Source volume name: aaa~bbb ccc, is
converted to TPARCompatible name: aaa_bbb_ccc

When two volumes on the IBM XIV system that share the same first 31 characters in their name are selected for migration, createmigration
in OIU fails with this output for showmigration:
>showmigration
MIGRATIONID    TYPE  SOURCE_NAME       DESTINATION_NAME  START_TIME                     END_TIME  STATUS(PROGRESS)(MESSAGE)
1567529646698  MDM   IBM.2810-7825118  aperol            Tue Sep 03 18:54:06 CEST 2019  -NA-      preparationfailed(-NA-) (:OIUERRDST0026:Duplicate volume name(s). Cannot proceed, try again after modifying volume name(s) as per 3PAR naming standards.;)

The OIU log contains this entry for the problem:
ERROR com.hp.oiu.destination.tpar.TParDestinationArrayImpl - volume names are duplicate . Please change the names and retry migration:[abcdefghijklmnopqrstuvwxyz01234]

The same volumes fail in Online Import by SSMC with the popup message shown in Figure 17.

FIGURE 17. Error message when selecting two volumes for migration on the IBM XIV array with the same first 31 characters

Clicking the Details link shown in Figure 17 does not reveal additional information. The first line of the message contains the name of the
Peer Motion migration with a volume name of 36 characters (ending in 788) followed by a date and timestamp. The second line contains
the volume name truncated by HPE 3PAR OS to 31 characters (ending in 234). The error message indirectly points to the presence of
another volume with the same first 31 characters. Because of this, no Peer volume was created for the volume with the name ending in 788. All
other volumes in the migration definition with unambiguous names got their Peer volume created and admitted. The import of the volumes
will not start because the volume with the illegal name is missing. Fixing its name on the IBM XIV array does not help. Cleanup on the
destination HPE 3PAR system is needed to remove all Peer volumes created. HPE recommends verifying that the names of all volumes
intended to be migrated are unique in their initial 31 characters before starting the migration.
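One way to perform this verification is to list the volume names with the IBM XIV XCLI and report duplicates in their first 31 characters. The sketch below assumes a Linux or UNIX shell on the XCLI host and that the volume name is the first column of the vol_list output:

xcli vol_list | awk '{ print substr($1,1,31) }' | sort | uniq -d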

The Import configuration option of the SSMC in Figure 6 creates hosts present on the IBM XIV array on the HPE 3PAR array. Host names
longer than 31 characters cannot be imported; you must first truncate them on the IBM XIV array. You need to remove any space and tilde
characters from a host name before the import can succeed. The same rules for host names apply for OIU but no host import facility is
available.
You can create a cluster with a name and cluster members in the IBM XIV GUI or XIV CLI. Figure 18 shows an example in the IBM XIV GUI.
The setup of the njam cluster in Figure 18 is unsupported for Online Import. You need to move each cluster member out of the cluster
definition, which reverts them to stand-alone hosts for the IBM XIV GUI, before starting Online Import. You can execute this operation from
the IBM XIV GUI or by using XCLI. The change does not impact the composition of the cluster on the operating system level, the application
host I/O, or the LUN mapping of the volumes. Next, in the OIU console or the SSMC migration definition screen, specify one of the cluster
hosts or a volume that is shared between the cluster hosts. The implicit addition algorithm in Online Import will add the other cluster
members and all volumes exported to them.

The Online Import software issues sequential reads to the source IBM XIV system over the Peer links to request the data for import. The
impact on application performance during an online and minimally disruptive migration as a result of these intensive reads is hard to predict.
If the host ports of the Peer links on the IBM XIV side are shared with other hosts, the impact of the migration might be non-negligible on
applications on the other hosts. HPE recommends having no hosts mapped to the host ports in use by the Peer links on the IBM XIV array.

HPE recommends recording a performance baseline for the storage systems, SAN switches, and hosts involved in the migration for a week
before the migration. This activity collects data points on IOPS, throughput, and application latency for the IBM XIV host ports, the HPE
3PAR array, and the SAN switches in use. The IBM XIV Reporter and the SAN switch performance collectors can generate this baseline. With
this information, you can determine periods of low activity for an application to decide the best time to migrate it. An increase in I/O, system,
and SAN load over the recorded baseline during the period of migration can be attributed to the Online Import activities.

FIGURE 18. Unsupported setup for OIU migration of an IBM XIV cluster named “njam” with two cluster members

Host and volumes


For an online or minimally disruptive migration, specifying the name of a host on the IBM XIV system as the migration object implicitly adds
to the migration all volumes exported to that host and any additional host that is in a cluster with the specified host. Specifying one or more
volumes as migration objects can implicitly add volumes and hosts to the selected volumes. The implicit addition algorithm is explained in
the HPE 3PAR Peer Motion and Online Import User Guide. The implicit addition algorithm removes the need to explicitly list all volumes
exported to a particular host. For offline migrations, only unexported volumes can be selected for migration. There is no implicit addition of
other volumes or hosts in an offline migration; all volumes under migration must be specified explicitly.

You can inspect the complete list of explicit and implicit volumes selected for the migration by Online Import at the end of the admit phase
when Online Import pauses for a host rescan or host shutdown. In the SSMC, you can view the list in the Peer Motion pane on the
Federations page by clicking the triangle left of the migration name. For OIU, the output of the showmigrationdetails -migrationid xxx
command lists all volumes under migration. Review this list to ensure that all volumes that must migrate are on the list and volumes that
must stay on the source system are not on the list. For OIU, use the option -volmapfile for the createmigration command when specifying a
large number of volumes for a migration. This option is particularly useful in the case of an offline migration.

OIU can create multiple consistency groups on the createmigration command line. The SSMC allows you to create one consistency group.
This ability is disabled by default on the migration definition page. When it is enabled, all volumes under migration, including all implicitly
added volumes, are placed inside one single consistency group. The limit of 60 volumes and 120 TB combined virtual size in OIU and in Online
Import by SSMC applies if the destination HPE 3PAR array is running HPE 3PAR OS 3.3.1 EGA or later. Otherwise, the limits are 20 volumes
and 60 TB. HPE recommends staying within these limits so that the period of no I/O during the simultaneous conversion of all volumes in the
consistency group from Peer to their final provisioning type remains under 30 seconds.

When you select a volume on the IBM XIV system that is larger than supported on the destination HPE 3PAR array, the showmigration
command in OIU shows this output:
>showmigration
MIGRATIONID    TYPE  SOURCE_NAME       DESTINATION_NAME  START_TIME                     END_TIME  STATUS(PROGRESS)(MESSAGE)
1567605149523  MDM   IBM.2810-7825118  aperol            Wed Sep 04 15:52:29 CEST 2019  -NA-      preparationfailed(-NA-)(OIUERRPREP1027: Volume ' VeryBig ' is ineligible for migration as size is less than minimum size limit 256 MB or more than the maximum size 16 TB.;)

Online Import by SSMC shows the following error.

FIGURE 19. SSMC error message when the volume size on an IBM XIV array is too large for the destination HPE 3PAR array

Volumes exceeding the supported size on the destination HPE 3PAR array cannot be migrated.

The size of a migrated volume is the same as on the source system. You can expand the volume on the destination HPE 3PAR array after its
migration is complete using the growvv command. This expansion is nondisruptive for I/O to the volume on the HPE 3PAR array but might
require disruptive manipulation on the host operating system to allow the volume to use the additional space.

The WWN of a migrated volume on the destination array is identical to the WWN on the source array. The showvv -d command shows the
WWN of each volume. Output from the destination HPE 3PAR array for volumes vol_2 and vol_3 that were migrated from an IBM XIV array
would resemble the following output:
cli% showvv -d vol_2 vol_3 admin
 Id Name  Rd Mstr  Prnt Roch Rwch PPrnt SPrnt PBlkRemain --------VV_WWN--------- ---CreationTime--------- Udid
  0 admin RW 1/0/- ---  ---  ---  ---   ---   --         60002AC0000000000000000000020901 2018-04-17 17:08:18 CEST   0
337 vol_2 RW 0/1/- ---  ---  ---  ---   ---   --         00173800621E2E6F                 2019-06-24 11:37:18 CEST 337
335 vol_3 RW 0/1/- ---  ---  ---  ---   ---   --         00173800621E2E70                 2019-06-24 11:37:16 CEST 335
--------------------------------------------------------------------------------------------------------------
  3 total 0
cli%

Note that for volumes migrated from the IBM XIV array, the WWN in column VV_WWN is 64 bits. The WWN of volumes created natively on
an HPE 3PAR array like the admin volume is 128 bits long with the serial number of the array in hexadecimal form inside the last five
characters of the WWN. Having a short WWN without the destination HPE 3PAR serial number inside it is not a problem from a technical or
operational perspective. HPE recommends changing the WWN of the imported volumes to a native WWN for the destination system,
lengthening it to 128 bits. This change ensures that the WWN also contains the serial number of the destination HPE 3PAR array, indicating
the array the volume lives on. This change of WWN is disruptive to I/O and can be scheduled as a post-migration activity or during the next
planned downtime.

The presence of a migrated IBM XIV volume with a 64-bit WWN on an HPE 3PAR array prevents this volume and other natively created
volumes from being migrated by Peer Motion to another HPE 3PAR array. The error message in this situation is:
ERROR: OIUERRSRC0125 Failed to retrieve the volumes information from the source 3PAR array. OIURSLDST0033
Clean up any failed migrations, ensure premigration checks are taken care and retry the migration.

Changing the WWN of all volumes on the array with a 64-bit WWN to a 128-bit native WWN resolves the problem.
Volumes under migration on the IBM XIV array with a name or WWN already existing on the destination HPE 3PAR array cannot be
migrated. Volumes already transferred cannot be migrated again using the SSMC, even when the volume was deleted on the destination
HPE 3PAR array if the migration definition for it still exists on the Peer Motions screen on the Federations page. Remove the migration
definition to resolve this constraint.

After an MDM or offline migration, the disks in the host operating system might stay offline after the host reboot until brought online
manually. This is especially true for Windows systems with a SAN policy set to offline. In this case, applications that start when the host boots
up will not find their SAN LUNs and will fail to start. This can result in a major alert to a log host, triggering action by the application support
team. To avoid this, HPE recommends disabling automatic startup of the applications on a host with volumes that will be migrated using the
MDM or offline method. When the host is rebooted, you can bring the disks online and restart the applications manually. You can reinstate
the automatic startup of the applications at host boot time.

You can leverage an Online Import migration process to convert fully provisioned volumes on the source IBM XIV system to thin, deduped,
or compressed volumes on the destination HPE 3PAR array. It is also possible to convert thin or compressed IBM XIV volumes to full,
deduped, or compressed volumes on the HPE 3PAR array during an Online Import migration.

Online Import is not thin aware—a thin provisioned volume on the IBM XIV source system is imported by transferring all its allocated and
unallocated blocks. This is a limitation of the SMI-S specification. SMI-S cannot check the block allocation table of a volume to select and
migrate only the blocks in use. Contiguous blocks of 16 KB of zeroes in volumes on the source system are intercepted by the HPE 3PAR
ASIC on the destination system and are not written to disk for thin destination volumes. For migration to thin volumes, you need an HPE
3PAR Thin Provisioning Software license, which is part of the HPE All Inclusive Single System Software and All Inclusive Multi-System
Software licenses.

Applications access shared disks in a cluster serially by using an arbitration system based on SCSI reservations. You can view these
reservations on a cluster member by issuing the following IBM XIV XCLI command:

xcli reservation_list [vol=<vol_name>]

These reservations are managed by the clustering software and should not be released by the user; only the clustering software should clear
them. You must remove these reservations before starting the Online Import operation. You can do this by stopping the cluster service on
the host operating system.

Migrations
After the data transfer has started, the migration cannot be stopped; it must complete. You can issue multiple createmigration commands
sequentially. You also can start multiple import tasks in parallel.

The first nine Online Import tasks created are executed in parallel. If more than nine import tasks were created, those that are not yet
executing enter a queue and execute in order of arrival time. This queue is reordered if new import tasks arrive with a higher priority than
some on the queue. Be sure to understand the impact of adding import tasks with a high or low priority to the import task queue in light of
the potential starvation of low and medium priority tasks. Tasks in execution are never pre-empted.

At completion of the admit phase for an online migration, a rescan is required to activate the paths from the host to the destination HPE
3PAR array. After the rescan, I/O from the host is spread over the original paths from the host to the source IBM XIV volumes and the newly
discovered paths from the host through the HPE 3PAR system and over the Peer links to the same IBM XIV source volumes. Applications
making use of the migrating volumes might experience a slight increase in latency in an I/O-intensive environment, potentially compensated
by the extra set of paths. HPE recommends executing the data transfer during a period of low application I/O traffic. For an online migration,
you can monitor the application traffic over the Peer ports before starting the actual data transfer by viewing a real-time graph in the SSMC
for I/O over the Peer ports on the destination. The graph can determine a good starting time with low I/O. If other migrations are ongoing,
the application I/O is buried in the Peer traffic and you have to look on the host for an estimation of the application I/O. For an MDM and
offline migration, no data flows over the Peer links until the data migration is started, so a period of low I/O must be determined from real-
time or historical information on the host.

Although the Peer volumes can exist indefinitely on the destination HPE 3PAR system, HPE recommends initiating the actual data migration
shortly after the completion of the admit phase and the host rescan to minimize the effect of increased latency on applications.
HPE also recommends the use of consistency groups for volumes that relate to each other, such as volumes serving the same application or
making up a Logical Volume Manager (LVM) structure. Embedding volumes in a consistency group causes host application writes to volumes
that have already been migrated to continue to be mirrored to the source IBM XIV system until the entire group completes. These application
writes travel in the opposite direction of the migration traffic, leading to an increasing amount of double writes as the migration progresses. In a
write-intensive environment, this situation could lead to a progressive reduction of the transfer throughput as volume migrations complete one
by one. In this case, HPE recommends executing the migration during a period of low application activity or refraining from using a consistency
group. For destination systems running HPE 3PAR OS 3.2.2 and later, a host application read is served from the destination HPE 3PAR system
if the disk block for it was already migrated from the source IBM XIV system. This approach reduces application read traffic over the Peer links
to the source system, diminishing the effect of the increased latency due to the double writes when an HPE 3PAR consistency group is used.

For an online and minimally disruptive migration, the migrating volumes get unpresented from the source IBM XIV system at the start of the
actual data transfer. If the migration does not complete successfully and rollback to the initial state is needed, you must manually recreate
these presentations to the host on the source IBM XIV system. In the rollback, you must also remove the presentation of the migrating
volumes to the Peer hosts on the IBM XIV array and possibly clear the SCSI reservations on them. To improve the performance of the
participating arrays in an online or minimally disruptive migration, HPE recommends migrating the hosts with the least number of volumes
or the smallest total volume size first. This process frees up controller resources on the source and destination system for migration of larger
or busier hosts.

The region mover subsystem in HPE 3PAR OS is common among tasks executing Dynamic Optimization (tune), Adaptive Optimization,
snapshot and clone promotions, Peer Motion, and Online Import. This means every instance of these tasks executing during an Online
Import operation reduces the maximum number of the nine simultaneous Online Import tasks by one on the destination array, lowering the
migration throughput. Figure 20 illustrates this situation, where an active tune_vv task causes only eight instead of nine tasks of the
import_vv type to execute. Import task 27642 in the figure is active but does not migrate regions until one of the other tasks completes.
When nine import tasks are active and a task using the region mover is submitted, this task is queued. HPE recommends stopping Adaptive
Optimization schedules and refraining from executing Dynamic Optimization and snapshot promotion tasks on the HPE 3PAR destination
system if they would coincide with an Online Import migration. Volume clones (physical copies) are started even when the nine import
tasks are executing.

FIGURE 20. The presence of tune tasks forces the number of Online Import tasks to be less than the nine simultaneous tasks

Peer links and Peer volumes


Having the Peer ports configured and the zoning for the Peer links in place days or weeks before the data migration takes place does not
introduce a performance penalty to applications. The Peer links carry no migration traffic until the Online Import work is started. HPE
recommends setting up the Peer links and verifying their operational status a few days before the migration starts to allow time to correct
any SAN zoning or other issues.

Removing the Peer links after a migration is not necessary. They can stay in place indefinitely. Converting the Peer ports again into host
ports and removing the SAN zones for the Peer links after all planned migrations are complete is a best practice to make additional host
ports available.

Managing Peer link throughput


For an online and minimally disruptive migration, applications on the host are operational during the actual data transfer. The bandwidth of
the Peer links is shared by Peer traffic and application I/O. Online Import gives precedence to application I/O when bandwidth contention on
the Peer links happens. In case an application writes heavily to the migrating volumes, managing the throughput of imported data over the
Peer links will benefit the latency of the application.

You can manage the throughput of the Peer links by using any of these methods:
1. All major SAN switch vendors market models that can prioritize traffic at the packet level based on the CS_CTL field in the Fibre Channel
frame header. The granularity of this prioritization is a single source to a single target.
2. Reduce the priority of the import tasks to low. This setting reduces the bandwidth footprint of Online Import on the Peer links, giving way
to application I/O.
3. For online and minimally disruptive migrations in OIU, transfer a small number of volumes simultaneously by using the -subsetvolmap
option. There is no equivalent of -subsetvolmap in Online Import by SSMC.
4. For offline migrations, select a limited number of volumes for migration at a time and let the import for them complete before taking
another batch. Migrating three to five volumes at a time does not saturate the Peer links, leaving more room for host application I/O and
reducing the usage of shared host ports.
Method 1 can prioritize application I/O over Online Import traffic between a host and a storage system, but the configuration can be
complex. The impact of Method 2 is limited. Methods 3 and 4 are the most effective ways to prioritize application I/O over Online Import
traffic.

Although Peer volumes can be placed in a VVset with an HPE 3PAR Priority Optimization QoS rule or can be governed by the System rule,
Online Import migration traffic is not affected by these rules. Reducing the speed of the SFPs on the source array, destination array, or SAN
ports used for the Peer links affects Peer traffic as well as application I/O and therefore does not prioritize application I/O; it can even increase
the latency of application I/O.

For maximum throughput and the shortest migration duration, HPE recommends executing an Online Import migration at times of low
application activity. For predictable migration duration, an offline migration is recommended.

Monitoring and reporting


HPE encourages active monitoring of the data transfer throughput on the source and the destination storage system by using IBM XIV and
HPE 3PAR tools. It is good practice to record a performance baseline on the source IBM XIV system before starting the Online Import data
transfer to the HPE 3PAR system. If the host ports that are the end points of the Peer links on the IBM XIV are shared with other hosts, you
should monitor the traffic across the ports before and during the volume import phase.

The following sections explain how to monitor the volume import and report on it.

On the IBM XIV system


The IBM XIV Top tool, integrated into the IBM XIV GUI or as a stand-alone application, offers real-time performance graphs per volume or
host showing IOPS, latency, and bandwidth. Figure 21 shows a sample screenshot of the tool. The refresh frequency of the graph is between
1 and 45 seconds; the sliding window with data points is 60 seconds. You can obtain statistics over a longer period or an arbitrary one (up to
one year) by using the IBM XIV XCLI.

In the HPE 3PAR CLI


The HPE 3PAR CLI statport -peer command on the destination system produces information about each Peer port’s throughput, number of
IOPS, queue length, service time, and I/O size. This information updates by default every two seconds; you can change the update frequency
by using the -d parameter at the start of the command. Figure 22 shows a screenshot of the output of this command for four iterations. The
screenshot shows that the load across both Peer ports is balanced and reaches the theoretical limit of 8.5 Gbps for the SFPs on the IBM XIV
system. The dips in the transfer bandwidth are attributed to the saturation of the I/O buses on the IBM XIV system.

The output in Figure 22 was recorded with no ongoing application I/O traffic and shows that the I/O packet size is 260.0 KiB or 266.2 KB.
This is the block size by which Online Import fetches data from the IBM XIV array. Figure 22 was recorded for an HPE 3PAR destination
array with T10 DIF enabled, meaning disk sectors of 520 bytes. Without T10 DIF, the Online Import I/O size would show
256 KiB or 262.1 KB because of the 512-byte sectors. Because the extra 8 bytes do not convey application data, you can conclude that the
Peer link bandwidth for application data drops by about 1.6% when T10 DIF is enabled.
You can also obtain Online Import throughput information by monitoring the ports on the SAN switches in the data path of the Peer links.
For Brocade switches, use the portperfshow <port number> command; for Cisco switches, use the show interface <port number> command.
These switch vendors also make port statistics available in graphical form.

FIGURE 21. Screenshot of IBM XIV Top, the real-time performance collector for IBM XIV systems

FIGURE 22. Output of the statport -peer command showing detailed information on the Peer port traffic for an HPE 3PAR array with T10 DIF enabled

The import transfer granularity of Online Import is an HPE 3PAR region, a 256 MiB block of data. You can find information about the
progress of the region moves per import task in the output of the showtask -active -type import_vv CLI command, as shown in the Step
column in Figure 23. The value of 642 in that column is the total size of the volume expressed in regions. From this output, you can deduce
that the size of each volume under migration is 642/4 = 160.5 GiB. The bulk of the transfer time is spent in Phase 2/4; the other phases
combined take less than 15 seconds on average.

FIGURE 23. Region move progress information from the output of the showtask HPE 3PAR CLI command

You can filter detailed information about past Online Import migrations out of the HPE 3PAR event log by executing the CLI command
showeventlog -min <minutes> -msg import -oneline -nmsg null. The -min parameter, expressed in minutes, indicates how far to look back
in time for lines in the event log containing the word “import” but not the word “null.” Figure 24 shows sample output of this command
for a nine-volume migration.

FIGURE 24. Extracting information from the HPE 3PAR Event Log on Online Import migrations within the last 60 minutes

The output of the command reveals the name of the migrated volumes (VV_001 to VV_009), the CPG they were created in (SSD_r5) on the
destination system, the CPG for their snapshots (SSD_r6), and their provisioning type (tpvv being thin) on the HPE 3PAR destination. The
showeventlog -min <minutes> -msg <volume> -oneline command shows detailed information about the individual steps executed by
Online Import for the migration of an individual volume. Figure 25 illustrates this output for volume VV_008.

FIGURE 25. Extracting detailed migration information from the HPE 3PAR Event Log for a particular volume

You can obtain detailed information per task during the migration and after its completion by issuing the showtask -d <task number>
command; you can find the task number for the migration in the Id column in Figure 23.

The data migration is directed by the HPE 3PAR destination array and continues even when the monitoring interfaces are closed or while
other work is executed in the SSMC or the CLI window. You can reopen the tools at a later time and view the progress of the migration.

When the Online Import migration is executed from the SSMC, information on the migration is also logged in the event log on the
destination HPE 3PAR array. A suitable filter for extracting relevant information from the HPE 3PAR event log on the migration of an IBM XIV
volume named “LUNtest” is showeventlog -min 60 -msg LUNtest -oneline -nmsg REGION -nmsg setkv. You can remove the -oneline option
for more detail. Figure 26 shows the output of this filter for the migration of volume LUNtest.

FIGURE 26. Information filtered from HPE 3PAR event log when the migration was executed by the SSMC

In the HPE 3PAR SSMC


The graphical reporting capabilities of the SSMC include real-time and historical graphs on IOPS, bandwidth, service time, I/O size, queue
length, and average busy over the Peer links. Note that for an online and minimally disruptive data transfer, the Peer links traffic combines
the influx of importing data from the IBM XIV system to the HPE 3PAR system and the host application I/O to the migrating volumes on the
IBM XIV system, traveling in the opposite direction.

Figure 27 shows the real-time graph for the Peer ports for the migration of nine volumes of 160 GiB each. The vertical axis in the graph
shows the bandwidth over the Peer links in Kbps. The granularity of the data points is five seconds by default. You can change this to a
larger value when you create the chart. The data points are averaged over the polling interval. The host application I/O for this transfer was
approximately 620 MBps. The graph shows the throughput over the Peer links of around 8.2 Gbps, close to the maximum of 8.5 Gbps per
Peer link Fibre Channel port on the IBM XIV Gen3 system. This 8.2 Gbps includes the 620 Mbps host I/O.

The migration of all nine volumes completed in 17 minutes and 12 seconds per the SSMC Activity page. This results in a throughput of (9 x
160 GiB) x 3.6/1032 seconds = 5.0 TiB/hour. Combining the two, the bandwidth of the Peer links during the migration was about
5.6TiB/hour or 98.8% of the theoretical maximum. For offline migration, the graph in Figure 27 would represent the Online Import bandwidth
from the IBM XIV to the HPE 3PAR system exclusively.

FIGURE 27. Graphical representation of the real-time throughput for the Peer ports on the destination HPE 3PAR system

In the OIU console


During the import phase, the showmigration command in the OIU console shows the progress of the migration by a percent value for all
volumes combined as an arithmetic average. Volumes that are queued for migration contribute as zero in the average. The
showmigrationdetails command shows the progress broken down per volume under migration. Both commands take many parameters
including one for writing the output to a file and another to format the output into CSV layout. This output can be pasted into technical
reports on the migration.

Post-migration
No SCSI reservation or key remains on the source volumes on the IBM XIV system after migration, meaning they can be presented again to a
host for a final tape backup, for example. Be careful when performing this process because the information on the volumes is no longer
current after the migration ended. No warranty is provided for application data consistency on a source volume after the mirroring of the
data stopped at the end of the migration. Having the migrating volumes in an HPE 3PAR consistency group does not guarantee application
data consistency. In case of rollback, most applications can repair their data volumes, recovering all or nearly all data at their restart from the
migrated volumes on the source array.

If the automatic restart of an application was disabled at reboot, bring the disks online manually after rebooting the migrating host and start
the applications by hand. Then reinstate the automatic startup of the applications at boot time.

Migrated volumes retain the 64-bit WWN they had on the source IBM XIV system. You can change their WWN to one native to the
destination HPE 3PAR array. The setvv -wwn auto <volume> command used for this is disruptive to I/O to the volume.
Volumes on an IBM XIV array can be created in GB, GiB, or blocks of 512 bytes. Even when created in GB or GiB, their size on the HPE 3PAR
system is not a multiple of 256 MiB, which is a requirement for operations such as tunevv. Using the growvv <volume> 1 command moves
the volume size to the nearest 256 MiB boundary. This is not disruptive to I/O. You can adjust the file system for the increased volume size
after the process completes.
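A short sketch of these two post-migration steps for one migrated volume follows; the volume name is illustrative, and remember that the WWN change is disruptive to I/O to the volume:

cli% setvv -wwn auto vol_2
cli% growvv vol_2 1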

LICENSING
The destination HPE 3PAR array must have a valid Online Import or Peer Motion license installed. A perpetual Online Import license is
bundled with the All Inclusive Single and Multi-System Software license that is included with the purchase of HPE 3PAR 8000, 9000, and
20000 storage systems. You can convert existing HPE 3PAR 8000 and 20000 systems to this All Inclusive scheme for a fee. Existing HPE
3PAR 7000 and 10000 systems received a term-based license for Online Import valid for one year from the day the system was built in the
HPE factory; you can purchase a permanent license. Consult your HPE representative or HPE partner for more licensing information. No
license for Online Import is required on the source IBM XIV system.

DELIVERY MODEL
HPE has designed Online Import to be easy to use. As a result, customers can execute the premigration, admit, import, and post-migration
phases. Assistance for the migration is available from the HPE Pointnext Services team. An Online Import migration can be part of a
packaged data migration project or a custom data migration service led by HPE. Each type of service will include expertise, best practices,
and automation to deliver a successful end-to-end solution. Consult your HPE representative or HPE partner and this HPE webpage for
more information about migration services.

TROUBLESHOOTING
An expanded section at the end of the HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide contains information about
troubleshooting situations that can occur when executing an Online Import migration. Refer to that section for help when setting up or
executing the migration and try the steps listed to remedy the problem.

If the migration starts but does not complete, the SCSI reservations placed by Online Import on the IBM XIV volumes remain present and
must be removed manually before the migration can be restarted. To release a SCSI reservation on a volume, issue the command xcli
reservation_clear vol=<volume>.

Rollback
If the data migration does not complete for some reason (for example, a SAN issue leading to broken Peer links), use the following steps to
clean up all objects created by Online Import.
1. On the host:
a. For an online or minimally disruptive migration, stop the applications on the host.
2. If using OIU, on the HPE 3PAR destination array (a CLI sketch for these steps follows the rollback procedure):
a. Cancel all active import tasks.
b. Remove the export of all Peer volumes.
c. Remove all Peer volumes.
3. If using Online Import by SSMC, on the SSMC:
a. Abort the migration task.
b. Delete the migration definition in the Peer Motions screen on the Federation page.
c. Remove the export of all Peer volumes.
d. Remove all Peer volumes.
4. On the IBM XIV source array:
a. Remove the presentation of the migrating volumes to the Peer hosts.
b. Remove any remaining SCSI reservations on the volumes under migration. Refer to the SCSI reservation section in the HPE 3PAR Peer
Motion and HPE 3PAR Online Import User Guide.
c. (Optional) Remove the Peer hosts.
5. If consistency groups were not used, copy back to the IBM XIV array any volumes that were already migrated completely to the destination HPE 3PAR array.
6. On the IBM XIV source array:
a. Export the migrating volumes to the host and restart the applications.
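For step 2 of this rollback (OIU on the HPE 3PAR destination array), the cleanup might look like the following CLI sketch; the task ID, volume name, LUN ID, and host name are illustrative:

cli% canceltask 27642
cli% removevlun vol_2 1 dbhost01
cli% removevv vol_2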

Information to collect before contacting HPE Support


Before contacting HPE Support for an IBM XIV Online Import issue, you can proactively collect information and attach it to your request for
help. The steps for collecting the information are:
1. When using Online Import by SSMC:
a. Log in to the SSMC appliance console in Hyper-V or VMware vSphere® using the ssmcadmin account.
b. Execute the command /usr/bin/config_appliance and select Option 8 to collect the SSMC logs. When finished, note the
location and the name of the file that was generated.
c. Select X to revert to the appliance shell.
d. Navigate to /var/opt/hpe/support-bundle and use FTP or SCP to transfer the support file to a server.
2. When using OIU:
a. Log on to the server running the OIU server portion.
b. Close the OIU console window.
c. Stop the HPE 3PAR Online Import service: net stop HPEOIUESERVER
d. Navigate to the data directory: cd C:\Program Files (x86)\Hewlett Packard Enterprise\hpe3paroiu
e. Copy the directory OIUData: copy OIUdata OIUdata_support
f. Compress the OIUdata_support directory to a file with the same name.
g. Navigate to the OIU log directory:
cd C:\Program Files (x86)\Hewlett Packard Enterprise\hpe3paroiu\OIUTools\tomcat\32-bit\apache-tomcat-7.0.59

h. Copy the log directory: copy logs logs_support


i. Compress the logs_support directory to a file with the same name.
j. Restart the HPE 3PAR Online Import service: net start HPEOIUESERVER
3. At the CLI of the destination HPE 3PAR array:
a. After the migration fails, copy the output of the following CLI commands to a file:
I. showsys -d
II. showversion -a -b
III. shownode -d
IV. showportdev ns n:s:p with n:s:p the location of each of the Peer ports
V. showtarget -rescan
VI. showtarget
VII. showtarget -lun all
VIII. showeventlog -startt "<start_time>" -endt "<end_time>" -oneline
IX. showeventlog -startt "<start_time>" -endt "<end_time>" -oneline -debug
The start and end times in the last two commands should cover the period of the execution of the migration starting with addsource until
the failure happens. The format for the time is YYYY-MM-DD HH:MM.
b. Compress the file obtained.
4. On the Windows system running the IBM XIV GUI software:
a. Log in to the IBM XIV Storage Management GUI with an administrative account.
b. In the Tools menu, navigate to Collect Support Logs and click the Start button to create the Support Log File. This file is named
“retrieved_system_xray_<array model><array S/N>_<year>” and is located in C:\Users\<username>\Application
Data\Roaming\XIV\GUI12\xray_logs.
Send the files obtained in Steps 1d or 2f and 2i, 3b, and 4b to the HPE Support group upon request.

Resources, contacts, or additional links


HPE 3PAR Software Products
https://h20195.www2.hpe.com/v2/GetDocument.aspx?docname=c04199812

HPE 3PAR Online Import Software – Overview


https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c04515233&docLocale=en_US
HPE 3PAR StoreServ Management Console
https://h20392.www2.hpe.com/portal/swdepot/displayProductInfo.do?productNumber=SSMC_CONSOLE

LEARN MORE AT
hpe.com/us/en/storage/3par.html


© Copyright 2017-2020 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change
without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty
statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty.
Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. Microsoft, Windows, and Windows Server are
either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. UNIX is a
registered trademark of The Open Group. VMware ESXi, VMware, and VMware vSphere are registered trademarks or trademarks
of VMware, Inc. and its subsidiaries in the United States and other jurisdictions. All other third-party marks are property of their
respective owners.

a00022568ENW, June 2020, Rev. 2
