
Using the AIX Logical Volume Manager to perform SAN

storage migrations
Migrating AIX systems to new SAN storage subsystems with the
Logical Volume Manager
Chris Gibson
AIX Specialist
Southern Cross Computer Systems

13 July 2010

This article provides examples of how to migrate AIX systems from old to new SAN storage
subsystems. It covers both dedicated and virtual I/O systems. We will examine IBM and non-IBM storage migrations, using the AIX Logical Volume Manager (LVM).

Introduction
Connecting with Chris
Chris is a popular author and blogger on AIX. Browse Chris's other articles and check out his
blog and My developerWorks profile.

If you work with AIX long enough, a time will come when you will need to migrate an existing AIX
system from an old SAN storage device to a shiny, new SAN storage subsystem. The lease may
be up on the old device, or it may simply need to be replaced by newer technology. This storage
device could be an IBM product or that of another vendor. Either way, you will be faced with the
decision of how best to migrate your AIX system to the newer device. Of course, you will also want
to minimize the impact to your running systems as a result of the migration.
In this article, I will share some examples of how I migrated AIX systems from old to new SAN
storage subsystems. I will cover both dedicated and virtual I/O (VIO) systems. To demonstrate the
approach, the examples will include both IBM and non-IBM storage.

Preparation and planning


Any migration of this type requires careful preparation and planning. I'm going to assume that you
or your SAN storage administrator have already cabled, configured and connected your new SAN
storage device to your existing SAN. Also, I'll assume that your AIX systems already have SAN
connectivity to your existing SAN fabric.

Before migrating, verify support with your storage vendor. Review the vendor's storage support
matrix. These matrices are usually listed on the storage vendor's website (or are available from
the vendor's representative). They will highlight which systems are supported with their storage
device. For example, IBM provides a support matrix for its Enterprise-class storage devices (such
as the DS8300) which can be used as a reference during your planning. The DS8300
Interoperability Matrix (see the Resources section) identifies important support and compatibility
considerations with respect to various host systems and adapters. They cover a wide spectrum of
support checks such as supported operating systems, patch levels, required Fibre Channel (FC)
adapter firmware, supported SAN switch types/firmware, and much more.
One of the most important pieces of the puzzle is the required multi-path I/O device drivers and
recommended FC adapter firmware (Microcode) levels. I have seen all sorts of problems when
these components are not checked prior to integrating a new storage device into an existing
SAN and AIX environment. Most vendors provide tools to help with the planning and verification,
for example, IBM provides the Fix Level Recommendation Tools website, as well as the IBM HBA
support site (see the Resources section). If a vendor does not provide online tools to assist in the
planning, then I recommend you ask them for help directly. After all, it is in their interests to help
you make their product work in your environment!
Another important (and surprisingly sometimes overlooked) stage in the planning and preparation
is the design phase. Take the time to design how your new SAN storage device is going to fit into
your existing SAN. Ask questions that will help the design process, such as:
Can/should this device connect to the existing SAN or is this a good time to provision a new
SAN Fabric?
How will the AIX systems connect to the new storage device?
How will the AIX operating system and data be migrated from the old to the new disk?
If it helps (and it usually does), draw pictures to help demonstrate answers to these questions.
It will also help others to visualize and understand what you are trying to achieve. Start with a
diagram that encapsulates your current state, then another that describes how the new device
will fit, followed by the proposed migration process or processes. Finally, at the end, state how the
environment will look once the old device is no longer needed and all the data has been migrated
from it.
Will you be migrating from a dedicated I/O environment to a virtual one? If you are, then I
recommend you review the latest IBM Redbook on migrating from physical to virtual storage (see
Resources). This publication guides you through the migration process and offers several methods
for migrating.
If you are migrating a dedicated I/O environment to another dedicated I/O configuration, then
consider the software requirements on each LPAR. For example, for IBM DS8300 storage
you need to ensure that you have the appropriate SDDPCM MPIO device drivers (e.g.
devices.sddpcm.53.rte) and the DS8300 Host Attachment kit (e.g. devices.fcp.disk.ibm.mpio.rte)
prior to the migration (or immediately after), along with the required FC adapter Microcode
(firmware).

If you already have a virtual I/O environment (running a virtual I/O server, or VIOS, for disk traffic),
then consider the requirements for the VIOS. If you are migrating from IBM to IBM storage, then it
is likely that you will simply need to update the MPIO code, FC adapter firmware and supporting
device drivers. However, if you are migrating from Vendor A to Vendor B, you may need a different
approach. The design process should shake out these considerations beforehand.
For example, if you are migrating (VIOS presented disk) from IBM DS to NetApp storage, then it is
likely that you will need to consider provisioning new VIO servers for the NetApp disk. Rather than
mix two vendors' MPIO code on the same VIOS, it may be simpler to manage if each storage type
has its own VIOS. I recommend this approach and my examples will cover how I chose to deploy
this type of configuration.
Unfortunately, I won't be discussing N-Port ID Virtualization (NPIV) in this article. NPIV adds
another dimension to storage virtualization on the Power platform. NPIV with a VIOS utilizes
virtual FC devices to present disk (and tape) natively to the client LPARs. See the Resources
section for more information.
I'm also assuming that you do not have an IBM SAN Volume Controller (SVC) in your environment.
If you do have an SVC and all your AIX systems are already behind it, then I suggest you take
advantage of this product's amazing capabilities. It can migrate storage transparently, from one
storage device to another, without the host system ever being aware of the move.
If you don't have an SVC, and you are considering implementing one, I say proceed without delay!
With an SVC in your environment, storage migrations (for the AIX administrator) become a thing
of the past. And that's just one of the many advantages to using this wonderful device. See the
Resources section for more information on the IBM SVC.
Without an SVC, AIX storage migrations will most likely involve the use of the AIX Logical Volume
Manager (LVM). This is what I will cover in the examples that follow.

IBM SAN Storage Migration with AIX and dedicated I/O


Several years ago I needed to migrate a large number of AIX hosts from an old IBM ESS (F20) to
a shiny, new IBM DS8300 storage subsystem. The AIX hosts were all using dedicated FC adapters
(HBAs). Each host had at least two FC adapters connected to our SAN. The following diagram
shows the high-level view of the SAN storage and AIX LPAR connectivity to the existing SAN
Fabric:


Figure 1. IBM ESS to DS8300 storage migration - Dedicated I/O - Current state

The MPIO code for every AIX system connected to this type of storage had to be updated to the
latest SDD device driver and FC adapter Microcode. For example, our design document stated the
following:
Ensure that the following software is installed on all AIX systems, at these levels:
AIX 5200-05
IY62165 Abstract: Target device rejects writes - AIX Host to McData switch.
Fileset devices.pci.df1000f7.com:5.2.0.50 is applied on the system.
IY62116 Abstract: After EEH error, attached hdisks failed.
Fileset devices.pci.df1000f7.com:5.2.0.50 is applied on the system.

ibm2105.rte               32.6.100.24  COMMITTED  IBM 2105 Disk Device
devices.sdd.52.rte        1.6.0.2      COMMITTED  IBM Subsystem Device Driver for AIX V52
devices.fcp.disk.ibm.rte  1.0.0.0      COMMITTED  IBM FCP Disk Device
Ensure that the latest Microcode for Fibre Channel adapters has been applied. Use the lscfg
command to determine the Microcode level of the FC adapter.
$ lscfg -vpl fcs0 | grep Z9
Updating all of our 100+ AIX systems, before migrating, was a sizeable task. However, once
completed we were able to move to the new storage device without an issue.
The approach to migrating data from the old disk to the new disk was to employ the AIX Logical
Volume Manager. There were two LVM utilities at the core of our data migration strategy (mirrorvg
and migratepv). Both commands can copy/move data between disks while the system is running.
Due to the very I/O intensive nature of these commands, they could impose a slight performance
impact to I/O on the system. Therefore, it was determined that we would not perform a data
migration when the system was running peak (disk I/O) load. We would schedule these tasks
during relatively quiet periods.
The arrow (from hdisk0 to hdisk11) in Figure 2 represents the LVM mirroring (and migratepv)
process for data migration.
The mirrorvg command would be used to migrate the operating system (rootvg) from the old ESS
to the DS8300. However, for the application/data volume groups, we chose to use the migratepv
command. This would give us some level of control over how much additional I/O activity we could
unleash on the running system. Some of the migrating systems were in production and we did not
want to flood the I/O subsystem and cause unnecessary performance issues.
Obviously before we could start, our storage team had to first attach and configure the new
DS8300 into our existing SAN. Once this was completed, we worked with the storage team to
determine what type and how many LUNs we would require for each of the AIX systems that were
migrating to this new device.
Our planning also captured each LPAR's hostname, the existing AIX hdisk names, the existing
SDD (vpath) configuration, the World Wide Port Name (WWPN) for each FC adapter, the current
FC adapter name (e.g. fcs0 and fcs1), the current LUN configuration and the proposed new LUN
configuration.


With the storage carved up, we were able to assign the new LUNs to the existing AIX hosts and
perform the migration. Figure 2 below shows that the DS8300 is connected to our SAN and a LUN
from it has been allocated to an existing AIX LPAR. The LUN appears to AIX as a hdisk device
(hdisk11). This disk has been allocated to an existing volume group (rootvg).

Figure 2. IBM ESS to DS8300 storage migration - Dedicated I/O - LVM Mirror Migration state

Prior to migrating, we performed a backup of the system (including a mksysb). The migration
could execute while the applications were running on the system. However, at some point after
the migration, a reboot would be required: the system boots from SAN disk, and since we were
moving the operating system to a new SAN disk, we had to verify that the system could boot
from the new disk. The downside was that if this failed, we would need to restore the system
from a mksysb backup.
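For illustration, a mksysb backup to an NFS-mounted file system might look like the following
sketch (the NFS server and file names are hypothetical; the -i flag regenerates the image.data
file before the backup is taken):
# mount nimserver:/export/mksysb /mnt
# mksysb -i /mnt/lpar1.mksysb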
In hindsight (5 years later!), I could have suggested we use alt_disk_install (on AIX 5.2) instead
of mirrorvg. The alt_disk_install command (now replaced by the alt_disk_copy command in AIX
5.3) can clone an existing rootvg onto another disk. Using this method would have provided a
more efficient back-out path. Fortunately, we never had to initiate a back-out during any storage
migration.
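For example, on AIX 5.3 or later, cloning rootvg to the new disk might have looked something
like this sketch (using the same hdisk11 as above):
# alt_disk_copy -d hdisk11
# bootlist -m normal -o
The alt_disk_copy command creates an altinst_rootvg copy on the target disk and updates the
boot list to point at it, leaving the original rootvg intact as the back-out path.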
Before making any changes to the existing system, we documented the system configuration. The
current disk, LVM and SDD (vpath) configuration was also captured. Historically, SDD configured
vpath devices to provide MPIO to hdisk devices. To keep my examples simple and more generic,
I will refer to hdisk devices rather than the vpath devices. You would typically work with vpath
devices in a pure SDD environment. Storage vendors will present MPIO disks differently. Please
keep this in mind when dealing with your storage devices and MPIO code.
# lsdev -Cc disk
# lsvg -l rootvg
# lsvg -l datavg
# lsvg -p rootvg
# lsvg -p datavg
# lsvpcfg

When the new LUN had been assigned to the AIX host, the cfgmgr command was executed to
pick up the new DS8300 disk. I confirmed that the new disk was discovered and the paths had
been configured correctly.
; A disk type of 2107 confirms it is DS8300 storage.
# lsdev -Cc disk | grep 2107
# lspv
# lsvpcfg

At this point, I could add the new DS8300 LUN (hdisk11) to rootvg with the extendvg command.
Then, I used the mirrorvg command to create an exact copy of the data on the new disk. Using AIX
LVM commands, this process was straightforward.
# extendvg -f rootvg hdisk11
# mirrorvg -S rootvg hdisk11

The mirroring process can take some time. As a precautionary measure, I made sure that there
was a new (secondary) dump logical volume (LV) on the new disk, that a new boot image was
created, and the boot list contained both the old and the new hdisks. If I needed to restart the
system at this point in the migration, I could be assured that I could boot from either disk.
# mklv -y hd71 -t dump rootvg 8 hdisk11
# lspv -l hdisk11
# sysdumpdev -s /dev/hd71
# bosboot -a -d /dev/hdisk11
# bosboot -a -d /dev/hdisk0
# bootlist -m normal -o
# bootlist -m normal hdisk0 hdisk11
# ipl_varyon -i

Once the mirrors had synced (i.e. lsvg -l rootvg did not show any stale partitions), I removed the
old hdisk from rootvg. First I had to unmirror rootvg (unmirrorvg) from the older disk. I also made
sure that any active dump device on this disk was removed. I temporarily changed the primary
dump device to /dev/sysdumpnull. Then I removed the disk from the volume group and the AIX
Object Data Manager (ODM), with the rmdev command.
# ps -ef | grep sync
# lsvg -l rootvg | grep -i stale
# unmirrorvg rootvg hdisk0
# sysdumpdev -Pp /dev/sysdumpnull
# rmlv hd7
; no output from the lspv command means no LV data remains on the disk.
# lspv -l hdisk0
# reducevg rootvg hdisk0
# chpv -c hdisk0
# rmdev -dl hdisk0

After making these changes, it was important that I re-create the boot image, check the boot list
contained only the new hdisk and that the primary dump device was set correctly.
# bosboot -a -d /dev/hdisk11
# sysdumpdev -Pp /dev/hd71
# bootlist -m normal -o
# bootlist -m normal hdisk11
# ipl_varyon -i

With the operating system now residing on the new disk, I focused on migrating the data volume
groups. The storage team assigned new data LUNs to the host and provided me with a list of the
LUN ids. First I needed to identify the DS8300 disks which would be used for the data migration.
Both the lspv and lsdev commands can display information relating to hdisks on an AIX system.
; hdisk NOT ASSIGNED TO A VG
# lspv
hdisk12         none            None
hdisk13         none            None
; A disk type of 2107 is displayed for DS8300 disks on AIX.
# lsdev -Cc disk | grep 2107

With the correct disks identified (hdisk12 and hdisk13), I could add these disks (with extendvg) to the
existing data volume group (datavg).
# extendvg datavg hdisk12 hdisk13

The data migration from the old disk to the new disk would be accomplished by the LVM
command, migratepv. I had to select the source and destination disks for the migratepv operation.
For example, the following commands would migrate data from hdisk2 to hdisk12 and hdisk3 to
hdisk13. At the end of the migration, both hdisk2 and hdisk3 would be empty and all of their data
would now reside on hdisk12 and hdisk13 respectively.
# migratepv hdisk2 hdisk12
# migratepv hdisk3 hdisk13

To confirm that all the data (logical volumes) for each of the old hdisks had been migrated, I ran
the lspv command to list the contents of each disk. The command did not return any output,
confirming that the disks were indeed empty. It was now safe for me to remove the old disks from
the volume group and remove them from the ODM.
# lspv -l hdisk2
# lspv -l hdisk3
# reducevg datavg hdisk2 hdisk3
# rmdev -dl hdisk2
# rmdev -dl hdisk3

With the migration complete, I could now ask the storage team to reclaim the old LUNs on the F20.
To verify that the boot disk and boot list had been configured correctly, I rebooted the
system. This also ensured that the newly migrated disks, volume groups, logical
volumes and filesystems continued to function as expected after the migration. The desired
end state for our dedicated I/O system had been achieved. The old ESS storage could now be
decommissioned and removed from the data centre.
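A minimal post-reboot sanity check, assuming the hdisk names used in this example, might look
like the following:
# shutdown -Fr
; after the reboot:
# bootinfo -b
hdisk11
# lsvg -o
# lsvg -l rootvg
The bootinfo -b command reports the disk the system last booted from, which should now be the
new DS8300 LUN.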

Figure 3. IBM ESS to DS8300 storage migration - Dedicated I/O - End state


Mixed vendor SAN storage migration with AIX and virtual I/O
In the next scenario, I'll discuss how I performed a similar storage migration. The difference with
this example is that I had to migrate from IBM to NetApp storage using the virtual I/O server
(VIOS).
Let's review the current state (the state of the environment prior to migrating to the new storage).
Figure 4 below shows that there is an AIX LPAR, connected to a pair of VIOS. Both are connected
to IBM DS8300 storage. The VIOS pair (hvio1 and hvio2) are serving the DS8300 LUNs as
virtual SCSI (VSCSI) disk to the client LPAR. The LPAR has a single volume group, rootvg for the
operating system, on a single disk. Other disks and volume groups also exist for application data
but are not depicted for simplicity.
The NetApp storage device has been attached to our existing SAN fabric. However, none of the
VIOS or AIX systems are accessing it at this time.

Figure 4. DS8300 to NetApp storage migration - DS8300 VIOS - Current state

Our procedure document outlined how we would migrate an existing AIX LPAR, which currently
uses hvio1 and hvio2 for all disk (DS8300) traffic. The LPAR and the VIOS reside on a Power6 570
(570-1). The new NetApp VIOS (hvio11 and 12) also reside on the same managed system and
will be used for all virtual I/O traffic to/from the NetApp storage device. These details are captured
during our planning process:
Client LPAR: lpar1
Managed system: 570-1
DS8300 VIOS pair (source): hvio1 and hvio2
NetApp VIOS pair (target): hvio11 and hvio12
The current VIOS and LPAR FC adapter, DS8300 disk, virtual adapter and virtual disk
configuration is also captured and used in the planning for the new disk assignments. For
example, the following extract from our planning spreadsheet shows each LPAR, the current
DS8300 disks assigned, the current VIOS, the existing vhost/vscsi relationship, the new NetApp
LUNs required, the new VIOS and the new vhost to vscsi mapping for each LPAR (refer to
Figures 5, 6 and 7).

Figure 5.


Figure 6.

Figure 7.

Before we could migrate to the NetApp storage array, our storage administrator first configured the
LUNs that we required and prepared to present them to the NetApp VIOS, hvio11 and hvio12. We
provided them with the following information (at a minimum) for the allocation to take place:

Client LPAR name: lpar1 (LPAR on 570-1). SAN boot.


Source DS8300 VIOS pair names: hvio1 and hvio2.
Target NetApp VIOS pair names: hvio11 and hvio12.
WWPNs for all FC adapters on all VIOS involved in the migration.
Quantity, size and purpose of the LUNs required, e.g. 1 x 50GB Boot LUN - rootvg, 1 x 100GB
Application data - datavg.

We capture the current disk and volume group configuration with several LVM commands.
# lsdev -Cc disk
# lsvg
# lsvg -o | lsvg -il
# lspv
# lsvg | lsvg -pi

Of course, we back up our LPAR before we make any changes to it, just in case. This includes
performing a mksysb and savevg backup, followed by a file level backup with our corporate
backup tool.
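As a rough sketch (the backup file names and paths are hypothetical), the LVM-level backups
might look like:
# mksysb -i /backup/lpar1.mksysb
# savevg -i -f /backup/lpar1_datavg.savevg datavg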
Prior to starting the migration, we deployed two new VIOS (hvio11 and 12), specifically for use
with NetApp. During our design phase, we determined that it would be a good idea to deploy the
latest version of VIO server, version 2.1, for the build of the NetApp VIOS. The NetApp MPIO Host
Attachment software is installed on both NetApp VIOS. Both VIOS will SAN boot from NetApp
storage. You can refer to Figure 8.

Figure 8. DS8300 to NetApp storage migration - Introduction of NetApp storage and VIOS


The storage team allocated a NetApp LUN (LUNy) to hvio11 and hvio12. This disk would be used
by the client LPAR for rootvg (boot disk).
We also dynamically added (with DLPAR) two new VSCSI adapters to the client LPAR: vscsi2
and vscsi3, where vscsi2 is connected to hvio11 and vscsi3 is connected to hvio12. Again, using
DLPAR, we assigned a new virtual SCSI server adapter (vhost) to hvio11 and hvio12. We also
ensured that the VIOS and LPAR partition profiles (on the HMC) were updated with these newly
created virtual adapters.
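On the HMC command line, a DLPAR add of one client/server adapter pair might look something
like the following sketch (the slot numbers are hypothetical):
hscroot@hmc> chhwres -r virtualio --rsubtype scsi -m 570-1 -o a -p hvio11 -s 22 \
-a "adapter_type=server,remote_lpar_name=lpar1,remote_slot_num=3"
hscroot@hmc> chhwres -r virtualio --rsubtype scsi -m 570-1 -o a -p lpar1 -s 3 \
-a "adapter_type=client,remote_lpar_name=hvio11,remote_slot_num=22"
The same changes must then be reflected in the partition profiles so that they survive a profile
re-activation.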
We then mapped the LUN (as Virtual Target Device or VTD) to the client LPAR on each VIOS and
presented it to the LPAR. The disk (hdisk11) appears as a virtual SCSI disk on the client. This new
hdisk has been included in the existing root volume group (rootvg, also shown in Figure 8).
Once the LUN has been assigned to the NetApp VIOS, we perform our standard VIOS disk
mapping to the client LPAR, that is, with mkvdev and lsmap to create the mapping and verify that
the disk has been assigned to the correct LPAR.
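For example, the mapping on the first NetApp VIOS might look like this (the VTD name and the
VIOS-side hdisk number are hypothetical):
$ mkvdev -vdev hdisk4 -vadapter vhost2 -dev lpar1_rvg
$ lsmap -vadapter vhost2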
On the client LPAR, we ran cfgmgr to discover the new hdisk and verified that the new disk and
paths were available. There should be two paths to the disk, one via vscsi2 (hvio11) and one via
vscsi3 (hvio12).
# lsdev -Cc disk
# lspv
# lspath -l hdisk11
Enabled hdisk11 vscsi2
Enabled hdisk11 vscsi3

To migrate the data from the old disk to the new disk, the method is similar to the previous example,
except this time we will use mirrorvg (instead of migratepv) for all data migrations (including the
data volume groups). These systems are either development or test systems. While we do not
want to impact performance, some of these systems are relatively idle or have very low user
numbers, so we can add additional I/O activity without too much of a performance concern. Using
mirrorvg also simplifies the migration as we do not have to run a migratepv command against each
of the source/target disks.
First, we add the new disk (hdisk11) to rootvg. Then we mirror the volume group with the new
hdisk. This mirroring process takes place in the background. You can refer to Figure 9.
# extendvg rootvg hdisk11
# mirrorvg -S rootvg hdisk11


Figure 9. Migration state - LVM mirror

As a precautionary measure, we verify that rootvg is now mirrored, include the new disk in the
boot list and create a new boot image for the mirrored volume group. Just as before, if we need to
reboot the LPAR at this point, we can rest assured that we can boot from either disk.
# bosboot -a -d /dev/hdisk11
# bosboot -a -d /dev/hdisk0
# bootlist -m normal hdisk0 hdisk11
# bootlist -m normal -o
# ipl_varyon -i

With rootvg mirrored successfully, we can now unmirror again, remove the old disk (hdisk0) from
the volume group and remove it from the ODM. Again, we also need to check the boot list and
recreate the boot image for this now non-mirrored volume group.

# ps -ef | grep sync
# lsvg -l rootvg
# unmirrorvg rootvg hdisk0
# chpv -c hdisk0
# lspv -l hdisk0
# lsvg -l rootvg
# reducevg rootvg hdisk0
# lsvg -p rootvg
# rmdev -dl hdisk0
# bosboot -a -d /dev/hdisk11
# bootlist -m normal hdisk11
# bootlist -m normal -o

With our AIX OS now residing on the new hdisk (the NetApp LUN), we can migrate the data
volume groups. First, we identify the NetApp LUNs which will be used for the data volume group
migration. The data LUNs had already been assigned to the VIOS and mapped and configured on
the client LPAR.
Again, we use extendvg to add the NetApp disks to the data volume group (datavg).
# extendvg datavg hdisk12
# lsvg -p datavg
# lspv hdisk12

We mirror the volume group from the DS8300 disk (hdisk2) to the new NetApp disk (hdisk12). The
mirroring process is set to run in the background with the -S flag. To ensure the volume group and
logical volumes are synced, before proceeding, we check that there are no stale physical partitions
(PPs) and that there are no LVM sync processes still running in the background.
# mirrorvg -S datavg hdisk12
# ps -ef | grep lresync
# lsvg -l datavg
datavg:
LV NAME   TYPE     LPs  PPs  PVs  LV STATE    MOUNT POINT
datalv    jfs2     96   192  2    open/syncd  /data
loglv01   jfs2log  1    2    2    open/syncd  N/A

With the mirroring process complete, we unmirror the volume group from the DS8300 storage
(hdisk2).
# unmirrorvg datavg hdisk2

To verify that there is no longer any data on the DS8300 hdisks, the lspv command is run and
should not return any output.
# lspv -l hdisk2

Now we can remove the DS8300 hdisks from the data volume group using reducevg and remove
the disk from the ODM.
# reducevg datavg hdisk2
# rmdev -dl hdisk2


As this is a virtual I/O environment, there are some additional steps we must execute before we
can hand back the DS8300 LUNs to the storage team.
First, we must remove the device mappings for the DS8300 disk from the DS8300 VIOS (hvio1
and 2). We should also take note of the DS8300 LUN id (from pcmpath query device) so we can
provide the storage admin with a list of LUN ids that can be reclaimed. In the following example,
we check the disk mappings for vhost2 and verify the backing device hdisk number. We then enter
the OEM VIOS environment to run the pcmpath utility and obtain the LUN ids associated with
each hdisk. Next, we run the rmvdev command to remove any virtual target device (VTD) mapping
associated with the hdisk (e.g. vtscsi2). And finally we remove the hdisk from the ODM on the
VIOS. The old DS8300 LUNs can now be reclaimed by the storage team.
$ lsmap -vadapter vhost2 | grep hdisk
$ oem_setup_env
# pcmpath query device 20 | grep Serial
# exit
$ rmvdev -vtd vtscsi2
$ rmdev -dev hdisk20

The virtual SCSI server adapters (i.e. vhost2) are removed from each VIOS ($ rmdev -dev
vhost2). And using DLPAR, we remove these virtual adapters from the LPAR definition. The VIOS
partition profile is also updated to reflect the removal of these virtual devices.
The original virtual SCSI adapters on the client LPAR, vscsi0 and vscsi1, are now also removed
using rmdev and DLPAR (the LPAR partition profile is also updated).
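A sketch of the adapter cleanup, assuming the names used above (the HMC slot number is
hypothetical):
# rmdev -dl vscsi0
# rmdev -dl vscsi1
; then remove the client slots via DLPAR on the HMC, for example:
hscroot@hmc> chhwres -r virtualio --rsubtype scsi -m 570-1 -o r -p lpar1 -s 3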
Again, at this point, you would reboot the LPAR once you are satisfied that all data migrations have
completed successfully. Verify correct system operation after the reboot (volume groups online,
logical volumes open/syncd, filesystems mounted, etc.).
# lsvg
# lsvg -o
# df
# mount
# lsvg | lsvg -il | grep close

With the changes completed successfully we now document the system configuration after the
migration. At this point, we have reached our desired end state. The AIX system is now using
NetApp storage via our new VIOS. You can refer to Figure 10.
The DS8300 (in this case) will remain in our environment. Likewise, the VIOS used to serve
DS8300 storage to client LPARs will also remain. It has been determined that the DS8300 storage
will be used for AIX systems requiring a high level of performance and availability, while all other
non-critical AIX test systems will utilize the NetApp device.


Figure 10. VIOS end state

Summary
AIX storage migrations can be time consuming and tricky. Without the latest storage virtualization
technology, such as IBM's SVC, the job involves several phases, stages and tasks.
Fortunately, the AIX LVM is a very mature, stable and robust tool. It can greatly empower us when
faced with these challenges.
Using LVM, we can reduce the outage required for migrating the data. Having the ability to move
data between source and target disks, while the system/applications are still running, is a huge
benefit in an operational computing environment.
As always, I strongly recommend that you test your procedures in a non-production environment
before attempting a storage migration on a production system.
If you've found this article interesting, then there are other LVM commands which are also worth
researching, such as replacepv, redefinevg and recreatevg. You may also find the LVM hot
spare policy of interest. Review the Resources section below to find out more.

Resources

IBM Fix Level Recommendation Tools


IBM TotalStorage support: Search for host bus adapters, firmware
IBM System Storage DS8300 series Interoperability matrix
SDDPCM support site
PowerVM migration from physical to virtual storage
AIX Logical Volume Manager from A to Z: Introduction and concepts
AIX Logical Volume Manager from A to Z: Troubleshooting and commands
Virtual fibre channel (NPIV)
migratepv command
mirrorvg command
LVM Hot-spare disk policies
Replacing a failed physical volume in a mirrored volume group
alt_disk_copy command
replacepv command
Follow developerWorks on Twitter.
Get involved in the My developerWorks community.
Participate in the AIX and UNIX forums:
AIX Forum
AIX Forum for developers
Cluster Systems Management
IBM Support Assistant Forum
Performance Tools Forum
Virtualization Forum
More AIX and UNIX Forums


About the author


Chris Gibson
Chris Gibson is an AIX systems specialist located in Melbourne, Australia. He is an
IBM CATE, System p platform and AIX 5L, and a co-author of the IBM Redbooks
publication, "NIM from A to Z in AIX 5L."

Copyright IBM Corporation 2010


(www.ibm.com/legal/copytrade.shtml)
Trademarks
(www.ibm.com/developerworks/ibm/trademarks/)
