VSCSI
Virtual SCSI is based on a client/server relationship. The Virtual I/O Server owns the physical resources
and acts as server or, in SCSI terms, target device. The client logical partitions access the virtual SCSI
backing storage devices provided by the Virtual I/O Server as clients.
Virtual SCSI server adapters can be created only in Virtual I/O Server. For HMC-managed systems, virtual
SCSI adapters are created and assigned to logical partitions using partition profiles.
The vhost (virtual SCSI server) adapter behaves like a normal SCSI adapter: you can have multiple disks assigned to it.
Usually one virtual SCSI server adapter is mapped to one virtual SCSI client adapter, presenting backing devices
through to an individual LPAR. It is also possible to map a virtual SCSI server adapter to multiple LPARs, which is
useful for creating virtual optical and/or tape devices, allowing removable media to be shared between multiple
client partitions.
on VIO server:
root@vios1: / # lsdev -Cc adapter
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vhost2 Available Virtual SCSI Server Adapter
The client partition accesses its assigned disks through a virtual SCSI client adapter. Through this adapter, the
client sees the disks, logical volumes or file-backed storage provided by the server as virtual SCSI disk devices.
on VIO client:
root@aix21: / # lsdev -Cc adapter
vscsi0 Available Virtual SCSI Client Adapter
In SCSI terms:
virtual SCSI server adapter: target
virtual SCSI client adapter: initiator
(Analogous to server client model, where client is the initiator.)
Physical disks presented to the Virtual I/O Server can be exported and assigned to a client partition in a
number of different ways:
- The entire disk is presented to the client partition.
- The disk is divided into several logical volumes, which can be presented to a single client or multiple
different clients.
- With the introduction of Virtual I/O Server 1.5, files can be created on these disks and file-backed
storage can be created.
- With the introduction of Virtual I/O Server 2.2 Fixpack 24 Service Pack 1, logical units from a shared
storage pool can be created.
The IVM and HMC environments present two different interfaces for storage management under different names. The
Storage Pool interface under IVM is essentially the same as LVM under HMC, and the terms are sometimes used
interchangeably: volume group can refer to both volume groups and storage pools, and logical volume can refer to
both logical volumes and storage pool backing devices.
Once these virtual SCSI server/client adapter connections have been set up, one or more backing devices
(whole disks, logical volumes or files) can be presented using the same virtual SCSI adapter.
When using Live Partition Mobility, storage needs to be assigned to the Virtual I/O Servers on the target
server as well.
----------------------------
VSCSI adapters have a fixed queue depth that varies depending on how many VSCSI LUNs are configured for the
adapter. There are 512 command elements of which 2 are used by the adapter, 3 are reserved for each VSCSI
LUN for error recovery and the rest are used for IO requests. Thus, with the default queue_depth of 3 for
VSCSI LUNs, that allows for up to 85 LUNs to use an adapter: (512 - 2) / (3 + 3) = 85.
So if we need higher queue depths for the devices, the number of LUNs per adapter is reduced. For example, a
queue_depth of 25 allows (512 - 2) / (25 + 3) = 18 LUNs. We can configure multiple VSCSI adapters to handle many
LUNs with high queue depths, though each adapter requires additional memory. You may also have more than one
VSCSI adapter on a VIOC connected to the same VIOS if you need more bandwidth.
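The adapter-capacity arithmetic above can be sketched as a small shell helper (the constants are the
command-element budget described above: 512 elements total, 2 used by the adapter, 3 per LUN for error recovery
plus queue_depth per LUN for IO):

```shell
# Maximum number of VSCSI LUNs one adapter can serve for a given queue_depth:
# (512 total - 2 for the adapter) / (queue_depth + 3 recovery elements per LUN)
max_luns() {
    qd=$1
    echo $(( (512 - 2) / (qd + 3) ))
}

max_luns 3     # default queue_depth -> 85
max_luns 25    # higher queue_depth  -> 18
```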
Also, one should set the queue_depth attribute on the VIOC's hdisk to match that of the mapped hdisk's
queue_depth on the VIOS.
Note that to change the queue_depth on an hdisk at the VIOS requires that we unmap the disk from the VIOC
and remap it back, or a simpler approach is to change the values in the ODM (e.g. # chdev -l hdisk30 -a
queue_depth=20 -P) then reboot the VIOS.
----------------------------
Virtual I/O Server (VIOS) version 1.5 introduced file-backed virtual SCSI devices. These virtual SCSI
devices serve as disks or optical media devices for clients.
In the case of file-backed virtual disks, the client is presented with a file from the VIOS that it accesses
as a SCSI disk. With file-backed virtual optical devices, you can store, install and back up media on the
VIOS and make it available to clients.
----------------------------
----------------------------
2. padmin@bb_vios1: /home/padmin # lspv | grep 5c2a <--check which disk has this
pvid on vio server
hdiskpower21 00080e82a84a5c2a None
3. padmin@bb_vios1: /home/padmin # lsmap -all -field SVSA "Backing Device" VTD "Client Partition ID" Status
-fmt ":" | grep hdiskpower21
vhost13:0x0000000c:hdiskpower21:pid12_vtd0:Available <--check vhost adapter of the
given disk
----------------------------
1. HMC -> VIO Server -> DLPAR -> Virtual Adapter (create vscsi adapter, name the client which can use it,
then create the same in profile)
(the profile can be updated: configuration -> save current config.)
(in the case of an optical device, check the "Any client partition can connect" option)
2. HMC -> VIO Client -> DLPAR -> Virtual Adapter (create the same adapter as above, the ids should be
mapped, do it in the profile as well)
3. cfgdev (VIO server), cfgmgr (client) <--it will bring up vhostX on vio server,
vscsiX on client
4. create needed disk assignments:
-using physical disks:
mkvdev -vdev hdisk34 -vadapter vhost0 -dev vclient_disk <--for easier identification useful to give
a name with the -dev flag
rmvdev -vdev <backing dev.> <--back. dev can be checked with lsmap -all
(here vclient_disk)
mkvdev -vdev testlv_client -vadapter vhost0 -dev <any_name> <--for easier identification useful
to give a name with the -dev flag
(here backing device is an lv
(testlv_client)
rmvdev -vdev <back. dev.> <--removes an assignment to the client
mkbdsp -bd <lv> -sp <vg> 10G <--creates an lv with given size in the sp
lssp -bd -sp <vg> <--lists lvs in the given vg (sp)
rmbdsp -bd <lv> -sp <vg> <--removes an lv from the given vg (sp)
mkbdsp -sp fb_testvg -bd fb_bb -vadapter vhost2 10G <--it will create a file backed device and
assigns it to the given vhost
mkbdsp -sp fb_testvg -bd fb_bb1 -vadapter vhost2 -tn balazs 8G <--it will also specify a virt. target
device name (-tn)
lssp -bd -sp fb_testvg <--lists the lvs (backing devices) of the
given sp
rmbdsp -sp fb_testvg -bd fb_bb1 <--removes the given lv (bd) from the sp
rmsp <file sp name> <--removes the given file storage pool
removing it:
rmdev -dev vhost1 -recursive
----------------------------
On client partitions, MPIO for virtual SCSI devices currently supports only failover mode (which means only
one path is active at a time):
root@bb_lpar: / # lsattr -El hdisk0
PCM PCM/friend/vscsi Path Control Module False
algorithm fail_over Algorithm True
----------------------------
on VIO SERVER:
# lsdev -dev <hdisk_name> -attr <--checking disk attributes
# lsdev -dev <fscsi_name> -attr <--checking FC attributes
# chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm <--reboot is needed for these
fc_err_recov=fast_fail <--in case of a link event IO will fail
immediately
dyntrk=yes <--allows the VIO server to tolerate
cabling changes in the SAN
# chdev -dev hdisk3 -attr reserve_policy=no_reserve <--each disk must be set to no_reserve
reserve_policy=no_reserve <--with this configured, dual VIO
servers can present the same disk to a client
on VIO client:
# chdev -l vscsi0 -a vscsi_path_to=30 -a vscsi_err_recov=fast_fail -P <--path timeout checks the health of
the VIOS and detects if the VIO Server adapter isn't responding
vscsi_path_to=30 <--by default it is disabled (0), each client adapter must
be configured, minimum is 30
vscsi_err_recov=fast_fail <--failover will happen immediately rather than delayed
# chdev -l hdisk0 -a queue_depth=20 -P <--it must match the queue depth value used for the
physical disk on the VIO Server
queue_depth <--it determines how many requests will be queued on the
disk
Never set the hcheck_interval lower than the read/write timeout value of the underlying physical disk on
the Virtual I/O Server. Otherwise, an error detected by the Fibre Channel adapter causes new healthcheck
requests to be sent before the running requests time out.
The minimum recommended value for the hcheck_interval attribute is 60, for both Virtual I/O and non-Virtual
I/O configurations.
In the event of adapter or path issues, setting the hcheck_interval too low can cause severe performance
degradation or possibly cause I/O hangs.
It is best not to configure more than 4 to 8 paths per LUN (to avoid too much health-check I/O), and to set the
hcheck_interval to 60 both in the client partition and on the Virtual I/O Server.
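The sizing rule above can be sketched as a small validator (a hypothetical helper, not an AIX command): a
proposed hcheck_interval must be at least the recommended minimum of 60 and must not be lower than the
read/write timeout of the underlying physical disk on the VIOS.

```shell
# Hypothetical check of a proposed hcheck_interval (seconds) against
# the backing disk's read/write timeout on the VIOS.
check_hcheck() {
    hcheck=$1
    rw_timeout=$2
    if [ "$hcheck" -lt 60 ] || [ "$hcheck" -lt "$rw_timeout" ]; then
        echo "unsafe"    # risks healthcheck storms / I/O hangs
    else
        echo "ok"
    fi
}

check_hcheck 60 30     # -> ok
check_hcheck 30 30     # -> unsafe (below recommended minimum of 60)
check_hcheck 60 120    # -> unsafe (below the disk's rw timeout)
```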
----------------------------
By default all paths are defined with priority 1, meaning that traffic will go through the first path.
If you want to control which path is used, the path priority has to be updated.
Priority of the VSCSI0 path remains at 1, so it is the primary path.
Priority of the VSCSI1 path will be changed to 2, so it will be lower priority.
PREPARATION ON CLIENT:
# lsattr -El hdisk1 | grep hcheck
hcheck_cmd test_unit_rdy <--hcheck is configured, so path should come back
automatically from failed state
hcheck_interval 60
hcheck_mode nonactive
# chpath -l hdisk1 -p vscsi1 -a priority=2 <--I changed priority=2 on vscsi1 (by default both
paths are priority=1)
TEST 1:
3. ON CLIENT: # lspath
Enabled hdisk1 vscsi0
Failed hdisk1 vscsi1 <--it will show a failed path on vscsi1 (this
path is coming from VIOS2)
5. ON VIOS2: # mkvdev -vdev hdiskpower1 -vadapter vhost0 -dev testdisk <--configure back disk mapping
from VIOS2
7. ON CLIENT: # errpt <--error report will show path has been recovered
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
F31FFAC3 0324121213 I H hdisk1 PATH HAS RECOVERED
TEST 2:
I did the same on VIOS1 (rmdev ... of the disk), whose path has priority 1 (IO goes there by default):
ON CLIENT: # lspath
Failed hdisk1 vscsi0
Enabled hdisk1 vscsi1
----------------------------
# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi2 <--we want to change vscsi2 to vscsi1
On VIO client:
1. # rmpath -p vscsi2 -d <--remove paths from vscsi2 adapter
2. # rmdev -dl vscsi2 <--remove adapter
On VIO server:
3. # lsmap -all <--check assignment and vhost device
4. # rmdev -dev vhost0 -recursive <--remove assignment and vhost
device
On HMC:
5. Remove deleted adapter from client (from profile too)
6. Remove deleted adapter from VIOS (from profile too)
7. Create new adapter on client (in profile too) <--cfgmgr on client
8. Create new adapter on VIOS (in profile too) <--cfgdev on VIO server
On VIO server:
9. # mkvdev -vdev hdiskpower0 -vadapter vhost0 -dev rootvg_hdisk0 <--create new assignment
# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1 <--vscsi1 is there (cfgmgr may be
needed)
----------------------------
4. cfgmgr (on client lpar) <--brings up the cd0 device on the client (before moving the
cd0 device to another client, rmdev it on the current client first)
5. rmdev -dev vtopt0 -recursive <--to move cd0 to another client, remove assignment from vhost0
6. mkvdev -vdev cd0 -vadapter vhost1 <--create new assignment to vhost1
7. cfgmgr (on other client lpar) <--bring up cd0 device on other client
(Because the VIO server adapter is configured with the "Any client partition can connect" option, these
adapter pairs are not suited for client disks.)
----------------------------