VIOS Tech 2
This can become an issue, for example if the system is located somewhere else (in a different building, on a different street, etc.) or if you have to install more than one POWER5/5+ server. The solution is to install the VIOS from an installation server, preferably a Linux-based one.
Prerequisites
The following section assumes that you have a Linux-based installation server up and running, meaning that tftp, NFS and DHCP work for installation purposes.
Thanks!
My special thanks go to my colleague Bernhard Zeller, IBM Germany, for figuring out how this works!
bc1-mms:/ # cd /export/vios
bc1-mms:/export/vios # tar -xzf ./ispot.tar.Z
bc1-mms:/export/vios # ls -la
total 597824
drwxr-xr-x 3 root root 200 Aug 14 12:12 .
drwxrwxr-x 22 nobody nobody 544 Aug 14 12:04 ..
drwxr-xr-x 3 root root 72 Jan 10 2006 SPOT
-rw-r--r-- 1 root root 11670966 Aug 14 12:04 booti.chrp.mp.ent.Z
-rw-r--r-- 1 root root 951 Aug 14 12:04 bosinst.data
-rw-r--r-- 1 root root 34033393 Aug 14 12:04 ispot.tar.Z
-rw-r--r-- 1 root root 565862400 Aug 14 12:04 mksysb
Now it is time to copy the boot kernel to your /tftpboot directory. There are two options: keep the name of the boot kernel and point each client to that one kernel, or use a unique name for each system you want to install.
The first option works: the system loads and starts the kernel, but then it gets a little
tricky. As a next step the system or partition will try to load a file called <bootkernelname>.info. This
.info file contains all required information: which server is the NFS server, which NFS directories to
mount, which files to use for the installation, and the client identity (hostname etc.). As you
might guess at this point, this little .info file must be unique for each client/system/LPAR you want to
install! For example, if you want to install one VIOS on each of the systems bc1-js21-1-vio and
bc1-js21-2-vio, you must have two .info files: one called bc1-js21-1-vio.info and one called
bc1-js21-2-vio.info. If you have decided to keep the name of the boot kernel as it is, you must change the
contents of the booti.chrp.mp.ent.info file each time you want to install a different system. In my
opinion, this is not very comfortable.
You can, of course, create a copy of the kernel and rename it, but that wastes disk space.
A much easier way is to create a symbolic link for each system pointing to that one kernel.
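The copy-and-symlink approach can be sketched as follows. This demo uses a scratch directory and a stand-in file so that it runs anywhere; in a real setup SRC would be /export/vios/booti.chrp.mp.ent.Z and TFTPBOOT would be /tftpboot (the link names match the client names used in the dhcpd.conf example below).

```shell
# Scratch setup; stand-ins for the real paths described in the lead-in.
TFTPBOOT=$(mktemp -d)
SRC="$TFTPBOOT/booti.chrp.mp.ent.Z"
echo "kernel image" > "$SRC"

# Copy the boot kernel once under its original name...
cp "$SRC" "$TFTPBOOT/booti.chrp.mp.ent"

# ...then create one symbolic link per client instead of extra copies.
for client in bc1-js21-1-vio bc1-js21-2-vio; do
    ln -sf booti.chrp.mp.ent "$TFTPBOOT/$client"
done

ls -l "$TFTPBOOT"
```

Each link costs almost no disk space, and every client still boots the same kernel file.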
Preparing DHCP
Now you must tell DHCP how to react to a boot request from a specific system, i.e. which boot kernel
to provide. Open the file /etc/dhcpd.conf with an editor of your choice and create a client stanza for
each system, similar to the one below. Once again, be careful that the name given in the filename
statement reflects the naming convention in your /tftpboot directory.
...
#IBM-VIO
host bc1-js21-1-vio.stuttgart.de.ibm.com {
hardware ethernet 00:11:25:c9:1a:ed;
filename "/bc1-js21-1-vio";
fixed-address 9.154.2.112;
next-server 9.154.2.86;
}
...
Remember...
Remember to restart the DHCP server in order to activate the changes.
Preparing NFS
Because the installation of the VIOS works only over NFS, you must add a line to
/etc/exports.
#Export File
...
/export/vios *(ro,insecure,no_root_squash,sync)
...
Note..
Please note the option insecure: it is mandatory, otherwise the VIOS partition will not be able to
mount the NFS share! In addition, use no_root_squash, because the system will try to mount the NFS
shares as the root user!
Ah, don't forget to tell the NFS server to reload the configuration file!
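On most Linux NFS servers this reload can be done with exportfs; a short sketch (standard nfs-utils commands, run as root):

```shell
exportfs -ra    # re-export everything listed in /etc/exports
exportfs -v     # verify that /export/vios shows up with ro,insecure,no_root_squash
```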
Preparing syslogd
Wouldn't it be nice to see what happens during the installation of the VIOS? Personally, I like the
idea, and if you, too, want to know what is going on, it is useful to modify the configuration of the
syslog daemon.
Note...
Please note that SYSLOGD_PARAMS could also be called SYSLOGD_OPTIONS depending on the
Linux distribution you use.
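The actual snippet for the start parameters did not survive in this copy; as a hedged sketch for a SUSE-style system, the point is to let syslogd accept messages from the remote client being installed (the -r flag is the standard sysklogd option for this; the file and variable names vary by distribution, as noted above):

```
# /etc/sysconfig/syslog (illustrative)
SYSLOGD_PARAMS="-r"    # accept log messages from remote hosts (the installing client)
```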
Next modify the configuration of the syslogd by editing /etc/syslog.conf.
# /etc/syslog.conf
...
#local2,local3.* -/var/log/localmessages
local3.* -/var/log/localmessages
local4,local5.* -/var/log/localmessages
local6,local7.* -/var/log/localmessages
local2.* -/var/log/nimol.log
...
And finally restart syslogd to activate the changes.
To create this file simply use vi /tftpboot/bc1-js21-1-vio.info. The content should read as follows.
Note..
Please note that each parameter must be written on a single line. Do not use backslashes to make the
file more human-readable; the system you want to install could get confused and abort the installation.
After you have made all necessary changes, don't forget to save the file!
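The original file content is not reproduced in this copy of the article. Judging from the NFS mounts visible in the installation log further below (SPOT/usr, bosinst.data mounted on /NIM_BOSINST_DATA, mksysb mounted on /NIM_BOS_IMAGE) and the next-server address in the dhcpd.conf example, the file sets NIM-style variables. The following is only a hedged sketch with assumed variable names, not a verified template:

```
# /tftpboot/bc1-js21-1-vio.info (illustrative sketch only)
export NIM_NAME=bc1-js21-1-vio
export NIM_HOSTNAME=bc1-js21-1-vio.stuttgart.de.ibm.com
export SPOT=9.154.2.86:/export/vios/SPOT/usr
export NIM_BOSINST_DATA=9.154.2.86:/export/vios/bosinst.data
export NIM_BOS_IMAGE=9.154.2.86:/export/vios/mksysb
```

Remember: each variable on one line, no backslash continuations.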
Note...
If you have not been able to include the correct MAC address of the LAN adapter yet, this is the point
where you can find it. Enter the Setup Remote IPL menu and you will find the MAC addresses (called
Hardware Address). Use these addresses and include the desired one in your dhcpd.conf file. Don't
forget to restart the DHCP server!
...
Aug 15 14:44:42 bc1-mms dhcpd: BOOTREQUEST from 00:11:25:c9:17:d9 via eth0
Aug 15 14:44:42 bc1-mms dhcpd: BOOTREPLY for 9.154.2.116 to op710-1-vio.stuttgart.de.ibm.com (00:11:25:c9:17:d9) via eth0
Aug 15 14:45:07 bc1-js21-2-vio nimol:,info=LED 610: mount -r bc1-mms:/export/vios/SPOT/usr /SPOT/usr,
Aug 15 14:45:07 bc1-mms rpc.mountd: authenticated mount request from op710-1-vio.stuttgart.de.ibm.com:659 for /export/vios/SPOT/usr (/export/vios)
Aug 15 14:45:07 bc1-js21-2-vio nimol:,info=,
Aug 15 14:45:08 bc1-js21-2-vio nimol:,-S,booting,op710-1-vio,
Aug 15 14:45:08 bc1-js21-2-vio nimol:,info=LED 610: mount bc1-mms:/export/vios/bosinst.data /NIM_BOSINST_DATA,
Aug 15 14:45:08 bc1-mms rpc.mountd: authenticated mount request from op710-1-vio.stuttgart.de.ibm.com:703 for /export/vios/bosinst.data (/export/vios)
Aug 15 14:45:08 op710-1-vio nimol:,info=LED 610: mount bc1-mms:/export/vios/mksysb /NIM_BOS_IMAGE,
Aug 15 14:45:08 bc1-mms rpc.mountd: authenticated mount request from op710-1-vio.stuttgart.de.ibm.com:713 for /export/vios/mksysb (/export/vios)
Aug 15 14:45:08 bc1-js21-2-vio nimol:,info=,
Aug 15 14:45:15 bc1-js21-2-vio nimol:,-R,success,op710-1-vio,
Aug 15 14:45:15 bc1-js21-2-vio nimol:,info=extract_data_files,
Aug 15 14:45:15 bc1-mms rpc.mountd: authenticated unmount request from op710-1-vio.stuttgart.de.ibm.com:659 for /export/vios/bosinst.data (/export/vios)
Aug 15 14:45:15 bc1-js21-2-vio nimol:,info=query_disks,
Aug 15 14:45:16 bc1-js21-2-vio nimol:,info=extract_diskette_data,
Aug 15 14:45:17 bc1-js21-2-vio nimol:,info=setting_console,
Aug 15 14:45:17 bc1-js21-2-vio nimol:,info=initialization,
Aug 15 14:45:18 bc1-js21-2-vio nimol:,info=verifying_data_files,
Aug 15 14:45:24 bc1-js21-2-vio nimol:,info=,
...
ENJOY!
If you don't get a DHCP/BOOTP response, make sure that the DHCP server is configured correctly, i.e.
that you are using the correct LAN adapter (check the MAC address in /etc/dhcpd.conf). Note that
most network setups do not allow a BOOTP request to travel across subnet boundaries. Either check
the setup of your firewalls/routers or make sure that the installation server is in the same subnet as the
systems you want to install.
I/O Virtualization is one of the founding pillars of PowerVM. Virtual IO Server (VIOS) is a software
appliance in PowerVM that facilitates virtualization of storage and network resources. Physical
resources are associated with the Virtual I/O Server and these resources are shared among multiple
client logical partitions (a.k.a. LPARs or VMs).
Since each Virtual I/O Server partition owns physical resources, any disruptions in sharing the
physical resource by the Virtual I/O Server would impact the serviced LPARs. To ensure client LPARs
have uninterrupted access to their I/O resources, it is necessary to set up a fully redundant
environment. Redundancy options are available to remove the single point of failure anywhere in the
path from client LPAR to its resource.
Fundamentally, the primary reasons for recommending VIOS and I/O redundancy include:
Protection against unscheduled outages due to physical device failures or natural events
Outage avoidance in the case of a VIOS software issue (including a VIOS crash)
Improved serviceability for planned outages
Future hardware expansion
Protection against unscheduled outages due to human intervention
Role of Dual Virtual I/O Server
Dual VIOS configuration is widely employed, and it is recommended for enterprise environments. A
dual VIOS configuration allows the client LPARs to have multiple routes (two or more) to their
resources. In this configuration, if one of the routes is not available, the client LPAR can still reach its
resources through another route.
These multiple paths can be leveraged to set up highly available I/O virtualization configurations, and
it can provide multiple ways for building high-performance configurations. All this is achieved with
the help of advanced capabilities provided by PowerVM (VIOS and PHYP) and the operating systems
on the client LPARs.
Both HMC and NovaLink allow configuration of dual Virtual I/O Server on the managed systems.
The remainder of this blog focuses on approaches to achieve virtual storage redundancy for client
LPARs.
Figure 1
The basic solution against physical adapter failure is to have two (or more) physical adapters
preferably from multiple different I/O drawers to the Virtual I/O Server. Storage needs to be made
accessible via both physical adapters. MPIO capability on the VIOS can be leveraged to configure the
additional physical paths in fail-over mode. Figure 1 shows storage connectivity to client LPAR made
available via both paths in the VIOS.
To leverage the capacity of both adapters effectively, this configuration can be fine-tuned with specific
Multi-Path I/O (MPIO) settings to share the load across these paths. This can result in better utilization
of resources on the system.
Figure 2
A VIOS restart may be required during VIOS software updates, which means the VIOS is not
available to service dependent LPARs while it is rebooting. A dual VIOS setup avoids the loss of
storage access in any planned or unplanned VIOS outage scenario.
In this kind of architecture, the client LPARs are serviced via two VIOS partitions (i.e. dual VIOS).
One VIOS acts as the primary server for all client requests and another VIOS acts as the
secondary/backup server. The backup server services the client only when the primary server is not
available to service the client requests. This kind of arrangement is achieved with the help of Storage
multi-pathing on client LPARs.
On the client LPARs running the AIX operating system, multi-pathing is achieved by using MPIO
Default Path Control Module (PCM). MPIO manages routing of I/O through available paths to a given
disk storage (Logical Unit). For more information on MPIO, see Multiple Path I/O on the IBM
Knowledge Center.
Figure 3
The basic solution to protect against disk failures is to keep a mirrored copy of the disk data on another
disk. This can be achieved with the mirroring functionality provided by the client operating system; in
the case of AIX, disk mirroring is provided by the Logical Volume Manager (LVM). For high availability,
each mirror copy should be located on a separate physical disk, using separate I/O adapters coming
from different VIOS partitions. Furthermore, putting each disk into a separate disk drawer protects against data
loss due to power failure.
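On AIX, mirroring the root volume group across two client disks (one behind each VIOS path, as in Figure 3) is commonly done with standard LVM commands; a sketch, assuming the second disk is hdisk1 and is not yet part of rootvg:

```shell
extendvg rootvg hdisk1             # add the second (other-VIOS-backed) disk to rootvg
mirrorvg rootvg hdisk1             # create the mirror copies on hdisk1
bosboot -ad /dev/hdisk1            # make the new disk bootable
bootlist -m normal hdisk0 hdisk1   # allow booting from either copy
```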
Notes:
It is possible to access both the primary and the mirrored disk through a single VIOS.
A RAID array is another method of protecting against disk failures.
High redundancy system with dual VIOS
Figure 4
In order to achieve a highly redundant system, all the solutions discussed above are combined to
derive an end-to-end redundancy solution.
Here, the client LPAR sees two disks, one of which is used as mirrored disk to protect against disk
failure. Each disk seen on client LPAR has paths from two different VIOS, which ensures protection in
case of VIOS failure. Also, each VIOS has two physical adapters to provide redundancy in case of
physical adapter failure. Though this arrangement is good from a redundancy perspective, it is certainly
not the most efficient one, since one VIOS is used for backup purposes only and is not utilized
completely. To utilize all the available VIO Servers effectively, the VIOS load can be shared across
them. This configuration is explained in the next section with the help of AIX client LPARs.
Figure 5
Here, we have two LPARs (LPAR 1 and LPAR 2) which are being serviced by VIOS 1 and VIOS 2.
The client partition LPAR 1 uses VIOS 1 as the active path and VIOS 2 as the passive path to reach
its Disk A. Similarly, LPAR 2 uses VIOS 2 as the primary path and VIOS 1 as the secondary path to reach
Disk B. An important thing to note here is that for the mirrored disks the configurations are
reversed: on LPAR 1 the active path is VIOS 2 and the passive path is VIOS 1 for mirrored disk A', and on
LPAR 2 the active path is VIOS 1 and the passive path is VIOS 2. The active and passive/backup VIOS is
designated based on the path priority set for the disk's I/O paths.
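On an AIX client using the default MPIO PCM, the path priority mentioned above can be inspected and set per disk and per parent adapter. The device names here are assumptions for illustration (hdisk0 reached through virtual SCSI adapters vscsi0 and vscsi1):

```shell
lspath -l hdisk0                            # list both paths and their state
lspath -AHE -l hdisk0 -p vscsi0             # show path attributes, including priority
chpath -l hdisk0 -p vscsi0 -a priority=1    # prefer the path through VIOS 1
chpath -l hdisk0 -p vscsi1 -a priority=2    # make the VIOS 2 path the fallback
```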
Using this configuration, we can shut down one of the VIOS for a scheduled maintenance and all
active clients can automatically access their disks through the backup VIOS. When the Virtual I/O
Server comes back online, no action is needed on the virtual I/O clients.
Common practice is to use mirroring in the client LPARs for rootvg disks and the datavg disks are
protected by RAID configuration provided by the storage array.
RAID stands for Redundant Array of Independent Disks; its design goals are to increase data
reliability and input/output (I/O) performance. When multiple physical disks are set up to use
RAID technology, they are said to be in a RAID array. This array distributes data across multiple
disks, but from the perspective of the user and the operating system it appears as a single disk.
Redundancy in NPIV
N_Port ID Virtualization (NPIV) is a method for virtualizing physical Fibre Channel adapter ports so
that they have multiple virtual World Wide Port Names (WWPNs) and therefore multiple N_Port_IDs. Once
all the applicable WWPNs are registered with the FC switch, each of these WWPNs can be used for SAN
masking/zoning or LUN presentation.
More information on NPIV is available at IBM Knowledge center: Virtual Fibre Channel
Figure 6
The figure above shows that redundancy against physical adapter failure can be achieved by adding
one more physical Fibre Channel HBA, and redundancy against VIOS failure can be achieved by having a
redundant path through another VIOS.
Note: As the storage is mapped directly from the SAN to the client LPAR, in order to have protection
against physical adapter failures, all of the virtual WWPNs/N_Port_IDs of the client LPAR should be
zoned/masked for the same storage on the SAN. Additional adapters and paths cannot guarantee
redundancy unless zoning is done properly. As with vSCSI, the active and passive/failover paths are
managed by multi-path software on the client LPAR.
In the case of VIOS serving multiple client LPARs, the workload can be spread across all the available
VIO Servers and I/O adapters as shown in Figure 7 below.
Figure 7
As shown above, client LPAR 1 and LPAR 2 have paths from both the VIO Servers, i.e. VIOS 1 and
VIOS 2, to reach their respective storage disks/LUNs. LPAR 1 is using VIOS 1 as active path and
VIOS 2 as the passive path, while LPAR 2 is using VIOS 2 as active and VIOS 1 as the passive path.
In this setup, if one VIOS is down for maintenance or the active path is unable to route the traffic, the
multi-path software running on the client LPAR will take care of routing the I/O through the other
available (passive) path.
One important thing to note in this configuration is that each VIOS has two paths through it and each
one of these paths is on a separate fabric. If there is a switch failure, the client will failover to the other
path in the same VIOS and not to the other VIOS.
Figure 8
The figure above shows a single storage pool spanning multiple VIO Servers and multiple
systems, thus enabling location transparency.
Figure 9