AIX Boot Process
1. When the server is powered on, the Power-On Self Test (POST) runs and checks the hardware.
2. On successful completion of POST, the boot logical volume is located by reading the bootlist.
3. The boot logical volume contains the AIX kernel, rc.boot, a reduced ODM, and boot commands. The AIX kernel is loaded into RAM.
4. The kernel takes control and creates a RAM file system.
5. The kernel starts /etc/init from the RAM file system.
6. init runs rc.boot 1 (rc.boot phase one), which configures the base devices.
7. rc.boot 1 calls the restbase command, which copies the ODM files from the boot logical volume to the RAM file system.
8. rc.boot 1 calls cfgmgr -f to configure the base devices.
9. rc.boot 1 calls bootinfo -b to determine the last boot device.
10. Then init starts rc.boot 2, which activates rootvg.
11. rc.boot 2 calls the ipl_varyon command to activate rootvg.
12. rc.boot 2 runs fsck -f /dev/hd4 and mounts the partition on / of the RAM file system.
13. rc.boot 2 runs fsck -f /dev/hd2 and mounts the /usr file system.
14. rc.boot 2 runs fsck -f /dev/hd9var, mounts the /var file system, and runs the copycore command to copy the core dump, if one is available, from /dev/hd6 to the /var/adm/ras/vmcore.0 file. It then unmounts /var.
15. rc.boot 2 runs swapon /dev/hd6 to activate the paging space.
16. rc.boot 2 runs migratedev and copies the device files from the RAM file system to the / file system.
17. rc.boot 2 runs cp /../etc/objrepos/Cu* /etc/objrepos to copy the ODM files from the RAM file system to the / file system.
18. rc.boot 2 runs mount /dev/hd9var to mount the /var file system.
19. rc.boot 2 copies the boot log messages to alog.
20. rc.boot 2 removes the RAM file system.
21. The kernel starts the /etc/init process from the / file system.
22. /etc/init reads the /etc/inittab file and rc.boot 3 is started. rc.boot 3 configures the rest of the devices.
23. rc.boot 3 runs fsck -f /dev/hd3 and mounts the /tmp file system.
24. rc.boot 3 runs syncvg rootvg &.
25. rc.boot 3 runs cfgmgr -p2 or cfgmgr -p3 to configure the rest of the devices. cfgmgr -p2 is used when the physical key on MCA architecture is in normal mode, and cfgmgr -p3 when it is in service mode.
26. rc.boot 3 runs the cfgcon command to configure the console.
27. rc.boot 3 runs the savebase command to copy the ODM files from /dev/hd4 to /dev/hd5.
28. rc.boot 3 starts syncd 60 and errdemon.
29. rc.boot 3 turns off the LEDs.
30. rc.boot 3 removes the /etc/nologin file.
31. rc.boot 3 checks CuDv for chgstatus=3 and displays the missing devices on the console.
32. The next line of /etc/inittab is executed.
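The boot messages mentioned in step 19 can be reviewed after boot with standard alog usage:
alog -o -t boot       # display the boot log
alog -o -t console    # display the console log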
/etc/inittab file format: identifier:runlevel:action:command
mkitab - add records to the /etc/inittab file
lsitab - list records in the /etc/inittab file
chitab - change records in the /etc/inittab file
rmitab - remove records from the /etc/inittab file
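For example, managing a hypothetical entry (the identifier myapp and the script path are made up for illustration):
mkitab "myapp:2:once:/usr/local/bin/start_myapp > /dev/console 2>&1"
lsitab myapp
chitab "myapp:2:respawn:/usr/local/bin/start_myapp > /dev/console 2>&1"
rmitab myapp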
What is ODM?
ODM:
o Maintains system configuration, device, and vital product data
o Provides a more robust, secure, and sharable resource
o Provides a reliable object-oriented database facility
Files changed after installation:
/etc/inittab, /etc/rc.net, /etc/services, /etc/snmpd.conf, /etc/snmpd.peers, /etc/syslog.conf,
/etc/trcfmt, /var/spool/cron/crontabs/root, /etc/hosts
Software Components:
Application server
HACMP Layer
RSCT Layer
AIX Layer
LVM Layer
TCP/IP Layer
HACMP services:
Cluster communication daemon (clcomdES)
Cluster manager (clstrmgrES)
Cluster information daemon (clinfoES)
Cluster lock manager (cllockd)
Cluster SMUX peer daemon (clsmuxpd)
HACMP daemons: clstrmgr, clinfo, clsmuxpd, cllockd.
HA supports up to 32 nodes
HA supports up to 48 networks
HA supports up to 64 resource groups per cluster
HA supports up to 128 cluster resources
IP label: the label that is associated with a particular IP address, as defined by DNS or /etc/hosts.
Base IP label: the default IP address that is set on the interface by AIX at startup.
Service IP label: a service is provided and it may be bound on a single node or multiple nodes. These are the addresses that HACMP keeps highly available.
IP alias: an IP address that is added to an interface, rather than replacing its base IP address.
RSCT monitors the state of the network interfaces and devices.
IPAT via replacement: the service IP label replaces the boot IP address on the interface.
IPAT via aliasing: the service IP label is added as an alias on the interface.
Persistent IP address: an IP address that can be assigned to a network for a particular node.
In HACMP the NFS export file is /usr/es/sbin/cluster/etc/exports.
Shared LVM:
A shared volume group is a volume group that resides entirely on the external disks shared by the cluster nodes.
Shared LVM can be made available in non-concurrent access mode, concurrent access mode, or enhanced concurrent access mode.
Non-concurrent access mode: this environment typically uses journaled file systems to manage data.
Create a non-concurrent shared volume group: smitty mkvg --> give the VG name, No for automatically available after system restart, Yes for activate VG after it is created, and give the VG major number.
Create a non-concurrent shared file system: smitty crjfs --> rename the FS names, No for mount automatically at system restart, and test the newly created FS by mounting and unmounting it.
Importing a volume group to a failover node: see the sketch below.
Concurrent access mode: it is not supported for file systems; instead you must use raw LVs and physical disks.
Creating a concurrent access volume group:
o varyonvg vgname
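A minimal sketch of importing a shared VG on the failover node (the VG name sharedvg, major number 60, and disk hdisk2 are assumed for illustration):
varyoffvg sharedvg                 # on the primary node, release the VG
importvg -V 60 -y sharedvg hdisk2  # on the failover node, same major number
chvg -a n sharedvg                 # do not activate automatically at restart
varyoffvg sharedvg                 # leave it offline; HACMP activates it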
Service interface: this interface is used for providing access to the application running on that node. The service IP address is monitored by HACMP via RSCT heartbeat.
Boot interface: this is a communication interface. With IPAT via aliasing, during failover the service IP label is aliased onto the boot interface.
Persistent node IP label: useful for administrative purposes.
When an application is started or moved to another node together with its associated resource group, the service IP address can be configured in two ways:
Replacing the base IP address of a communication interface (IPAT via replacement); the service IP label and the boot label must be on the same subnet.
Adding the service IP label as an alias on the interface in addition to the existing one (IPAT via aliasing); all IP addresses/labels must be on different subnets.
The default method is IP aliasing.
HACMP security: implemented directly by clcomdES, which uses HACMP ODM classes and the /usr/es/sbin/cluster/rhosts file to determine partners.
Resource group takeover relationship:
Resource group: a logical entity containing the resources to be made highly available by HACMP.
Resources: file systems, NFS, raw logical volumes, raw physical disks, service IP addresses/labels, application servers, and start/stop scripts.
To be made highly available by HACMP, each resource should be included in a resource group.
Resource group takeover relationships:
1. Cascading
2. Rotating
3. Concurrent
4. Custom
Cascading:
o The resource group can be activated on a low-priority node if the highest-priority node is not available at cluster startup.
o On node failure, the resource group falls over to the available node with the next priority.
o Upon node reintegration into the cluster, a cascading resource group falls back to its home node by default.
Attributes:
1. Inactive takeover (IT): initial acquisition of a resource group in case the home node is not available.
2. Fallover priority can be configured in the default node priority list.
3. Cascading without fallback (CWOF) is an attribute that modifies the fallback behavior. If the CWOF flag is set to true, the resource group will not fall back to any joining node. When the flag is false, the resource group falls back to the higher-priority node.
Rotating:
o At cluster startup, the first available node in the node priority list activates the resource group.
o If the resource group is on a takeover node, it will never fall back to a higher-priority node when that node rejoins. Nodes in the resource chain must all share the same network connection to the resource group.
Concurrent:
o A concurrent RG can be active on multiple nodes at the same time.
Custom:
o Users have to explicitly specify the desired startup, fallover, and fallback procedures.
Startup options:
o Online using distribution policy --> the resource group will only be brought online if the node has no other resource group online. You can check this with lssrc -ls clstrmgrES.
Fallover options:
o Fallover using dynamic node priority --> the fallover node can be selected on the basis of either its available CPU, its available memory, or the lowest disk usage. HACMP uses RSCT to gather this information, and the resource group falls over to the node that best meets the criteria.
o Bring offline --> the resource group will be brought offline if an error occurs. This option is designed for resource groups that are online on all available nodes.
Fallback options:
o Never fallback
Planning
Save snapshot
A config_too_long message appears when the cluster manager detects that an event has been processing for more than the specified time. To change the time interval: smitty hacmp --> extended configuration --> extended event configuration --> change/show time until warning.
Physical networks: TCP/IP-based, such as Ethernet and token ring; device-based, such as RS232 and target mode SSA (tmssa).
Configuring cluster topology:
Standard and extended configuration.
smitty hacmp --> Initialization and standard configuration
IP aliasing is used as the default mechanism for service IP label/address assignment to a network interface.
Configure nodes: smitty hacmp --> Initialization and standard configuration --> configure nodes to an HACMP cluster (enter the cluster name and the cluster nodes).
Configuring an HA cluster: smitty hacmp --> extended configuration --> extended topology configuration
Defining persistent IP labels: it always stays on the same node, does not require installing an additional physical interface, and is not part of any resource group. smitty hacmp --> extended topology configuration --> configure persistent node IP labels/addresses --> add a persistent node IP label (enter the node name, network name, and node IP label/address).
Bring a resource group offline: smitty cl_admin --> select HACMP resource group and application management --> bring a resource group offline.
Bring a resource group online: smitty hacmp --> select HACMP resource group and application management --> bring a resource group online.
Move a resource group: smitty hacmp --> select HACMP resource group and application management --> move a resource group to another node.
HACMP LVM
Stop the cluster services by using smitty clstop: graceful, takeover, or forced. In the log file /tmp/hacmp.out, search for node_down and node_down_complete.
Graceful: the node will be released, but will not be acquired by other nodes.
Graceful with takeover: the node will be released and acquired by other nodes.
Forced: cluster services will be stopped but the resource group will not be released.
Resource group states: online, offline, acquiring, releasing, error, temporary error, or unknown.
Find the resource group status: /usr/es/sbin/cluster/utilities/clfindres or clRGinfo.
Options: -t displays the settling time; -p displays priority override locations.
To review the cluster topology: /usr/es/sbin/cluster/utilities/cltopinfo.
Different types of NFS mounts: hard and soft. Hard mount is the default choice.
NFS export file: /usr/es/sbin/cluster/etc/exports.
If the adapter configured with a service IP address fails: verify in /tmp/hacmp.out that the swap_adapter event has occurred and that the service IP address has been moved, using the command netstat -in.
You can implement an RS232 heartbeat network between any 2 nodes.
To test a serial connection: lsdev -Cc tty; the baud rate is set to 38400, parity to none, and bits per character to 8.
RSCT verification: lssrc -ls topsvcs. To check RSCT group services: lssrc -ls grpsvcs.
Monitor heartbeat over all the defined networks: cllsif.log from /var/ha/run/topsvcs.clustername.
Prerequisites:
PowerHA version 5.5, AIX V5300-9, RSCT level 2.4.10.
BOS components: bos.rte.*, bos.adt.*, bos.net.tcp.*,
bos.clvm.enh (when using enhanced concurrent resource manager access).
The cluster.es.nfs fileset, which comes with the PowerHA installation medium, installs NFSv4 support. From the AIX BOS, bos.net.nfs.server 5.3.7.0 and bos.net.nfs.client 5.3.7.0 are required.
Check that all the nodes have the same version of RSCT using lslpp -l rsct*.
Installing PowerHA: release notes are in /usr/es/sbin/cluster/release_notes.
Enter smitty install_all --> select the input device --> press F4 for a software listing --> enter.
Steps to increase the size of a shared LUN:
o Run cfgmgr
o varyonvg vgname
o lsattr -El hdisk#
o chvg -g vgname
o lsvg vgname
o varyoffvg vgname
o On subsequent cluster nodes that share the VG, run: cfgmgr, lsattr -El hdisk#, importvg -L vgname hdisk#
o Synchronize
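A concrete pass through these steps on the first node, with an assumed VG name (datavg) and disk (hdisk4):
cfgmgr                 # rescan so AIX sees the new LUN size
varyonvg datavg
lsattr -El hdisk4      # check the size attribute reported for the disk
chvg -g datavg         # grow the VG to use the new disk size
lsvg datavg            # confirm the additional free PPs
varyoffvg datavg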
PowerHA creates a backup copy of the modified files during synchronization on all nodes. These
backups are stored in /var/hacmp/filebackup directory.
The file collection logs are stored in /var/hacmp/log/clutils.log file.
User and group administration:
Adding a user: smitty cl_usergroup --> select users in an HACMP cluster --> add a user to the cluster (also: list users, change/show characteristics of a user in the cluster, remove a user from the cluster).
Adding a group: smitty cl_usergroup --> select groups in an HACMP cluster --> add a group to the cluster (also: list groups, change/show characteristics of a group in the cluster, remove a group from the cluster).
The command used to change a password on all cluster nodes: /usr/es/sbin/cluster/utilities/clpasswd
smitty cl_usergroup --> users in an HACMP cluster
o Remove a group
If more than 2 nodes exist in your cluster, you will need a minimum of n non-IP heartbeat networks (for n nodes).
Disk heartbeating typically requires 4 seeks/second: each of the two nodes writes to the disk and reads from the disk once per second. The filemon tool monitors the seeks.
Vpaths are configured as member disks of an enhanced concurrent volume group: smitty lvm --> select volume groups --> add a volume group --> give the VG name, PV names, and VG major number, and set create VG concurrent capable to enhanced concurrent.
Import the new VG on all nodes using smitty importvg or importvg -V 53 -y c23vg vpath5.
Capped mode: the processing capacity can never exceed the entitled capacity.
Virtual processors: a virtual processor is a representation of a physical processor that is presented to the operating system running in a micropartition.
If a micropartition has 1.60 processing units and 2 virtual processors, each virtual processor will have 0.80 processing units.
Dedicated processors: dedicated processors are whole processors that are assigned to dedicated LPARs. The minimum processor allocation for an LPAR is one.
IVM (Integrated Virtualization Manager): IVM is a hardware management solution that performs a subset of the HMC features for a single server, avoiding the need for a dedicated HMC server.
Live Partition Mobility: allows you to move running AIX or Linux partitions from one physical POWER6 server to another without disruption.
VIO
Version for VIO: 1.5
The VIO command line interface is IOSCLI.
The command to escape to the OEM (root) shell environment on VIO is oem_setup_env.
The command for configuration through SMIT is cfgassist.
The initial login to the VIO server is padmin.
Help for VIO commands, e.g.: help errlog
Hardware requirements for creating a VIO server:
1. POWER5 or POWER6
2. HMC
3. At least one storage adapter
4. If you want to share a physical disk, one large physical disk
5. Ethernet adapter
6. At least 512 MB of memory
The latest version for VIO is 2.1 fixpack 23.
Copying the virtual IO server DVD media to a NIM server:
mount /cdrom
cd /cdrom
cp /cdrom/bosinst.data /nim/resources
Execute the smitty installios command.
Using smitty installios you can install the VIO software.
The topas -cecdisp flag shows detailed disk statistics.
The viostat -extdisk flag shows detailed disk statistics.
wkldmgr and wkldagent are for handling Workload Manager; they can be used to record performance data, which can then be viewed with wkldout.
The chtcpip command changes TCP/IP parameters.
Create the virtual device for the DVD drive: mkvdev -vdev cd0 -vadapter vhost3 -dev vcd
5. Create a client SCSI adapter in each LPAR using the HMC.
6. Run cfgmgr.
Moving the drive:
1. Find the vscsi adapter using lscfg | grep Cn (n is the slot number).
2. rmdev -Rl vscsin
3. Run cfgmgr in the target LPAR.
Through the dsh command, find which LPAR is currently holding the drive.
LVM Mirroring
Virtual SCSI redundancy:
Virtual SCSI redundancy can be achieved using MPIO and LVM mirroring. The client can use MPIO to access a SAN disk, and LVM mirroring to access 2 SCSI disks.
MPIO: MPIO is for a highly available virtual SCSI configuration. The disks on the storage are assigned to both virtual IO servers. MPIO for virtual SCSI devices only supports failover mode.
Configuring MPIO:
o Create 2 virtual IO server partitions
o Install both VIO servers
o Change fc_err_recov to fast_fail and dyntrk (AIX tolerates cabling changes) to yes: chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm
o Reboot the VIO servers
o Create the client partitions; add virtual Ethernet adapters
o Use the fget_config command (fget_config -vA) to get the LUN-to-hdisk mappings
o Use the lsdev -dev hdiskN -vpd command to retrieve the information
o The reserve_policy for each disk must be set to no_reserve: chdev -dev hdisk2 -attr reserve_policy=no_reserve
o Map the hdisks to vhost adapters: mkvdev -vdev hdisk2 -vadapter vhost0 -dev app_server
o Install the client partitions
o Configure the client partitions
o Test MPIO
Configure the client partitions:
o Check the MPIO configuration (lspv, lsdev -Cc disk)
o Run lspath
o Enable the health check mode: chdev -l hdisk0 -a hcheck_interval=50 -P
o Enable the vscsi client adapter path timeout: chdev -l vscsi0 -a vscsi_path_to=30 -P
o Change the priority of a path: chpath -l hdisk0 -p vscsi0 -a priority=2
Testing MPIO:
o lspath
o Shut down VIO2
o lspath
o Start VIO2
o lspath
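Illustrative lspath output for this test (disk and adapter names are assumptions):
lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1
After VIO2 is shut down, the path through its vscsi adapter shows Failed; once VIO2 is back, the health checker returns it to Enabled.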
LVM mirroring: this is for setting up a highly available virtual SCSI configuration. The client partitions are configured with 2 virtual SCSI adapters. Each of these virtual SCSI adapters is connected to a different VIO server and provides one disk to the client partition.
SEA failover:
o Each SEA must have at least one virtual Ethernet adapter with the access external network flag (trunk flag) checked. This enables the SEA to provide bridging functionality between the 2 VIO servers.
o This adapter on both the SEAs has the same PVID.
o The priority value defines which of the 2 SEAs will be the primary and which the secondary. An adapter with priority 1 has the highest priority.
Procedure for configuring SEA failover:
o Configure a virtual Ethernet adapter via DLPAR (ent2):
o Select the VIO --> click the task button --> choose DLPAR --> virtual adapters
o Click actions --> create --> Ethernet adapter
o Enter the slot number for the virtual Ethernet adapter into adapter ID
o Enter the port virtual LAN ID (PVID). The PVID allows the virtual Ethernet adapter to communicate with other virtual Ethernet adapters that have the same PVID.
o Select IEEE 802.1
o Check the box access external network
o Give the virtual adapter a low trunk priority
o Click OK
o Create another virtual adapter to be used as a control channel on VIOS1 (give it another VLAN ID, and do not check the box access external network) (ent3)
o Create the SEA on VIO1 with the failover attribute: mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3 (e.g. ent4)
o Create a VLAN Ethernet adapter on the SEA to communicate with the external VLAN-tagged network: mkvdev -vlan ent4 -tagid 222 (e.g. ent5)
o Assign an IP address to the SEA VLAN adapter on VIOS1 using mktcpip
o Repeat the same steps on VIO2 (give it the higher trunk priority: 2)
Client LPAR procedure:
o Create the client LPAR as above.
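To check which SEA is currently active, a common verification on the padmin shell (ent4 is the SEA from the example above):
entstat -all ent4 | grep -i state
The primary SEA reports State: PRIMARY; the backup reports State: BACKUP.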
Network interface backup: NIB can be used to provide redundant access to external networks when 2 VIO servers are used.
Configuring NIB:
o Create 2 VIO server partitions
o Install both VIO servers
o Configure each VIO server with one virtual Ethernet adapter; each VIO server needs to be on a different VLAN
o Define the SEA with the correct VLAN ID
o Add virtual SCSI adapters
o Create the client partitions
o Define the EtherChannel using smitty etherchannel
Configuring multiple shared processor pools:
Configuration --> shared processor pool management --> select the pool name
VIOS security:
Enable basic firewall settings: viosecure -firewall on
View all open ports in the firewall configuration: viosecure -firewall view
View current security settings: viosecure -view -nonint
Change system security settings to the default: viosecure -level default
List all failed logins: lsfailedlogin
Dump the global command log: lsgcl
Backup:
Create a mksysb file of the system on an NFS mount: backupios -file /mnt/vios.mksysb -mksysb
Create a backup of all structures of VGs and/or storage pools: savevgstruct vdiskvg (data will be stored in /home/ios/vgbackups)
List all backups made with savevgstruct: restorevgstruct -ls
Back up the system to an NFS-mounted file system: backupios -file /mnt
Performance monitoring:
Retrieve statistics for ent0: entstat -all ent0
Reset the statistics for ent0: entstat -reset ent0
View disk statistics: viostat 2
Show a summary for the system: viostat -sys 2
Show disk stats by adapter: viostat -adapter 2
Turn on disk performance counters: chdev -dev sys0 -attr iostat=true
topas -cecdisp
Link aggregation on the VIO server:
Link aggregation means you can give one IP address to two network cards and connect to two different switches for redundancy purposes. One network card will be active at a time.
Devices --> communication --> EtherChannel / IEEE 802.3ad link aggregation --> add an EtherChannel / link aggregation
Select ent0 and mode 8023ad.
Select the backup adapter for redundancy, e.g. ent1.
Automatically a virtual adapter will be created, named ent2.
Then assign the IP address: smitty tcpip --> minimum configuration and startup --> select ent2 --> put in the IP address.
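The same aggregation can be sketched from the VIOS IOSCLI instead of SMIT (adapter names assumed):
mkvdev -lnagg ent0 -attr mode=8023ad backup_adapter=ent1
This creates the aggregation device (e.g. ent2); assign its IP address with mktcpip.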
MES - Miscellaneous Equipment Specification. This is a change order to a system, typically in the
form of an upgrade. A RPO MES is for Record Purposes Only. Both specify to IBM changes that are
made to a system.
MSPP - Multiple Shared Processor Pools. This is a capability introduced in Power 6
systems that allows for more than one SPP.
NIM - Network Installation Management / Network Install Manager (IBM documentation
refers to both expansions of the acronym.) NIM is a means to perform remote initial BOS
installs, and manage software on groups of AIX systems.
ODM - Object Data Manager. A database and supporting methods used for storing
system configuration data in AIX. See the ODM section for additional information.
PP - Physical Partition. An LVM concept where a disk is divided into evenly sized
sections. These PP sections are the backing of LPs (Logical Partitions) that are used to
build volumes in a volume group. See the LVM section for additional information.
PV - Physical Volume. A PV is an LVM term for an entire disk. One or more PVs are used
to construct a VG (Volume Group). See the LVM section for additional information.
PVID - Physical Volume IDentifier. A unique ID that is used to track disk devices on a
system. This ID is used in conjunction with the ODM database to define /dev directory
entries. See the LVM section for additional information.
SMIT - System Management Interface Tool. An extensible X Window / curses interface to
administrative commands. See the SMIT section for additional information.
SPOT - Shared Product Object Tree. This is an installed copy of the /usr file system. It is
used in a NIM environment as a NFS mounted resource to enable remote booting and
installation.
SPP - Shared Processor Pool. This is an organizational grouping of CPU resources that
allows caps and guaranteed allocations to be set for an entire group of LPARs. Power 5
systems have a single SPP, Power 6 systems can have multiple.
VG - Volume Group. A collection of one or more PVs (Physical Volumes) that have been
divided into PPs (Physical Partitions) that are used to construct LVs (Logical Volumes).
See the LVM section for additional information.
VGDA - Volume Group Descriptor Area. This is a region of each PV (Physical Volume) in
a VG (Volume Group) that is reserved for metadata that is used to describe and manage
all resources in the VG. See the LVM section for additional information.
Before you perform this step, make sure you have reliable backups of your
data and any customized applications or volume groups. The instructions on
how to create a system backup are described later in this article.
Using this scenario, you can install the AIX operating system for the first time
or overwrite an existing version of the operating system. This scenario
involves the following steps:
AIX 5L Version 5.2 and AIX 5L Version 5.3 require 128MB of memory and 2.2GB of physical disk space.
o Make sure your hardware installation is complete, including all external devices.
o If your system needs to communicate with other systems and access their resources, make sure you have the information in the following worksheet before proceeding with the installation:
Network Attribute
Value
Network interface
Host name
IP address
Network mask
Nameserver
Domain name
Gateway
2. Make sure all external devices attached to the system, such as CD-ROM drives, tape drives, DVD drives, and terminals, are turned on. Only the CD-ROM drive from which you will install AIX should contain the installation media.
3. Power on the system.
4. When the system beeps twice, press F5 on the keyboard or 5 on an ASCII terminal. If you have a graphics display, you will see the keyboard icon on the screen when the beeps occur. If you have an ASCII terminal, you will see the word "keyboard" when the beeps occur.
5. Select the system console by pressing F1 (or 1 on an ASCII terminal) and press Enter.
6. Select the English language for the BOS installation menus by typing a 1 in the Choice field. Press Enter to open the Welcome to Base Operating System Installation and Maintenance screen.
7. Type 2 to select 2 Change/Show Installation Settings and Install in the Choice field and press Enter.
8. Choice [1]: 2
Otherwise, go to sub-step 2.
2. To change the System Settings, which include the method of installation and the disk where you want to install, type 1 in the Choice field and press Enter.
3. 1 System Settings:
   Method of Installation..................New and Complete Overwrite
   Disk Where You Want to Install..........hdisk0
6. Type 1 for New and Complete Overwrite in the Choice field and press Enter. The Change Disk(s) Where You Want to Install screen now displays.
Yes
Use any other options at this time. You can return to the Configuration
Assistant or the Installation Assistant by typing configassist or smitty assist at
the command line.
4. Select Exit the Configuration Assistant and select Next. Or, press F10 or ESC+0 to exit the Installation Assistant.
5. If you are in the Configuration Assistant, select Finish now. Do not start the Configuration Assistant when restarting AIX, and select Finish.
At this point, the BOS Installation is complete, and the initial configuration of
the system is complete.
1. Ensure that the root user has a primary authentication method of SYSTEM. You can check this condition by typing the following command:
# lsuser -a auth1 root
2. If needed, change the value by typing the following command:
# chuser auth1=SYSTEM root
3. Before you begin the installation, other users who have access to your system must be logged off.
4. Verify that your applications will run on AIX 5L Version 5.3. Also, check if your applications are binary compatible with AIX 5L Version 5.3. For details on binary compatibility, check out the AIX 5L Version 5 binary compatibility Web site. If your system is an application server, verify that there are no licensing issues. Refer to your application documentation or provider to verify on which levels of AIX your applications are supported and licensed.
5.
6. All requisite hardware, including any external devices, such as tape drives or CD/DVD-ROM drives, must be physically connected and powered on.
7. Use the errpt command to generate an error report from entries in the system error log. To display a complete detailed report, type the following:
# errpt -a
8. There must be adequate disk space and memory available. AIX 5L Version 5.3 requires 128MB of memory and 2.2GB of physical disk space.
9.
10. where "N" is your CD drive number.
11. Make a backup copy of your system software and data. The instructions on how to create a system backup are described elsewhere in this article.
12. Always refer to the release notes for the latest migration information.
2.
3.
4.
5.
6. Select the English language for the BOS installation menus by typing a 1 at the Choice field and press Enter. The Welcome to Base Operating System Installation and Maintenance menu opens.
7.
>>>
88 Help ?
99 Previous Menu
Choice [1]: 2
Verify that migration is the method of installation. If migration is not the method of installation, select it now. Select the disk or disks you want to install.
1 System Settings:
   Method of Installation....................Migration
   Disk Where You Want to Install............hdisk0
2.
3. Type 3 and press Enter to select More Options. To use the Help menu to learn more about the options available during a migration installation, type 88 and press Enter in the Installation Options menu.
4.
5. When the Migration Confirmation menu displays, follow the menu instructions to list system information or continue with the migration by typing 0 and pressing Enter.
Migration Confirmation
------------------------------------------------------------
>>> Choice[0]:
2. Select the Accept Licenses option to accept the electronic licenses for the operating system.
3.
4.
5. If you are in the Configuration Assistant, select Finish now. Do not start the Configuration Assistant when restarting AIX, and select Finish.
6. When the login prompt displays, log in as the root user to perform system administration tasks.
7.
The nimadm utility offers several advantages over a conventional migration. Following are
the advantages of nimadm over other migration methods:
Reduced downtime for the client: The migration can execute while the system is up and
running as normal. There is no disruption to any of the applications or services running on
the client. Therefore, the upgrade can be done at any time. Once the upgrade is complete, we need to take downtime on the client and schedule a reboot in order to restart the system at the later level of AIX.
Flexibility: The nimadm process is very flexible and it can be customized using some of the
optional NIM customization resources, such as image_data, bosinst_data, pre/post_migration
scripts, exclude_files, and so on.
Quick recovery from migration failures: All changes are performed on the copied rootvg
(altinst_rootvg). If there are any problems with the migration, the original rootvg is still
available and the system has not been impacted. If a migration fails or terminates at any
stage, nimadm is able to quickly recover from the event and clean up afterwards. There is
little for the administrator to do except determine why the migration failed, rectify the
situation, and attempt the nimadm process again. If the migration completed but issues are
discovered after the reboot, then the administrator can back out easily by booting from the
original rootvg disk.
The nimadm command performs a migration in 12 phases. All migration activity is logged on
the NIM master in the /var/adm/ras/alt_mig directory. It is useful to have knowledge of each
phase before performing a migration. After starting the nimadm operation from the NIM master (an example invocation is sketched below), the output begins with the pre-alt_disk initialization steps:
Initializing the NIM master.
Initializing NIM client webmanual01.
Verifying alt_disk_migration eligibility.
Initializing log: /var/adm/ras/alt_mig/webmanual01_alt_mig.log
Starting Alternate Disk Migration.
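The invocation that produced this run would resemble the following sketch; the cache VG nimvg, client webmanual01, target disk hdisk0, and target level 6.1 appear in the log, while the SPOT and lpp_source names (spot_61, lpp_61) are assumptions:
nimadm -j nimvg -c webmanual01 -s spot_61 -l lpp_61 -d hdisk0 -Y
# -j: VG on the master for the cache file systems; -c: NIM client
# -s: SPOT resource; -l: lpp_source resource; -d: target disk; -Y: accept licenses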
+----------------------------------------------------------------------------+
Executing nimadm phase 1.
+----------------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 1.
Client alt_disk_install command: alt_disk_copy -j -M 6.1 -P1 -d "hdisk0"
Calling mkszfile to create new /image.data file.
Checking disk sizes.
Creating cloned rootvg volume group and associated logical volumes.
Creating logical volume alt_hd5
Creating logical volume alt_hd6
Creating logical volume alt_hd8
+----------------------------------------------------------------------------+
Executing nimadm phase 2.
+----------------------------------------------------------------------------+
Creating nimadm cache file systems on volume group nimvg.
Checking for initial required migration space.
Creating cache file system /webmanual01_alt/alt_inst
Creating cache file system /webmanual01_alt/alt_inst/admin
Creating cache file system /webmanual01_alt/alt_inst/adminOLD
Creating cache file system /webmanual01_alt/alt_inst/crmhome
Creating cache file system /webmanual01_alt/alt_inst/home
Creating cache file system /webmanual01_alt/alt_inst/opt
Creating cache file system /webmanual01_alt/alt_inst/sw
Creating cache file system /webmanual01_alt/alt_inst/tmp
Creating cache file system /webmanual01_alt/alt_inst/usr
Creating cache file system /webmanual01_alt/alt_inst/var
Explanation of Phase 3 : The NIM master copies the NIM client's data to the cache file systems in nimvg. This data copy is done via either rsh or nimsh.
+----------------------------------------------------------------------------+
Executing nimadm phase 3.
+----------------------------------------------------------------------------+
Syncing client data to cache ...
cannot access ./tmp/alt_lock: A file or directory in the path name does not
exist.
Explanation of Phase 4 : If a pre-migration script resource has been specified, it is executed at this time.
+----------------------------------------------------------------------------+
Executing nimadm phase 4.
+----------------------------------------------------------------------------+
nimadm: There is no user customization script specified for this phase.
Explanation of Phase 5 : System configuration files are saved. Initial migration space is calculated and
appropriate file system expansions are made. The bos image is restored and the device database is
merged. All of the migration merge methods are executed, and some miscellaneous processing takes
place.
+----------------------------------------------------------------------------+
Executing nimadm phase 5.
+----------------------------------------------------------------------------+
Saving system configuration files.
Checking for initial required migration space.
Setting up for base operating system restore.
/webmanual01_alt/alt_inst
Restoring base operating system.
Merging system configuration files.
Running migration merge method: ODM_merge Config_Rules.
Running migration merge method: ODM_merge SRCextmeth.
Running migration merge method: ODM_merge SRCsubsys.
Running migration merge method: ODM_merge SWservAt.
Running migration merge method: ODM_merge pse.conf.
Running migration merge method: ODM_merge vfs.
Running migration merge method: ODM_merge xtiso.conf.
Running migration merge method: ODM_merge PdAtXtd.
Running migration merge method: ODM_merge PdDv.
Running migration merge method: convert_errnotify.
Running migration merge method: passwd_mig.
Running migration merge method: login_mig.
Running migration merge method: user_mrg.
Running migration merge method: secur_mig.
Running migration merge method: RoleMerge.
Explanation of Phase 6 : All system filesets are migrated using installp. Any required RPM images are also installed during this phase.
+----------------------------------------------------------------------------+
Executing nimadm phase 6.
+----------------------------------------------------------------------------+
Installing and migrating software.
Updating install utilities.
+----------------------------------------------------------------------------+
Pre-installation Verification...
+----------------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...
SUCCESSES
---------
Filesets listed in this section passed pre-installation verification
and will be installed.
+----------------------------------------------------------------------------+
BUILDDATE Verification ...
+----------------------------------------------------------------------------+
Verifying build dates...done
FILESET STATISTICS
------------------
1 Selected to be installed, of which:
1 Passed pre-installation verification
----
1 Total to be installed
+----------------------------------------------------------------------------+
Installing Software...
+----------------------------------------------------------------------------+
[LOTS OF OUTPUT]
Installation Summary
--------------------
Name                         Level      Part  Event   Result
------------------------------------------------------------------------------
lwi.runtime                  6.1.6.15   USR   APPLY   SUCCESS
lwi.runtime                  6.1.6.15   ROOT  APPLY   SUCCESS
X11.compat.lib.X11R6_motif   6.1.6.15   USR   APPLY   SUCCESS
Java5.sdk                    5.0.0.395  USR   APPLY   SUCCESS
Java5.sdk                    5.0.0.395  ROOT  APPLY   SUCCESS
Java5.sdk                    5.0.0.395  USR   COMMIT  SUCCESS
Java5.sdk                    5.0.0.395  ROOT  COMMIT  SUCCESS
lwi.runtime                  6.1.6.15   USR   COMMIT  SUCCESS
lwi.runtime                  6.1.6.15   ROOT  COMMIT  SUCCESS
X11.compat.lib.X11R6_motif   6.1.6.15   USR   COMMIT  SUCCESS
+----------------------------------------------------------------------------+
Executing nimadm phase 7.
+----------------------------------------------------------------------------+
nimadm: There is no user customization script specified for this phase.
Explanation of Phase 8 : The bosboot command is run to create a client boot image, which is written to the client's alternate boot logical volume (alt_hd5).
+----------------------------------------------------------------------------+
Executing nimadm phase 8.
+----------------------------------------------------------------------------+
Creating client boot image.
bosboot: Boot image is 47136 512 byte blocks.
Writing boot image to client's alternate boot disk hdisk0.
Explanation of Phase 9 : All the migrated data is now copied from the NIM master's local cache file systems and synced to the client's alternate rootvg.
+----------------------------------------------------------------------------+
Executing nimadm phase 9.
+----------------------------------------------------------------------------+
Adjusting client file system sizes ...
Adjusting size for /
Adjusting size for /admin
Adjusting size for /adminOLD
Adjusting size for /crmhome
Adjusting size for /home
Adjusting size for /opt
Adjusting size for /sw
Adjusting size for /tmp
+----------------------------------------------------------------------------+
Executing nimadm phase 10.
+----------------------------------------------------------------------------+
Unmounting client mounts on the NIM master.
forced unmount of /webmanual01_alt/alt_inst/var
forced unmount of /webmanual01_alt/alt_inst/usr
forced unmount of /webmanual01_alt/alt_inst/tmp
forced unmount of /webmanual01_alt/alt_inst/sw
forced unmount of /webmanual01_alt/alt_inst/opt
forced unmount of /webmanual01_alt/alt_inst/home
forced unmount of /webmanual01_alt/alt_inst/crmhome
forced unmount of /webmanual01_alt/alt_inst/adminOLD
forced unmount of /webmanual01_alt/alt_inst/admin
forced unmount of /webmanual01_alt/alt_inst
Removing nimadm cache file systems.
Explanation of Phase 11 : The alt_disk_install command is called again to make the final adjustments and put altinst_rootvg to sleep. The bootlist is set to the target disk.
+----------------------------------------------------------------------------+
Executing nimadm phase 11.
+----------------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 3.
Client alt_disk_install command: alt_disk_copy -j -M 6.1 -P3 -d "hdisk0"
## Phase 3 ###################
Verifying altinst_rootvg...
Modifying ODM on cloned disk.
forced unmount of /alt_inst/var
forced unmount of /alt_inst/usr
forced unmount of /alt_inst/tmp
forced unmount of /alt_inst/sw
forced unmount of /alt_inst/opt
forced unmount of /alt_inst/home
forced unmount of /alt_inst/crmhome
forced unmount of /alt_inst/adminOLD
forced unmount of /alt_inst/admin
forced unmount of /alt_inst
Changing logical volume names in volume group descriptor area.
+----------------------------------------------------------------------------+
Executing nimadm phase 12.
+----------------------------------------------------------------------------+
Cleaning up alt_disk_migration on the NIM master.
Cleaning up alt_disk_migration on client webmanual01.
Please review log to verify success
After the migration is complete, log in to the client and confirm the bootlist is set to the altinst_rootvg disk.
# lspv | grep rootvg
hdisk1 0000273ac30fdcfc rootvg active
hdisk0 000273ac30fdd6e altinst_rootvg active
# bootlist -m normal -o
hdisk0 blv=hd5
IBM AIX Operating System - some useful commands gathered from IBM and other websites
List physical volumes on the system:
lspv
List all LVs on PV hdisk6:
lspv -l hdisk6
List volume groups:
lsvg
List only active (varied-on) volume groups:
lsvg -o
The difference between lsvg and lsvg -o is the imported VGs that are offline.
List all LVs on VG vg01:
lsvg -l vg01
List PVs in VG vg02:
lsvg -p vg02
List file systems:
lsfs
List detailed information (including size and log) for /home:
lsfs -q /home
Format a JFS2 log logical volume:
logform /dev/datalog1
A jfs2 log must exist in this VG and be logform(ed). (This was done in the previous steps.) -m specifies the mount point for the FS, and -A y is the option to automatically mount it (with mount -a).
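The crfs command this note describes did not survive in this copy; a sketch of a matching call (the LV and mount point names are assumed):
crfs -v jfs2 -d datalv01 -m /data -A y -a logname=/dev/datalog1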
Create a scalable VG called vg01 with two disks:
mkvg -S -y vg01 hdisk1 hdisk2
Vary-off the datavg VG:
varyoffvg datavg
Vary-on the datavg VG
varyonvg datavg
By default the import operation will vary-on the VG. An explicit vary-on will be
required for concurrent volume groups that can be imported onto two (or more)
systems at once, but only varied-on on one system at a time.
Remove the datavg VG from the system
exportvg datavg
Import the VG on hdisk5 as datavg:
importvg -y datavg hdisk5
The VG in this example spans multiple disks, but it is only necessary to specify
a single member disk to the command. The LVM system will locate the other
member disks from the metadata provided on the single disk provided.
Import a VG on a disk by PVID as datavg
In each of the chfs grow filesystem examples, AIX will automatically grow the
underlying LV to the appropriate size.
Grow the /var filesystem to 1 Gig:
chfs -a size=1G /var
mklvcopy -k -s y fslv08 2
syncvg -l fslv08 must be run if the -k (sync now) switch is not used
for mklvcopy.
Add hdisk3 and hdisk4 to the vg01 VG:
extendvg vg01 hdisk3 hdisk4
Show the space used by /var in MB, without crossing file system boundaries:
du -smx /var
List the PVs that LV datalv01 is on, with distribution detail:
lslv -l datalv01
The "COPIES" column relates the mirror distribution of the PPs for each LP. (PPs
should only be listed in the first part of the COPIES section. See the next example.)
The "IN BAND" column tells how much of the used PPs in this PV are used for this LV.
The "DISTRIBUTION" column reports the number of PPs in each region of the PV.
(The distribution is largely irrelevant for most modern SAN applications.)
Create a LV with 3 copies in a VG with a single PV (strictness off so copies can share the PV):
mklv -c 3 -s n -y testlv3 vg01 1
The migratepv command is an atomic command in that it does not return until
complete. Mirroring / breaking LVs is an alternative to explicitly migrating them. See
additional migratepv,mirrorvg, and mklvcopy examples in this section.
Put a PVID on hdisk1
chdev -l hdisk1 -a pv=yes
PVIDs are automatically placed on a disk when added to a VG
Remove a PVID from a disk:
chdev -l hdisk1 -a pv=clear
This will remove the PVID but not residual VGDA and other data on the
disk. dd can be used to scrub remaining data from the disk. The AIX install CD/DVD
also provides a "scrub" feature to (repeatedly) write patterns over data on disks.
Move (migrate) VG vg02 from hdisk1 to hdisk2:
migratepv hdisk1 hdisk2
Mirroring and then unmirroring is another method to achieve this. See the next example.
Move (mirror) VG vg02 from hdisk1 to hdisk2:
extendvg vg02 hdisk2
mirrorvg vg02 hdisk2
unmirrorvg vg02 hdisk1
reducevg vg02 hdisk1
This creates a stripe width of 2 with a (total) stripe size of 32K. This command
will result in an upper bound of 2 (same as the stripe size) for the LV. If this LV is to
be extended to another two disks later, then the upper bound must be changed to 4
or specified during creation. The VG in this example was a scalable VG.
Determine VG type of VG myvg:
lsvg myvg    (check the MAX PVs field)
MAX PVs is 32 for normal, 128 for big, and 1024 for scalable VGs.
Set the system to boot to the CDROM on next boot:
bootlist -m normal cd0 hdisk0 hdisk1
The system will boot to one of the mirror pair (hdisk0 or hdisk1) if the boot from the CD-ROM does not work. This can be returned to normal by repeating the command without cd0.
List the boot device for the next boot
bootlist -m normal -o
Kernel
How would I know if I am running a 32-bit kernel or 64-bit kernel?
To display if the kernel is 32-bit enabled or 64-bit enabled, type:
bootinfo -K
How do I know if I am running a uniprocessor kernel or a multiprocessor kernel?
/unix is a symbolic link to the booted kernel. To find out what kernel mode is running, enter ls -l /unix and see what file /unix links to. The following are the three possible outputs from the ls -l /unix command and their corresponding kernels:
/unix -> /usr/lib/boot/unix_up    (32-bit uniprocessor kernel)
/unix -> /usr/lib/boot/unix_mp    (32-bit multiprocessor kernel)
/unix -> /usr/lib/boot/unix_64    (64-bit multiprocessor kernel)
During the installation process, one of the kernels, appropriate for the AIX version and the hardware in operation, is enabled by default. Use the method from the previous question and assume that the 32-bit kernel is enabled. Also assume that you want to boot it up in the 64-bit kernel mode. This can be done by executing the following commands in sequence:
ln -sf /usr/lib/boot/unix_64 /unix
ln -sf /usr/lib/boot/unix_64 /usr/lib/boot/unix
bosboot -ad /dev/hdiskxx
shutdown -r
The /dev/hdiskxx is the disk where the boot logical volume hd5 is located. To find out what xx is in hdiskxx, run the following command:
lslv -m hd5
Note:
In AIX V5.2, the 32-bit kernel is installed by default. In AIX V5.3, the 64-bit kernel is installed on
64-bit hardware and the 32-bit kernel is installed on 32-bit hardware by default.
Hardware
How do I know if my machine is capable of running AIX 5L Version 5.3?
AIX 5L Version 5.3 runs on all currently supported CHRP (Common Hardware Reference Platform)based POWER hardware.
How do I know if my machine is CHRP-based?
Run the prtconf command. If it's a CHRP machine, the string chrp appears on the Model
Architecture line.
How do I know if my System p machine (hardware) is 32-bit or 64-bit?
To display if the hardware is 32-bit or 64-bit, type:
bootinfo -y
How much real memory does my machine have?
To display real memory in kilobytes (KB), type one of the following:
bootinfo -r
lsattr -El sys0 -a realmem
Can my machine run the 64-bit kernel?
64-bit hardware is required to run the 64-bit kernel.
What are the values of attributes for devices in my system?
To list the current values of the attributes for the tape device, rmt0, type:
lsattr -l rmt0 -E
To list the default values of the attributes for the tape device, rmt0, type:
lsattr -l rmt0 -D
To list the possible values of the login attribute for the TTY device, tty0, type:
lsattr -R -l tty0 -a login
To display system-level attributes, type:
lsattr -E -l sys0
How many processors does my system have?
To display the number of processors on your system, type:
lscfg | grep proc
How do I list the physical volumes on my system?
lspv
How do I list information about a specific physical volume?
To find details about hdisk1, for example, run the following command:
lspv hdisk1
How do I get a detailed configuration of my system?
Type the following:
lscfg
The following options provide specific information. For example, to display details about the tape drive, rmt0, type:
lscfg -vl rmt0
uname -p    Displays the chip type of the system
uname -r    Displays the release number of the operating system
uname -s    Displays the system name
uname -n    Displays the name of the node
uname -a    Displays the system name, nodename, version, machine ID
uname -M    Displays the system model name
uname -v    Displays the operating system version
uname -m    Displays the machine ID number of the hardware
uname -u    Displays the system ID number
AIX
What version, release, and maintenance level of AIX is running on my system?
Type one of the following:
oslevel -r
lslpp -h bos.rte
How can I determine which fileset updates are missing from a particular AIX level?
To determine which fileset updates are missing from 5300-04, for example, run the following
command:
oslevel -rl 5300-04
Is a CSP (Concluding Service Pack) installed on my system?
To see if a CSP is currently installed on the system, run the oslevel -s command. Sample output for
an AIX 5L Version 5.3 system, with TL3, and CSP installed, would be:
oslevel -s 5300-03-CSP
How do I create a file system?
The following command will create, within volume group testvg, a jfs file system of 10MB with mounting point /fs1:
crfs -v jfs -g testvg -a size=10M -m /fs1
In AIX V5.3, the size of a JFS2 file system can be shrunk as well.
How do I mount a CD?
Type the following:
mount -V cdrfs -o ro /dev/cd0 /cdrom
How do I mount all default file systems?
Type the following:
mount {-a|all}
How do I unmount a file system?
Type the following command to unmount /test file system:
umount /test
How do I display mounted file systems?
Type the following command to display information about all currently mounted file systems:
mount
How do I remove a file system?
Type the following command to remove the /test file system:
rmfs /test
How can I defragment a file system?
The defragfs command can be used to improve or report the status of contiguous space within a
file system. For example, to defragment the file system /home, use the following command:
defragfs /home
Which fileset contains a particular binary?
To show bos.acct contains /usr/bin/vmstat, type:
lslpp -w /usr/bin/vmstat
Or to show bos.perf.tools contains /usr/bin/svmon, type:
which_fileset svmon
How do I display information about installed filesets on my system?
Type the following:
lslpp -l
How do I determine if all filesets of maintenance levels are installed on my system?
Type the following:
instfix -i | grep ML
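Typical (illustrative) output when a level is complete:
All filesets for 5300-04_AIX_ML were found.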
How do I determine if a fix is installed on my system?
To determine if IY24043 is installed, type:
instfix -ik IY24043
How do I verify that filesets have all required prerequisites and are completely installed?
Type the following:
lppchk -v
How do I get a dump of the header of the loader section and the symbol entries in symbolic
representation?
Type the following:
dump -Htv
How do I determine the amount of paging space allocated and in use?
Type the following:
lsps -a
How do I increase a paging space?
You can use the chps -s command to dynamically increase the size of a paging space. For example,
if you want to increase the size of hd6 with 3 logical partitions, you issue the following command:
chps -s 3 hd6
How do I reduce a paging space?
You can use the chps -d command to dynamically reduce the size of a paging space. For example,
if you want to decrease the size of hd6 with four logical partitions, you issue the following
command:
chps -d 4 hd6
How would I know if my system is capable of using Simultaneous Multi-threading (SMT)?
Your system is capable of SMT if it's a POWER5-based system running AIX 5L Version 5.3.
How would I know if SMT is enabled for my system?
If you run the smtctl command without any options, it tells you if it's enabled or not.
Is SMT supported for the 32-bit kernel?
Yes, SMT is supported for both 32-bit and 64-bit kernel.
How do I enable or disable SMT?
You can enable or disable SMT by running the smtctl command. The following is the syntax:
smtctl [-m off|on] [-w boot|now]
-m off    Disables SMT mode
-m on     Enables SMT mode
-w boot   Makes the mode change effective on the next and subsequent reboots
-w now    Makes the mode change immediately, without persisting across reboots
If neither the -w boot nor the -w now option is specified, then the mode change is made
immediately. It persists across subsequent reboots if you run the bosboot command before the
next system reboot.
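For example, to turn SMT off immediately without persisting across reboots:
smtctl -m off -w now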
How do I get partition-specific information and statistics?
The lparstat command provides a report of partition information and utilization statistics. This
command also provides a display of Hypervisor information.
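For example (interval and count follow the usual convention; -h adds hypervisor statistics):
lparstat -h 2 5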
chvg
How do I create a logical volume?
Type the following:
mklv -y name_of_logical_volume name_of_volume_group number_of_partitions
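For example, to create a hypothetical 10-partition logical volume testlv in testvg:
mklv -y testlv testvg 10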
How do I increase the size of a logical volume?
To increase the size of the logical volume represented by the lv05 directory by three logical
partitions, for example, type:
extendlv lv05 3
How do I display all logical volumes that are part of a volume group (for example, rootvg)?
You can display all logical volumes that are part of rootvg by typing the following command:
lsvg -l rootvg
How do I list information about logical volumes?
Run the following command to display information about the logical volume lv1:
lslv lv1
How do I remove a logical volume?
You can remove the logical volume lv7 by running the following command:
rmlv lv7
The rmlv command removes only the logical volume, but does not remove other entities, such as
file systems or paging spaces that were using the logical volume.
How do I mirror a logical volume?
1. mklvcopy LogicalVolumeName Numberofcopies
2. syncvg VolumeGroupName
How do I remove a mirror copy of a logical volume?
Use the rmlvcopy command. For example, to leave the logical volume testlv with two copies, type:
rmlvcopy testlv 2
Each logical partition in the logical volume now has at most two physical partitions.
Queries about volume groups
To show volume groups in the system, type:
lsvg
To show all the characteristics of rootvg, type:
lsvg rootvg
To show disks used by rootvg, type:
lsvg -p rootvg
How do I add a disk to a volume group?
Type the following:
extendvg VolumeGroupName hdisk#
How do I sync stale partitions in the volume group testvg?
syncvg -v testvg
How do I replace a disk?
1. extendvg VolumeGroupName hdisk_new
2. migratepv hdisk_bad hdisk_new
3. reducevg -d VolumeGroupName hdisk_bad
How do I clone rootvg to another disk?
You can run the alt_disk_copy command to copy the current rootvg to an alternate disk:
alt_disk_copy -d hdisk1
Network
How can I display or set values for network parameters?
The no command sets or displays current or next boot values for network tuning parameters.
How do I get the IP address of my machine?
Type one of the following:
lsdev -Cc if
ifconfig -a
To get information about one specific network interface, for example, tr0, run the command:
ifconfig tr0
How do I activate a network interface?
To activate the network interface tr0, run the command:
ifconfig tr0 up
How do I deactivate a network interface?
For example, to deactivate the network interface tr0, run the command:
ifconfig tr0 down
How do I display routing table information?
To display routing table information for an Internet interface, type:
netstat -r -f inet
To display interface information for an Internet interface, type:
netstat -i -f inet
To display statistics for each protocol, type:
netstat -s -f inet
iptrace /tmp/nettrace
The trace information is placed into the /tmp/nettrace file.
To record packets received on an interface en0 from a remote host airmail over the telnet port,
enter:
iptrace -i en0 -p telnet -s airmail /tmp/telnet.trace
Workload partitions
How do I create a workload partition?
To create a workload partition named temp with the IP address xxx.yyy.zzz.nnn, type:
mkwpar -n temp -N address=xxx.yyy.zzz.nnn
To create a workload partition from the specification file /tmp/wpar1.spec, type:
mkwpar -f /tmp/wpar1.spec
How do I create a new specification file for an existing workload partition wpar1?
To create a specification file wpar2.spec for an existing workload partition wpar1, type:
mkwpar -e wpar1 -o /tmp/wpar2.spec -w
How do I start a workload partition?
To start the workload partition called temp, type:
startwpar temp
How do I stop a workload partition?
To stop the workload partition called temp, type:
stopwpar temp
How do I remove a workload partition?
To remove the workload partition called temp, type:
rmwpar temp
To stop and remove the workload partition called temp, preserving data on its file system, type:
rmwpar -p -s temp
Note: Workload Partitions (WPARs), a set of completely new software-based system
virtualization features, were introduced in IBM AIX Version 6.1.
vmstat
To display five summaries at 2-second intervals, type:
vmstat 2 5
To display a summary of the statistics for all of the workload partitions after boot, type:
vmstat -@ ALL
To display all of the virtual memory statistics available for all of the workload partitions, type:
vmstat -vs -@ ALL
iostat
To display a continuous disk report at 2-second intervals for the disk with the logical name disk1,
type:
iostat -d disk1 2
To display 6 reports at 2-second intervals for the disk with the logical name disk1, type:
iostat disk1 2 6
To display 6 reports at 2-second intervals for all disks, type:
iostat -d 2 6
To display only file system statistics for all workload partitions, type:
iostat -F -@ ALL
To display system throughput of all workload partitions along with the system, type:
iostat -s -@ ALL
How do I display detailed local and remote system statistics?
Type the following command:
topas
To go directly to the process display, enter:
topas -P
To go directly to the logical partition display, enter:
topas -L
To go directly to the disk metric display, enter:
topas -D
To go directly to the file system display, enter:
topas -F
How do I report system unit activity?
Type the following command:
sar
To report processor activity for the first two processors, enter:
sar -u -P 0,1
This produces output similar to the following:
cpu  %usr  %sys  %wio  %idle
 0    45    45     5     5
 1    27    65     3     5