TUTORIAL & CHEATSHEET
FOR
SOLARIS LIVE UPGRADE
by John R Avery
Page 1 of 52
Revisions
Table of Contents
PREFACE........................................................................ 5
INTRODUCTION................................................................... 6
Five Phases of Live Upgrade: Overview ........................................ 6
A Few Preliminary Definitions ................................................ 7
1 PHASE 1: PLANNING .......................................................... 8
1.1 Hardware / Disk-Space Planning ........................................ 8
1.2 RAID and Mirror Planning .............................................. 9
1.2.1 Integrate SVM with the Building of your N/T BE .................... 10
1.2.2 Using VxVM Volumes in the N/T BE .................................. 10
1.3 Prerequisites: Software Packages and Patches ........................ 10
1.3.1 Java-2 Runtime Environment (J2RE) + Patches ....................... 10
1.3.2 Determine All Prerequisite Packages for Live Upgrade .............. 12
1.3.3 Determine All Prerequisite Patches for Live Upgrade ............... 13
1.3.4 Acquire Copies of the Two Live-Upgrade (LU) Packages .............. 14
1.3.5 Determine Need for Patching the Two LU Packages ................... 15
2 PHASE 2: SYSTEM PREPARATION ............................................... 16
2.1 Install new hard-disks if necessary. ................................. 16
2.1.1 Physical Installation ............................................. 16
2.1.2 Logical Installation and Configuration ............................ 16
2.2 Prepare the Hard-Disks for the New/Target Boot-Environment (N/T BE). . 16
2.2.1 Partition the hard-disks. ......................................... 17
2.2.2 High-Level Format the Partitions to Your Desired File-System Type. 17
2.3 Prepare the Active Boot Environment (ABE) for Live Upgrade ........... 17
2.3.1 Java-2 Runtime Environment (J2RE) + Patches ....................... 17
2.3.2 Install all Live-Upgrade Prerequisite Packages .................... 18
2.3.3 Install the Latest Patches ........................................ 18
2.3.4 Remove Any Old Live-Upgrade Packages .............................. 18
2.3.5 Install the New Live-Upgrade Packages ............................. 18
2.3.6 Install the Latest Patches for SUNWlur and SUNWluu (?) ............ 19
3 PHASE 3: CREATE THE NEW BOOT ENVIRONMENT (BE) ............................ 20
3.1 Create a New Boot Environment from an Old Boot Environment ........... 20
3.1.1 Scenario 1 (Create N/T BE From O/S BE) ........................... 21
3.1.2 Scenario 2 (Create N/T BE From O/S BE) ........................... 23
3.1.3 Scenario 3 (Create N/T BE From O/S BE) ........................... 23
3.1.4 Scenario 4 (Create N/T BE From O/S BE) ........................... 24
3.1.5 Scenario 5 (Create N/T BE From O/S BE) ........................... 24
3.1.6 Scenario 6 (Create N/T BE From O/S BE) ........................... 25
3.1.7 Scenario 7 (Create N/T BE From O/S BE) ........................... 25
3.2 Create a New Boot Environment with a Flash Archive ................... 26
3.3 Create a New Boot Environment with SVM Mirrors ....................... 27
3.3.1 Scenario 1 (N/T BE w/ SVM Mirrors) ............................... 28
3.3.2 Scenario 2 (N/T BE w/ SVM Mirrors) ............................... 28
3.3.3 Scenario 3 (N/T BE w/ SVM Mirrors) ............................... 29
3.4 Create a New Boot Environment with VxVM Volumes ...................... 30
3.5 Confirm the Success of "lucreate" .................................... 31
4 PHASE 4: UPGRADE THE NEW BOOT ENVIRONMENT ................................ 32
4.1 Upgrading from Solaris-9 CDs ......................................... 33
4.2 Upgrading from Solaris-9 DVD ......................................... 38
4.3 Upgrading from a Jumpstart Image ..................................... 38
4.4 Upgrading from Flash Archive ......................................... 39
5 PHASE 5: MANAGE THE BOOT ENVIRONMENTS .................................... 41
5.1 Activate the Newly-Upgraded Boot Environment (and Reboot To It) ..... 41
5.2 Falling Back to the Previous Boot Environment ........................ 44
5.3 Check the BE_name of the Active Boot Environment ..................... 45
5.4 Display Status of Any or All Boot Environments ....................... 45
5.5 Compare Boot Environments ............................................ 46
5.6 Delete a Nonactive Boot Environment .................................. 46
5.7 Change the BE_name of a Boot Environment ............................. 47
5.8 View the File-System Configuration of a Boot Environment ............. 47
5.9 Mount and Unmount an Entire Boot Environment (BE) .................... 47
5.10 Live Upgrade "-o outfile" and "-l error_log" Options ................. 48
5.11 Force Synchronization Between Boot Environments ...................... 49
5.12 Live Upgrade Standard Configuration Files and Log Files .............. 49
5.12.1 Live Upgrade Configuration Files .................................. 50
5.12.2 Live Upgrade Log Files ............................................ 51
6 Troubleshooting Tips ...................................................... 51
6.1 Failure to Reboot into the new BE .................................... 51
6.1.1 diag-switch? set to "true" ....................................... 51
6.1.2 If at first you don't succeed, ... ................................ 52
PREFACE
This document is intended to be a from-scratch tutorial and quasi-cheatsheet
for the Solaris Live Upgrade procedures. If you have never used Live Upgrade,
this should be a good, quick place to gain a thorough grasp of the basics,
including enough command-line details for you to perform at least one major
variation of a Live Upgrade procedure. If you are already hands-on familiar
with Live Upgrade, you should be able to skim past the beginner's details to
the basic pointers you are looking for, though this document is not intended to
cover every detail of the Live Upgrade commands and options.
If this combination tutorial and cheatsheet has any particular advantage over
Sun's original documentation or anyone else's custom Live Upgrade
documentation, it lies in this document's meticulous thoroughness on the most
basic concepts and the most commonly used features of Live Upgrade.
Thus, you get thoroughness and important detail in a relatively short document.
The descriptions and steps here are written generically, not necessarily to
anybody's in-house technical standards for how to use Live Upgrade, though such
information can be added and included when it is available. This document,
precisely as written, cannot be used directly as a template for a workplan to
upgrade any system. In other words, I have not constructed it with blank lines
in various places for you to fill in the specific parameters of your specific
Live Upgrade task. Naturally, however, it can be useful as a set of guidelines
for helping to produce a workplan for a specific upgrade.
• Within each phase and as much as possible, present explanations and steps
in the (hopefully) most helpful and practical order, and with just the
right amount of detail, for ....
ALSO NOTE: As of Rev 1.0, not all the Live Upgrade procedures included in
this rev had been thoroughly tested. Some level of trust in Sun's original
documentation is implied. By Rev 1.3, a project team with which I was working
had followed these steps in general and found no notable flaws in them, except
the absence of any cautions about using ksh-93 (which now appear in a few
different places in Rev 1.4). In Phases 3 and 4, those commands that include
"example screen-output" represent the performance of an entire Live Upgrade
procedure, from fully-patched Solaris-8, 2/02, to Solaris-9, 9/04, on an
Ultra-10 with an 8.5-GB disk: 1-GB swap; 3-GB for Sol-8 (/ only); 4.5-GB for
Sol-9 (/ only).
INTRODUCTION
Sun's Live Upgrade is a procedure by which a system's Solaris OE can be
upgraded while the system and its services are fully up and running: you create
an alternate, nonactive boot environment (BE) on separate disk-slices, upgrade
that alternate BE, and then effect the system upgrade simply by rebooting the
system, which reduces the system's and services' downtime from several hours to
a few minutes.
This document presents the entire Live Upgrade routine in five major phases
(not an official Sun Microsystems delineation):
1. Planning
2. System Preparation
3. Creating a new Boot Environment
4. Upgrading the new Boot Environment
5. Switching to the new Boot Environment and
Managing All the Boot Environments on the System
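As a preview of where all five phases are headed, the whole routine reduces to a handful of commands. The sketch below only prints the sequence; the BE names, disk slice, and media path in it are illustrative placeholders, not values prescribed by this document.

```shell
# Preview only: the core Live Upgrade command sequence, printed rather than
# executed. All names and paths here are hypothetical placeholders.
lu_preview='lucreate  -c Sol8_BE -m /:/dev/dsk/c0t0d0s3:ufs -n Sol9_BE   # Phase 3: build the new BE
luupgrade -u -n Sol9_BE -s /cdrom/cdrom0/s0                              # Phase 4: upgrade it in place
luactivate Sol9_BE                                                       # Phase 5: mark it active
init 6                                                                   # reboot into the new BE'
printf '%s\n' "$lu_preview"
```

Phases 3 through 5 below cover each of these commands, and their many variations, in detail.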
A Few Preliminary Definitions
IF YOU ARE NEW TO LIVE UPGRADE, READ THESE FIRST!
The following definitions do not include all the definitions included in the
glossaries of Sun's official documentation related to Live Upgrade. Those
glossaries tend to include definitions for terms that you are expected to
know by now or that you can easily look up elsewhere, such as "Jumpstart" and
"Flash Archive". A few of the following definitions are unique to this
document; particularly "new/target boot environment (N/T BE)" and
"original/source boot environment (O/S BE)".
1 PHASE 1: PLANNING
Before you begin executing the actual Live-Upgrade commands, you must
investigate your system's level of preparation for the Live Upgrade and
determine what you need to do in order to prepare it. In a nutshell, these
preparations potentially involve three things: disk space and partitioning,
prerequisite software packages, and prerequisite patches.
During this planning phase, you determine precisely what needs to be done to
the system so that it has all the proper disk-space, with appropriately-sized
partitions (including a bootable partition for the new / file-system); so that
it has all the prerequisite software packages and patches for a successful Live
Upgrade procedure. Thus, this planning phase is, itself, a form of preparation
for Phase 2: System Preparation, during which you actually prepare the system
for the Live Upgrade. You can essentially combine Phase 1 and Phase 2, if you
prefer.
a) From which Solaris version are you upgrading; to which Solaris version
are you upgrading?
b) Are you planning to add any new software, besides the Solaris upgrade
itself, to the file-systems that contain the upgraded OS/OE image? How
much more space will these require?
(NOTE: These might include certain software packages and patches that
are explicit prerequisites for Live Upgrade, which are mentioned
directly, later in this phase.)
partitions standpoints— also with this in mind. (See sub-subsection
1.2.1 for point-by-point considerations for this issue.)
e) Will your system be able to use at least one of your disks and
partitions, for the new boot environment (BE), as a bootable / file-
system partition? If not, what can you do about this?
f) Make sure you have an installable copy of the Solaris version to which
you are upgrading.
• If you plan to upgrade from CDs or DVD, do you have that media?
g) Order any hard-disks that need to be installed to give you enough space
for the Live Upgrade.
h) Design the partitioning scheme(s) for the disk(s) on which you intend
to place the file systems for the new BE that you will be building and
upgrading.
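Much of item h) starts with knowing how the candidate disks are laid out today. A minimal sketch, assuming a hypothetical disk c0t0d0 (s2 being, by convention, the whole-disk slice); prtvtoc is Solaris-only, so the function degrades to a notice on other hosts:

```shell
# Inspect the current slice layout of a candidate disk before designing the
# partitioning scheme for the N/T BE. The disk name is a placeholder.
show_vtoc() {
    disk=${1:?usage: show_vtoc /dev/rdsk/c#t#d#s2}
    if command -v prtvtoc >/dev/null 2>&1; then
        prtvtoc "$disk"                   # current partition table
    else
        echo "SKIP: prtvtoc $disk (not a Solaris host)"
    fi
}
show_vtoc /dev/rdsk/c0t0d0s2
```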
1.2.1 Integrate SVM with the Building of your N/T BE
If you want to use SVM in your N/T BE and to integrate the use of SVM volumes
into the Live Upgrade (LU) creation of your N/T BE (the lucreate command) then
you need to be aware of the following limitations:
this is being recommended, read further in this subsection. Otherwise, I
consider that the remainder of this subsection is not in keeping with an
elegant-but-thorough beginner's tutorial and cheatsheet format, except when and
where it is clear that it is actually needed. I include this only just in case
you encounter some LU problems and cannot think of what else might be the
problem. Up to the time of writing this paragraph, I have been unable to
identify any clear and unambiguous series of steps to follow in order to address
this issue proactively. My best recommendation, at this time, is simply to
apply the recommended patch-cluster for your old/source boot environment (O/S
BE), and assume, unless you encounter problems, that (a) you actually have the
J2RE installed and (b) the recommended patch-cluster has covered the J2RE
sufficiently for your LU to work.
------------------------
$ pkginfo -l SUNWjvrt
PKGINST: SUNWjvrt
NAME: JavaVM run time environment
CATEGORY: system
ARCH: sparc
VERSION: 1.1.8,REV=2001.05.24.14.35
BASEDIR: /
VENDOR: Sun Microsystems, Inc.
DESC: JavaVM run time environment, includes java, appletviewer,
and classes.zip
PSTAMP: sola010524133607
INSTDATE: May 05 2005 13:19
HOTLINE: Please contact your local service provider
STATUS: completely installed
FILES: 66 installed pathnames
6 shared pathnames
8 directories
19 executables
21743 blocks used (approx)
You might think that "VERSION: 1.1.8" settles the question, but it does not.
A quick search through sunsolve.sun.com reveals that the "Java 2 Runtime
Environment" has version-numbers attached to it, of the format 1.x.y[.z], such
as 1.3.0 and 1.3.0.01. So, the above might actually mean "Java 2 RE, version
1.1.8".
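Whatever that version string finally means, extracting it from `pkginfo -l` output is easy to script. A small sketch, fed here with a fragment of the sample output above rather than a live pkginfo run (on a real system you would pipe `pkginfo -l SUNWjvrt` into the function):

```shell
# Pull the VERSION field out of pkginfo -l style output.
pkg_version() { awk -F': *' '$1 ~ /VERSION/ {print $2}'; }

# Sample lines captured from the pkginfo -l SUNWjvrt output shown earlier:
sample='   PKGINST:  SUNWjvrt
   VERSION:  1.1.8,REV=2001.05.24.14.35'
printf '%s\n' "$sample" | pkg_version
```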
Furthermore, the following URL points to a web-page that displays (as of
Monday:2005/May/09) "14 Results found for 'Solaris 2.6 J2RE packages'":
http://onesearch.sun.com/search/onesearch/index.jsp?qt=Solaris%202.6%20J2RE%20packages
Most of these 14 links point to pages with installation instructions for one
release or another of the Java-2 Runtime Environment on multiple versions of
Solaris from 2.6 and upward, including further links to pages with patch
information. Unfortunately, none of these pages makes any reference at all to
package names. Also, extensive searches for "SUNW" package-names related to
J2RE, on docs.sun.com and sunsolve.sun.com, turned up nothing.
Further still, the text file at
http://java.sun.com/products/archive/j2se/1.3.0_05/README.sparc says " The Java
2 SDK is available either as a set of Solaris packages or as a self-extracting
binary; the JRE is available as a self-extracting binary." The obvious
implication here is that the J2RE is available only in the form of a "self-
extracting binary". This seems to imply that J2RE might not go onto the system
as one or more actual "packages" in the usual Sun-software sense. The above
pkginfo example seems to contradict this but, again, it also indicates
absolutely no distinction between Java 1 and Java 2.
If you are reading this because you have encountered problems, with your Live
Upgrade (LU), and you are suspecting this J2RE patch issue, here is my
recommendation: Go to the above-mentioned "onesearch.sun.com" URL and start
looking through the 14 links.
• Notice the files that these pages indicate get installed when you follow
the instructions for installing the J2RE. Look for these files on your
system.
• If you find any of these files, run the strings command on those that are
not ASCII files, to see whether any embedded ASCII strings indicate
anything about "Java 2".
• Search sunsolve.sun.com for any strings related to those files and their
file-names, adding the phrase "live upgrade", and look for any bug
reports and/or patches that might shed some light on your issue.
• Either of the following two URLs might also be of some small help:
http://java.sun.com/j2se/1.3/install-solaris-re.html
http://java.sun.com/j2se/1.3/install-solaris-patches.html
(J2SE stands for Java-2 Standard Edition)
If you are planning to Live Upgrade to Solaris 8, you probably should search
docs.sun.com and/or sunsolve.sun.com for similar package prerequisites for such
an upgrade.
At this point, you should determine whether or not the relevant packages are
already installed on your O/S BE. If not, either download them or otherwise
make sure that they are available to you for installation during the appropriate
step of Phase 2 (2.3.2).
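A quick way to take that inventory is to loop over the candidate package names with `pkginfo -q`, which exits zero exactly when the package is installed. The package names below are illustrative guesses, not the authoritative prerequisite list; take the real list from the installation guide for your target release:

```shell
# Report which candidate prerequisite packages are present on the O/S BE.
# Degrades to SKIP lines where pkginfo does not exist (non-Solaris hosts).
check_prereqs() {
    for pkg in "$@"; do
        if command -v pkginfo >/dev/null 2>&1; then
            if pkginfo -q "$pkg"; then echo "OK      $pkg"; else echo "MISSING $pkg"; fi
        else
            echo "SKIP    $pkg (pkginfo not available; not a Solaris host)"
        fi
    done
}
check_prereqs SUNWadmap SUNWadmc SUNWlibC SUNWbzip   # illustrative names only
```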
S7 FCS sparc 111666-01 or higher bzcat patch
S7 FCS sparc 112590-01 or higher fgrep patch
See Info Doc 72099 for info about patches for upgrading from other versions
and/or platforms of Solaris.
NOTE: This Info Doc, #72099, is accessible only to persons who both (a) have
registered a login account on sunsolve.sun.com and (b) have a legitimate Sun
Service # that they can attach to this account. If you ever want to check
the most-recent version of this info doc and if you do not have both of these
things, you will need to try some other channel by which you can obtain this
information.
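Assuming you can reach the patch list through some channel, checking whether a given patch is already applied is mechanical: `showrev -p` lists the applied patches. A guarded sketch, using patch 111666 from the table above:

```shell
# Is a given patch already applied? showrev -p is Solaris-only, so the
# function degrades to a notice elsewhere.
patch_applied() {
    rev=${1:?usage: patch_applied <patch-id>}
    if command -v showrev >/dev/null 2>&1; then
        showrev -p | grep "Patch: $rev" || echo "$rev not applied"
    else
        echo "SKIP: cannot check $rev (showrev not available)"
    fi
}
patch_applied 111666
```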
then you might want to double-check that these two packages actually are
included. Otherwise, you need do nothing else for this issue at this point.
2 PHASE 2: SYSTEM PREPARATION
Through the Planning Phase for what you need for a successful Live Upgrade
procedure, you probably discovered that you need to install and/or configure a
few things, perhaps including disks, packages, and patches. In this phase, you
actually perform these pre-Live-Upgrade tasks, before actually creating and then
upgrading your new boot environment (N/T BE).
2.2.1 Partition the hard-disks.
It is assumed that the hard-disks on which you intend to place your
new/target boot environment (N/T BE) are not necessarily partitioned precisely
as needed to contain the file-systems for your N/T BE. Partition them at this
time. (It is assumed that you already know how to do this or can look it up
elsewhere.)
NOTE: It is not absolutely necessary that you install both the J2RE packages
and patches before you install any of the others that you have determined
need to be installed, except where the documentation indicates certain
package-dependencies and certain patch-dependencies or incompatibilities.
Outside of those possible issues, you can install them before or after or in
between any other packages and patches that need to be added. These Java-2
RE items are mentioned here, by themselves, only because Sun mentions them in
a special note on page 396 of the "Solaris 9 9/04 Installation Guide".
2.3.2 Install all Live-Upgrade Prerequisite Packages
As mentioned in sub-subsection 1.3.2 of the Planning Phase, some packages are
required to be installed to your old/source boot-environment (O/S BE) before
performing the Live Upgrade.
By now, you should have determined which of these packages are already
installed and which are not. You should have downloaded those
that need to be installed.
Install them now.
(Even if you believe that no old Live-Upgrade packages remain on your system's
old/source boot-environment (O/S BE), you might as well run the pkgrm command
anyway, just to be sure: if they are there, they will be removed; if not, you
will get harmless error-messages.)
If you are installing from the 2-CD media, these two packages are located on
"CD 2 of 2". On the Solaris-9 CD, the subdirectory-path should be
<cd_mount_point>/Solaris_9/EA/products/Live_Upgrade_2.0/sparc/Packages.
(It is assumed that you already know the general command techniques for
installing packages from CD or DVD or other installation-sources for the Solaris
OE. There is nothing out of the ordinary about the installation of these two
packages.)
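The removal of the old LU packages and the installation of the new pair amount to a remove-then-add sequence. A minimal sketch as a shell function; the `-n` flags make pkgrm/pkgadd non-interactive, and the media path in the usage comment is the CD 2 location quoted above (it differs for other media):

```shell
# Refresh the two Live-Upgrade packages: drop any old pair, then add the
# pair that matches the *target* release. SUNWlur must go on before SUNWluu.
lu_pkg_refresh() {
    media=${1:?usage: lu_pkg_refresh <package-directory>}
    pkgrm -n SUNWluu SUNWlur 2>/dev/null   # old packages, if any (errors harmless)
    pkgadd -n -d "$media" SUNWlur SUNWluu  # new packages from the target media
}
# e.g. (path as quoted above; adjust for DVD or a network image):
# lu_pkg_refresh /cdrom/cdrom0/Solaris_9/EA/products/Live_Upgrade_2.0/sparc/Packages
```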
3 PHASE 3: CREATE THE NEW BOOT ENVIRONMENT (BE)
This phase is centered around the execution of the lucreate command.
Except for the final subsection below, 3.5, the following 3.x subsections are
mutually exclusive, not sequential. The first, 3.1, presents variations on
creating a new/target boot environment (N/T BE) directly from an old/source boot
environment (O/S BE) already on the system. The second, 3.2, presents the
creation and populating of an N/T BE from a Solaris Flash Archive. The third,
3.3, presents steps for integrating the creation and configuration of SVM
(Solaris Volume Manager) mirrored volumes with the N/T BE creation phase (the
lucreate command). The fourth, 3.4, briefly discusses considerations for using
VxVM (Veritas Volume Manager) volumes on your N/T BE. Subsection 3.5 provides
some brief techniques for confirming that your lucreate routine has completed
successfully.
-m -:c#t#d#s#:swap \ ◄▬ #
[-m -:shared:swap] ◄▬ ##
** -- The "-n <N/T BE name>" must be used for assigning a BE_name to the
new/target boot environment (N/T BE). (Example: -n Sol9_BE )
! -- Each and every new critical file-system, that you intend to exist in
the N/T BE, requires that you use the "-m <fs_mtpt>:c#t#d#s#:<fs_type>"
switch and options to specify a mount point (<fs_mtpt>), a device-to-be-
mounted (/dev/dsk/c#t#d#s#, or, simply c#t#d#s#), and a file-system type
(<fs_type>). So, if your original root (/) includes /opt, /usr, and /var
simply as large directory-trees within that one file-system but you want
/opt, /usr, and /var to be separate file-systems in the N/T BE then you
must include a separate "-m" switch and set of options for root (/), for
/opt, for /usr, and for /var (this is called "splitting file-systems").
(Example: -m /usr:/dev/dsk/c1t2d3s4:ufs or -m /usr:c1t2d3s4:ufs )
!! -- If you have two or more file-systems, in your O/S BE, that you want to
be only one file-system in the N/T BE, you must use the "merged" option
(instead of a /dev/dsk/c#t#d#s# device-name) with the "-m" switch. For
example: If your O/S BE has /, /usr, and /opt as separate file-systems but
you want /usr and /opt simply to be part of the / file-system in the N/T
BE, you would specify a "-m" switch with the "merged" option for each one,
/usr and /opt. (An example is shown further below.)
(Example: -m /opt:merged:ufs )
# -- The "-m -:c#t#d#s#:swap" switch and options are for specifying a new
slice to be used for swap-space for the N/T BE. If you want multiple
swap-slices for the N/T BE then you will include a separate "-m -
:c#t#d#s#:swap" combination for each such slice. If you include no
references at all to specific swap-slices in the lucreate command then the
command assumes that you will be using the exact same swap-space for the
N/T BE as you have been using for the O/S BE.
(Example: -m -:c4t3d2s1:swap )
## -- If you want to use, for your N/T BE, a combination of new swap-slices
plus the old swap-space that you have been using for your O/S BE, you will
include at least one of the "-m -:c#t#d#s#:swap" combinations described
above but also exactly one "-m -:shared:swap" combination, which refers
to all the swap-space being used by your O/S BE. You never need to use
this "-m -:shared:swap" combination if you are not intending to use both
the old swap-space and some newly-created swap-slices for the N/T BE.
(Example: always the exact same content: -m -:shared:swap )
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)
We begin with a scenario that is about as simple as possible. In the
old/source boot environment (O/S BE), we have only a / (root) file-system, which
contains all the other major directory-trees needed for SunOS/Solaris, such as
/opt, /var, and /usr. We want the same file-system configuration in the
new/target boot environment (N/T BE). So, we need only one "-m" option. This
will be the first Live Upgrade on this system, which means that we need to
specify a BE_name for the O/S BE. We'll also add a comment with "-A". The
actual screen-output, from this command, follows immediately below it.
(Go to subsection 3.5 for suggested ways for how to confirm that your
lucreate command completed successfully.)
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)
(Go to subsection 3.5 for suggested ways for how to confirm that your
lucreate command completed successfully.)
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)
(Go to subsection 3.5 for suggested ways for how to confirm that your
lucreate command completed successfully.)
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)
3.1.4 Scenario 4 (Create N/T BE From O/S BE)
In this scenario, /usr, /var, and /opt are all separate file-systems in the
O/S BE but you want them all to be merged as major directory-trees within the /
(root) file-system in the N/T BE; so, you use the "merged" option three times.
You also want, this time, to use a combination of the original swap-space and
some newly-defined swap-slices for the N/T BE. This time, you do not try to
apply a custom BE_name to the O/S BE (either because you don't care or because
you have previously performed a Live Upgrade on this system, which means that it
already has an official BE_name). You also do not bother with a comment this
time. (Extra space, after the "-m", is simply for visual clarity.)
# lucreate \
> -m /:c1t2d0s1:ufs \ ◄▬[specify new slice for new /]
> -m /usr:merged:ufs \ ◄▬[/usr to be merged under / fs]
> -m /var:merged:ufs \ ◄▬[/var to be merged under / fs]
> -m /opt:merged:ufs \ ◄▬[/opt to be merged under / fs]
> -m -:shared:swap \ ◄▬[says to use old swap for N/T BE]
> -m -:c1t2d0s0:swap \ ◄▬[specify also a new slice for swap]
> -n Sol9_BE ◄▬[BE_name for the N/T BE]
[screen output omitted ... see 3.1.1]
(Go to subsection 3.5 for suggested ways for how to confirm that your
lucreate command completed successfully.)
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)
# lucreate \
> -m /:c1t2d0s1:ufs \ ◄▬[specify new slice for new /]
> -m /usr:merged:ufs \ ◄▬[/usr to be merged under / fs]
> -m /var:merged:ufs \ ◄▬[/var to be merged under / fs]
> -m /usr/opt:merged:ufs \ ◄▬[/opt to be merged under /usr fs]
> -m -:shared:swap \ ◄▬[says to use old swap for N/T BE]
> -m -:c1t2d0s0:swap \ ◄▬[specify also a new slice for swap]
> -n Sol9_BE ◄▬[BE_name for the N/T BE]
[screen output omitted ... see 3.1.1]
(Go to subsection 3.5 for suggested ways for how to confirm that your
lucreate command completed successfully.)
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)
3.1.6 Scenario 6 (Create N/T BE From O/S BE)
This scenario is substantially different from the previous ones. In each of
the previous scenarios, you did not explicitly specify the old/source boot
environment (O/S BE) and, because of this, the lucreate command simply assumed
that you wanted the O/S BE to be the currently active boot environment (ABE).
In this scenario however, you are performing Live Upgrade on a system that
already contains two or more BEs and you want your actual O/S BE, for the
lucreate command, to be a BE other than the current ABE. This means that you
must use the lucreate "-s" switch and option to specify the O/S BE that you want
to use. For this variation on the "-s" switch, your option must be the BE_name
for the O/S BE; device-names do not work here. Except for the "-s" switch and
option, this example is based on the example from Scenario 1 (though with no
"-c" here, because this is not the first time a Live Upgrade has been performed
on this system).
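Under those assumptions, the whole scenario collapses to one command. A sketch with placeholder BE names and slice, guarded so that it merely echoes the command where lucreate does not exist:

```shell
# Scenario 6 sketch: the only new element is "-s", naming a non-active BE
# as the source. Sol8_BE, Sol9_BE, and the slice are placeholders.
cmd='lucreate -s Sol8_BE -m /:/dev/dsk/c0t0d0s3:ufs -n Sol9_BE'
if command -v lucreate >/dev/null 2>&1; then
    $cmd
else
    echo "would run: $cmd"
fi
```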
(Go to subsection 3.5 for suggested ways for how to confirm that your
lucreate command completed successfully.)
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)
being used by the active boot environment (ABE), and one copy, which is
available for the lucreate routine.
But why would you have two separate and identical copies of /opt like that?
The answer, to that question, is for you to figure out. One example is that you
might have decided to create a separate copy of /opt (perhaps using a special
syntax with the tar command or with the cpio command), sometime prior to your
Live Upgrade routine, simply because you had an opportunity to make that copy at
an earlier time, in order to save yourself some time, later on, when you were to
run the lucreate routine. Whatever the reason(s) for which you might have any
such replications of your critical file-systems, the preserve option can be used
as described.
(The preserve option will probably make more sense, relative to real-life
situations, when presented again in sub-subsection 3.3.2, as an option for
moving a submirror from the O/S BE to the N/T BE.)
# lucreate \
> -m /:c1t2d0s1:ufs \ ◄▬[specify new slice for new /]
> -m /opt:c2t3d1s3:preserve,ufs \ ◄▬[/opt to be preserved]
> -n Sol9_BE ◄▬[BE_name for the N/T BE]
[screen output omitted ... see 3.1.1]
A simple variation of this example is where the /opt FS is replicated on an
SVM (Solaris Volume Manager, formerly Solstice DiskSuite or SDS) mirrored
volume, and you "preserve" the entire contents of that mirror for the /opt FS
in the N/T BE (more about SVM RAID in subsection 3.3):
# lucreate \
> -m /:c1t2d0s1:ufs \ ◄▬[specify new slice for new /]
> -m /opt:d40:preserve,ufs \ ◄▬[/opt to be preserved]
> -n Sol9_BE ◄▬[BE_name for the N/T BE]
[screen output omitted ... see 3.1.1]
(Go to subsection 3.5 for suggested ways for how to confirm that your
lucreate command completed successfully.)
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)
than directly from the file systems of a BE on the system being upgraded. The
Flash Archive might have been created from the system being upgraded or from
another system. The Flash Archive might represent an actual SunOS/Solaris
upgrade for the system or might simply be an alternate BE for the system.
The essence of the technique, for using a Flash Archive with Live Upgrade
(LU), is (a) to create empty disk-space for the N/T BE, with the lucreate
command, and then (b) to populate that disk-space from a Flash Archive, using
the luupgrade(1M) command. This means that most of the work is done in Phase 4
(subsection 4.4).
This section naturally covers only the lucreate command. The following
syntax example should be sufficient for you to construct a successful command-
line to create your N/T BE.
(Go to subsection 3.5 for suggested ways for how to confirm that your
lucreate command completed successfully.)
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)
NOTE: If you are building new file-systems on new SVM mirrored volumes
(lucreate) from old file-systems that are also SVM mirrored volumes, you
should use the metastat(1M) command first, to make sure that the existing
mirrors and submirrors, from which you want to build the new, do not need
maintenance and are not busy, else your lucreate command will fail. The
lucreate command will check to see if an old/source mirrored device (used
in the command) is resyncing and will generate an error message if it
finds that this is so. (Resyncing is the copying of the contents of one
submirror to another, within a single volume, after either a submirror
failure or a system failure or the onlining or addition of a submirror to
the volume.)
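The metastat check described in the note above can be wrapped into a one-shot guard, a sketch to run immediately before lucreate:

```shell
# Confirm no SVM mirror is resyncing or flagged for maintenance before
# lucreate runs. metastat is Solaris-only; the sketch degrades elsewhere.
svm_quiet() {
    if ! command -v metastat >/dev/null 2>&1; then
        echo "SKIP: metastat not available (not a Solaris host)"
    elif metastat | grep -Ei 'resync|maintenance'; then
        echo "NOT READY: finish or repair the mirrors above before lucreate"
    else
        echo "mirrors look quiet"
    fi
}
svm_quiet
```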
NOTE: Page 403 of the "Solaris 9 9/04 Administration Guide" says: "Use
the lucreate command rather than Solaris Volume Manager commands to
manipulate volumes on inactive boot environments. The Solaris Volume
Manager software has no knowledge of boot environments, whereas the
lucreate command contains checks that prevent you from inadvertently
destroying a boot environment. For example, lucreate prevents you from
overwriting or deleting a Solaris Volume Manager volume." But it further
states that you must use SVM itself to manage and manipulate complex
RAID devices that you have already created with SVM; the implication is
that these would be part of your old/source boot environment (O/S BE), not
part of the new/target boot environment (N/T BE) that you will create with
lucreate.
3.3.1 Scenario 1 (N/T BE w/ SVM Mirrors)
We begin with probably the simplest scenario: you want to create a new
/ (root) file-system mirror while creating the new/target boot environment
(N/T BE); you want to copy the contents of the / file-system from the old/source
boot environment (O/S BE), which is a typical thing to do; and it does not
matter whether the O/S BE's / file-system was a RAID device of any sort.
We begin with these parameters:
* --The "-s <O/S_BE_name>" option is needed only when the source for copying
the contents into the N/T BE is not the active boot environment (ABE).
(Go to subsection 3.5 for suggested ways for how to confirm that your
lucreate command completed successfully.)
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)
with a mirrored volume in the O/S BE; we create a corresponding mirrored volume
in the N/T BE; and that new mirrored volume's file-system gets populated by
transferring a submirror, with all its contents intact, from the old mirror to
the new. This operation is somewhat like the story of how Eve was created from
one of Adam's ribs: the "rib" is the submirror that gets transferred from the
old to the new.
# lucreate -n <N/T_BE_name> \
-m <fs_name>:<d##>:mirror,ufs \
-m <fs_name>:<c#t#d#s#>,<d##>:detach,attach,preserve \
-m <fs_name>:<c#t#d#s#>,<d##>:attach \
-m <fs_name>:<c#t#d#s#>,<d##>:attach
# lucreate -n Sol9_BE \
-m /:c1t1d1s1,d30:mirror,ufs \ ◄▬[d30 created now from c1t1d1s1]
-m /:c3t3d3s3,d31:detach,attach,preserve \ ◄▬[d31 created now]
-m /:c5t5d5s5,d32:attach \ ◄▬[d32 created now from c5t5d5s5]
-m /:c7t7d7s7,d33:attach ◄▬[d33 created now from c7t7d7s7]
[screen output omitted ... see 3.1.1]
(Go to subsection 3.5 for suggested ways for how to confirm that your
lucreate command completed successfully.)
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)
This scenario is a variation on the previous one, differing only in that we
create two SVM mirrored volumes with the lucreate command instead of only one.
Hopefully, it will be obvious that more than two can be created in a single
lucreate routine, simply by extending this example:
# lucreate -n Sol9_BE \
-m /:d30:mirror,ufs \ ◄▬[d30 already exists]
-m /:c1t1d1s1,d31:detach,attach,preserve \ ◄▬[d31 created now]
-m /:c3t3d3s3,d32:attach \ ◄▬[d32 created now from c3t3d3s3]
-m /:c5t5d5s5,d33:attach \ ◄▬[d33 created now from c5t5d5s5]
-m /opt:c2t3d4s5,d25:ufs,mirror \ ◄▬[d25 created now from c2t3d4s5]
-m /opt:c2t4d5s6,d26:attach \ ◄▬[d26 created now from c2t4d5s6]
-m /opt:c2t5d6s7,d27:attach ◄▬[d27 created now from c2t5d6s7]
[screen output omitted ... see 3.1.1]
(Go to subsection 3.5 for suggested ways for how to confirm that your
lucreate command completed successfully.)
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)
a) Determine the fewest file-systems that must be included in your
lucreate routine.
b) Execute the lucreate routine; upgrade the N/T BE; activate the N/T
BE; reboot to it.
e) Temporarily mount other file-systems, from the O/S BE, that you want
to be copied over onto the newly-created VxVM volumes; perform the
copy (tar or cpio or your preferred technique); umount those
temporary mounts.
More and better instructions, for using VxVM with Live Upgrade, will be added
to this document as they become available or otherwise discovered.
4 PHASE 4: UPGRADE THE
NEW BOOT ENVIRONMENT
!!!***NOTE***!!!: If you will be running the "lucreate" or "luupgrade"
commands from the Korn shell, you **MUST** be using Sun's original copy of
"ksh" --version "88i"-- that comes with SunOS/Solaris!!! Some people
install an upgraded "ksh" --typically version 93e-- which has some known
bugs and, even otherwise, does not work well with some of Sun's routines.
To determine which version you are running, ....
a) # echo $0 <--to determine that you are running "ksh"
b) # <Esc> <--i.e., simply press the <Esc> key
c) # ^v <--i.e., press the <Ctrl>/<v> key-combination
Version M-11/16/88i <--Sun's original on a Solaris-9 system
Version 11/16/88i <--Sun's original on a Solaris-5.5.1 system
Version M-12/28/93e <--Add-on copy of ksh-93
Among the problems that you might encounter, if you try to run Sun's Live-
Upgrade or Flash-Archive routines with ksh-93, are (a) a 2-to-4 GB limit
when attempting either to create a new boot-environment with "lucreate" or
to upgrade an alternate boot-environment with "luupgrade", and (b) a 2-to-4
GB file-size limit when creating a Flash Archive file. !!!!!!!!!!!!!!!!!!
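A non-interactive alternative to the <Esc>/<Ctrl-v> check above (an illustrative test, not a Sun-documented procedure): ksh88 does not define the KSH_VERSION variable, while ksh93 and its derivatives do, so its absence is itself a hint that you are on the original ksh88:

```shell
# If KSH_VERSION is set, this is a ksh93-family shell; if it is empty,
# you are probably on the original ksh88 (or not in ksh at all).
if [ -n "$KSH_VERSION" ]; then
  echo "ksh93-family: $KSH_VERSION"
else
  echo "probably ksh88 (or not ksh at all)"
fi
```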
This phase centers on the execution of the luupgrade command. The
entire point of this phase is to apply the actual OS/OE upgrade to the
new/target boot environment (N/T BE) that you just created with the lucreate
command in Phase 3. Hopefully you understand by now but, just in case: the
luupgrade command does not affect the O/S BE in any way; it affects only the N/T
BE.
Some of the following 4.x subsections are mutually exclusive: they are not
intended to be executed one right after another but, rather, one instead of
another. Notes will be included, at the end of each subsection, to point to any
other subsections to which you should go from there.
1. The "-u" switch tells luupgrade to perform a basic OS-upgrade from some
media-source:
# luupgrade -u -n <BE_name> -s <install_source> [-N]
The "-N" switch allows you to perform a so-called "dry run", to check for the correctness of
your command-syntax and for the viability of actually accomplishing this particular upgrade,
but without actually affecting anything.
2. The "-i" switch is mostly used to continue the luupgrade step when upgrading
the new BE from multiple Solaris-install CDs. Whenever you are using the
luupgrade command to perform the upgrade from the typical 2 CDs for Solaris
installation, this "-i" switch will always be used to continue the upgrade
from the second CD and any more beyond that.
# luupgrade -i -n <BE_name> -s <install_source> \
[ -O "-nodisplay -noconsole" ] [-N]
The " -O '-nodisplay -noconsole' " are options, related to installer(1M), that prevent
the installer-GUI from displaying unnecessarily when finishing an OS/OE-upgrade from the
second CD of the Solaris install-media.
The "-N" switch allows you to perform a so-called "dry run", to check for the correctness of
your command-syntax and for the viability of actually accomplishing this particular upgrade,
but without actually affecting anything.
3. The "-f" switch tells luupgrade to install an OS/OE, from scratch, from a
Flash Archive:
# luupgrade -f -n <BE_name> -s <install_source> -a <FLAR_path> [-N]
The "-N" switch allows you to perform a so-called "dry run", to check for the correctness of
your command-syntax and for the viability of actually accomplishing this particular upgrade,
but without actually affecting anything.
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)
1. If the vold daemon is running then the CDs will mount automatically. If it
is not running, you might want to start it, so that it can run for the
duration of this procedure, to simplify things. Simply insert the Solaris-9
CD "1 of 2" into the drive and wait a few seconds for it to mount.
# cd /cdrom/cdrom0
# ls -al <--[If you see directory "s0", mount succeeded.]
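A quick way to check whether vold is running before inserting the CD (an illustrative check; pgrep(1) is used here, and the messages are our own):

```shell
# If vold is alive, CDs mount automatically under /cdrom; otherwise you
# will have to start vold, or mount the CD by hand.
if pgrep vold >/dev/null 2>&1; then
  echo "vold is running; the CD should mount automatically"
else
  echo "vold is not running; start it, or mount the CD by hand"
fi
```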
3. If necessary, remind yourself of the BE_name of your N/T BE:
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
------------------ -------- ------ --------- ------ --------
Sol8_BE yes yes yes no -
Sol9_BE yes no no yes -
#
● Note all the INFORMATION: and WARNING: items in the screen output.
● All the references, to log-files underneath the /var/sadm/system subdirectory, are meant to
refer to the N/T BE, not the O/S BE or ABE (active).
● At this point, all "packages to be added" should simply be those that must be installed from
the Solaris installation CD, "2 of 2", which you will do further below.
● This entire procedure could take approximately one hour.
● The "-u" pass calculates whether the target disks have the capacity to hold all the files to
be installed.
● Remove the CD if it does not automatically eject when this command is finished.
5. Optional: If it seems to be needed, mount the N/T BE and view the log files
mentioned in the INFORMATION: and WARNING: items from the screen output of
the "luupgrade -u" command:
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
------------------ -------- ------ --------- ------ --------
Sol8_BE yes yes yes no -
Sol9_BE yes no no yes -
# lumount Sol9_BE
# cd /.alt.Sol9_BE
# cd ./var/sadm/system
# pwd
/.alt.Sol9_BE/var/sadm/system
# ls -al
total 10
drwxr-xr-x 5 root sys 512 May 13 15:23 .
drwxr-xr-x 10 root sys 512 May 25 11:11 ..
drwxr-xr-x 3 root sys 512 May 25 11:33 admin
drwxr-xr-x 3 root sys 512 May 25 11:33 data
drwxr-xr-x 2 root sys 512 May 25 10:34 logs
# cd data
# ls -al
total 412
drwxr-xr-x 3 root sys 512 May 25 11:33 .
drwxr-xr-x 5 root sys 512 May 13 15:23 ..
drwxr-xr-x 2 root other 512 May 25 11:33 .virtualpkgs2
-rw-r--r-- 1 root sys 32 May 25 11:33 locales_installed
-rw-r--r-- 1 root other 173235 May 25 11:33 packages_to_be_added
lrwxrwxrwx 1 root other 26 May 25 11:33 upgrade_cleanup ->
upgrade_cleanup_2005_05_25
-rw-r--r-- 1 root sys 16239 May 25 11:24
upgrade_cleanup_2005_05_25
-r--r--r-- 1 root other 10 May 25 11:07
upgrade_failed_pkgadds
# cat upgrade_failed_pkgadds
SUNWxwopt
# pkginfo -l SUNWxwopt ◄▬[Running this command based on assumption
that there might already be a copy of this
failed package installed on your ABE.]
PKGINST: SUNWxwopt
NAME: nonessential MIT core clients and server extensions
CATEGORY: system
ARCH: sparc
VERSION: 6.4.1.3800,REV=0.1999.12.15
BASEDIR: /usr
VENDOR: Sun Microsystems, Inc.
DESC: nonessential MIT core clients and server extensions
PSTAMP: stomp19991215153354
INSTDATE: May 13 2005 22:16
HOTLINE: Please contact your local service provider
STATUS: completely installed
FILES: 80 installed pathnames
6 shared pathnames
8 directories
54 executables
3036 blocks used (approx)
# cd ../logs
# more upgrade_log
[output omitted]
# cd /
# luumount Sol9_BE
# ls -al /.alt.Sol9_BE
/.alt.Sol9_BE: No such file or directory
# cd /cdrom/cdrom0
# ls -al <--[If you see a Solaris_# directory, mount succeeded.]
Unmounting BE <Sol9_04Sep_JRA>.
The installer run on boot environment <Sol9_BE> is complete.
● Because of the "-O ’-nodisplay -noconsole’", the usual GUI-display will not appear
on the screen for the installation/upgrade from this second CD. This is normal practice,
because there is typically nothing to answer or specify for the second CD. This second CD
could take approximately 30 minutes to complete.
● The "-i" switch must be used to install more software from any CDs after the first one.
● Note all the INFORMATION: and WARNING: items in the screen output.
● All the references, to log-files underneath the /var/sadm/system subdirectory, are meant to
refer to the N/T BE, not the O/S BE or ABE (active).
8. After an appropriate interval, check to see if the upgrade has completed:
# lustatus Sol9_BE
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
------------------ -------- ------ --------- ------ --------
Sol8_BE yes yes yes no -
Sol9_BE yes no no yes -
9. Optional: If it seems to be needed, mount the N/T BE and view the log files
mentioned in the INFORMATION: and WARNING: items from the screen output of
the "luupgrade -i" command. (See Step 5 for example commands and output.)
10. Apply the latest "recommended patch-cluster" for Solaris-9 to the N/T BE.
(You presumably downloaded this patch cluster to your desired directory, such
as /tmp or /var/tmp, during Phase 1 of this Live Upgrade procedure.)
# cd /var/tmp/patches
# unzip 9_Recommended.zip
● During the running of this command, the output of "df -k" shows that the appropriate file-
systems, for the N/T BE, have been temporarily mounted to temporary directory /a.
● During the running of this command, lustatus output will not reveal any activity or special
status for the N/T BE that is being patched:
# lustatus Sol9_BE
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
------------------ -------- ------ --------- ------ --------
Sol8_BE yes yes yes no -
Sol9_BE yes no no yes -
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with any of these commands.)
4.2 Upgrading from Solaris-9 DVD
(NOTE: At this time, it is assumed that DVD-based installation is similar
enough to CD-based installation and that the audience for this document is
experienced and intelligent enough to figure out the differences, so that
a detailed explanation for DVD-based installation is not necessary after
subsection 4.1. If any reader disagrees on this point and has any
important details to provide, they are welcome to contact me.)
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with any of these commands.)
With a CD-based "luupgrade", you must first use "luupgrade -u", for the
first CD as installation-source; then you must use "luupgrade -i", for the
second CD as installation-source; then "luupgrade -i" again for each one
of any further CDs after that.
By contrast, a Jumpstart image typically contains all the packages,
within a single "Product" subdirectory, that normally require 2 CDs to
contain. So, you should need only the "luupgrade -u" command, unless you
are also luupgrade'ing with some supplementary software that is not
included in the SUNWXall installation-cluster (aka the "Entire
Distribution Plus OEM Support").
The option, for the "-s" switch, must be the grandparent directory of
the "Product" subdirectory in a Jumpstart configuration.
For example: If you have installed your Jumpstart configuration to a
directory call "/export/js9" and if the full path to the "Product"
subdirectory is ....
/export/js9/Solaris_9/Product
then the option, for the "-s" switch, is as in the following command-
example:
# luupgrade -u -n <BE_name> -s /export/js9
Note that "/export/js9" is not the "parent" directory for the "Product"
subdirectory but, rather, what might loosely be called the "grandparent"
directory --that is, one level up from the parent.
(Now, proceed to PHASE 5: MANAGE THE BOOT ENVIRONMENTS, to learn how to
activate your N/T BE and reboot into it, and to run other management commands on
the multiple boot-environments on your systems.)
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with any of these commands.)
***-- The various forms of syntax, for how to express the FLAR-image path, are
as follows:
Type     Format
NFS      nfs://<server/path/to/FLAR_file> retry #
HTTP*    http://<server>:/</path/to/FLAR_file> <options>
HTTPS*   https://<server>:/<path/to/FLAR_file> <options>
FTP*     ftp://<user>:<pswd>@<server>/<path/to/FLAR_file> <options>
         --OR--
         ftp://<user>:<pswd>@<server>:<port></path/to/FLAR_file> <options>
timeout <#_of_minutes> The number of minutes without data from
the HTTP server after which the
Jumpstart routine attempts to disconnect,
reconnect, and then restart at the best
available spot in the installation.
**- One more possible option for "local_device" is to specify, after the path,
the file-system type; all you need is the value, no keyword. However, FS-
types "ufs" and "hsfs" are assumed, in that order: If FS-type is not
specified, the Jumpstart routine attempts first to mount the resource as
"ufs" and, if that fails, then attempts to mount the resource as "hsfs". (Of
course, within SunOS/Solaris the "hsfs" type includes compatibility with ISO-
9660.)
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with any of these commands.)
5 PHASE 5: MANAGE THE
BOOT ENVIRONMENTS
We have reserved the activation of your new/target boot environment (N/T BE),
and the rebooting of your system into the N/T BE, to this phase because,
strictly speaking, these steps have nothing to do with the actual upgrading
covered in Phase 4.
1. Check for --and fix, if necessary-- the following conditions that can cause
your reboot, into the new BE, to fail:
• If the diag-switch? parameter (at the OBP level) is set to true, the
reboot into the new BE will fail. This is because that setting prevents
a certain LU routine from automatically changing the value of the
boot-device parameter, which is absolutely necessary for the reboot,
into the new BE, to succeed.
# eeprom diag-switch?
diag-switch?=true <--[If "=false", no need to change.]
# eeprom diag-switch?=false
# eeprom diag-switch?
diag-switch?=false
• [....]
2. Run lustatus to make sure that the BE, that you want to activate, is in a
state in which it can be activated.
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
------------------ -------- ------ --------- ------ --------
Larry_BE yes yes yes no -
Mo_BE yes no no yes -
Curly_BE no no no yes -
# /usr/sbin/luactivate
# luactivate Sol9_BE
WARNING: <1> packages failed to install properly on boot environment
<Sol9_BE>.
INFORMATION: </var/sadm/system/data/upgrade_failed_pkgadds> on boot
environment <Sol9_BE> contains a list of packages that failed to
upgrade or install properly. Review the file before you reboot the system
to determine if any additional system maintenance is required.
WARNING: The following files have changed on both the current boot
environment <Sol8_BE> and the boot environment to be activated
<Sol9_BE>:
/etc/group
INFORMATION: The files listed above are in conflict between the current
boot environment <Sol8_BE> and the boot environment to be activated
<Sol9_BE>. These files will not be automatically synchronized from
the current boot environment <Sol8_BE> when boot environment
<Sol9_BE> is activated.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot
environment:
2. Boot the machine to Single User mode using a different boot device
(like the Solaris Install CD or Network). Examples:
At the PROM monitor (ok prompt):
For boot to Solaris CD: boot cdrom -s
For boot to network: boot net -s
3. Mount the Current boot environment root slice to some directory (like
/mnt). You can use the following command to mount:
4. Run the <luactivate> utility without any arguments from the current boot
environment root slice, as shown below:
/mnt/sbin/luactivate
**********************************************************************
5. Run lustatus again to check the results of your above luactivate command:
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Sol8_02Feb_JRA yes yes no no -
Sol9_04Sep_JRA yes no yes no -
#
NOTE: If, at this point, you execute "eeprom boot-device", you will see
that the boot-device parameter, in the PROM, has not yet changed to the
N/T BE boot-partition. This will occur only after you perform the reboot
command.
# init 6
--OR--
# shutdown -y -i 6 -g 0
During the downside of the reboot (shortly after the system begins
going down), about 8 to 10 "Live Upgrade:" information lines will be sent
to the console.
During the upside of the reboot, 3 "Live Upgrade:" information lines
will be sent to the console, and various configuration files, from the
previously-active BE will be copied-over or appended-to their counterparts
in the newly-activated BE. These files include /etc/passwd and
/etc/group, among others.
NOTE: If you have been using this document simply to learn how to create and
upgrade and configure a new BE, and to reboot the system to it in order to
effect an OS/OE upgrade, this subsection basically finishes your tasks, as
long as you are not experiencing any problems upon the reboot. If you are,
the next subsection covers the technique for "falling back" to the
previously-active boot environment.
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with any of these LU commands.)
1. If you actually have successfully booted all the way from the new BE and you
can login, you can simply try executing the /sbin/luactivate command:
# /sbin/luactivate <BE_name_for_fallback>
--OR--
# /sbin/luactivate
Do you want to fallback to activate boot environment <disk name>
(yes or no)? yes
--OR--
--OR--
3. After logging in as the super-user, you might check the integrity of the /
(root) file-system of your fallback BE:
4. Now, mount the / (root) file-system of the BE to which you want to fall
back (the same BE whose root file-system you checked in Step 3) to some
temporary mount-point, such as /mnt:
# /mnt/sbin/luactivate
This command should produce screen-output that confirms the success of the
fallback activation.
(You can now unmount this file-system mounted in Step 4 but it is not
necessary.)
6. Reboot the system, and it should boot from the fallback BE:
# init 6
--OR--
# shutdown -y -i 6 -g 0
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with any of the LU commands.)
# /usr/sbin/lucurr
# /usr/sbin/lucurr /mnt
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this LU command.)
# /usr/sbin/lustatus
BE_name Complete Active ActiveOnReboot CopyStatus
---------------------------------------------------------------
Larry_BE yes yes no -
Mo_BE yes no yes SCHEDULED
Curly_BE no no no -
# /usr/sbin/lustatus Mo_BE
BE_name Complete Active ActiveOnReboot CopyStatus
---------------------------------------------------------------
Mo_BE yes no yes SCHEDULED
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this LU command.)
# /usr/sbin/lucompare <BE_name_of_nonactive_BE>
If you want to compare only certain files between the two BEs, you can
construct an ASCII-formatted file containing a list of those files --each
expressed with a full, absolute pathname-- to be compared, and reference that
file in the command with the "-i" switch:
# /usr/sbin/lucompare -i <file_list> <BE_name_of_nonactive_BE>
If you want to compare only nonbinary (text) files between the two BEs, use
the "-t" switch:
# /usr/sbin/lucompare -t <BE_name_of_nonactive_BE>
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this LU command.)
• The BE must be complete (not in the midst of any operation that could
change its status).
• The BE cannot be activated for the next reboot.
• The BE cannot have any file-systems mounted with lumount.
# /usr/sbin/ludelete <BE_name>
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this LU command.)
• The BE must be complete (not in the midst of any operation that could
change its status).
• The BE cannot have any file-systems mounted with either lumount or mount.
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this LU command.)
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this LU command.)
avoids the hassles of creating multiple mount-points for multiple file-systems
and of running multiple mount and umount commands to mount and unmount them.
The simplest syntax for the lumount command is:
# lumount <BE_name>
--OR--
# lumount <BE_name> <mount_point>
By default, lumount will mount all the file-systems, for a particular BE, to
the subdirectory "/.alt.<BE_name>". This is the result of the first instance of
the command, above. With the second instance of the command, you have decided
that you want to mount the BE to a different mount-point, which you specify at
the end of the command but which you do not need to create ahead of time.
So, if the fishmonger BE has separate file-systems for /, /opt, and /var, the
command "lumount fishmonger" will mount these three file-systems as follows:
/     -->  /.alt.fishmonger
/opt  -->  /.alt.fishmonger/opt
/var  -->  /.alt.fishmonger/var
# luumount <BE_name>
Simply be sure that nobody's PWD is within the BE's mounted tree and then run
the above command. All the BE's file-systems will be unmounted and the mount-
point directory, that was dynamically created when you ran the lumount command,
will dynamically be deleted when you run the luumount command.
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this LU command.)
Command -o <outfile> -l <error_log>
luactivate
lucancel
lucompare
lucreate
lucurr
ludelete
ludesc
lufslist
lumake
lumount
lurename
lustatus
luumount
luupgrade
# /usr/sbin/luactivate -s <nonactive_BE_name>
CAUTION!!!: Perform this operation with caution when working with different
BEs that use different releases of SunOS/Solaris. You might be resyncing
between an ABE that has a later release of Solaris and a nonactive BE that
has an earlier release, and the newer config-files and directories, that get
resynched to the nonactive BE, might not be compatible with the older release
of SunOS/Solaris installed there. This might cause your next reboot, into
that older BE, to fail.
5.12.1 Live Upgrade Configuration Files
• /etc/lu
• /etc/lu/swapslices
• /etc/lu/synclist
/var/mail OVERWRITE
/var/spool/mqueue OVERWRITE
/var/spool/cron/crontabs OVERWRITE
/var/dhcp OVERWRITE
/etc/passwd OVERWRITE
/etc/shadow OVERWRITE
/etc/opasswd OVERWRITE
/etc/oshadow OVERWRITE
/etc/group OVERWRITE
/etc/pwhist OVERWRITE
/etc/default/passwd OVERWRITE
/etc/dfs OVERWRITE
/var/log/syslog APPEND
/var/adm/messages APPEND
The following directories and files represent a partial list of those that
also could be added to the synclist:
/var/yp OVERWRITE
/etc/mail OVERWRITE
/etc/resolv.conf OVERWRITE
/etc/domainname OVERWRITE
• /usr/share/lib/xml/dtd/lu_cli.dtd.<num>
XML DTD for the "-X" option (not discussed in this document) available
with most LU commands.
• [....]
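The two-column synclist format shown above (a pathname followed by an action keyword such as OVERWRITE or APPEND) is simple enough to inspect with awk. The helper below is purely illustrative, not part of Live Upgrade:

```shell
# Look up the sync action recorded for a given pathname in synclist text.
action_for() {
  # $1 = synclist contents, $2 = pathname to look up
  echo "$1" | awk -v p="$2" '$1 == p { print $2 }'
}
SAMPLE="/etc/passwd OVERWRITE
/var/adm/messages APPEND"
action_for "$SAMPLE" /var/adm/messages   # → APPEND
```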
5.12.2 Live Upgrade Log Files
• /etc/lutab
• [....]
6 Troubleshooting Tips
These troubleshooting tips are being added as they are found and as time
allows. This major section, of this document, is not considered to represent a
"major phase" of the LU procedures.
As this new section is first being created, it is expected that the tips will
be presented in no particular order, other than that in which they are
discovered and happen to be added to the document.
# luactivate <New_BE>
# init 6
... the system reboots into the old Boot Environment (BE) that you were trying
to leave, effectively ignoring the new BE into which you were trying to boot!
This author has seen this problem in two different situations, described as
follows:
He decided to try running that command by hand, and got the following output
(note the underlined portion):
So, the "diag-switch?" parameter (Boot PROM) was checked. Indeed, it was set
to "true". Its value was changed to "false" (using the "eeprom" command).
Then the "luactivate" and "init 6" commands were executed again and, this time,
the system booted into the new BE without further incident.