Tutorial & Cheatsheet FOR Solaris Live Upgrade: by John R Avery


TUTORIAL & CHEATSHEET

FOR
SOLARIS LIVE UPGRADE
by John R Avery


Page 1 of 52
Revisions

Rev   Date (YYYY/Mmm/DD)   Author(s)/Editor(s)   Changes Made

1.0   2005/May/26          John R Avery
      original draft
1.1   2005/Jun/03          John R Avery
      second draft: filled in subsections 3.2 and 4.3, about integrating
      Flash Archives with actual LU commands
1.2   2005/Aug/04          John R Avery
      third draft: a few passing references to Flash Archives were added
      to the "PHASE 1: PLANNING" and "PHASE 2: SYSTEM PREPARATION"
      sections
1.3   2005/Sep/26          John R Avery
      fourth draft: inserted a new subsection 4.3 to clarify the precise
      install-path for the "-s" switch to the "luupgrade -u" command;
      inserted a new Step 1 to subsection 5.1, for OBP and other settings
      that might prevent a successful reboot into the new BE
1.4   2005/Oct/11          John R Avery
      inserted a special cautionary "!!!***NOTE***!!!", in a few places,
      about using ksh-93
1.5   2006/Jan/02          John R Avery
      added new major-section 6 on Troubleshooting Tips, including
      subsections 6.1 thru 6.1.2, about a failure to reboot into the new BE

As of 2005/Sep/26, here's what definitely needs to be completed or polished:

Section or Subsection               What needs to be done
4.2 Upgrading from Solaris-9 DVD    Nothing has been written here except an
                                    excuse for writing nothing, but this is
                                    a low priority for now.


Page 2 of 52
Table of Contents
PREFACE........................................................................ 5
INTRODUCTION................................................................... 6
Five Phases of Live Upgrade: Overview ........................................ 6
A Few Preliminary Definitions ................................................ 7
1 PHASE 1: PLANNING .......................................................... 8
1.1 Hardware / Disk-Space Planning ........................................ 8
1.2 RAID and Mirror Planning .............................................. 9
1.2.1 Integrate SVM with the Building of your N/T BE .................... 10
1.2.2 Using VxVM Volumes in the N/T BE .................................. 10
1.3 Prerequisites: Software Packages and Patches ........................ 10
1.3.1 Java-2 Runtime Environment (J2RE) + Patches ....................... 10
1.3.2 Determine All Prerequisite Packages for Live Upgrade .............. 12
1.3.3 Determine All Prerequisite Patches for Live Upgrade ............... 13
1.3.4 Acquire Copies of the Two Live-Upgrade (LU) Packages .............. 14
1.3.5 Determine Need for Patching the Two LU Packages ................... 15
2 PHASE 2: SYSTEM PREPARATION ............................................... 16
2.1 Install new hard-disks if necessary. ................................. 16
2.1.1 Physical Installation ............................................. 16
2.1.2 Logical Installation and Configuration ............................ 16
2.2 Prepare the Hard-Disks for the New/Target Boot-Environment (N/T BE). . 16
2.2.1 Partition the hard-disks. ......................................... 17
2.2.2 High-Level Format the Partitions to Your Desired File-System Type. 17
2.3 Prepare the Active Boot Environment (ABE) for Live Upgrade ........... 17
2.3.1 Java-2 Runtime Environment (J2RE) + Patches ....................... 17
2.3.2 Install all Live-Upgrade Prerequisite Packages .................... 18
2.3.3 Install the Latest Patches ........................................ 18
2.3.4 Remove Any Old Live-Upgrade Packages .............................. 18
2.3.5 Install the New Live-Upgrade Packages ............................. 18
2.3.6 Install the Latest Patches for SUNWlur and SUNWluu (?) ............ 19
3 PHASE 3: CREATE THE NEW BOOT ENVIRONMENT (BE) ............................ 20
3.1 Create a New Boot Environment from an Old Boot Environment ........... 20
3.1.1 Scenario 1 (Create N/T BE From O/S BE) ........................... 21
3.1.2 Scenario 2 (Create N/T BE From O/S BE) ........................... 23
3.1.3 Scenario 3 (Create N/T BE From O/S BE) ........................... 23
3.1.4 Scenario 4 (Create N/T BE From O/S BE) ........................... 24
3.1.5 Scenario 5 (Create N/T BE From O/S BE) ........................... 24
3.1.6 Scenario 6 (Create N/T BE From O/S BE) ........................... 25
3.1.7 Scenario 7 (Create N/T BE From O/S BE) ........................... 25
3.2 Create a New Boot Environment with a Flash Archive ................... 26
3.3 Create a New Boot Environment with SVM Mirrors ....................... 27
3.3.1 Scenario 1 (N/T BE w/ SVM Mirrors) ............................... 28
3.3.2 Scenario 2 (N/T BE w/ SVM Mirrors) ............................... 28
3.3.3 Scenario 3 (N/T BE w/ SVM Mirrors) ............................... 29
3.4 Create a New Boot Environment with VxVM Volumes ...................... 30
3.5 Confirm the Success of "lucreate" .................................... 31
4 PHASE 4: UPGRADE THE NEW BOOT ENVIRONMENT ................................ 32
4.1 Upgrading from Solaris-9 CDs ......................................... 33
4.2 Upgrading from Solaris-9 DVD ......................................... 38
4.3 Upgrading from a Jumpstart Image ..................................... 38
4.4 Upgrading from Flash Archive ......................................... 39
5 PHASE 5: MANAGE THE BOOT ENVIRONMENTS .................................... 41
5.1 Activate the Newly-Upgraded Boot Environment (and Reboot To It) ..... 41
5.2 Falling Back to the Previous Boot Environment ........................ 44
5.3 Check the BE_name of the Active Boot Environment ..................... 45
5.4 Display Status of Any or All Boot Environments ....................... 45
5.5 Compare Boot Environments ............................................ 46
5.6 Delete a Nonactive Boot Environment .................................. 46
5.7 Change the BE_name of a Boot Environment ............................. 47

Page 3 of 52
5.8 View the File-System Configuration of a Boot Environment ............. 47
5.9 Mount and Unmount an Entire Boot Environment (BE) .................... 47
5.10 Live Upgrade "-o outfile" and "-l error_log" Options ................. 48
5.11 Force Synchronization Between Boot Environments ...................... 49
5.12 Live Upgrade Standard Configuration Files and Log Files .............. 49
5.12.1 Live Upgrade Configuration Files .................................. 50
5.12.2 Live Upgrade Log Files ............................................ 51
6 Troubleshooting Tips ...................................................... 51
6.1 Failure to Reboot into the new BE .................................... 51
6.1.1 diag-switch? set to "true" ....................................... 51
6.1.2 If at first you don't succeed, ... ................................ 52


Page 4 of 52
PREFACE
This document is intended to be a from-scratch tutorial and quasi-cheatsheet
for the Solaris Live Upgrade procedures. If you have never used Live Upgrade,
this should be a good, quick place to gain a thorough grasp of the basics,
including enough command-line details for you to perform at least one major
variation of a Live Upgrade procedure. If you are already hands-on familiar with
Live Upgrade, you should be able to wade easily through the beginner's details to
the basic pointers you are looking for, though this document is not intended to
provide information about every detail of the Live Upgrade commands and options.

If this combination tutorial and cheatsheet has any particular advantage over
Sun's original documentation or anybody else's custom Live Upgrade
documentation, it lies in this document's meticulous thoroughness on the most
basic concepts and the supposedly-most-often-used features of Live Upgrade.
Thus, you get thoroughness and important detail in a relatively short document.

The descriptions and steps here are written generically, not necessarily
specific to anybody's in-house technical standards for how to use Live Upgrade
--though such information can be added and included when it is available. This
document, precisely as written, cannot be used directly as a template for a
workplan to upgrade any systems. In other words, I've not constructed it with
blank lines in various places for you to fill in the specific parameters for
your specific Live Upgrade task. However and naturally, it can be useful as a
set of guidelines for helping to produce a workplan for a specific upgrade.

I might eventually be adding actual cheatsheets at the end of this document
but, until then, the overall structure of this document, after a few
introductory items, is as follows:

• 5 phases of a Live Upgrade procedure, from Planning through Long-Term Mgmt

• Within each phase and as much as possible, present explanations and steps
in the (hopefully) most helpful and practical order, and with just the
right amount of detail, for ....

a) quick and thorough understanding of concepts and commands;

b) understanding what needs to be done and in what order when planning
for, preparing for, and performing a Live Upgrade, and then managing
the environment after that

ALSO NOTE: As of Rev 1.0, not all the Live Upgrade procedures included in
this rev have been thoroughly tested. Some level of trust in Sun's original
documentation is implied. By Rev 1.3, a project team with which I was working
had followed these steps in general and found no notable flaws in them, except
the absence of any cautions about using ksh-93 (which now appear in a few
different places in Rev 1.4). In Phases 3 and 4, those commands that include
"example screen-output" represent the performance of an entire Live Upgrade
procedure, from fully-patched Solaris-8, 2/02, to Solaris-9, 9/04, on an
Ultra-10 with an 8.5-GB disk: 1-GB swap; 3-GB for Sol-8 (/ only); 4.5-GB for
Sol-9 (/ only).

--John R Avery, 2005/Oct/11

Page 5 of 52
INTRODUCTION
Sun's Live Upgrade is a procedure by which a system's Solaris OE can be
upgraded, while the system and its services are fully up and running, by
creating an alternate, nonactive boot environment (BE) on separate disk-slices;
upgrading that alternate BE; and then effecting the system upgrade simply by
rebooting the system. This reduces the system's and services' downtime from
several hours to a few minutes.

This document presents the entire Live Upgrade routine in five major phases
(not an official Sun Microsystems delineation):

1. Planning
2. System Preparation
3. Creating a new Boot Environment
4. Upgrading the new Boot Environment
5. Switching to the new Boot Environment and
Managing All the Boot Environments on the System

Managed correctly, the fifth phase can be a long-term phase of months or
more, during which you manage multiple boot environments and sometimes switch
from one to another as desired, simply by means of a short, one-line command and
a reboot. Major-section 6, about troubleshooting and begun in 2006/January, is
not considered to be a "major phase" of the Live Upgrade procedures.
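
As a concrete illustration of that "short, one-line command and a reboot"
(the details appear in subsection 5.1, and the BE_name "Sol9_BE" below is only
a placeholder), switching boot environments typically looks something like
this:

# luactivate Sol9_BE    <--designate the BE to be booted next
# init 6                <--reboot via init/shutdown, not "reboot" or "halt"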

Five Phases of Live Upgrade: Overview


Phase 1: Planning
* The only SunOS/Solaris commands that you run here are those to figure
out what hardware and software you have and need, in the way of disk-
space, packages, and patches. No actual Live Upgrade (LU) commands,
necessarily, unless this is not the first time you've used LU on this
system (in which case you might run a couple of show-me commands to
view the current state of the boot environments (BE) on your system).
* Figure out disk capacity; determine which disks and partitions to use.
* Determine whether creating and upgrading from CD/DVD or Flash Archive.
Phase 2: System Preparation
* The SunOS/Solaris commands you run here are those for installing and
configuring new hardware, if necessary, and any necessary packages and
patches that are missing. Again, probably no actual LU commands at
this point, if this is the first time for doing LU on this system.
* Install prerequisite packages.
* Install prerequisite patches.
* Install the two Live Upgrade packages.
* Find or prepare the Flash Archive, if to be used.
* Find or prepare a Solaris boot-image, if Flash Archive to be used.
Phase 3: Create the New Boot-Environment (BE) with "lucreate"
You typically have a fully functional BE after running the command but,
again typically, it is not yet upgraded until the next phase.
Phase 4: Upgrade the New Boot-Environment (BE) with "luupgrade"
Phase 5: Manage Multiple Boot-Environments (BE) on a System

Page 6 of 52
A Few Preliminary Definitions
IF YOU ARE NEW TO LIVE UPGRADE, READ THESE FIRST!
The following definitions do not include all the definitions included in the
glossaries of Sun's official documentation related to Live Upgrade. Those
glossaries tend to include definitions for terms that you are expected to
know by now or that you can easily look up elsewhere, such as "Jumpstart" and
"Flash Archive". A few of the following definitions are unique to this
document; particularly "new/target boot environment (N/T BE)" and
"original/source boot environment (O/S BE)".

active boot environment (ABE)
The boot environment (BE) to which the system is presently (currently)
booted. (NOTE: In some custom documentation, the acronym "ABE" is used to
mean "alternate boot environment", which seems not to be an official Sun /
Live-Upgrade term.)
boot environment (BE)
A bootable SunOS/Solaris environment that consists of a set of disk slices
(partitions) with their associated mount points and file systems, naturally
including a root (/) file system.
critical file systems
These are disk-based file systems that are generally required by
SunOS/Solaris, whether or not they are each actually separate file-systems.
These include, most notably, / (root), /usr, /var, and /opt. Swap could be
considered among these, given that a decision typically must be made
regarding how swap is managed between or among the multiple boot-environments
(BE) on a system; however, swap is sometimes managed as a shareable file
system.
inactive boot environment (IBE)
Any valid boot-environment (BE) on the system, to which the system is not
presently (currently) booted, and that is not designated to become the active
boot-environment upon the next reboot.
new/target boot environment (N/T BE)
The "new/target boot environment" --a custom term not from Sun but used in
this document-- is the one that you are creating and upgrading with the LU
procedure that you are working on at this time.
original/source boot environment (O/S BE)
The "original/source boot environment" --a custom term not from Sun but used
in this document-- is the one from which you are copying OS/OE files to
create the new/target boot environment to be upgraded in the LU procedure
that you are working on now.
shareable file systems
These are file systems that are not considered critical to the basic
functionality of SunOS/Solaris and that, therefore, do not require any
upgrading when the OS/OE is upgraded. They are referred to as "shareable"
because each of the boot environments, on the system, can access them from
the exact same devices and through the exact same mount-points.

Page 7 of 52
1 PHASE 1: PLANNING
Before you begin executing the actual Live-Upgrade commands, you must
investigate your system's level of preparation for the Live Upgrade and
determine what you need to do in order to prepare it. In a nutshell, these
preparations potentially involve three things:

1. Hardware, in the form of harddisk-space.
2. Sun Software Packages
3. Sun Patches

During this planning phase, you determine precisely what needs to be done to
the system so that it has all the proper disk-space, with appropriately-sized
partitions (including a bootable partition for the new / file-system), and so
that it has all the prerequisite software packages and patches for a successful Live
Upgrade procedure. Thus, this planning phase is, itself, a form of preparation
for Phase 2: System Preparation, during which you actually prepare the system
for the Live Upgrade. You can essentially combine Phase 1 and Phase 2, if you
prefer.

1.1 Hardware / Disk-Space Planning


First you need to determine ....

a) From which Solaris version are you upgrading; to which Solaris version
are you upgrading?

b) Are you planning to add any new software, besides the Solaris upgrade
itself, to the file-systems that contain the upgraded OS/OE image? How
much more space will these require?

(NOTE: These might include certain software packages and patches that
are explicit prerequisites for Live Upgrade, which are mentioned
directly, later in this phase.)

c) Which file-systems do you need to transfer in the creation of the new
boot environment (BE) and which can remain on their present partitions
and simply be used, as is, by the new BE?

(NOTE: You can "merge" or "split" file-systems from your old BE to
your new BE, which can have an effect on how you analyze and evaluate
your available disk-space for the upgrade. If you are uncertain about
what these terms mean, skip ahead, for a couple of minutes, to sub-
subsections 3.1.2 and 3.1.3 for explanations of "splitting" and
"merging" respectively, and then come back here to continue.)

d) Do you already have enough harddisk-space to support this Live Upgrade,
according to what you've determined from the previous three steps? If
not, how many more disks, and of what sizes, will you install to give
you sufficient space?

• If you intend to integrate Solaris Volume Manager (SVM) mirrors
(RAID-1) into the Live-Upgrade creation (the lucreate command) of
your new/target boot environment (N/T BE) then you need to consider
available disk-space —from both the size and the number-of-
partitions standpoints— also with this in mind. (See sub-subsection
1.2.1 for point-by-point considerations for this issue.)

Page 8 of 52

• If you intend to use Veritas Volume Manager (VxVM) volumes in your
N/T BE then there are considerations similar to those related to the
use of SVM volumes, though not exactly the same, given that,
apparently, it is not possible to integrate VxVM volumes into the
actual Live Upgrade commands and procedures. (See sub-subsection
1.2.2 for details of these considerations.)

e) Will your system be able to use at least one of your disks and
partitions, for the new boot environment (BE), as a bootable / file-
system partition? If not, what can you do about this?

After making the above determinations, you must then ....

f) Make sure you have an installable copy of the Solaris version to which
you are upgrading.

• If you plan to upgrade from CDs or DVD, do you have that media?

• If you plan to upgrade from a hard-disk copy of the installable
Solaris (whether on a local file-system or from a Distributed File-
System), has that been set up and is it available to you? Or must
you install and configure that installable image yourself?

• If you plan to upgrade from a combination of a Flash Archive and a
bootable/installation (Jumpstart) Solaris image (yes, both are
necessary for a Live Upgrade with Flash Archive), has each of these
been constructed yet and are they available to you? Or must you
construct them, yourself?

g) Order any harddisks that need to be installed to give you enough space
for the Live Upgrade.

h) Design the partitioning scheme(s) for the disk(s) on which you intend
to place the file systems for the new BE that you will be building and
upgrading.
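
For the "what do I have right now" side of this planning, a few ordinary
Solaris status commands are usually enough. This is only a rough sketch, and
the disk name below is made up:

# df -k                        <--current file-system sizes and usage
# prtvtoc /dev/rdsk/c0t0d0s2   <--slice layout of an existing disk
# swap -l                      <--current swap devices and their sizes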

1.2 RAID and Mirror Planning


Starting at least as early as Solaris-9, 2004/September, you can use
Solaris Volume Manager (SVM) —formerly Solstice Disk Suite (SDS)— to create a
RAID-1 (mirrored) volume for any file-system that you create with the lucreate
command (Phase 3 in this document: see subsection 3.3). NOTE that this document
assumes that you are minimally familiar with SVM concepts and naming
conventions.
As of the most-recent release of this document, I am not aware of any way to
integrate the use of Veritas Volume Manager (VxVM) into the actual Live Upgrade
commands and procedures. Rather, it is apparent that one must create and
upgrade and reboot into the new/target boot environment (N/T BE) and then
install and integrate VxVM only after the N/T BE has become the active boot
environment (ABE). (An overview and a few considerations are roughly outlined
in subsection 3.4.)

Page 9 of 52
1.2.1 Integrate SVM with the Building of your N/T BE
If you want to use SVM in your N/T BE and to integrate the use of SVM volumes
into the Live Upgrade (LU) creation of your N/T BE (the lucreate command) then
you need to be aware of the following limitations:

Various passages of the "Solaris 9 9/04 Installation Guide" clearly imply or
dictate the following characteristics of integrating SVM into your LU routines:

• lucreate supports only "single-slice concatenations" for the submirrors of
an SVM mirrored-volume being created.
• lucreate supports the adding of no more than 3 submirrors to the mirrored
volume being built.

Then, as a matter of planning, you need to determine the following:

a) How many of your N/T BE file-systems do you want to be RAID-1
(mirrored) SVM volumes?

b) If each such SVM-mirrored volume had only one submirror (speaking
hypothetically), what size would it be?

c) Within each of those SVM-mirrored volumes, how many submirrors do
you want? (A total of three would mean an original plus two
copies.)

d) Considering the answer(s) to steps b and c above, how many other
slices and of what size do you need to make up all the submirrors
you need for all the SVM-mirrored volumes you want?

e) If you do not presently have enough total disk-space and/or enough
individual slices to make up all the submirrors you need then you
must acquire and install more disks.
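
If SVM is already in use on the system, a quick survey of the existing state
databases and metadevices can feed the answers above. This is only a sketch
of the standard SVM status commands (no LU commands are involved, and the
disk name below is made up):

# metadb                       <--list the SVM state-database replicas, if any
# metastat -p                  <--print existing metadevices in md.tab format
# prtvtoc /dev/rdsk/c1t2d0s2   <--check slice sizes on a candidate submirror disk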

1.2.2 Using VxVM Volumes in the N/T BE


As of the most-recent release of this document, I am not aware of any way to
integrate the use of Veritas Volume Manager (VxVM) into the actual Live Upgrade
commands and procedures. Rather, it is apparent that one must create and
upgrade and reboot into the new/target boot environment (N/T BE) and then
install and integrate VxVM only after the N/T BE has become the active boot
environment (ABE). (An overview and a few considerations are roughly outlined
in subsection 3.4.)

1.3 Prerequisites: Software Packages and Patches
1.3.1 Java-2 Runtime Environment (J2RE) + Patches
NOTE: The author of this document presently recommends that you ignore this
J2RE-patch issue unless and until you encounter problems, with your Live Upgrade
(LU), that you cannot seem to explain any other way. If you want to know why

Page 10 of 52
this is being recommended, read further in this subsection. Otherwise, I
consider that the remainder of this subsection is not in keeping with an
elegant-but-thorough beginner's tutorial and cheatsheet format, except when and
where it is clear that it is actually needed. I include this only just in case
you encounter some LU problems and cannot think of what else might be the
problem. Up to the time of writing this paragraph, I have been unable to
identify any clear and unambiguous series of steps to follow in order to address
this issue proactively. My best recommendation, at this time, is simply to
apply the recommended patch-cluster for your old/source boot environment (O/S
BE), and assume, unless you encounter problems, that (a) you actually have the
J2RE installed and (b) the recommended patch-cluster has covered the J2RE
sufficiently for your LU to work.

------------------------

Sun's "Solaris 9 9/04 Installation Guide" includes the following note on
page 396:

Note – If you are running the Solaris 2.6, Solaris 7, or Solaris 8
release, you might not be able to run the Solaris Live Upgrade installer.
These releases do not contain the set of patches needed to run the Java™ 2
runtime environment [J2RE]. You must have the Java 2 runtime environment
recommended patch cluster to run the Solaris Live Upgrade installer and
install the packages. To install the Solaris Live Upgrade packages, use
the pkgadd command. Or, install the Java 2 runtime environment
recommended patch cluster that is available on http://sunsolve.sun.com.

However, it is not a simple matter to determine precisely which version of
J2RE you are running on your system. For example, note that the output of the
following command clearly indicates a Java runtime-environment but indicates
absolutely nothing about whether this is Java 1 or Java 2:

$ pkginfo -l SUNWjvrt
PKGINST: SUNWjvrt
NAME: JavaVM run time environment
CATEGORY: system
ARCH: sparc
VERSION: 1.1.8,REV=2001.05.24.14.35
BASEDIR: /
VENDOR: Sun Microsystems, Inc.
DESC: JavaVM run time environment, includes java, appletviewer,
and classes.zip
PSTAMP: sola010524133607
INSTDATE: May 05 2005 13:19
HOTLINE: Please contact your local service provider
STATUS: completely installed
FILES: 66 installed pathnames
6 shared pathnames
8 directories
19 executables
21743 blocks used (approx)

You might think that the "VERSION: 1.1.8" makes this perfectly clear but no.
A quick search through sunsolve.sun.com will reveal that the "Java 2 Runtime
Environment" has version-numbers attached to it, of the format 1.x.y[.z], such
as 1.3.0 and 1.3.0.01. So, the above might actually mean "Java 2 RE, version
1.1.8".
Furthermore, the following URL points to a web-page that displays (as of
Monday:2005/May/09) "14 Results found for 'Solaris 2.6 J2RE packages'":

Page 11 of 52
http://onesearch.sun.com/search/onesearch/index.jsp?qt=Solaris%202.6%20J2RE%20packages

Most of these 14 links point to pages with installation instructions for one
release or another of the Java-2 Runtime Environment on multiple versions of
Solaris from 2.6 and upward, including further links to pages with patch
information. Unfortunately, none of these pages makes any reference, at all, to
package names. Also, extensive searches for "SUNW" package-names related to
J2RE, on docs.sun.com and sunsolve.sun.com, resulted in nothing.
Furthermore, the text file at
http://java.sun.com/products/archive/j2se/1.3.0_05/README.sparc says "The Java
2 SDK is available either as a set of Solaris packages or as a self-extracting
binary; the JRE is available as a self-extracting binary." The obvious
implication here is that the J2RE is available only in the form of a "self-
extracting binary". This seems to imply that J2RE might not go onto the system
as one or more actual "packages" in the usual Sun-software sense. The above
pkginfo example seems to contradict this but, again, it also indicates
absolutely no distinction between Java 1 and Java 2.
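
One quick, if imperfect, sanity-check --install locations vary from system to
system, so treat this as a suggestion only-- is simply to ask any installed
Java runtime to identify itself; a reported version of 1.2 or later indicates
a Java-2 runtime:

# java -version        <--or use a full path such as /usr/java/bin/java, if present
java version "1.2.2"   <--sample output only; yours will differ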

If you are reading this because you have encountered problems with your Live
Upgrade (LU), and you are suspecting this J2RE patch issue, here is my
recommendation:

• Go to the above-mentioned "onesearch.sun.com" URL and start looking
through the 14 links.

• Notice the files that these pages indicate get installed when you follow
the instructions for installing the J2RE. Look for these files on your
system.

• If you find any of these files, attempt to run the strings command on those
that are not ASCII files, to see if any embedded ASCII strings indicate
anything about "Java 2".

• Search sunsolve.sun.com for any strings related to those files and their
file-names, adding the phrase "live upgrade", and look for any bug
reports and/or patches that might shed some light on your issue.

• Either of the following two URLs might also be of some small help:

http://java.sun.com/j2se/1.3/install-solaris-re.html

http://java.sun.com/j2se/1.3/install-solaris-patches.html
(J2SE stands for Java-2 Standard Edition)

1.3.2 Determine All Prerequisite Packages for Live Upgrade
Sun's "Solaris 9 9/04 Installation Guide" includes the following table on
page 398, indicating certain packages that must be installed on your old/source
boot environment (O/S BE) before a successful Live Upgrade (LU) to Solaris 9 can
be attempted:

Upgrading from Solaris-2.6:  SUNWadmap, SUNWadmc, SUNWjvrt, SUNWlibC,
                             SUNWadmfw, SUNWmfrun, SUNWloc
Upgrading from Solaris-7:    SUNWadmap, SUNWadmc, SUNWjvrt, SUNWlibC
Upgrading from Solaris-8:    SUNWadmap, SUNWadmc, SUNWjvrt, SUNWlibC, SUNWbzip

Page 12 of 52

If you are planning to Live Upgrade to Solaris 8, you probably should search
docs.sun.com and/or sunsolve.sun.com for similar package prerequisites for such
an upgrade.
At this point, you should determine whether or not the relevant packages are
already installed on your O/S BE. If not, either download them or otherwise
make sure that they are available to you for installation during the appropriate
step of Phase 2 (2.3.2).
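
One quick way to make that determination with the stock packaging tools
--shown here with the Solaris-8-to-9 package list above, purely as an
example-- is to ask pkginfo about each prerequisite package by name; any
package that pkginfo reports an error for is not installed and still needs to
be obtained:

# pkginfo SUNWadmap SUNWadmc SUNWjvrt SUNWlibC SUNWbzip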

!!!***NOTE***!!!: If you will be running the "lucreate" or "luupgrade"
commands from the Korn shell, you **MUST** be using Sun's original copy of
"ksh" --version "88i"-- that comes with SunOS/Solaris!!! Some people
install an upgraded "ksh" --typically version 93e-- which has some known
bugs and, even otherwise, does not work well with some of Sun's routines.
To determine which version you are running, ....
a) # echo $0 <--to determine that you are running "ksh"
b) # <Esc> <--i.e., simply press the <Esc> key
c) # ^v <--i.e., press the <Ctrl>/<v> key-combination
Version M-11/16/88i <--Sun's original on a Solaris-9 system
Version 11/16/88i <--Sun's original on a Solaris-5.5.1 system
Version M-12/28/93e <--Add-on copy of ksh-93
Among the problems that you might encounter, if you try to run Sun's Live-
Upgrade or Flash-Archive routines with ksh-93, are (a) 2-to-4 GB limit
when attempting either to create a new boot-environment with "lucreate" or
to upgrade an alternate boot-environment with "luupgrade"; (b) 2-to-4 GB
file-size limit when creating a Flash Archive file. !!!!!!!!!!!!!!!!!!

1.3.3 Determine All Prerequisite Patches for Live Upgrade
Sun's "Solaris 9 9/04 Installation Guide" says, at the top of p 400, "
Correct operation of Solaris Live Upgrade requires that a limited set of patch
revisions be installed for a given OS version. Before installing or running
Live Upgrade, you are required to install a limited set of patch revisions.
Make sure you have the most recently updated patch list by consulting
http://sunsolve.sun.com. Search for the info doc 72099 on the SunSolveSM web
site."
This Info-Doc includes the following minimum-patch recommendations, as of
2005/May/03:

To upgrade from Solaris 7 SPARC:


S7 FCS sparc 106541-38 or higher Live Upgrade boot device patch
S7 FCS sparc 106938-08 or higher libresolv patches
S7 FCS sparc 107059-01 or higher sort patch
S7 FCS sparc 107171-13 or higher patchadd/patchrm patch
S7 FCS sparc 107443-23 or higher pkgadd/pkgrm patches
S7 FCS sparc 107834-04 or higher DVD-rom support
S7 8/99 sparc 108029-03 or higher prodreg patches for Live Upgrade
S7 FCS sparc 108414-06 or higher cpio patch
S7 FCS sparc 111113-02 or higher nawk patch

Page 13 of 52
S7 FCS sparc 111666-01 or higher bzcat patch
S7 FCS sparc 112590-01 or higher fgrep patch

To upgrade from Solaris 8 SPARC:


S8 FCS sparc 108434-18 or higher SUNWlibC patches
S8 FCS sparc 108435-18 or higher SUNWlibCx patches
S8 FCS sparc 108987-15 or higher patchadd/patchrm patch
S8 FCS sparc 109147-32 or higher linker patches
S8 FCS sparc 110380-05 or higher libadm patches
S8 FCS sparc 110934-21 or higher pkg utilities patch
S8 FCS sparc 111111-04 or higher nawk patches
S8 FCS sparc 111879-01 or higher prodreg patches for Live Upgrade
NOTE: This patch is needed if you have installed one of the following
packages:
SUNWwsr
S8 FCS sparc 112097-05 or higher cpio patch
S8 FCS sparc 112396-02 or higher fgrep patches
S8 FCS sparc 112279-03 or higher ALC Procedural script patch
NOTE: This patch is needed only if you have installed one of the following
packages:
SUNW5ttfe, SUNWcbcp, SUNWcttfe, SUNWcwbcp, SUNWcxoft, SUNWgttfe,
SUNWhpcp, SUNWhttfe, SUNWkbcp, SUNWkttfe, SUNWkwbcp
2.8 U7 sparc 114251-01 or higher ALC Procedural script patch
NOTE: This patch is needed only if you have installed one of the following
packages:
SUNWgttf, SUNWgttfe, SUNWgxfnt

See Info Doc 72099 for info about patches for upgrading from other versions
and/or platforms of Solaris.

NOTE: This Info Doc, #72099, is accessible only to persons who both (a) have
registered a login account on sunsolve.sun.com and (b) have a legitimate Sun
Service # that they can attach to this account. If you ever want to check
the most-recent version of this info doc and if you do not have both of these
things, you will need to try some other channel by which you can obtain this
information.

1. Determine which of the relevant patches are already installed on the
old/source boot environment (O/S BE); determine which are not. (A short
sketch of one way to check follows below.)

2. Download or otherwise make available to yourself those patches that are
not already installed, so as to be prepared to install them during Phase
2.
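
To make step 1 concrete --a sketch only, using a few of the Solaris-8
patch-IDs from the list above as stand-ins for whichever ones actually apply
to your system-- you can search the installed-patch list for each required
patch-ID:

# showrev -p | egrep '108434|108987|110934'   <--installed revisions, if any
# patchadd -p | grep 109147                   <--equivalent alternative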

1.3.4 Acquire Copies of the Two Live-Upgrade (LU) Packages
This should be a trivial step:
The LU procedures require that you have installed the two LU packages,
SUNWlur and SUNWluu, to your old/source boot environment (O/S BE). These
packages must be from the precise version and release of Solaris to which you
are planning to upgrade the system.
Because you cannot possibly perform such an upgrade without a full copy of
the installation source (on CD or DVD or Jumpstart image) and because that
source should naturally include these two packages, all you need to do is to
make sure you know from precisely where you will be sourcing your upgraded
version of Solaris. If your upgrade source is not a CD or DVD copy of the media

Page 14 of 52
then you might want to double-check that these two packages actually are
included. Otherwise, you need do nothing else for this issue at this point.

1.3.5 Determine Need for Patching the Two LU Packages
Check http://sunsolve.sun.com to see if there are any most-recent patches for
the precise versions of the SUNWlur and SUNWluu packages that you will be
installing. Download any such patches that you find.
This is probably not necessary or even practical, because, and for example,
Sun probably doesn't provide Solaris-8 patches for Solaris-9 packages. But you
might want to keep this possibility in mind, just in case.

Page 15 of 52
2 PHASE 2: SYSTEM PREPARATION
While working through the Planning Phase for a successful Live Upgrade
procedure, you probably discovered that you need to install and/or configure a
few things, perhaps including disks, packages, and patches. In this phase, you
actually perform these pre-Live-Upgrade tasks, before actually creating and then
upgrading your new boot environment (N/T BE).

2.1 Install new hard-disks if necessary.


[Probably nothing in particular need be said here, because it must be assumed
both (a) that the reader already knows how to do this or how to research it and
figure it out, and (b) that each situation could be a little bit different,
because of differences in system types, hard-disk brands and models, and
assigned device names.]

2.1.1 Physical Installation


[Probably nothing in particular need be said here, because it must be assumed
both (a) that the reader already knows how to do this or how to research it and
figure it out, and (b) that each situation could be a little bit different,
because of differences in system types, hard-disk brands and models, and
assigned device names.]

2.1.2 Logical Installation and Configuration


After any disks have been installed, you must perform either a reconfiguration
boot (ok boot -r or # reboot -- -r) or run the devfsadm command to make sure
that the OS, on your system's old/source boot environment (O/S BE), can see the
new disks and successfully assign new /dev/dsk and /dev/rdsk device-names to
them.
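
As an illustration of that device-discovery step (the exact device names
will, of course, differ on your system), the sequence after physically
installing a new disk might look like this:

# devfsadm             <--build /dev/dsk and /dev/rdsk entries for any new devices
# format </dev/null    <--list all disks the OS can now see, then exit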

2.2 Prepare the Hard-Disks for the New/Target Boot-Environment (N/T BE).
These steps apply, whether you have had to install more hard-disks or are
simply redeploying hard-disk space that was already installed on the system.
One way or the other, you must be sure that the appropriate disks contain
partitions of the appropriate sizes for the N/T BE that you have designed for
your Live Upgrade. And you might need to high-level-format the partitions,
depending on how you plan to build and upgrade the new file-system.

Page 16 of 52
2.2.1 Partition the hard-disks.
It is assumed that your hard-disks, on which you intend to place your
new/target boot environment (N/T BE), are not necessarily partitioned precisely
as you need them to contain the file-systems for your N/T BE. Partition them at this
time. (It is assumed that you already know how to do this or can look it up
elsewhere.)
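
One non-interactive way to do this --appropriate only when the source and
target disks have identical geometry, and shown here purely as an illustration
with made-up device names-- is to copy an existing label with prtvtoc and
fmthard; otherwise, use the partition menu of the interactive format(1M)
utility:

# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c1t2d0s2
# format c1t2d0    <--or adjust the slices interactively instead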

2.2.2 High-Level Format the Partitions to Your Desired File-System Type.
This step is necessary only if you intend to use a Flash Archive as the
source from which you will copy the OS/OE files onto your N/T BE. If, instead,
you are populating the N/T BE from your active boot-environment (ABE) then the
lucreate -m command will accomplish the file-system formatting for you, as part
of its create-&-copy operations.
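
When this step does apply (population from a Flash Archive), the high-level
format is just an ordinary newfs; a small example with a made-up slice name:

# newfs /dev/rdsk/c1t2d0s1    <--create a UFS file-system on the target slice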

2.3 Prepare the Active Boot Environment (ABE) for Live Upgrade
As mentioned in subsection 1.3, Prerequisites: Software Packages and
Patches, Live Upgrade requires certain minimum packages and patches to be
installed on your active boot environment (ABE). Following subsection 1.3, you
determined precisely which packages and patches are required for a successful
Live Upgrade on your particular system with your particular ABE and upgrading to
your desired version of Solaris. Now, you will actually install those packages
and patches.

2.3.1 Java-2 Runtime Environment (J2RE) + Patches


If your system-to-be-upgraded is either Solaris 2.6 or 7 or 8, and if you
have confirmed that this system already has the Java-2 Runtime Environment
(J2RE) and the latest J2RE patches then you need do nothing here. Otherwise,
see sub-subsection 1.3.1 for considerations regarding J2RE patching and Live
Upgrade and then, depending on your informed determination, install J2RE and/or
the latest related patches.

NOTE: It is not absolutely necessary that you install both the J2RE packages
and patches before you install any of the others that you have determined
need to be installed, except where the documentation indicates certain
package-dependencies and certain patch-dependencies or incompatibilities.
Outside of those possible issues, you can install them before or after or in
between any other packages and patches that need to be added. These Java-2
RE items are mentioned here, by themselves, only because Sun mentions them in
a special note on page 396 of the "Solaris 9 9/04 Installation Guide".

Page 17 of 52
2.3.2 Install all Live-Upgrade Prerequisite Packages
As mentioned in sub-subsection 1.3.2 of the Planning Phase, some packages are
required to be installed to your old/source boot-environment (O/S BE) before
performing the Live Upgrade.
By now, you should have determined which of these packages is already
installed and which are not already installed. You should have downloaded those
that need to be installed.
Install them now.
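
As a sketch of that installation step (the staging directory /var/tmp/LUprereq
and the two package names are placeholders; use whatever you actually
determined and downloaded in sub-subsection 1.3.2):

# pkgadd -d /var/tmp/LUprereq SUNWadmap SUNWadmc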

!!!***NOTE***!!!: If you will be running the "lucreate" or "luupgrade"
commands from the Korn shell, you **MUST** be using Sun's original copy of
"ksh" --version "88i"-- that comes with SunOS/Solaris!!! Some people
install an upgraded "ksh" --typically version 93e-- which has some known
bugs and, even otherwise, does not work well with some of Sun's routines.
To determine which version you are running, ....
a) # echo $0 <--to determine that you are running "ksh"
b) # <Esc> <--i.e., simply press the <Esc> key
c) # ^v <--i.e., press the <Ctrl>/<v> key-combination
Version M-11/16/88i <--Sun's original on a Solaris-9 system
Version 11/16/88i <--Sun's original on a Solaris-5.5.1 system
Version M-12/28/93e <--Add-on copy of ksh-93
Among the problems that you might encounter, if you try to run Sun's Live-
Upgrade or Flash-Archive routines with ksh-93, are (a) 2-to-4 GB limit
when attempting either to create a new boot-environment with "lucreate" or
to upgrade an alternate boot-environment with "luupgrade"; (b) 2-to-4 GB
file-size limit when creating a Flash Archive file. !!!!!!!!!!!!!!!!!!

2.3.3 Install the Latest Patches


As mentioned in sub-subsection 1.3.3 of the Planning Phase, you should
install all the latest patches appropriate to your old/source boot-environment
(O/S BE) before performing the Live Upgrade. For a list of those patches as of
2005/May/03, see that sub-subsection. Otherwise, search sunsolve.sun.com for
Info Doc 72099.
By now, you should already have determined which of the necessary patches are
not already on your system, and you should already have downloaded them or otherwise
made them available to yourself for this step.
Install those patches.
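
A sketch of that step, assuming the required patches were unpacked into a
staging directory such as /var/tmp/LUpatches (the directory name and the
patch-IDs shown are placeholders taken from the Solaris-8 list in
sub-subsection 1.3.3):

# patchadd -M /var/tmp/LUpatches 108434-18 108987-15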

2.3.4 Remove Any Old Live-Upgrade Packages


# pkgrm SUNWluu SUNWlur

(Even if you believe that these packages are not already on your system's
old/source boot-environment (O/S BE), you might as well run the above command
anyway, just to be sure. If they are there, they will be removed; if not, you
will get harmless error-messages.)

2.3.5 Install the New Live-Upgrade Packages


If you are upgrading to Solaris-8 Release 02/02 then you need to install the
SUNWlur and SUNWluu packages from precisely that version and release. If you
are upgrading to Solaris-9 Release 09/04 then you need to install the SUNWlur
and SUNWluu packages from precisely that version and release.

Page 18 of 52
If you are installing from the 2-CD media, these two packages are located on
"CD 2 of 2". On the Solaris-9 CD, the subdirectory-path should be
<cd_mount_point>/Solaris_9/EA/products/Live_Upgrade_2.0/sparc/Packages.
(It is assumed that you already know the general command techniques for
installing packages from CD or DVD or other installation-sources for the Solaris
OE. There is nothing out of the ordinary about the installation of these two
packages.)
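
Assuming a vold-managed mount point of /cdrom/cdrom0 for "CD 2 of 2" (your
mount point may well differ), the installation is a single pkgadd --an example
only:

# pkgadd -d /cdrom/cdrom0/Solaris_9/EA/products/Live_Upgrade_2.0/sparc/Packages \
> SUNWlur SUNWluu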

2.3.6 Install the Latest Patches for SUNWlur and SUNWluu (?)
If you found and downloaded any most-recent patches for your upgrade versions
of SUNWlur and SUNWluu during step 1.3.5 of Phase 1, install them now.

Page 19 of 52
3 PHASE 3: CREATE THE NEW BOOT ENVIRONMENT (BE)
This phase is centered around the execution of the lucreate command.
Except for the final subsection below, 3.5, the following 3.x subsections are
each mutually exclusive; not sequential. The first, 3.1, presents variations on
creating a new/target boot environment (N/T BE) directly from an old/source boot
environment (O/S BE) already on the system. The second, 3.2, presents the
creation and populating of an N/T BE from a Solaris Flash Archive. The third,
3.3, presents steps for integrating the creation and configuration of SVM
(Solaris Volume Manager) mirrored volumes with the N/T BE creation phase (the
lucreate command). The fourth, 3.4, briefly discusses considerations for using
VxVM (Veritas Volume Manager) volumes on your N/T BE. Subsection 3.5 provides
some brief techniques for confirming that your lucreate routine has completed
successfully.

!!!***NOTE***!!!: If you will be running the "lucreate" or "luupgrade"
commands from the Korn shell, you **MUST** be using Sun's original copy of
"ksh" --version "88i"-- that comes with SunOS/Solaris!!! Some people
install an upgraded "ksh" --typically version 93e-- which has some known
bugs and, even otherwise, does not work well with some of Sun's routines.
To determine which version you are running, ....
a) # echo $0 <--to determine that you are running "ksh"
b) # <Esc> <--i.e., simply press the <Esc> key
c) # ^v <--i.e., press the <Ctrl>/<v> key-combination
Version M-11/16/88i <--Sun's original on a Solaris-9 system
Version 11/16/88i <--Sun's original on a Solaris-5.5.1 system
Version M-12/28/93e <--Add-on copy of ksh-93
Among the problems that you might encounter, if you try to run Sun's Live-
Upgrade or Flash-Archive routines with ksh-93, are (a) 2-to-4 GB limit
when attempting either to create a new boot-environment with "lucreate" or
to upgrade an alternate boot-environment with "luupgrade"; (b) 2-to-4 GB
file-size limit when creating a Flash Archive file. !!!!!!!!!!!!!!!!!!

3.1 Create a New Boot Environment from an Old Boot Environment
Each of the following 3.1.x sub-subsections represents a mutually exclusive
scenario; they are not meant all to be run sequentially. Each represents an
example of a command-line technique for creating a new/target boot environment
(N/T BE) and populating it directly from an old/source boot environment (O/S
BE), such as the (presently) active boot environment (ABE). Because any
instance of the lucreate command can be very long, all the following examples
make liberal use of the escape character (\), for continuing the command on the
next line.

The general command-syntax, for the lucreate command, is as follows:

# lucreate [-A 'optional N/T BE description'] \
           [-c <O/S BE name>] \                  ◄▬ *
           -n <N/T BE name> \                    ◄▬ **
           -m <fs_mtpt>:c#t#d#s#:<fs_type> \     ◄▬ !
           -m <fs_mtpt>:merged:<fs_type> \       ◄▬ !!
           -m -:c#t#d#s#:swap \                  ◄▬ #
           [-m -:shared:swap]                    ◄▬ ##

Page 20 of 52

* -- The "-c <O/S BE name>" is used to assign an official boot-environment
name (BE_name) to an old/source boot environment (O/S BE). This switch
and option can be used only when no such official BE_name has ever before
been assigned to a BE. This is done the very first time that the lucreate
command is successfully run on a system. (The command assigns such a name
to the ABE automatically, when it discovers that no such name has yet been
assigned, whether or not you use the "-c" switch and option.) (Example:
-c Sol7_BE )

** -- The "-n <N/T BE name>" must be used for assigning a BE_name to the
new/target boot environment (N/T BE). (Example: -n Sol9_BE )

! -- Each and every new critical file-system, that you intend to exist in
the N/T BE, requires that you use the "-m <fs_mtpt>:c#t#d#s#:<fs_type>"
switch and options to specify a mount point (<fs_mtpt>), a device-to-be-
mounted (/dev/dsk/c#t#d#s#, or, simply c#t#d#s#), and a file-system type
(<fs_type>). So, if your original root (/) includes /opt, /usr, and /var
simply as large directory-trees within that one file-system but you want
/opt, /usr, and /var to be separate file-systems in the N/T BE then you
must include a separate "-m" switch and set of options for root (/), for
/opt, for /usr, and for /var (this is called "splitting file-systems").
(Example: -m /usr:/dev/dsk/c1t2d3s4:ufs or -m /usr:c1t2d3s4:ufs )

!! -- If you have two or more file-systems, in your O/S BE, that you want to
be only one file-system in the N/T BE, you must use the "merged" option
(instead of a /dev/dsk/c#t#d#s# device-name) with the "-m" switch. For
example: If your O/S BE has /, /usr, and /opt as separate file-systems but
you want /usr and /opt simply to be part of the / file-system in the N/T
BE, you would specify a "-m" switch with the "merged" option for each one,
/usr and /opt. (An example is shown further below.)
(Example: -m /opt:merged:ufs )

# -- The "-m -:c#t#d#s#:swap" switch and options are for specifying a new
slice to be used for swap-space for the N/T BE. If you want multiple
swap-slices for the N/T BE then you will include a separate "-m -
:c#t#d#s#:swap" combination for each such slice. If you include no
references at all to specific swap-slices in the lucreate command then the
command assumes that you will be using the exact same swap-space for the
N/T BE as you have been using for the O/S BE.
(Example: -m -:c4t3d2s1:swap )

## -- If you want to use, for your N/T BE, a combination of new swap-slices
plus the old swap-space that you have been using for your O/S BE, you will
include at least one of the "-m -:c#t#d#s#:swap" combinations described
above but also exactly one "-m -:shared:swap" combination, which refers
to all the swap-space being used by your O/S BE. You never need to use
this "-m -:shared:swap" combination if you are not intending to use both
the old swap-space and some newly-created swap-slices for the N/T BE.
(Example: always the exact same content: -m -:shared:swap )

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)

3.1.1 Scenario 1 (Create N/T BE From O/S BE)

Page 21 of 52
We begin with a scenario that is about as simple as possible. In the
old/source boot environment (O/S BE), we have only a / (root) file-system, which
contains all the other major directory-trees needed for SunOS/Solaris, such as
/opt, /var, and /usr. We want the same file-system configuration in the
new/target boot environment (N/T BE). So, we need only one "-m" option. This
will be the first Live Upgrade on this system, which means that we need to
specify a BE_name for the O/S BE. We'll also add a comment with "-A". The
actual screen-output, from this command, follows immediately below it.

# lucreate -A "My 1st LU attempt" \ ◄▬[ -A comment ]
> -c Sol8_02Feb \ ◄▬[ -c BE_name for O/S BE ]
> -n Sol9_04Sep \ ◄▬[ -n BE_name for N/T BE ]
> -m /:c0t0d0s3:ufs ◄▬[ -m specifies new partition for N/T BE "/"]
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
The device name <c0t0d0s3> expands to device path </dev/dsk/c0t0d0s3>
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <Sol8_02Feb>.
Creating initial configuration for primary boot environment <Sol8_02Feb>.
PBE configuration successful: PBE name <Sol8_02Feb> PBE Boot Device
</dev/dsk/c0t0d0s1>.
Comparing source boot environment <Sol8_02Feb> file systems with the
file system(s) you specified for the new boot environment. Determining
which file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices

Updating system configuration files.


Setting description for boot environment <Sol9_04Sep>.
Updating boot environment description database on all BEs.
Creating configuration for boot environment <Sol9_04Sep>.
Creating boot environment <Sol9_04Sep>.
Creating file systems on boot environment <Sol9_04Sep>.
Creating <ufs> file system for </> on </dev/dsk/c0t0d0s3>.
Mounting file systems for boot environment <Sol9_04Sep>.
Calculating required sizes of file systems for boot environment <Sol9_04Sep>.
Populating file systems on boot environment <Sol9_04Sep>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <Sol9_04Sep>.
Creating compare database for file system </>.
Updating compare databases on boot environment <Sol9_04Sep>.
Making boot environment <Sol9_04Sep_JRA> bootable.
Population of boot environment <Sol9_04Sep> successful.
Creation of boot environment <Sol9_04Sep> successful.
#

(Go to subsection 3.5 for suggested ways for how to confirm that your
lucreate command completed successfully.)

Page 22 of 52
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)

3.1.2 Scenario 2 (Create N/T BE From O/S BE)


In this scenario, the old/source boot environment (O/S BE) includes / (root),
/usr, /opt, and /var each as separate file-systems, and you want to preserve
that configuration in the new/target boot environment (N/T BE). You intend to
use the same swap-space for the N/T BE as you have been using for the O/S BE
(so, no reference to swap in this command-example). You have never performed a
Live Upgrade on this system before now and you want to apply a custom BE_name to
the O/S BE. (Extra space, after the "-m", is simply for visual clarity.)

# lucreate -A 'First new LU BE on this system' \ ◄▬[optional comment]
> -c Sol7_BE \ ◄▬[BE_name applied to the O/S BE]
> -m /:c1t2d0s1:ufs \ ◄▬[specify new slice for new /]
> -m /usr:c1t2d0s3:ufs \ ◄▬[specify new slice for new /usr]
> -m /var:c1t2d0s4:ufs \ ◄▬[specify new slice for new /var]
> -m /opt:c1t2d0s5:ufs \ ◄▬[specify new slice for new /opt]
> -n Sol9_BE ◄▬[BE_name for the N/T BE]
[screen output omitted ... see 3.1.1]

(Go to subsection 3.5 for suggested ways for how to confirm that your
lucreate command completed successfully.)

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)

3.1.3 Scenario 3 (Create N/T BE From O/S BE)


In this scenario, /usr, /var, and /opt are all simply major directory-trees
within the / (root) file-system on the O/S BE but you want each of them to be
separate file-systems in the N/T BE. Notice that the "-m" elements of this
example are exactly the same as in the previous example, in which those three
file-systems started out as separate. In this scenario, you also want to
specify one new slice for swap-space for the N/T BE. Again, you are applying a
BE_name to the O/S BE, because you've never run Live Upgrade on this system
before now. (Extra space, after the "-m", is simply for visual clarity.)

# lucreate -A 'First new LU BE on this system' \ ◄▬[optional comment]
> -c Sol7_BE \ ◄▬[BE_name applied to the O/S BE]
> -m /:c1t2d0s1:ufs \ ◄▬[specify new slice for new /]
> -m /usr:c1t2d0s3:ufs \ ◄▬[specify new slice for new /usr]
> -m /var:c1t2d0s4:ufs \ ◄▬[specify new slice for new /var]
> -m /opt:c1t2d0s5:ufs \ ◄▬[specify new slice for new /opt]
> -m -:c1t2d0s0:swap \ ◄▬[specify new slice for swap]
> -n Sol9_BE ◄▬[BE_name for the N/T BE]
[screen output omitted ... see 3.1.1]

(Go to subsection 3.5 for suggested ways for how to confirm that your
lucreate command completed successfully.)

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)

Page 23 of 52
3.1.4 Scenario 4 (Create N/T BE From O/S BE)
In this scenario, /usr, /var, and /opt are all separate file-systems in the
O/S BE but you want them all to be merged as major directory-trees within the /
(root) file-system in the N/T BE; so, you use the "merged" option three times.
You also want, this time, to use a combination of the original swap-space and
some newly-defined swap-slices for the N/T BE. This time, you do not try to
apply a custom BE_name to the O/S BE (either because you don't care or because
you have previously performed a Live Upgrade on this system, which means that it
already has an official BE_name). You also do not bother with a comment this
time. (Extra space, after the "-m", is simply for visual clarity.)

# lucreate \
> -m /:c1t2d0s1:ufs \ ◄▬[specify new slice for new /]
> -m /usr:merged:ufs \ ◄▬[/usr to be merged under / fs]
> -m /var:merged:ufs \ ◄▬[/var to be merged under / fs]
> -m /opt:merged:ufs \ ◄▬[/opt to be merged under / fs]
> -m -:shared:swap \ ◄▬[says to use old swap for N/T BE]
> -m -:c1t2d0s0:swap \ ◄▬[specify also a new slice for swap]
> -n Sol9_BE ◄▬[BE_name for the N/T BE]
[screen output omitted ... see 3.1.1]

(Go to subsection 3.5 for suggested ways for how to confirm that your
lucreate command completed successfully.)

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)

3.1.5 Scenario 5 (Create N/T BE From O/S BE)


In this scenario, you want to perform an unusual merge. The /opt directory-
tree is a separate file-system in the O/S BE and you want it to be "merged" in
the N/T BE. However, you do not want it merged simply as /opt within the /
(root) file-system. Rather, you want your /opt to be moved to /usr/opt in the
N/T BE. This scenario will be run exactly as the example in Scenario 4
(sub-subsection 3.1.4), except for this one detail.

# lucreate \
> -m /:c1t2d0s1:ufs \ ◄▬[specify new slice for new /]
> -m /usr:merged:ufs \ ◄▬[/usr to be merged under / fs]
> -m /var:merged:ufs \ ◄▬[/var to be merged under / fs]
> -m /usr/opt:merged:ufs \ ◄▬[/opt to be merged under /usr fs]
> -m -:shared:swap \ ◄▬[says to use old swap for N/T BE]
> -m -:c1t2d0s0:swap \ ◄▬[specify also a new slice for swap]
> -n Sol9_BE ◄▬[BE_name for the N/T BE]
[screen output omitted ... see 3.1.1]

(Go to subsection 3.5 for suggested ways for how to confirm that your
lucreate command completed successfully.)

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)

3.1.6 Scenario 6 (Create N/T BE From O/S BE)
This scenario is substantially different from the previous five. In each of
the previous five scenarios, you did not explicitly specify the old/source boot
environment (O/S BE) and, because of this, the lucreate command simply assumed
that you wanted the O/S BE to be the currently active boot environment (ABE).
In this scenario however, you are performing Live Upgrade on a system that
already contains two or more BEs and you want your actual O/S BE, for the
lucreate command, to be a BE other than the current ABE. This means that you
must use the lucreate "-s" switch and option to specify the O/S BE that you want
to use. For this variation on the "-s" switch, your option must be the BE_name
for the O/S BE; device-names do not work here. Except for the "-s" switch and
option, this example will be based on the example from Scenario 1 (though no "-
c" here, because this is not the first time a Live Upgrade has been performed on
this system).

# lucreate -A 'Another new LU BE on this system' \ ◄▬[optional comment]
> -s Sol26_BE_VersA \ ◄▬[specify source for N/T BE]
> -m /:c1t2d0s1:ufs \ ◄▬[specify new slice for new /]
> -m /usr:c1t2d0s3:ufs \ ◄▬[specify new slice for new /usr]
> -m /var:c1t2d0s4:ufs \ ◄▬[specify new slice for new /var]
> -m /opt:c1t2d0s5:ufs \ ◄▬[specify new slice for new /opt]
> -n Sol9_BE ◄▬[BE_name for the N/T BE]
[screen output omitted ... see 3.1.1]

(Go to subsection 3.5 for suggested ways to confirm that your
lucreate command completed successfully.)

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)

3.1.7 Scenario 7 (Create N/T BE From O/S BE)


Again, this seventh scenario is substantially different from the previous six.
The command-example is actually rather simple but it represents what might be
considered a somewhat unusual situation; so, it requires a bit of setup. The
lucreate command includes a preserve option that allows for a partition or SVM-
submirror to be used, in the creation of a file system on the N/T BE, without
destroying and copying-over its contents. Thus, the new file-system, in the N/T
BE, becomes populated with directories and files, simply by the fact that a
partition, used to create that file-system, was allowed to keep the file-system
that it contained in the first place. If you have any extra partitions that can
be used in this way, this can save copying time during the execution of the
lucreate command.
Of course, this raises the following question: If this partition does not
contain a shareable file-system, which need not be included in the lucreate
routine, but, rather, contains a critical file-system, which typically must be
included in the lucreate routine, how or why do you have a second copy of that
entire file-system, residing on its own dedicated slice? In other words, if you
have only one copy of the /opt file-system, you cannot simply use that file-
system's partition with the preserve option on the lucreate command-line,
because, in doing so, you would have two different boot environments that shared
a single copy of a critical file-system, which is not allowed. This means that,
in order to use a partition containing the /opt file-system with the preserve
option on the lucreate command-line, you absolutely must have that /opt file-
system sitting on two separate partitions --that is, one original, actually
being used by the active boot environment (ABE), and one copy, which is
available for the lucreate routine.
But why would you have two separate and identical copies of /opt like that?
The answer, to that question, is for you to figure out. One example is that you
might have decided to create a separate copy of /opt (perhaps using a special
syntax with the tar command or with the cpio command), sometime prior to your
Live Upgrade routine, simply because you had an opportunity to make that copy at
an earlier time, in order to save yourself some time, later on, when you were to
run the lucreate routine. Whatever the reason(s) for which you might have any
such replications of your critical file-systems, the preserve option can be used
as described.
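
For illustration only --the device name here is hypothetical, though it matches
the spare slice used in the example below-- one way such a spare copy of /opt
might have been made at an earlier time is:

# newfs /dev/rdsk/c2t3d1s3 <--[build a new FS on the spare slice]
# mount /dev/dsk/c2t3d1s3 /mnt
# cd /opt; find . -print | cpio -pdm /mnt <--[replicate /opt onto it]
# cd /; umount /mnt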
(The preserve option will probably make more sense, relative to real-life
situations, when presented again in sub-subsection 3.3.2, as an option for
moving a submirror from the O/S BE to the N/T BE.)

In this scenario, the O/S BE has a separate / (root) file-system and a
separate /opt, and we want to have this same configuration in the N/T BE. We
have only one copy of the O/S BE's / file-system; so, a file-system copy is
included from the ABE's / FS to the N/T BE's / FS. However, we somehow have two
separate copies of the ABE's /opt FS: one that the ABE is actually using and the
other --either mounted to a different mount-point or not mounted at all-- that
the ABE is not using. We use that second partition, containing the alternate
/opt FS, as the slice for /opt in the N/T BE, and we "preserve" its contents, so
to avoid having to wait for the entire /opt FS to copy from the O/S BE to the
N/T BE. We are sharing swap-space between the O/S BE and the N/T BE; no "-A"
comment and we'll pretend that the ABE has already had a BE_name applied to it
at an earlier time. So, this lucreate example is particularly short and simple:

# lucreate \
> -m /:c1t2d0s1:ufs \ ◄▬[specify new slice for new /]
> -m /opt:c2t3d1s3:preserve,ufs \ ◄▬[/opt to be preserved]
> -n Sol9_BE ◄▬[BE_name for the N/T BE]
[screen output omitted ... see 3.1.1]

A simple variation of this example is where you have the /opt FS replicated
on an SVM (Solaris Volume Manager, formerly Solstice DiskSuite or SDS)
mirrored volume, and you "preserve" the entire contents of that mirror
for the /opt FS in the N/T BE (more about SVM RAID in subsection 3.3):

# lucreate \
> -m /:c1t2d0s1:ufs \ ◄▬[specify new slice for new /]
> -m /opt:d40:preserve,ufs \ ◄▬[/opt to be preserved]
> -n Sol9_BE ◄▬[BE_name for the N/T BE]
[screen output omitted ... see 3.1.1]

(Go to subsection 3.5 for suggested ways to confirm that your
lucreate command completed successfully.)

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)

3.2 Create a New Boot Environment with a Flash Archive
The point, of creating a new/target boot environment (N/T BE) with a Flash
Archive, is to populate the N/T BE with the contents of the Flash Archive rather
than directly from the file systems of a BE on the system being upgraded. The
Flash Archive might have been created from the system being upgraded or from
another system. The Flash Archive might represent an actual SunOS/Solaris
upgrade for the system or might simply be an alternate BE for the system.
The essence of the technique, for using a Flash Archive with Live Upgrade
(LU), is (a) to create empty disk-space for the N/T BE, with the lucreate
command, and then (b) to populate that disk-space from a Flash Archive, using
the luupgrade(1M) command. This means that most of the work is done in Phase 4
(subsection 4.4).
This section naturally covers only the lucreate command. The following
syntax example should be sufficient for you to construct a successful command-
line to create your N/T BE.

# lucreate -A "My 1st LU attempt" \ ◄▬[ -A comment ]


> -s - \ ◄▬[ -s "-" to not copy files to N/T BE]
> -n Sol9_04Sep \ ◄▬[ -n BE_name for N/T BE ]
> -m /:c0t0d0s3:ufs ◄▬[ -m specifies new partition for N/T BE "/"]

(Go to subsection 3.5 for suggested ways to confirm that your
lucreate command completed successfully.)

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)

3.3 Create a New Boot Environment with SVM Mirrors
The following 3.x.y sub-subsections describe different ways to integrate SVM
(Solaris Volume Manager, formerly Solstice DiskSuite or SDS) mirroring into your
actual Live Upgrade commands and procedures.

NOTE: If you are building new file-systems on new SVM mirrored volumes
(lucreate) from old file-systems that are also SVM mirrored volumes, you
should use the metastat(1M) command first, to make sure that the existing
mirrors and submirrors, from which you want to build the new, do not need
maintenance and are not busy, else your lucreate command will fail. The
lucreate command will check to see if an old/source mirrored device (used
in the command) is resyncing and will generate an error message if it
finds that this is so. (Resyncing is the copying of the contents of one
submirror to another, within a single volume, after either a submirror
failure or a system failure or the onlining or addition of a submirror to
the volume.)
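
As a quick pre-check (the volume name d10 is only an example; substitute the
name of your own O/S BE mirror), something like the following can expose a
resync or maintenance condition before you commit to the lucreate run:

# metastat d10 | egrep -i 'state|resync'
<--[look for "Okay"; any "Resyncing" or "Needs maintenance" condition
means you should wait or repair before running lucreate]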

NOTE: Page 403, of the "Solaris 9 9/04 Administration Guide" says "Use
the lucreate command rather than Solaris Volume Manager commands to
manipulate volumes on inactive boot environments. The Solaris Volume
Manager software has no knowledge of boot environments, whereas the
lucreate command contains checks that prevent you from inadvertently
destroying a boot environment. For example, lucreate prevents you from
overwriting or deleting a Solaris Volume Manager volume." But it further
states that you must use SVM, itself, to manage and manipulate complex
RAID devices that you have already created with SVM, the implication being
that these would be part of your old/source boot environment (O/S BE); not
part of the new/target boot environment (N/T BE) that you will create with
lucreate.

3.3.1 Scenario 1 (N/T BE w/ SVM Mirrors)
We begin with probably the simplest scenario, in which you want to create a
new / (root) file-system mirror while creating the new/target boot environment
(N/T BE); you want to copy the contents of the / file-system from the old/source
boot environment (O/S BE), which is a typical thing to do; and it doesn't matter
as to whether or not the O/S BE's / file-system was a RAID device of any sort.
We begin with these parameters:

• Your O/S BE's / file-system contains all the critical directory-trees
needed for the Solaris OE, including /usr, /var, /opt, and so forth. So,
/ is the only file-system that you will be referencing in this lucreate
command.
• You want your N/T BE's / file-system to be a 3-way mirror (total of 3
submirrors). The newly-assigned volume (or metadevice) name of the new /
mirror, the c#t#d#s# names of its three submirrors, and their newly-assigned
volume names will all be obvious from the command
example, below. Each of the submirrors is unused space before this
operation. Nothing need be done with actual SVM commands in order to
assign the volume (metadevice) names to these four devices; all those
details are managed within the lucreate command.
• Your O/S BE and your N/T BE will share the exact same swap-space; so,
there will be no references to swap in this lucreate command.

The general command-syntax is as follows:

# lucreate [-s <O/S_BE_name>] -n <N/T_BE_name> \ ◄▬ *
-m <fs_name>:<d##>:mirror,ufs \
-m <fs_name>:<c#t#d#s#>,<d##>:attach \
-m <fs_name>:<c#t#d#s#>,<d##>:attach \
-m <fs_name>:<c#t#d#s#>,<d##>:attach

Example for this scenario:

# lucreate [-s Sol7_BE] -n Sol9_BE \ ◄▬ *
-m /:d20:mirror,ufs \ ◄▬[d20 already exists]
-m /:c1t1d1s1,d21:attach \ ◄▬[d21 created now from c1t1d1s1]
-m /:c3t3d3s3,d22:attach \ ◄▬[d22 created now from c3t3d3s3]
-m /:c5t5d5s5,d23:attach ◄▬[d23 created now from c5t5d5s5]
[screen output omitted ... see 3.1.1]

* --The "-s <O/S_BE_name>" is not necessary unless you are not using the
active boot environment (ABE) as the source for copying the contents into
the N/T BE.

(Go to subsection 3.5 for suggested ways to confirm that your
lucreate command completed successfully.)

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)

3.3.2 Scenario 2 (N/T BE w/ SVM Mirrors)


This second scenario is more complex. With a typical lucreate command (not
involving a Flash Archive), the new file-systems in the N/T BE get populated
from the contents of the corresponding file-systems in the O/S BE, regardless of
whether or not any mirrored volumes are involved. In this scenario, we start
with a mirrored volume in the O/S BE; we create a corresponding mirrored volume
in the N/T BE; and that new mirrored volume's file-system gets populated by
transferring a submirror, with all its contents intact, from the old mirror to
the new. This operation is somewhat like the story of how Eve was created from
one of Adam's ribs: the "rib" is the submirror that gets transferred from the
old to the new.

• Your O/S BE's / file-system contains all the critical directory-trees
needed for the Solaris OE, including /usr, /var, /opt, and so forth. So,
/ is the only file-system that you will be referencing in this lucreate
command.
• Your O/S BE has the / (root) file-system on a 3-way SVM mirror.
• You want your N/T BE also to have its / file-system on a 3-way SVM mirror
and you are willing for your original mirror to be without one of its
submirrors, at least temporarily.
• When you run the lucreate command, two of the submirrors, for the new
volume in the N/T BE, come from presently unused partitions but the third
submirror is extracted from the 3-way mirror used for the / file-system in
the O/S BE.
• You never use the "-s" switch with this form of the command, because the
"source", for copying directories and files into the N/T BE, is always the
submirror that you are detaching, attaching, and preserving.
• Your O/S BE and your N/T BE will share the exact same swap-space; so,
there will be no references to swap in this lucreate command.

The general command-syntax is as follows:

# lucreate -n <N/T_BE_name> \
-m <fs_name>:<d##>:mirror,ufs \
-m <fs_name>:<c#t#d#s#>,<d##>:detach,attach,preserve \
-m <fs_name>:<c#t#d#s#>,<d##>:attach \
-m <fs_name>:<c#t#d#s#>,<d##>:attach

NOTE: The "detach,attach,preserve", on the 2nd line in the above syntax,


means first to "detach" the submirror from its original volume in the O/S
BE; second to "attach" that same submirror to the new mirrored volume in
the N/T BE; third to "preserve" all the file-system contents of that
submirror, thus populating the new mirrored volume with all the
appropriate directories and files.

Example for this scenario:

# lucreate -n Sol9_BE \
-m /:c1t1d1s1,d30:mirror,ufs \ ◄▬[d30 created now from c1t1d1s1]
-m /:c3t3d3s3,d31:detach,attach,preserve \ ◄▬[d31 created now]
-m /:c5t5d5s5,d32:attach \ ◄▬[d32 created now from c5t5d5s5]
-m /:c7t7d7s7,d33:attach ◄▬[d33 created now from c7t7d7s7]
[screen output omitted ... see 3.1.1]

(Go to subsection 3.5 for suggested ways to confirm that your
lucreate command completed successfully.)

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)

3.3.3 Scenario 3 (N/T BE w/ SVM Mirrors)

This scenario is a variation on the previous one, differing only in that we
create two SVM mirrored-volumes with the lucreate command, instead of only one.
Hopefully, it will be obvious that more than two can be created with a single
lucreate routine, simply by extending this example:

# lucreate -n Sol9_BE \
-m /:d30:mirror,ufs \ ◄▬[d30 already exists]
-m /:c1t1d1s1,d31:detach,attach,preserve \ ◄▬[d31 created now]
-m /:c3t3d3s3,d32:attach \ ◄▬[d32 created now from c3t3d3s3]
-m /:c5t5d5s5,d33:attach \ ◄▬[d33 created now from c5t5d5s5]
-m /opt:c2t3d4s5,d25:mirror,ufs \ ◄▬[d25 created now from c2t3d4s5]
-m /opt:c2t4d5s6,d26:attach \ ◄▬[d26 created now from c2t4d5s6]
-m /opt:c2t5d6s7,d27:attach ◄▬[d27 created now from c2t5d6s7]
[screen output omitted ... see 3.1.1]

(Go to subsection 3.5 for suggested ways to confirm that your
lucreate command completed successfully.)

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)

3.4 Create a New Boot Environment with VxVM Volumes
Sun's "Solaris Live Upgrade 2.0 Guide", pp 76-78, give a detailed, step-by-
step explanation of what to do when "System Panics When Upgrading On Veritas
VxVm". Sun's "Solaris 9 9/04 Administration Guide", pp 644-645, give the same
explanation. Also, this same Solaris-9 guide says, on p 402, that you "can
create a new boot environment that contains any combination of physical disk
slices, Solaris Volume Manager volumes, or Veritas Volume Manager volumes". And
again, this same guide, on p 402, says that the lucreate -m command can
recognize a "Veritas Volume Manager [VxVM] volume in the form [syntax] of
/dev/vx/dsk/volume_name".
The problem with all these references, however, is that neither of these two
guides provides any detailed command-syntax or examples for actually creating
any type of VxVM volume as part of the lucreate routine. (The LU-2.0 guide
apparently could not provide any such examples, because, at that time, LU did
not support the creation of any types of RAID devices as part of the lucreate
routine.) So, it seems, presently, that Sun does not provide any explicit
directions for integrating VxVM volumes into the creation phase of the Live
Upgrade procedures. Also, one of Sun's technical employees, from New Zealand,
has stated that he firmly believes it is at least a bad idea, and likely
impossible, to successfully integrate VxVM with Live Upgrade in this way (web-
posted documents internal to Sun's Wide-Area Network).
So, this document does not presently provide any means by which to integrate
VxVM volumes into the lucreate routine, as has been provided regarding a similar
integration with SVM.
Apparently: if you want your new/target boot environment (N/T BE) to include
VxVM volumes of any sort, you must first lucreate the N/T BE without the VxVM
volumes; then reboot into the N/T BE (after first luactivate'ing it); then
install and configure VxVM as you desire.
NOTE: If you want many or most of your N/T BE's file-systems to reside on
VxVM volumes of one sort or another, you should include as few file-systems and
partitions as possible when you run your lucreate routine. This way, you can
proceed basically as follows:

a) Determine the fewest file-systems that must be included in your
lucreate routine.

b) Execute the lucreate routine; upgrade the N/T BE; activate the N/T
BE; reboot to it.

c) Install and configure VxVM, supposedly including the "encapsulation"
of your new / file-system.

d) "Initialize" as many disks as you want; build the appropriate file-
system infrastructures on them as needed.

e) Temporarily mount other file-systems, from the O/S BE, that you want
to be copied over onto the newly-created VxVM volumes; perform the
copy (tar or cpio or your preferred technique); umount those
temporary mounts. (A rough sketch of this step follows this list.)
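
A rough, hypothetical sketch of step (e) follows. The device name, mount-point,
and the choice of tar are illustrative only, and this assumes you have already
rebooted into the N/T BE and have the new VxVM-backed /opt mounted:

# mount -F ufs -o ro /dev/dsk/c0t0d0s5 /mnt <--[old /opt from the O/S BE]
# cd /mnt; tar cf - . | (cd /opt; tar xpf -) <--[copy contents into new /opt]
# cd /; umount /mnt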

More and better instructions, for using VxVM with Live Upgrade, will be added
to this document as they become available or otherwise discovered.

3.5 Confirm the Success of "lucreate"


Before moving on to Phase 4 and running the luupgrade command, it is
advisable for you to confirm the successful completion of your lucreate command.
If you created the new/target boot environment (N/T BE) by sourcing the active
boot environment (ABE) or some other old/source boot environment (O/S BE) on the
system, the following steps can help you to confirm that your N/T BE was created
properly:

# lustatus <--[displays status of all boot environments on system:
look for "Yes" in the "Complete" column for this BE]
# mount /dev/dsk/<devname_for_new_/_fs> /mnt
# ls /mnt
# cat /mnt/etc/release <--[contents s/b same as /etc/release]
# cat /etc/release <--[contents s/b same as /mnt/etc/release]
# umount /mnt

4 PHASE 4: UPGRADE THE
NEW BOOT ENVIRONMENT
!!!***NOTE***!!!: If you will be running the "lucreate" or "luupgrade"
commands from the Korn shell, you **MUST** be using Sun's original copy of
"ksh" --version "88i"-- that comes with SunOS/Solaris!!! Some people
install an upgraded "ksh" --typically version 93e-- which has some known
bugs and, even otherwise, does not work well with some of Sun's routines.
To determine which version you are running, ....
a) # echo $0 <--to determine that you are running "ksh"
b) # <Esc> <--i.e., simply press the <Esc> key
c) # ^v <--i.e., press the <Ctrl>/<v> key-combination
Version M-11/16/88i <--Sun's original on a Solaris-9 system
Version 11/16/88i <--Sun's original on a Solaris-5.5.1 system
Version M-12/28/93e <--Add-on copy of ksh-93
Among the problems that you might encounter, if you try to run Sun's Live-
Upgrade or Flash-Archive routines with ksh-93, are (a) 2-to-4 GB limit
when attempting either to create a new boot-environment with "lucreate" or
to upgrade an alternate boot-environment with "luupgrade"; (b) 2-to-4 GB
file-size limit when creating a Flash Archive file. !!!!!!!!!!!!!!!!!!

This phase is centered around the execution of the luupgrade command. The
entire point of this phase is to apply the actual OS/OE upgrade to the
new/target boot environment (N/T BE) that you just created with the lucreate
command in Phase 3. Hopefully you understand by now but, just in case: the
luupgrade command does not affect the O/S BE in any way; it affects only the N/T
BE.

Some of the following 4.x subsections are mutually exclusive: they are not
intended to be executed one right after another but, rather, one instead of
another. Notes will be included, at the end of each subsection, to point to any
other subsections to which you should go from there.

Basic Syntax-Forms for luupgrade:

1. The "-u" switch indicates to perform a basic OS-upgrade from some media-
source:

# luupgrade -u -n <BE_name> -s <install_source> [-N]

The "-n <BE_name>" is naturally for the BE to be upgraded.

The "-s <install_source>" can be Solaris-install CD (see Step 2, below) or DVD or


shared install-source from a Jumpstart server.

The "-N" switch allows you to perform a so-called "dry run", to check for the correctness of
your command-syntax and for the viability of actually accomplishing this particular upgrade,
but without actually affecting anything.
2. The "-i" switch is mostly used to continue the luupgrade step when upgrading
the new BE from multiple Solaris-install CDs. Whenever you are using the
luupgrade command to perform the upgrade from the typical 2 CDs for Solaris
installation, this "-i" switch will always be used to continue the upgrade
from the second CD and any more beyond that.

# luupgrade -i -n <BE_name> -s <install_source> \
[ -O "-nodisplay -noconsole" ] [-N]

The "-n <BE_name>" is naturally for the BE to be upgraded.

The "-s <install_source>" can be Solaris-install CD (see Step 2, below) or DVD or


shared install-source from a Jumpstart server.

The " -O '-nodisplay -noconsole' " are options, related to installer(1M), that prevent
the installer-GUI from displaying unnecessarily when finishing an OS/OE-upgrade from the
second CD of the Solaris install-media.

The "-N" switch allows you to perform a so-called "dry run", to check for the correctness of
your command-syntax and for the viability of actually accomplishing this particular upgrade,
but without actually affecting anything.
3. The "-f" switch indicates to install an OS/OE, from scratch, from a Flash
Archive.

# luupgrade -f -n <BE_name> -s <install_source> \
-a <archive_location> [-N]

The "-n <BE_name>" is naturally for the BE to be upgraded.

The "-s <install_source>" can be Solaris-install CDs or DVD or shared install-source


from a Jumpstart server.

The "-a <archive_location>" is the location of the Flash Archive.

The "-N" switch allows you to perform a so-called "dry run", to check for the correctness of
your command-syntax and for the viability of actually accomplishing this particular upgrade,
but without actually affecting anything.
(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this command.)

4.1 Upgrading from Solaris-9 CDs


This example assumes that you have sufficient experience in installing the
Solaris OE from an OS image distributed across two OS-install CDs. For these
examples, we are assuming that the N/T BE's BE_name is Sol9_BE.

1. If the vold daemon is running then the CDs will mount automatically. If it
is not running, you might want to start it, so that it can run for the
duration of this procedure, to simplify things. Simply insert the Solaris-9
CD "1 of 2" into the drive and wait a few seconds for it to mount.

2. Look to see if the CD mounted successfully:

# cd /cdrom/cdrom0
# ls -al <--[If you see directory "s0", mount succeeded.]

3. If necessary, remind yourself of the BE_name of your N/T BE:

# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
------------------ -------- ------ --------- ------ --------
Sol8_BE yes yes yes no -
Sol9_BE yes no no yes -

4. Run luupgrade from the first CD (example screen-output follows command):

# luupgrade -u -n Sol9_BE -s /cdrom/cdrom0/s0

Validating the contents of the media </cdrom/cdrom0/s0>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <9>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <Sol9_04Sep_JRA>.
Determining packages to install or upgrade for BE <Sol9_BE>.
Performing the operating system upgrade of the BE <Sol9_BE>.
CAUTION: Interrupting this process may leave the boot environment
unstable or unbootable.
Upgrading Solaris: 10% completed
[NOTE: You might want to run lustatus, while this luupgrade command is
running, to note that the "Copy Status" says "UPGRADING".]
[NOTE: above percentage increases as upgrade proceeds, until ....]
Upgrading Solaris: 100% completed
Installation of the packages from this the media is complete.
Adding operating system patches to the BE <Sol9_BE>.
The operating system patch installation is complete.
INFORMATION: </var/sadm/system/logs/upgrade_log> contains a log of the
upgrade operation.
INFORMATION: </var/sadm/system/data/upgrade_cleanup> contains a log of
cleanup operations required.
WARNING: <1> packages failed to install properly on boot environment
<Sol9_BE>.
INFORMATION: </var/sadm/system/data/upgrade_failed_pkgadds> on boot
environment <Sol9_BE> contains a list of packages that failed to
upgrade or install properly.
WARNING: <470> packages must be installed on boot environment <Sol9_BE>.
INFORMATION: </var/sadm/system/data/packages_to_be_added> on boot
environment <Sol9_BE> contains a list of packages that must be
installed on the boot environment for the upgrade to be complete. The
packages in this list were not present on the media that was used to
upgrade this boot environment.
INFORMATION: If the boot environment was upgraded using one media of a
multiple media distribution, for example the Solaris CD media, you must
continue the upgrade process with the next media. Complete the upgrade by
using the luupgrade <-i> option to install the next media of the
distribution. Failure to complete the upgrade process with all media of
the software distribution makes the boot environment unstable.
INFORMATION: Review the files listed above on boot environment
<Sol9_BE>. Before you activate the boot environment, determine if
any additional system maintenance is required or if additional media of
the software distribution must be installed.
The Solaris upgrade of the boot environment <Sol9_BE> is partially
complete.

#

● Note all the INFORMATION: and WARNING: items in the screen output.
● All the references, to log-files underneath the /var/sadm/system subdirectory, are meant to
refer to the N/T BE, not the O/S BE or ABE (active).
● At this point, all "packages to be added" should simply be those that must be installed from
the Solaris installation CD, "2 of 2", which you will do further below.
● This entire procedure could take approximately one hour.
● The "-u" calculates the capacity, on the target disks, for holding all the files to be installed.
● Remove the CD if it does not automatically eject when this command is finished.
5. Optional: If it seems to be needed, mount the N/T BE and view the log files
mentioned in the INFORMATION: and WARNING: items from the screen output of
the "luupgrade -u" command:

# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
------------------ -------- ------ --------- ------ --------
Sol8_BE yes yes yes no -
Sol9_BE yes no no yes -

# lumount Sol9_BE
# cd /.alt.Sol9_BE
# cd ./var/sadm/system
# pwd
/.alt.Sol9_BE/var/sadm/system
# ls -al
total 10
drwxr-xr-x 5 root sys 512 May 13 15:23 .
drwxr-xr-x 10 root sys 512 May 25 11:11 ..
drwxr-xr-x 3 root sys 512 May 25 11:33 admin
drwxr-xr-x 3 root sys 512 May 25 11:33 data
drwxr-xr-x 2 root sys 512 May 25 10:34 logs
# cd data
# ls -al
total 412
drwxr-xr-x 3 root sys 512 May 25 11:33 .
drwxr-xr-x 5 root sys 512 May 13 15:23 ..
drwxr-xr-x 2 root other 512 May 25 11:33 .virtualpkgs2
-rw-r--r-- 1 root sys 32 May 25 11:33 locales_installed
-rw-r--r-- 1 root other 173235 May 25 11:33 packages_to_be_added
lrwxrwxrwx 1 root other 26 May 25 11:33 upgrade_cleanup ->
upgrade_cleanup_2005_05_25
-rw-r--r-- 1 root sys 16239 May 25 11:24
upgrade_cleanup_2005_05_25
-r--r--r-- 1 root other 10 May 25 11:07
upgrade_failed_pkgadds
# cat upgrade_failed_pkgadds
SUNWxwopt
# pkginfo -l SUNWxwopt ◄▬[Running this command based on assumption
that there might already be a copy of this
failed package installed on your ABE.]
PKGINST: SUNWxwopt
NAME: nonessential MIT core clients and server extensions
CATEGORY: system
ARCH: sparc
VERSION: 6.4.1.3800,REV=0.1999.12.15

BASEDIR: /usr
VENDOR: Sun Microsystems, Inc.
DESC: nonessential MIT core clients and server extensions
PSTAMP: stomp19991215153354
INSTDATE: May 13 2005 22:16
HOTLINE: Please contact your local service provider
STATUS: completely installed
FILES: 80 installed pathnames
6 shared pathnames
8 directories
54 executables
3036 blocks used (approx)

[Based on the "NAME:" saying "nonessential", I choose to ignore this


failure.]

# cd ../logs
# more upgrade_log
[output omitted]
# cd /
# luumount Sol9_BE
# ls -al /.alt.Sol9_BE
/.alt.Sol9_BE: No such file or directory

6. Insert the second CD and look to see that it mounted successfully:

# cd /cdrom/cdrom0
# ls -al <--[If you see a Solaris_# directory, mount succeeded.]

7. Run luupgrade from the second CD (example screen-output follows command):

# luupgrade -i -n Sol9_BE -s /cdrom/cdrom0 \
> -O "-nodisplay -noconsole"

Validating the contents of the media </cdrom/cdrom0>.
The media is a standard Solaris media.
The media contains a standard Solaris installer.
The media contains <Solaris_2_of_2> version <9>.
Mounting BE <Sol9_04Sep_JRA>.
Running installer on BE <Sol9_04Sep_JRA>.

Sun Microsystems, Inc.


Binary Code License Agreement
Live Upgrade

[license output omitted]

INFORMATION: </var/sadm/system/logs/upgrade_log> contains a log of the
upgrade operation.
INFORMATION: </var/sadm/system/data/upgrade_cleanup> contains a log of
cleanup operations required.
WARNING: <1> packages failed to install properly on boot environment
<Sol9_BE>.
INFORMATION: </var/sadm/system/data/upgrade_failed_pkgadds> on boot
environment <Sol9_BE> contains a list of packages that failed to
upgrade or install properly.
INFORMATION: Review the files listed above on boot environment
<Sol9_BE>. Before you activate the boot environment, determine if
any additional system maintenance is required or if additional media of
the software distribution must be installed.

Unmounting BE <Sol9_04Sep_JRA>.
The installer run on boot environment <Sol9_BE> is complete.

● Because of the "-O '-nodisplay -noconsole'", the usual GUI-display will not appear
on the screen for the installation/upgrade from this second CD. This is normal practice,
because there is typically nothing to answer or specify for the second CD. This second CD
could take approximately 30 minutes to complete.
● The "-i" switch must be used to install more software from any CDs after the first one.
● Note all the INFORMATION: and WARNING: items in the screen output.
● All the references, to log-files underneath the /var/sadm/system subdirectory, are meant to
refer to the N/T BE, not the O/S BE or ABE (active).
8. After an appropriate interval, check to see if the upgrade has completed:

# lustatus Sol9_BE
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
------------------ -------- ------ --------- ------ --------
Sol8_BE yes yes yes no -
Sol9_BE yes no no yes -

9. Optional: If it seems to be needed, mount the N/T BE and view the log files
mentioned in the INFORMATION: and WARNING: items from the screen output of
the "luupgrade -i" command. (See Step 5 for example commands and output.)

10. Apply the latest "recommended patch-cluster" for Solaris-9 to the N/T BE.
(You supposedly downloaded this patch cluster to your desired directory, such
as /tmp or /var/tmp, during Phase 1 of this Live Upgrade procedure.)

# cd /var/tmp/patches

# unzip 9_Recommended.zip

# luupgrade -t -n Sol9_BE -s /var/tmp/patches/9_Recommended \
> `cat /var/tmp/patches/9_Recommended/patch_order`
[screen-output omitted]

● During the running of this command, the output of "df -k" shows that the appropriate file-
systems, for the N/T BE, have been temporarily mounted to temporary directory /a.
● During the running of this command, lustatus output will not reveal any activity or special
status for the N/T BE that is being patched:
# lustatus Sol9_BE
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
------------------ -------- ------ --------- ------ --------
Sol8_BE yes yes yes no -
Sol9_BE yes no no yes -

(Now, proceed to PHASE 5: MANAGE THE BOOT ENVIRONMENTS, to learn how to
activate your N/T BE and reboot into it, and to run other management commands on
the multiple boot-environments on your systems.)

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with any of these commands.)

4.2 Upgrading from Solaris-9 DVD
(NOTE: At this time, it is assumed that DVD-based installation is similar
enough to CD-based installation and that the audience for this document is
experienced and intelligent enough to figure out the differences, so that
a detailed explanation for DVD-based installation is not necessary after
subsection 4.1. If any reader disagrees on this point and has any
important details to provide, they are welcome to contact me.)

(Now, proceed to PHASE 5: MANAGE THE BOOT ENVIRONMENTS, to learn how to
activate your N/T BE and reboot into it, and to run other management commands on
the multiple boot-environments on your systems.)

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with any of these commands.)

4.3 Upgrading from a Jumpstart Image


Running the luupgrade -u command, using a Jumpstart image as the installation
source, is essentially the same as running the command using a CD or DVD as the
source. Here are the two primary things you need to know about this variation:

• You will probably use only the "luupgrade -u" command:

With a CD-based "luupgrade", you must first use "luupgrade -u", for the
first CD as installation-source; then you must use "luupgrade -i", for the
second CD as installation-source; then "luupgrade -i" again for each one
of any further CDs after that.
By contrast, a Jumpstart image typically contains all the packages,
within a single "Product" subdirectory, that normally require 2 CDs to
contain. So, you should need only the "luupgrade -u" command, unless you
are also luupgrade'ing with some supplementary software that is not
included in the SUNWXall installation-cluster (aka the "Entire
Distribution Plus OEM Support").

• The specific path you use for the "-s" switch:

The option, for the "-s" switch, must be the grandparent directory of
the "Product" subdirectory in a Jumpstart configuration.
For example: If you have installed your Jumpstart configuration to a
directory call "/export/js9" and if the full path to the "Product"
subdirectory is ....

/export/js9/Solaris_9/Product

then the option, for the "-s" switch, is as in the following command-
example:

# luupgrade -u -n Sol9 -s /export/js9

Note that "/export/js" is not the "parent" directory for the "Product"
subdirectory but, rather, what might loosely be called the "grandparent"
directory --that is, one level up from the parent.

(Now, proceed to PHASE 5: MANAGE THE BOOT ENVIRONMENTS, to learn how to
activate your N/T BE and reboot into it, and to run other management commands on
the multiple boot-environments on your systems.)

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with any of these commands.)

4.4 Upgrading from Flash Archive


By now, you will need to have figured out what Flash Archive you are using
and where it is located. You will need to have determined that your intended
Flash Archive is, from the standpoints of disk-space and kernel-architecture and
peripherals, compatible with the system that you intend to "luupgrade".
The N/T BE needs to have been created with the "-s -" option of the lucreate
command (as described in subsection 3.2), so that you are installing the Flash
Archive onto a set of empty partitions —because, ironically, a Flash Archive
cannot be used directly as the source for an actual upgrade with the Live
Upgrade procedure.
The luupgrade(1M) syntax, for installing from a Flash Archive, is very
simple. The following generic example, followed by the syntax and options notes
below, should provide all the info you need to successfully construct the
command-line as you need it.

# luupgrade -f \ ◄▬[ -f to specify Flash Archive as OS source ]
> -n <BE_name> \ ◄▬[ -n BE_name of N/T BE ]
> -s <OS-image> \ ◄▬[ -s bootable OS-image, such as boot-server ]
> -a <FLAR_image> ◄▬[ -a location of Flash Archive file ***]

*** -- The various forms of syntax for expressing the FLAR-image path are as
follows:

Type             Format
----             ------
NFS              nfs://<server/path/to/FLAR_file> retry #

HTTP *           http://<server>:/</path/to/FLAR_file> <options>

HTTPS *          https://<server>:/<path/to/FLAR_file> <options>

FTP *            ftp://<user>:<pswd>@<server>/<path/to/FLAR_file> <options>
                   or
                 ftp://<user>:<pswd>@<server>:<port> </path/to/FLAR_file> <options>

Local Device **  local_device <device_logical_name> </path/to/FLAR_file>

Local Tape       local_tape <logical_device_name> <file_position_#_on_tape>

Local File       local_file </path/to/FLAR_file>

* -- Options, for "http", "https" and "ftp", include the following:

keyword  value                         purpose
-------  -----                         -------
auth     basic <username> <password>   To allow for automatic login to HTTP
                                       servers that require authentication.
                                       (Only with http and https; apparently
                                       not ftp.)

timeout  <#_of_minutes>                The number of minutes, without data from
                                       the HTTP server, before which the
                                       Jumpstart routine attempts to disconnect,
                                       reconnect, and then restart at the best
                                       available spot in the installation.

proxy    <host>:<port>                 The hostname and http-port-# of any
                                       HTTP-proxy-server that might be necessary
                                       to reach the HTTP server housing the FLAR.

**- One more possible option for "local_device" is to specify, after the path,
the file-system type; all you need is the value, no keyword. However, FS-
types "ufs" and "hsfs" are assumed, in that order: If FS-type is not
specified, the Jumpstart routine attempts first to mount the resource as
"ufs" and, if that fails, then attempts to mount the resource as "hsfs". (Of
course, within SunOS/Solaris the "hsfs" type includes compatibility with ISO-
9660.)
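
As a purely hypothetical example --the server name, paths, and BE_name are all
placeholders-- an invocation that pulls the FLAR from an NFS server might look
like this:

# luupgrade -f -n Sol9_BE \
> -s /net/jumpsrv/export/js9 \
> -a "nfs://jumpsrv/export/flars/sol9_sparc.flar retry 3"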

(Now, proceed to PHASE 5: MANAGE THE BOOT ENVIRONMENTS, to learn how to
activate your N/T BE and reboot into it, and to run other management commands on
the multiple boot-environments on your systems.)

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with any of these commands.)

5 PHASE 5: MANAGE THE
BOOT ENVIRONMENTS
We have reserved the activation of your new/target boot environment (N/T BE),
and the rebooting of your system into the N/T BE, to this phase because,
strictly speaking, these steps have nothing to do with the actual upgrading
covered in Phase 4.

The following 5.x subsections are not necessarily mutually exclusive.


However, they are each self-contained; not necessarily intended to be performed
together in any particular sequence. The notable exceptions to this are the
first two subsections, 5.1 and 5.2, which are intended to be performed, in the
order presented, after the completion of Phase 4.

5.1 Activate the Newly-Upgraded Boot Environment (and Reboot To It)
A boot environment is "active" if the system is presently (currently) booted
from it. A boot environment is "activated" if it has been designated to be the
boot environment for the system's next boot.

1. Check for --and fix, if necessary-- the following conditions that can cause
your reboot, into the new BE, to fail:

• If the diag-switch? parameter (at the OBP level) is set to true, then the
reboot into the new BE will fail. This is because that setting prevents one
of the LU boot-time routines from automatically changing the value of the
boot-device parameter, a change that is absolutely necessary for the reboot
into the new BE to succeed.

To check and fix this setting:

# eeprom diag-switch?
diag-switch?=true <--[If "=false", no need to change.]

# eeprom diag-switch?=false

# eeprom diag-switch?
diag-switch?=false

• [....]

2. Run lustatus to make sure that the BE, that you want to activate, is in a
state in which it can be activated.

# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
------------------ -------- ------ --------- ------ --------
Larry_BE yes yes yes no -
Mo_BE yes no no yes -
Curly_BE no no no yes -

Notice, from the above command-output, that ....


-- the Larry_BE is now active and is scheduled to be active upon reboot;
-- the Mo_BE is not now active and is not scheduled to be active upon reboot;
-- the Curly_BE cannot be either active or activated for reboot, because it
is not yet "complete" (an lucreate or luupgrade is acting on it at this
moment).

3. To check which environment is presently activated for the next boot:

# /usr/sbin/luactivate

(A somewhat-related command is lucurr(1M), which displays the presently active
boot environment.)
4. To activate a particular boot environment for the next system boot (example
screen-output follows the command):

# luactivate Sol9_BE
WARNING: <1> packages failed to install properly on boot environment
<Sol9_BE>.
INFORMATION: </var/sadm/system/data/upgrade_failed_pkgadds> on boot
environment <Sol9_BE> contains a list of packages that failed to
upgrade or install properly. Review the file before you reboot the system
to determine if any additional system maintenance is required.
WARNING: The following files have changed on both the current boot
environment <Sol8_BE> and the boot environment to be activated
<Sol9_BE>:
/etc/group
INFORMATION: The files listed above are in conflict between the current
boot environment <Sol8_BE> and the boot environment to be activated
<Sol9_BE>. These files will not be automatically synchronized from
the current boot environment <Sol8_BE> when boot environment
<Sol9_BE> is activated.

**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot
environment:

1. Enter the PROM monitor (ok prompt).

2. Boot the machine to Single User mode using a different boot device
(like the Solaris Install CD or Network). Examples:

At the PROM monitor (ok prompt):
For boot to Solaris CD: boot cdrom -s
For boot to network: boot net -s

3. Mount the Current boot environment root slice to some directory (like
/mnt). You can use the following command to mount:

mount -Fufs /dev/dsk/c0t0d0s1 /mnt

4. Run <luactivate> utility with out any arguments from the current boot
environment root slice, as shown below:

/mnt/sbin/luactivate

5. luactivate, activates the previous working boot environment and
indicates the result.

6. Exit Single User mode and reboot the machine.

**********************************************************************

Activation of boot environment <Sol9_BE> successful.


#

5. Run lustatus again to check the results of your above luactivate command:

# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Sol8_02Feb_JRA yes yes no no -
Sol9_04Sep_JRA yes no yes no -
#

NOTE: If, at this point, you execute "eeprom boot-device", you will see
that the boot-device parameter, in the PROM, has not yet changed to the
N/T BE boot-partition. This will occur only after you perform the reboot
command.
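
For example (the value shown here is purely illustrative; yours will differ):

# eeprom boot-device
boot-device=disk:a <--[still pointing at the O/S BE's boot-path until
after the reboot]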

6. It is not necessary to reboot to a particular BE immediately after you have
activated it with the above command. However, whenever you are ready to boot
into that "activated" BE:

# init 6

--OR--

# shutdown -y -i 6 -g <#> "Rebooting to different BE."

During the downside of the reboot (shortly after the system begins
going down), about 8 to 10 "Live Upgrade:" information lines will be sent
to the console.
During the upside of the reboot, 3 "Live Upgrade:" information lines
will be sent to the console, and various configuration files from the
previously-active BE will be copied over or appended to their counterparts
in the newly-activated BE. These files include /etc/passwd and
/etc/group, among others.

NOTE: If you have been using this document simply to learn how to create and
upgrade and configure a new BE, and to reboot the system to it in order to
effect an OS/OE upgrade, this subsection basically finishes your tasks, as
long as you are not experiencing any problems upon the reboot. If you are,
the next subsection covers the technique for "falling back" to the
previously-active boot environment.

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with any of these LU commands.)

5.2 Falling Back to the Previous Boot Environment
In case your system experiences problems during or after a reboot to a newly-
configured BE (to which you are just now booting or have just now booted), the
following steps allow you to "fall back" to your previous boot environment,
rebooting back to it.

1. If you actually have successfully booted all the way from the new BE and you
can login, you can simply try executing the /sbin/luactivate command:

# /sbin/luactivate <BE_name_for_fallback>

--OR--

# /sbin/luactivate
Do you want to fallback to activate boot environment <disk name>
(yes or no)? yes

If the above "fallback" prompt fails to appear or if you cannot execute


either of the above two commands in the first place or if you seem to get
fatal errors, proceed to the next steps. Otherwise, you can simply reboot
after completing either of the above commands.

2. If you were not able to successfully execute the /sbin/luactivate command to
fall back to and reboot from the previous BE, you must now go to the PROM
Monitor (aka, "the ok prompt") and boot to single-user mode from an alternate
copy of the OS. After getting to the ok prompt, run one of the following
commands:

ok boot cdrom -s <--[boot from local OS-install CD or DVD]

--OR--

ok boot net -s <--[boot from local network-boot server]

--OR--

ok boot <alt_disk> -s <--[boot from alternate boot-disk on the
system --for example, "disk3:B"]

3. After logging in as the super-user, you might check the integrity of the /
(root) file-system of your fallback BE, using the raw device for its root
slice:

# fsck [-y] /dev/rdsk/<c#t#d#s#>

4. Now, mount the / (root) file-system of the BE to which you want to fall back
(that is, your previously-working BE, not the BE that you are abandoning) to
some temporary mount-point, such as /mnt:

# mount /dev/dsk/<c#t#d#s#> /mnt

5. Run the luactivate command from this temporary mount:

# /mnt/sbin/luactivate

This command should produce screen-output that confirms the success of the
fallback activation.

(You can now unmount this file-system mounted in Step 4 but it is not
necessary.)

6. Reboot the system, and it should boot from the fallback BE:

# init 6

--OR--

# shutdown -y -i 6 -g 0

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with any of the LU commands.)

5.3 Check the BE_name of the Active Boot Environment
To determine the BE_name of the active boot environment (ABE), the one from
which the system is presently booted, run the following command:

# /usr/sbin/lucurr

If you have the / (root) file-system of some non-active BE mounted to a
temporary mount-point, such as /mnt, you can run the following command to
determine the BE_name of the BE that owns that file-system:

# /usr/sbin/lucurr /mnt

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this LU command.)

5.4 Display Status of Any or All Boot Environments
To display the status of all BEs:

# /usr/sbin/lustatus
BE_name Complete Active ActiveOnReboot CopyStatus
---------------------------------------------------------------

Larry_BE yes yes no -
Mo_BE yes no yes SCHEDULED
Curly_BE no no no -

To display the status of a particular BE:

# /usr/sbin/lustatus Mo_BE
BE_name Complete Active ActiveOnReboot CopyStatus
---------------------------------------------------------------
Mo_BE yes no yes SCHEDULED

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this LU command.)

5.5 Compare Boot Environments


If you need to compare the contents of your active boot environment to some
nonactive BE on your system, you can do so with the lucompare(1M) command.
(There is no way to compare the contents of two nonactive BEs directly against
each other.)
For this to work, the nonactive BE (a) must be "complete" and (b) must not be
involved in another lucompare operation or in a copy (lumake) operation or have
any such copy operation scheduled.

# /usr/sbin/lucompare <BE_name_of_nonactive_BE>

If you want to compare only certain files between the two BEs, you can
construct an ASCII-formatted file containing a list of those files —each
expressed with a full, absolute pathname— to be compared, and reference that
file in the command:

# /usr/sbin/lucompare -i <filename> <BE_name_of_nonactive_BE>
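
For instance (the list-file name, its contents, and the BE_name Mo_BE are only
illustrative):

# cat > /var/tmp/lucompare_files <<EOF
/etc/passwd
/etc/group
/etc/vfstab
EOF
# /usr/sbin/lucompare -i /var/tmp/lucompare_files Mo_BE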

If you want to compare only nonbinary files between the two BEs, use the
"-t" switch:

# /usr/sbin/lucompare -t <BE_name_of_nonactive_BE>

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this LU command.)

5.6 Delete a Nonactive Boot Environment
If you want to delete a nonactive BE, you must first be sure that it meets
all of the following criteria:

• The BE must be complete (not in the midst of any operation that could
change its status).
• The BE cannot be activated for the next reboot.
• The BE cannot have any file-systems mounted with lumount.

# /usr/sbin/ludelete <BE_name>

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this LU command.)

5.7 Change the BE_name of a Boot Environment
If you want to rename a nonactive BE (apparently you cannot rename the active
BE), you must first be sure that it meets all of the following criteria:

• The BE must be complete (not in the midst of any operation that could
change its status).
• The BE cannot have any file-systems mounted with either lumount or mount.

Any name you choose must meet the following criteria:

• It cannot exceed 30 characters in length.
• It must consist only of ASCII characters that have no special meaning to the
unix shells (that is, no metacharacters).
• It must be unique on the system.

# /usr/sbin/lurename -e <BE_name_to_be_changed> -n <new_BE_name>

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this LU command.)

5.8 View the File-System Configuration of a Boot Environment
# /usr/sbin/lufslist <BE_name>
Filesystem fstype size(Mb) Mounted on
------------------------------------------------------------------
/dev/dsk/c1t2d3s4 swap 512.23 -
/dev/dsk/c4t3d2s1 ufs 4849.30 /
/dev/dsk/c0t1d2s3 ufs 512.00 /opt
/dev/dsk/c3t2d1s0 ufs 1024.48 /usr

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this LU command.)

5.9 Mount and Unmount an Entire Boot Environment (BE)
The lumount and luumount commands can be used to mount or unmount
(respectively), in a single command, all the file-systems associated with a
particular, nonactive boot environment (BE). In performing this single-command
mounting, the mount-point directory is dynamically created. This technique

avoids the hassles of creating multiple mount-points for multiple file-systems
and of running multiple mount and umount commands to mount and unmount them.
The simplest syntax for the lumount command is:

# lumount <BE_name>

--OR--

# lumount <BE_name> <custom_mount-point>

By default, lumount will mount all the file-systems, for a particular BE, to
the subdirectory "/.alt.<BE_name>". This is the result of the first instance of
the command, above. With the second instance of the command, you have decided
that you want to mount the BE to a different mount-point, which you specify at
the end of the command but which you do not need to create ahead of time.

So, if the fishmonger BE has separate file-systems for /, /opt, and /var, the
command "lumount fishmonger" will mount these three file-systems as follows:

fishmonger's /    --> /.alt.fishmonger/
fishmonger's /opt --> /.alt.fishmonger/opt
fishmonger's /var --> /.alt.fishmonger/var

Or the command "lumount fishmonger /whitefish" will mount as follows:

fishmonger's /    --> /whitefish/
fishmonger's /opt --> /whitefish/opt
fishmonger's /var --> /whitefish/var

The simplest syntax for the luumount command is:

# luumount <BE_name>

Simply be sure that nobody's PWD is within the BE's mounted tree and then run
the above command. All the BE's file-systems will be unmounted and the mount-
point directory, that was dynamically created when you ran the lumount command,
will dynamically be deleted when you run the luumount command.
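
If you are not sure whether anyone is still using the mounted tree, a check
such as the following (assuming the default mount-point for a BE named
fishmonger) can help before you run luumount:

# fuser -c /.alt.fishmonger <--[lists the PIDs of any processes using
files within that file-system]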

(See subsection 5.10 for information about the "-o <outfile>" and "-l
<error_log>" options that can be used with this LU command.)

5.10 Live Upgrade "-o outfile" and "-l


error_log" Options
With many of the Live Upgrade commands, there are two options that can be
tacked on for recording information regarding the operation of the command.
These are the "-o <outfile>" and "-l <error_log>" options.
The "-o <outfile>" option allows you to store all normal command-output text
to the <outfile> that you specify, besides (not instead of) the normal target
for such command-output on your system.
The "-l <error_log>" option allows you to store all error and status messages
to the <error_log> that you specify, besides (not instead of) the normal target
for such messages on your system.
The following table indicates which Live Upgrade commands support which of
these two options:

Command      -o <outfile>   -l <error_log>
-------      ------------   --------------
luactivate        X               X
lucancel          X               X
lucompare         X
lucreate          X               X
lucurr            X               X
ludelete          X               X
ludesc            X               X
lufslist          X               X
lumake            X               X
lumount           X               X
lurename          X               X
lustatus          X               X
luumount          X               X
luupgrade         X               X

5.11 Force Synchronization Between Boot Environments
When you activate (luactivate) a new boot environment (BE) and then reboot
into it, part of that reboot sequence includes the synchronizing of various
configuration-files and configuration-directories between the O/S BE and the N/T
BE, such as the /etc/passwd file and the /etc/dfs directory.
There might be times when you want to force such a synchronization between two
BEs on the same system without actually rebooting from one into the other. A
prime example is when you maintain two or more BEs on the same system and
reboot into each one at various times in order to perform operations that are
specific to the contents of whichever BE you are booting into.
In order to force a resynchronization between the presently active boot
environment (ABE) and one of your nonactive BEs, simply run the following
variation of the luactivate command:

# /usr/sbin/luactivate -s <nonactive_BE_name>

CAUTION!!!: Perform this operation with caution when working with different
BEs that use different releases of SunOS/Solaris. You might be resyncing
between an ABE that has a later release of Solaris and a nonactive BE that
has an earlier release, and the newer config-files and directories that get
resynched to the nonactive BE might not be compatible with the older release
of SunOS/Solaris installed there. This might cause your next reboot, into
that older BE, to fail.
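
For instance, reusing the hypothetical fishmonger BE name from subsection 5.9,
the sequence might look like the following (lucurr is run here only to confirm
which BE is currently active; its output will of course differ on your system):

# lucurr
# /usr/sbin/luactivate -s fishmonger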

5.12 Live Upgrade Standard Configuration Files and Log Files
There are certain log files and configuration files that are basic to the way
Live Upgrade works and whose existence and full pathname are set in the design
of Live Upgrade. Here is a list of some of these:

5.12.1 Live Upgrade Configuration Files
• /etc/lu

/etc/lu is the default location for all built-in Live Upgrade
configuration-files.

• /etc/lu/swapslices

/etc/lu/swapslices can optionally be used with lucreate's "-M" switch, so
that one can avoid having many swap-slices listed on the lucreate command-
line. See Sun's "Solaris 9 9/04 Administration Guide", pp 427-429, for
more details on how to use this file.

• /etc/lu/synclist

/etc/lu/synclist is the list of directories and files that are to be
synchronized between the previous active BE and the new active BE during
the reboot into the new active BE. Each line also includes a single keyword
indicating whether the old file is to overwrite (OVERWRITE), be appended
(APPEND) to, or be prepended (PREPEND) to the new file. This file exists
by default and contains a standard list of directories and files to be
synchronized, including the following:

/var/mail OVERWRITE
/var/spool/mqueue OVERWRITE
/var/spool/cron/crontabs OVERWRITE
/var/dhcp OVERWRITE
/etc/passwd OVERWRITE
/etc/shadow OVERWRITE
/etc/opasswd OVERWRITE
/etc/oshadow OVERWRITE
/etc/group OVERWRITE
/etc/pwhist OVERWRITE
/etc/default/passwd OVERWRITE
/etc/dfs OVERWRITE
/var/log/syslog APPEND
/var/adm/messages APPEND

The following directories and files represent a partial list of those that
could also be added to the synclist (see the short example at the end of
this subsection):

/var/yp OVERWRITE
/etc/mail OVERWRITE
/etc/resolv.conf OVERWRITE
/etc/domainname OVERWRITE

• /usr/share/lib/xml/dtd/lu_cli.dtd.<num>

XML DTD for the "-X" option (not discussed in this document) available
with most LU commands.

• [....]
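
As promised above, here is a small sketch of adding one of the optional
entries to /etc/lu/synclist (back up the file first, and substitute whichever
path and keyword you actually need):

# cp -p /etc/lu/synclist /etc/lu/synclist.orig
# echo "/etc/mail                      OVERWRITE" >> /etc/lu/synclist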

5.12.2 Live Upgrade Log Files
• /etc/lutab

/etc/lutab lists all the LU-managed boot environments (BEs) on your
system.

• [....]
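
If you simply want to see which BEs Live Upgrade is tracking, you can view
this file directly (it is maintained by the lu* commands and is not meant to
be edited by hand), or run lustatus for the same information in a formatted
report:

# cat /etc/lutab
# lustatus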

6 Troubleshooting Tips
These troubleshooting tips are being added as they are found and as time
allows. This major section of the document is not considered to represent a
"major phase" of the LU procedures.
For now, the tips are presented in no particular order, other than the order
in which they are discovered and added to the document.

6.1 Failure to Reboot into the new BE


Sometimes, everything will seem to have executed without any problems (or, at
least, without any major problems) and, yet, when you perform the following two
steps ...

# luactivate <New_BE>

# init 6

... the system reboots into the old Boot Environment (BE) that you were trying
to leave, effectively ignoring the new BE into which you were trying to boot!

This author has seen this problem in two different situations, described as
follows:

6.1.1 diag-switch? set to "true"


In the first instance, one of the troubleshooters found this command in the
"/etc/lu/DelayUpdate/activate.sh" script:

/etc/lib/lu/lubootdev '/sbus@3,0/SUNW,fas@3,8800000/sd@c,0:a' '/dev/dsk/c0t0d0s0'

He decided to try running that command by hand, and got the following output
(note, in particular, the INFORMATION message that follows the error):

/etc/lib/lu/lubootdev: ERROR: Unable to get current boot devices.


/etc/lib/lu/lubootdev: INFORMATION: The system is running with
the system boot PROM diagnostics mode enabled. When diagnostics
mode is enabled, Live Upgrade is unable to access the system boot
device list, causing certain features of Live Upgrade (such
as changing the system boot device after activating a boot
environment) to fail. To correct this problem, please run
the system in normal, non-diagnostic mode. The system might
have a key switch or other external means of booting the
system in normal mode. If you do not have such a means, you
can set one or both of the EEPROM parameters 'diag-switch?'
or 'diagnostic-mode?' to 'false'. After making a change,
either through external means or by changing an EEPROM
parameter, retry the Live Upgrade operation or command.

So, the "diag-switch?" parameter (Boot PROM) was checked. Indeed, it was set
to "true". It's value was changed to "false" (using the "eeprom" command).
Then then "luactivate" and "init 6" commands were executed again and, this time,
the system booted into the new BE without further incident.
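
For reference, a minimal check-and-fix sequence with the eeprom command might
look something like the following (the quotes around the parameter name simply
avoid any shell-wildcard surprises, for example under csh):

# eeprom "diag-switch?"
diag-switch?=true
# eeprom "diag-switch?=false"
# eeprom "diag-switch?"
diag-switch?=false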

6.1.2 If at first you don't succeed, ...


In the second instance, the team had already checked the Boot-PROM's "diag-
switch?" parameter and confirmed that it was already set to "false". So, when
the "luactivate" and "init 6" commands produced the same problematic result as
was experienced before (see the introduction to subsection 6.1), the team
assumed they would have no choice but to spend some minutes or hours performing
research and troubleshooting similar to the previous instance (subsection
6.1.1).
This was a production environment, and this first failed reboot had taken
place at a time when the team could not perform such troubleshooting right then
and there. In fact, a few weeks had to pass before the next attempt. When that
next "luactivate" and "init 6" attempt occurred, after literally nothing had
been done to the system during the interim weeks, the system booted into the
new BE without any incident, without so much as a "how do you do".
The team has no idea (a) why the first reboot-attempt (into the new BE)
failed or (b), given that the first attempt had failed, why the second attempt,
weeks later, succeeded with no more effort than simply trying it again.

The lesson? Apparently, the lesson is this: if at first your Live-Upgrade
reboot fails, double-check your steps and then simply try again. Maybe you'll
get lucky and it will work the second time.
