Solaris Zones Clone
Procedure
2. Change the details of the new zone that differ from the existing one (e.g. IP
address, data set names, network interface, etc.)
# vi zone2.cfg
This took around 5 minutes to clone a 1GB zone (see notes below)
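The clone commands themselves are missing from this copy of the document. A typical Solaris 10 clone sequence looks like the following; this is a hedged sketch assuming zone1 is the existing zone and zone2 the new one, as the file name zone2.cfg above suggests:

```shell
# Export the existing zone's configuration as a starting point
zonecfg -z zone1 export > zone2.cfg
# Edit the copy: IP address, data set names, network interface, zonepath, etc.
vi zone2.cfg
# Create the new zone from the edited configuration
zonecfg -z zone2 -f zone2.cfg
# Clone the installed zone1 into zone2 (zone1 must be halted first)
zoneadm -z zone2 clone zone1
```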
6. Verify both zones are correctly installed
0 global running /
# zlogin -C zone2
-- Down: the zone has completed the shut-down process and is down. This is
a temporary state, leading to "Installed".
==============================================
flar command
==============================================
-R : sets the root directory of the master operating system. In the example
above the root OS is /.
-x : excludes the specified directory from the image. In the example above the
flar image created will not contain the directories /s10 and /sharedspace.
The output image is defined at the end of the command. In the example above
the result (called solaris9.flar) will be stored in the /s10 directory.
For UFS:
For ZFS:
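The flarcreate invocations were lost in this copy. A representative command matching the options described above would look like this (the archive name is an assumption; the UFS/ZFS distinction mainly affects how the root file system is archived, and the exact flags for each case are not shown here):

```shell
# Create a flash archive of the running system, excluding /s10 and /sharedspace;
# the resulting image is written to /s10/solaris9.flar
flarcreate -n solaris9 -R / -x /s10 -x /sharedspace /s10/solaris9.flar
```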
Validation
Generate the zone manifest of the zone that will be migrated. Do not forget
to specify -n to the detach subcommand to generate the manifest without
actually executing the command.
srce_svr#
Password:
zone2mv.manifest 100% |********************************************************| 1983 KB 00:00
srce_svr#
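The commands behind the prompts above are not shown in this copy. The manifest generation and copy were most likely of this shape (the manifest path and destination directory are assumptions):

```shell
# Dry-run detach: write the zone manifest to a file without actually detaching
zoneadm -z zone2mv detach -n > /tmp/zone2mv.manifest
# Copy the manifest to the destination server (prompts for a password)
scp /tmp/zone2mv.manifest dest_svr:/tmp/
```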
Validate the zone in the destination server. This is done with the attach
subcommand. Again, specify -n to tell zoneadm to do the validation without
actually attaching the zone.
dest_svr#
Address any issues (version conflicts, etc.) before proceeding to the next
step.
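The validation command at the dest_svr# prompt above was elided; it was presumably a dry-run attach against the copied manifest (the manifest path is an assumption):

```shell
# Dry-run attach: validate the manifest against this server without attaching
zoneadm -z zone2mv attach -n /tmp/zone2mv.manifest
```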
Migration
srce_svr#
srce_svr#
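The two elided source-server commands were presumably the halt and the real detach:

```shell
# Stop the zone, then detach it (detach writes SUNWdetached.xml into the zonepath)
zoneadm -z zone2mv halt
zoneadm -z zone2mv detach
```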
tar/gzip the zonepath. The length of this operation will depend on the amount
of data in the zonepath. If the zone's data is stored on the SAN, the zone's
disk group becomes unavailable when the zone is halted. Therefore, only the
actual operating system files will be tarred/gzipped.
srce_svr# cd /export/zones
a zone2mv/ 0K
a zone2mv/root/ 0K
a zone2mv/root/sbin/ 0K
a zone2mv/root/sbin/dhcpinfo 10K
a zone2mv/root/sbin/tnctl 19K
a zone2mv/root/sbin/uname 10K
a zone2mv/root/sbin/sync 10K
a zone2mv/root/sbin/uadmin 10K
a zone2mv/root/sbin/sh 94K
a zone2mv/root/sbin/rc0 2K
a zone2mv/root/sbin/gabport 12K
a zone2mv/root/sbin/vxlicinst 755K
{snip}
a zone2mv/SUNWdetached.xml 1984K
srce_svr#
Password:
zone2mv.tar.gz 100% |********************************************************| 502 MB 00:00
srce_svr#
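The archive and copy commands behind the prompts above were elided. They were probably of this shape (the temporary archive location is an assumption; Solaris tar's verbose mode prints the "a file size" lines shown above):

```shell
cd /export/zones
# Archive and compress the zonepath
tar cvf - zone2mv | gzip > /tmp/zone2mv.tar.gz
# Copy the archive to the destination server (prompts for a password)
scp /tmp/zone2mv.tar.gz dest_svr:/export/zones/
```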
Alternatively, the .tar.gz file can be copied to the SAN volume before it is
stopped and deported.
Stop the zone's disk volume(s) and deport the disk group(s) from the source
server. In this case, we're assuming that the SAN volumes are managed using
Veritas Volume Manager.
srce_svr#
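The elided Veritas commands were presumably the volume stop and the deport, using the dgzoneap disk group named in the zone configuration below:

```shell
# Stop all volumes in the zone's disk group, then deport the group
vxvol -g dgzoneap stopall
vxdg deport dgzoneap
```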
Import the disk group(s) on the destination server and start the volume(s). You
may need to run vxdctl enable on dest_svr to make vxconfigd recognize the
disks prior to importing the disk groups.
dest_svr# cd /export/zones
{snip}
dest_svr#
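The destination-side commands were elided from the transcript above. They were likely the disk group import followed by unpacking the copied archive (disk group name taken from the zone configuration below; the archive location is an assumption):

```shell
# Make vxconfigd rescan the disks if needed, then import and start the volumes
vxdctl enable
vxdg import dgzoneap
vxvol -g dgzoneap startall
# Unpack the zonepath archive copied from the source server
cd /export/zones
gzcat zone2mv.tar.gz | tar xvf -
```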
Create the zone, specifying the same zonepath as on the source server.
zonecfg:zone2mv>
zonecfg:zone2mv> info
zonename: zone2mv
zonepath: /export/zones/zone2mv
brand: native
autoboot: false
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
inherit-pkg-dir:
dir: /lib
inherit-pkg-dir:
dir: /platform
inherit-pkg-dir:
dir: /sbin
inherit-pkg-dir:
dir: /usr
fs:
dir: /opt/crystal
special: /dev/vx/dsk/dgzoneap/volzoneap
raw: /dev/vx/rdsk/dgzoneap/volzoneap
type: vxfs
options: []
net:
address: 10.2.69.207
physical: aggr1
attr:
name: comment
type: string
zonecfg:zone2mv> commit
zonecfg:zone2mv> exit
dest_svr#
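The zonecfg input that produced the configuration shown above was elided. On Solaris 10 a migrated zone is normally created directly from its existing, detached zonepath, roughly as follows:

```shell
zonecfg -z zone2mv
# "create -a" builds the configuration from the detached zone at the given zonepath
zonecfg:zone2mv> create -a /export/zones/zone2mv
zonecfg:zone2mv> commit
zonecfg:zone2mv> exit
```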
Attach the zone and perform validation checks. The -u option to the attach
subcommand instructs zoneadm to patch the zone with Solaris updates that
have been installed on the destination server.
A log file within the zone contains a record of the zone update.
dest_svr#
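The attach command at the prompt above was elided; it was presumably:

```shell
# Attach the migrated zone, updating its packages/patches to match this server
zoneadm -z zone2mv attach -u
```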
For Solaris 10 10/08: If the source system is running an older version of the
Solaris system, it might not generate a correct list of packages when the zone
is detached. To ensure that the correct package list is generated on the
destination, you can remove the SUNWdetached.xml file from the zonepath.
Removing this file will cause a new package list to be generated by the
destination system.
This is not necessary with the Solaris 10 5/09 and later releases as they do
not use SUNWdetached.xml.
Patch 138888-08 contains bug fixes post Solaris 10 10/08 (Update 6).
dest_svr#
dest_svr#
Password:
# uname -a
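The elided commands above were presumably the zone boot and a login to verify the zone (the exact login method is not shown in this copy; zlogin is one option):

```shell
# Boot the migrated zone, then log in and verify it
zoneadm -z zone2mv boot
zlogin zone2mv
# Inside the zone:
uname -a
```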
If you run into this error when booting the zone, it is zoneadm's way of saying
that the SAN volume mount point is busy.
Make sure that no process is running from it or has files open on it. It could
be you, using the mount point as your current working directory. If this is the
case, the volume will not mount and the zone will not boot.
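To find what is holding the mount point busy, fuser can report the processes using it (the path here is assumed from the zonepath used earlier):

```shell
# List the PIDs (and, with -u, users) of processes using the mount point
fuser -cu /export/zones/zone2mv
```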
Dean says:
Should be:
sun1# cd /zone
sun2# zonecfg -z zone1
zonecfg:zone1> commit
zonecfg:zone1> exit