
Cloning a Solaris Zone

Procedure

1. Export the configuration of the zone you want to clone/copy

# zonecfg -z zone1 export > zone2.cfg

2. Change the details of the new zone that differ from the existing one (e.g. IP
address, dataset names, network interface)

# vi zone2.cfg

3. Create a new (empty, unconfigured) zone in the usual manner based on
this configuration file

# zonecfg -z zone2 -f zone2.cfg

4. Ensure that the zone you intend to clone/copy is not running

# zoneadm -z zone1 halt

5. Clone the existing zone

# zoneadm -z zone2 clone zone1

Cloning zonepath /export/zones/zone1...

This took around five minutes to clone a 1 GB zone.

6. Verify both zones are correctly installed

# zoneadm list -vi

ID NAME STATUS PATH

0 global running /

- zone1 installed /export/zones/zone1

- zone2 installed /export/zones/zone2

7. Boot the zones again (and reverify correct status)

# zoneadm -z zone1 boot

# zoneadm -z zone2 boot

# zoneadm list -vi

ID NAME STATUS PATH

0 global running /

5 zone1 running /export/zones/zone1

6 zone2 running /export/zones/zone2

8. Configure the new zone via its console (very important)

# zlogin -C zone2
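Taken together, steps 1-8 can be sketched as a small dry-run helper. This is an assumed sketch, not part of the original procedure: with `DRYRUN=echo` each command is printed rather than executed, and the config-file edit (step 2) is left as a manual step.

```shell
#!/bin/sh
# Dry-run sketch of the cloning procedure above. Unset DRYRUN on a real
# Solaris global zone to actually run the commands.
DRYRUN=${DRYRUN:-echo}

clone_zone() {
    src=$1 dst=$2
    cfg=/var/tmp/${dst}.cfg
    $DRYRUN zonecfg -z "$src" export        # step 1: redirect output to $cfg
    echo "edit $cfg by hand before continuing"   # step 2: adjust IP, paths, etc.
    $DRYRUN zonecfg -z "$dst" -f "$cfg"     # step 3: create the new zone
    $DRYRUN zoneadm -z "$src" halt          # step 4: source must not be running
    $DRYRUN zoneadm -z "$dst" clone "$src"  # step 5: clone it
    $DRYRUN zoneadm list -vi                # step 6: verify both zones
    $DRYRUN zoneadm -z "$src" boot          # step 7: boot both zones
    $DRYRUN zoneadm -z "$dst" boot
}

clone_zone zone1 zone2
```

The final console configuration (step 8, `zlogin -C`) is interactive and is deliberately left out of the helper.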

Zone states: a zone can be in one of the following states

-- Configured: configuration has been completed and committed

-- Incomplete: transitional state during an install or uninstall operation

-- Installed: the packages have been successfully installed

-- Ready: the virtual platform has been established

-- Running: the zone booted successfully and is now running

-- Shutting down: the zone is in the process of shutting down - this is a
temporary state, leading to "Down"

-- Down: the zone has completed the shutdown process and is down - this
is a temporary state, leading to "Installed"
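The current state of a zone can also be read programmatically from the machine-parsable output of `zoneadm list -cp`, whose colon-separated fields put the state in position 3. A small sketch; the sample output below is illustrative (mirroring the listings in this article), not captured from a real host:

```shell
# Print the state of a named zone, reading `zoneadm list -cp` style
# colon-separated output (id:name:state:path:uuid:brand:ip) from stdin.
zone_state() {
    awk -F: -v z="$1" '$2 == z { print $3 }'
}

# Illustrative sample; on a real system pipe `zoneadm list -cp` instead.
sample='0:global:running:/:uuid:native:shared
-:zone1:installed:/export/zones/zone1:uuid:native:shared
-:zone2:installed:/export/zones/zone2:uuid:native:shared'

echo "$sample" | zone_state zone2    # prints: installed
```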

===============================================================

flar command

===============================================================

# flarcreate -n solaris9 -c -S -R / -x /sharedspace -x /s10 /s10/solaris9.flar

Options for the flarcreate command:

-n : defines the name of the flar image

-c : compress the resulting image

-S : do not include sizing information

-R : set the root directory of the master operating system. In the example
above the root is /.

-x : exclude the specified directory from the image. In the example above
the flar image will not contain the directories /s10 and /sharedspace.

The output file is specified at the end of the command. In the example
above the result (called solaris9.flar) will be stored in the /s10 directory.

# flarcreate -n testflar -R / -S /export/fsi/testflar.flar

# flar info testflar.flar

For UFS:

# flarcreate -n "Solaris 10 10/09 build" -S -c -x /var/tmp/ /var/tmp/S10-1009.ufs.archive.sun4u-`date +'%Y%m%d%H%M'`

For ZFS:

# flarcreate -n "Solaris 10 10/09 build" -S -c /var/tmp/S10-1009.zfs.archive.sun4u-`date +'%Y%m%d%H%M'`
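Both archive names above embed a `date +'%Y%m%d%H%M'` timestamp. Computing the suffix once keeps the UFS and ZFS names consistent when both archives are created in the same run; a sketch using the paths from the examples above (the `flarcreate` calls are commented out since they must run as root on Solaris):

```shell
# Compute the timestamp suffix once and reuse it for both archive names.
stamp=$(date +'%Y%m%d%H%M')
ufs_archive="/var/tmp/S10-1009.ufs.archive.sun4u-${stamp}"
zfs_archive="/var/tmp/S10-1009.zfs.archive.sun4u-${stamp}"
echo "$ufs_archive"
echo "$zfs_archive"
# flarcreate -n "Solaris 10 10/09 build" -S -c -x /var/tmp/ "$ufs_archive"  # UFS
# flarcreate -n "Solaris 10 10/09 build" -S -c "$zfs_archive"               # ZFS
```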

How To Migrate A Solaris Zone

Validation

Generate the zone manifest of the zone that will be migrated. Do not forget
to specify -n to the detach subcommand to generate the manifest without
actually executing the command.

srce_svr# zoneadm -z zone2mv detach -n > zone2mv.manifest

srce_svr#

Copy the zone's manifest to the destination server.


srce_svr# scp zone2mv.manifest root@dest_svr:/var/tmp

Password:

zone2mv.manifest 100% |
********************************************************| 1983 KB 00:00

srce_svr#

Validate the zone in the destination server. This is done with the attach
subcommand. Again, specify -n to tell zoneadm to do the validation without
actually attaching the zone.

dest_svr# zoneadm attach -n /var/tmp/zone2mv.manifest

dest_svr#

Address any issues (version conflicts, etc.) before proceeding to the next
step.
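Because `zoneadm detach -n` writes the manifest to stdout, a failed run can leave a zero-byte file behind, and copying that to the destination wastes a round trip. A hypothetical helper (not from the article) to sanity-check the manifest before the scp:

```shell
# Hypothetical helper: refuse to proceed with a missing or empty manifest.
check_manifest() {
    if [ -s "$1" ]; then
        echo "manifest OK: $(wc -c < "$1") bytes"
    else
        echo "manifest missing or empty: $1" >&2
        return 1
    fi
}

# On the source server (commented out so the sketch is self-contained):
# zoneadm -z zone2mv detach -n > /var/tmp/zone2mv.manifest
# check_manifest /var/tmp/zone2mv.manifest && \
#     scp /var/tmp/zone2mv.manifest root@dest_svr:/var/tmp
```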

Migration

Halt the zone in the source server.

srce_svr# zoneadm -z zone2mv halt

srce_svr# zoneadm -z zone2mv list -v

ID NAME STATUS PATH BRAND IP

- zone2mv installed /export/zones/zone2mv native shared

srce_svr#

Detach the halted zone.

srce_svr# zoneadm -z zone2mv detach

srce_svr# zoneadm -z zone2mv list -v


ID NAME STATUS PATH BRAND IP

- zone2mv configured /export/zones/zone2mv native shared

srce_svr#

Make sure that the detached zone is in configured state.

tar/gzip the zonepath. The length of this operation depends on the amount
of data in the zonepath. If the zone's data is stored on the SAN, the zone's
disk group becomes unavailable when the zone is halted, so only the
actual operating system files will be tarred/gzipped.

srce_svr# cd /export/zones

srce_svr# tar cvf - zone2mv | gzip -c > zone2mv.tar.gz

a zone2mv/ 0K

a zone2mv/root/ 0K

a zone2mv/root/sbin/ 0K

a zone2mv/root/sbin/dhcpinfo 10K

a zone2mv/root/sbin/tnctl 19K

a zone2mv/root/sbin/uname 10K

a zone2mv/root/sbin/sync 10K

a zone2mv/root/sbin/uadmin 10K

a zone2mv/root/sbin/sh 94K

a zone2mv/root/sbin/pfsh symbolic link to sh

a zone2mv/root/sbin/rc0 2K

a zone2mv/root/sbin/gabport 12K

a zone2mv/root/sbin/vxlicinst 755K

{snip}
:

a zone2mv/SUNWdetached.xml 1984K

srce_svr#

Copy the .tar.gz zonepath file to the destination server

srce_svr# scp zone2mv.tar.gz root@dest_svr:/export/zones

Password:

zone2mv.tar.gz 100% |
********************************************************| 502 MB 00:00

srce_svr#

Alternatively, the .tar.gz file can be copied to the SAN volume before it is
stopped and deported.
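The tar/gzip, scp, and gunzip/tar steps can also be combined into a single pipeline over ssh, avoiding the intermediate .tar.gz file entirely. This is a sketch, not part of the article's procedure; host and path names are the ones used above, and the runnable portion demonstrates only the local round trip:

```shell
# Streaming alternative (commented out; requires the real hosts):
#
#   cd /export/zones
#   tar cf - zone2mv | gzip -c | \
#       ssh root@dest_svr 'cd /export/zones && gunzip -c | tar xf -'
#
# The local half of that pipeline round-trips like this:
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/zone2mv/root/etc"
echo hello > "$src/zone2mv/root/etc/file"
(cd "$src" && tar cf - zone2mv | gzip -c) | (cd "$dst" && gunzip -c | tar xf -)
cat "$dst/zone2mv/root/etc/file"    # prints: hello
```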

Stop the zone's disk volume(s) and deport the disk group(s) from the source
server. In this case, we're assuming that the SAN volumes are managed using
Veritas Volume Manager.

srce_svr# vxvol -g dgzoneap stopall

srce_svr# vxdg deport dgzoneap

srce_svr#

Import the disk group(s) in the destination server and start the volume(s). You
may need to run vxdctl enable in dest_svr to make vxconfigd recognize the
disks prior to importing the disk groups.

dest_svr# vxdg import dgzoneap

dest_svr# vxvol -g dgzoneap startall

dest_svr# vxprint -g dgzoneap -hrt

DG NAME NCONFIG NLOG MINORS GROUP-ID

ST NAME STATE DM_CNT SPARE_CNT APPVOL_CNT

DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE

RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL

RL NAME RVG KSTATE STATE REM_server REM_DG REM_RLNK

CO NAME CACHEVOL KSTATE STATE

VT NAME RVG KSTATE STATE NVOLUME

V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE

PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE

SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE

SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE

SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE

DC NAME PARENTVOL LOGVOL

SP NAME SNAPVOL DCO

EX NAME ASSOC VC PERMS MODE STATE

SR NAME KSTATE

dg dgzoneap default default 36000 1253236930.45.srce_svr

dm volzoneap-01 c2t5006016139A02018d4s2 auto 65536 52360448 -

v volzoneap - ENABLED ACTIVE 52360448 SELECT - fsgen

pl volzoneap-02 volzoneap ENABLED ACTIVE 52360448 CONCAT - RW

sd volzoneap-01-01 volzoneap-02 volzoneap-01 0 52360448 0 c2t5006016139A02018d4 ENA

dest_svr#
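The disk group handoff above can be sketched as a pair of helpers, one per server. These are assumed helpers, not from the article; `DRYRUN=echo` prints the VxVM commands instead of running them, so the sketch is safe to run anywhere:

```shell
#!/bin/sh
# Dry-run sketch of the VxVM disk group handoff between servers.
# Unset DRYRUN on the real hosts to execute the commands.
DRYRUN=${DRYRUN:-echo}

release_dg() {   # run on the source server
    $DRYRUN vxvol -g "$1" stopall
    $DRYRUN vxdg deport "$1"
}

acquire_dg() {   # run on the destination server
    $DRYRUN vxdctl enable        # let vxconfigd rescan the disks first
    $DRYRUN vxdg import "$1"
    $DRYRUN vxvol -g "$1" startall
}

release_dg dgzoneap
acquire_dg dgzoneap
```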

Unpack the zonepath file in the destination server.

dest_svr# cd /export/zones

dest_svr# gunzip < zone2mv.tar.gz | tar xvf -

x zone2mv, 0 bytes, 0 tape blocks

x zone2mv/root, 0 bytes, 0 tape blocks

x zone2mv/root/bin symbolic link to ./usr/bin

x zone2mv/root/usr, 0 bytes, 0 tape blocks

x zone2mv/root/system, 0 bytes, 0 tape blocks

x zone2mv/root/system/object, 0 bytes, 0 tape blocks

x zone2mv/root/system/contract, 0 bytes, 0 tape blocks

x zone2mv/root/platform, 0 bytes, 0 tape blocks

x zone2mv/root/etc, 0 bytes, 0 tape blocks

x zone2mv/root/etc/dhcp, 0 bytes, 0 tape blocks

x zone2mv/root/etc/dhcp/inittab6, 1836 bytes, 4 tape blocks

x zone2mv/root/etc/dhcp/inittab, 6256 bytes, 13 tape blocks

{snip}

x zone2mv/SUNWdetached.xml, 2030763 bytes, 3967 tape blocks

dest_svr#

Configure the zone.

dest_svr# zonecfg -z zone2mv

zone2mv: No such zone configured


Use 'create' to begin configuring a new zone.

Create the zone specifying the same zonepath as in the source server.

zonecfg:zone2mv> create -a /export/zones/zone2mv

zonecfg:zone2mv>

View the configuration and make any required adjustments.

zonecfg:zone2mv> info

zonename: zone2mv

zonepath: /export/zones/zone2mv

brand: native

autoboot: false

bootargs:

pool:

limitpriv:

scheduling-class:

ip-type: shared

inherit-pkg-dir:

dir: /lib

inherit-pkg-dir:

dir: /platform

inherit-pkg-dir:

dir: /sbin

inherit-pkg-dir:

dir: /usr

fs:
dir: /opt/crystal

special: /dev/vx/dsk/dgzoneap/volzoneap

raw: /dev/vx/rdsk/dgzoneap/volzoneap

type: vxfs

options: []

net:

address: 10.2.69.207

physical: aggr1

defrouter not specified

attr:

name: comment

type: string

value: "PRD Application Zone"

zonecfg:zone2mv> commit

zonecfg:zone2mv> exit

dest_svr#

Attach the zone and perform validation checks. The -u option to the attach
subcommand instructs zoneadm to patch the zone with Solaris updates that
have been installed on the destination server.

dest_svr# zoneadm -z zone2mv attach -u

Getting the list of files to remove

Removing 3285 files

Remove 208 of 208 packages

Installing 3428 files

Add 209 of 209 packages

Installation of these packages generated warnings: HAGENT MAGENT


Updating editable files

A log of the zone update is written to a file within the zone.

dest_svr# zoneadm -z zone2mv list -v

ID NAME STATUS PATH BRAND IP

- zone2mv installed /export/zones/zone2mv native shared

dest_svr#

For Solaris 10 10/08: If the source system is running an older version of the
Solaris system, it might not generate a correct list of packages when the zone
is detached. To ensure that the correct package list is generated on the
destination, you can remove the SUNWdetached.xml file from the zonepath.
Removing this file will cause a new package list to be generated by the
destination system.

This is not necessary with the Solaris 10 5/09 and later releases as they do
not use SUNWdetached.xml.
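The removal described above can be wrapped in a small guard so it is a no-op on newer sources. This is a sketch (the helper name and zonepath argument are illustrative, not from the article):

```shell
# Drop a stale SUNWdetached.xml from the zonepath before attaching, so
# the destination regenerates the package list (pre-Solaris 10 5/09 only).
refresh_manifest() {
    f="$1/SUNWdetached.xml"
    if [ -f "$f" ]; then
        rm "$f" && echo "removed $f"
    else
        echo "no $f to remove"
    fi
}

# refresh_manifest /export/zones/zone2mv   # then: zoneadm -z zone2mv attach -u
```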

Patch 137137-09 is Solaris 10 10/08 (Update 6) kernel patch.

Patch 138888-08 contains bug fixes post Solaris 10 10/08 (Update 6).

Patch 139555-08 is Solaris 10 5/09 (Update 7) kernel patch.

You are now ready to boot the migrated zone.

dest_svr# zoneadm -z zone2mv boot

dest_svr#

dest_svr# zoneadm -z zone2mv list -v

ID NAME STATUS PATH BRAND IP

20 zone2mv running /export/zones/zone2mv native shared

dest_svr#

dest_svr# zlogin -C zone2mv


[Connected to zone 'zone2mv' console]

zone2mv console login: root

Password:

Last login: Thu Feb 18 18:38:23 on pts/5

Sun Microsystems Inc. SunOS 5.10 Generic January 2005

You have mail.

# uname -a

SunOS zone2mv 5.10 Generic_139555-08 sun4v sparc SUNW,SPARC-


Enterprise-T5120

If you run into this error booting the zone, it is zoneadm's way of saying
that the SAN volume mount point is busy.

zoneadm: zone 'zone2mv': "/usr/lib/fs/vxfs/mount
/dev/vx/dsk/dgzoneap/volzoneap /export/zones/zone2mv/root/opt/eclipse"
failed with exit code 16

zoneadm: zone 'zone2mv': call to zoneadmd failed

Make sure that no process is running off it or has files open on it. It could
even be your own shell using the mount point as its current working directory.
If so, the volume will not mount and the zone will not boot.
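A short sketch for tracking down what is holding the mount point busy before retrying the boot. The diagnostic commands are commented out so the snippet stays side-effect free; the path is the one from the error message above:

```shell
# Find what is keeping the mount point busy (mount exit code 16).
mountpt=/export/zones/zone2mv/root/opt/eclipse
echo "checking $mountpt"
# fuser -c "$mountpt"    # PIDs with files open on that filesystem (run as root)
# pwd                    # make sure your own shell is not sitting inside it
```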

3 comments - What do you think? Posted by root - 24 February 2010 at 9:52 pm

Categories: Solaris, Veritas Tags: global zone, migrate, non-global zone,
solaris, Veritas, volume manager, vxvm

3 Responses to How To Migrate A Solaris Zone

Dean says:

27 April 2011 at 6:06 am


You need to specify the zone name in the zoneadm attach command. If not, it
will core dump:

# zoneadm attach -n /tmp/zone2mv.manifest

zoneadm: Segmentation Fault (core dumped)

Should be:

zoneadm -z zone2mv attach -n /tmp/zone2mv.manifest

Migrating Zone to a Different Machine on Solaris 10

sun1# zoneadm -z zone1 halt

sun1# zoneadm -z zone1 detach

sun1# cd /zone

sun1# tar -cf zone1.tar zone1

sun1# scp zone1.tar root@10.1.1.2:/tmp && rm zone1.tar

sun2# mkdir /zone

sun2# cd /zone && tar -xf /tmp/zone1.tar

sun2# zonecfg -z zone1

zonecfg:zone1> create -a /zone/zone1

zonecfg:zone1> commit

zonecfg:zone1> exit

sun2# zoneadm -z zone1 attach
