
Oracle VM 3:

Planning Storage for Oracle VM Centric DR using Site Guard


ORACLE WHITE PAPER | SEPTEMBER 2015 | SN21811
Table of Contents

Introduction
Best Practices for Storage Arrays
Access to storage APIs is required
Understanding requirements for storage replication
Replicated storage must always remain unavailable
Reverse replication relationship during site transition
Delete unused storage repositories
Configuring ZFS storage appliances for storage replication
Best Practices for Storage Protocols
Both SAN and NAS are supported
File level protocol is the most flexible for DR
Block level protocols can be challenging
Best Practices for Pool File Systems
Do not replicate pool file systems
Back up pool file systems
Keep Pool File Systems in separate volumes/projects
Non-clustered server pools
Best Practices for Storage Repositories
Organizing repositories
Use a descriptive naming scheme
Organize repositories by business system
Limitations of simple names with Enterprise Manager
Use a modular approach for storage repositories
Site Guard operation plans operate on storage repositories
Group related storage by volume (ZFS project)
Excluding Oracle VM guests from disaster recovery
Don't deploy single-use repositories
Keep Oracle VM guest objects in the same repository
Virtual disks vs. physical disks for booting guests



Introduction
What does it take to design and implement a complete Oracle VM Centric disaster recovery solution
using Oracle VM and Oracle Site Guard? This guide is your key to success when planning how to
deploy storage for our Oracle VM Centric DR solution path 3.

This paper explains the concepts, best practices and requirements needed to plan and implement storage
for your DR environment, specifically for our Oracle VM Centric DR solution that uses Oracle Site Guard to
orchestrate failovers and switchovers. Detailed steps for installing or configuring storage are beyond
the scope or purpose of this guide.



Best Practices for Storage Arrays
The solution path for Oracle VM Centric DR using Site Guard comes with ZFS storage appliance integration out-of-
the-box. Other storage platforms can be used, but are not currently integrated into Site Guard. However, Site
Guard is highly extensible, so you can write custom automation for any other storage platform in any
programming language you are comfortable with. Your custom automation can easily be incorporated into Site
Guard. Oracle cannot be responsible for troubleshooting or debugging any custom automation you choose to
write.
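
To make the extension point concrete, the sketch below shows the general shape of a custom storage script that could be attached to a Site Guard operation plan: Site Guard runs the executable on a designated host and treats a zero exit code as success. The script, its --operation and --project arguments and the two helper functions are all hypothetical placeholders for your own storage array automation; nothing here is part of Site Guard itself.

#!/usr/bin/env python
"""Hypothetical skeleton for a Site Guard custom storage script.

Site Guard runs the script on a designated host during an operation plan
and treats a zero exit code as success. The --operation flag and the
reverse_replication()/present_storage() helpers are placeholders for your
own storage array automation."""
import argparse
import sys


def reverse_replication(project):
    # Call your storage array API here (REST, SSH CLI, etc.) to reverse
    # the replication relationship for the given volume/project.
    print("reversing replication for %s" % project)


def present_storage(project):
    # Map/export the replicated LUNs or NFS shares to the recovery site
    # Oracle VM servers once replication has been reversed.
    print("presenting storage for %s" % project)


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--operation", choices=["switchover", "failover"],
                        required=True)
    parser.add_argument("--project", required=True,
                        help="ZFS project / volume group to transition")
    args = parser.parse_args()
    try:
        reverse_replication(args.project)
        present_storage(args.project)
    except Exception as exc:
        # A non-zero exit code tells Site Guard the storage step failed.
        sys.stderr.write("storage transition failed: %s\n" % exc)
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())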

We chose to begin with the Oracle ZFS Storage Appliance since it gracefully handles reversing remote replication
relationships between sites, and block level protocols are not at all challenging. This is the only storage platform we
have found so far that replicates the page83 ID between sites, which alleviates the need for automating complex
logic, maintaining flat files to track relationships of device special file names between sites and modifying the vm.cfg
file during every site transition.

Access to storage APIs is required


Disaster recovery with Site Guard must be tightly integrated with storage automation. This is a requirement since
Site Guard must be able to validate and orchestrate the transition of storage between sites. A successful transition
of storage from one site to another cannot depend on systems or storage administrators performing storage related
activities independently of Site Guard partway through a DR operation plan.

This solution path automates the entire failover or switchover process to decrease the likelihood of human error and
provide a quicker time to recovery. If operational governance within your organization prohibits access to storage by
Site Guard, then you should instead follow our Solution Path 2: Oracle VM Centric DR using custom automation.

You will need to configure Enterprise Manager with the credentials for root on all ZFS Storage Appliances at all
sites. Alternatively, you can create a non-root account on each of the ZFS appliances as long as the user account
has root equivalent authorization to perform all storage replication and share management required by the Site
Guard agent.

Please refer to white paper SN21004: Implementing Oracle VM Centric DR using Site Guard for specific steps
needed to add ZFS Storage Appliance credentials to Enterprise Manager.

Understanding requirements for storage replication


The following principles and requirements are applicable for all storage platforms whether you are using Oracle
Unified Storage, Pillar Axiom, Dell Compellent, EMC, Fujitsu, Hitachi Data Systems, HP 3PAR, Network Appliance
or any other supported storage platform for Oracle VM.

Site Guard will automatically ensure all of the following requirements are met at the appropriate time during
switchovers and failovers when using Oracle ZFS storage appliances. However, your storage administrator will
need to understand the following concepts when configuring storage replication to ensure site transitions are
successful using Site Guard. These requirements are not optional.

Replicated storage must always remain unavailable


All replicated storage, including storage repositories, must remain unavailable or invisible to the alternate site Oracle
VM Manager and servers until a switchover or failover occurs. NFS and/or SAN storage that is replicated to storage
arrays at alternate sites must remain unavailable to the Oracle VM servers at the alternate site during normal day-to-
day operations.



Making storage repositories prematurely available to Oracle VM servers at alternate DR sites will seriously degrade
performance of Oracle VM Manager, even if the storage is in a read-only mode.

The diagram shown in Figure 1 below illustrates the state storage replication must be in for the successful day-to-
day operation of Oracle VM. The left hand side of the illustration shows storage in active use at a primary SiteA.
The SAN and NFS storage is presented to the SiteA Oracle VM servers for use as storage repositories or physical
disks passed directly to the Oracle VM guests. Any NFS storage mounted directly on the Oracle VM guest
operating systems is also exported to the Oracle VM guests running on the Oracle VM servers at SiteA Pool1.

[Figure 1 diagram: the SiteA Pool1 and SiteA Pool2 ZFS projects (LUNs and NFS shares for repositories and for storage passed to VMs) are presented to the SiteA OVM servers and guests and are replicated to the SiteB ZFS appliance, where the replicas remain unavailable to the SiteB OVM servers and guests.]
Figure 1: Replicated storage is not available until a site transition occurs

The two boxes in the middle represent the ZFS appliances at SiteA and SiteB. Notice the ZFS projects for SiteA
Pool1 and Pool2 are replicated to the ZFS appliance residing at SiteB. The data refreshes from SiteA to SiteB can
be either scheduled or continuous depending on your requirements. The key concept shown on the far right of the
illustration shows replicated storage is not presented (neither mapped nor exported) to the Oracle VM servers or
guests at SiteB – not even as read-only.
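
If you ever want to push an extra, on-demand replication update yourself (for example, just before a planned switchover window), the ZFS Storage Appliance exposes its replication actions through a RESTful service. The following Python sketch assumes that service; the hostname, account, port, API version, resource path and action identifier shown are all assumptions that must be checked against the RESTful API documentation for your appliance software release.

#!/usr/bin/env python
"""Sketch: request an on-demand replication update for a ZFS project.

The appliance name, credentials and the endpoint path below are assumptions
to be verified against your appliance's RESTful API documentation and your
own replication configuration."""
import requests

ZFSSA = "https://siteA-zfssa.example.com:215"   # hypothetical appliance
AUTH = ("drautomation", "secret")               # hypothetical account
ACTION_ID = "replication-action-for-SiteA_Pool1_DR_group1"  # placeholder

# POST to the replication action's sendupdate resource (assumed path) to
# push a manual update from the source project to the target appliance.
resp = requests.post(
    "%s/api/storage/v1/replication/actions/%s/sendupdate" % (ZFSSA, ACTION_ID),
    auth=AUTH,
    verify=False,  # many appliances use self-signed certificates
)
resp.raise_for_status()
print("replication update requested:", resp.status_code)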

This is not an issue when using Oracle ZFS appliances for your storage platform since remote replication does not
allow NFS exports or LUN mapping/presentation to servers as long as a ZFS project is an active replica. However, if
you are using another storage platform, then you need to ensure the replication model follows the example shown in
Figure 1 above.

Reverse replication relationship during site transition


Whether you transition Oracle VM guests from SiteA to SiteB using a switchover or a failover, the roles of the sites
are essentially reversed. SiteB is now the primary and SiteA is the alternate. To be successful, you must first
refresh the data between sites, sever the replication relationship from SiteA to SiteB, create a new replication
relationship from SiteB to SiteA and then present the storage to the Oracle VM servers and guests at SiteB. You
must not leave the replication flowing from SiteA to SiteB if your SiteA Oracle VM guests are running at SiteB. Site
Guard orchestrates all of this automatically before performing any recovery tasks at SiteB.
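
The mechanism differs per storage platform, but the ordering above is what matters. The following sketch simply restates that sequence with hypothetical stub functions standing in for your array's automation; it is not Site Guard's internal implementation.

"""Order-of-operations sketch for reversing replication during a site
transition. The helper functions are hypothetical stubs standing in for
your storage array automation; only the sequence matters."""


def refresh_replica(src, dst, project):
    print("final data sync of %s from %s to %s" % (project, src, dst))


def sever_replication(src, dst, project):
    print("severing %s -> %s replication for %s" % (src, dst, project))


def create_replication(src, dst, project):
    print("creating %s -> %s replication for %s" % (src, dst, project))


def present_to_servers(site, project):
    print("presenting %s storage to Oracle VM servers at %s" % (project, site))


def transition(project, old_primary="SiteA", new_primary="SiteB"):
    refresh_replica(old_primary, new_primary, project)     # 1. last refresh
    sever_replication(old_primary, new_primary, project)   # 2. break old direction
    create_replication(new_primary, old_primary, project)  # 3. reverse direction
    present_to_servers(new_primary, project)                # 4. only now present


if __name__ == "__main__":
    transition("SiteA_Pool1_DR_group1")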

To illustrate this point, Figure 2 below shows the same model as Figure 1 above. However, notice the replication
relationship has been reversed; this is indicated by the red Key Concept text and the two large arrows between the
ZFS appliances in the middle of the illustration. The other key concept shown on the left of the diagram indicates
that storage is now replicated from SiteB to SiteA but is not presented or exported to the SiteA Oracle VM servers or
guests. If you compare Figure 1 and Figure 2 you will see that the roles of the sites have been completely reversed.



[Figure 2 diagram: the same layout with the replication direction reversed; the SiteA Pool1 and Pool2 projects are now active on the SiteB ZFS appliance and presented to the SiteB OVM servers and guests, while the replicas on the SiteA appliance remain unavailable to the SiteA OVM servers and guests.]
Figure 2: Reverse the direction of storage replication

There is a slight difference in the timing depending on the type of site transition. Site Guard automatically reverses
the replication relationship as one of the last tasks when vacating the primary SiteA. It is very important that the
relationship is reversed prior to deleting any storage objects from the Oracle VM Manager.

Failovers are different because Site Guard has to assume the primary SiteA is completely inaccessible due to
catastrophic failure. Therefore, Site Guard will ignore errors during its attempts to delete unused storage, storage
repositories and Oracle VM guests from the SiteA Oracle VM Manager during the failover.

The process for a failback also differs slightly from a switchback since Site Guard assumes the storage at SiteA was
not vacated correctly during a failover, so it attempts to clean up SiteA before vacating SiteB. This means Site
Guard attempts to delete the old, empty SiteA ZFS projects that are no longer relevant as well as the artifacts of the
old storage repositories and Oracle VM guests, then begins the normal failback process; from this point, the process
is essentially a switchover from SiteB back to SiteA.

Delete unused storage repositories


Site Guard automatically releases ownership and then deletes all storage repositories being transitioned to other
sites as shown in Figure 3 below on the left hand side of the diagram. This is done prior to the final synchronization
of data between sites to ensure the released ownership is propagated to the alternate site. This allows the Oracle
VM Managers at alternate sites to take ownership of foreign repositories during a switchover. Oracle VM guests are
removed from the Oracle VM Manager database when the repositories are deleted. Deleting the Oracle VM guests
is non-destructive and simply removes the records from the Oracle VM Manager database – no Oracle VM guests
are destroyed.

[Figure 3 diagram: the SiteA repositories 1-4 (LUN and NFS) are still presented to the SiteA pool and replicated to SiteB, where the replicas remain unavailable to the SiteB pool.]

Figure 3: Ensure all storage repositories are deleted from the primary site during a switchover

Notice in Figure 3 above that the storage repositories are not yet available to the SiteB server pool. As noted in the
previous subsection, the replicated repositories are not made available to the SiteB Oracle VM servers until the
replication relationship has been reversed.



Figure 4 below shows the final stage of a site transition where the SiteA repositories are now owned by the SiteB
Oracle VM Manager. The illustration shows on the left hand side that the storage repositories no longer exist at
SiteA once a switchover has completed. The right hand side shows the SiteA repositories owned and presented to
the Oracle VM servers in the SiteB server pool. It is very important that the SiteA storage repositories only be
available at one site or the other, never at multiple sites at the same time.

[Figure 4 diagram: after the switchover, repositories 1-4 are unavailable at SiteA and are presented to the SiteB pool, with replication now flowing from SiteB back to SiteA.]

Figure 4: The SiteA storage repositories are owned and presented only at SiteB once the switchover completes

Deleting the storage repositories is an important step in the switchover process since it helps avoid performance
problems and potential mistakes from human error. Significant performance problems arise when the artifacts of
storage repositories remain after a switchover and are still presented to servers in a pool. The agent on the Oracle
VM servers will attempt to mount the storage repositories during periodic storage refreshes throughout the day. The
storage refresh locks the Oracle VM Manager until the process times out waiting to mount the repositories, which in
turn prevents systems administrators from doing any work in the Oracle VM Manager.

Configuring ZFS storage appliances for storage replication


There are specific requirements for configuring and naming SAN target and initiator groups on all ZFS Storage
Appliances that are part of your DR environment. These requirements are critical to the successful transition of
replicated storage from one site to another; site transitions will be impossible if target and initiator groups are not
implemented correctly. The most important requirement is ensuring the target and initiator groups for SAN storage
are exactly the same at all sites if you are using Fibre Channel as the storage protocol for your DR environment.

The illustration in Figure 5 below shows the iSCSI initiator and target group names are exactly the same at all sites.
The group names shown in the illustrations below are just examples; the names are completely up to your unique
requirements as long as the names are the same at all DR sites.

Even though the group names are the same at all sites, the actual iSCSI IQNs for the iSCSI initiators on the Oracle
VM servers and ZFS appliances are different at each site; this means the replicated LUNs will be presented to the
correct servers at each site automatically when Site Guard reverses the replication relationship during a failover or
switchover. This is the magic.

[Figure 5 diagram: the SiteA, SiteB and SiteC ZFS appliances all use the same initiator group name (iscsi_igroup) and the same target group name (iscsi_tgroup), but each group contains the IQNs of that site's own Oracle VM servers and ZFS appliances.]

Figure 5: iSCSI target and initiator group names must be identical at all sites if you are using iSCSI SAN instead of NFS



The illustration in Figure 6 below shows the same requirement if you are using Fibre Channel as the storage
protocol for your DR environment. Just like iSCSI, the initiator and target group names must be exactly the same
at all sites. However, the WWPNs for the Fibre Channel HBAs on the Oracle VM servers and ZFS appliances are
different at each site; this means the replicated LUNs will be presented to the correct servers at each site
automatically when Site Guard reverses the replication relationship during a failover or switchover.

[Figure 6 diagram: the SiteA, SiteB and SiteC ZFS appliances all use the same initiator group name (fcp_igroup) and the same target group name (fcp_tgroup), but each group contains the WWPNs of that site's own Oracle VM servers and ZFS appliances.]

Figure 6: Fibre Channel target and initiator group names must be identical at all sites if you are using FCP SAN instead of NFS

The target and initiator group names are the most important concept to understand – this is a requirement. Ensuring
the group names are the same for all sites will be challenging if you are using ZFS appliances that are already in
use.
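
A simple way to catch drift before it breaks a site transition is to compare the group names you have configured at each site. The sketch below works on hand-entered example data rather than any appliance API; replace the dictionary contents with whatever inventory you keep for your own appliances.

"""Sketch: sanity-check that SAN initiator and target group names match at
every DR site. The site-to-groups mapping is example data you would fill
in from your own appliances; no appliance API is assumed."""

groups_by_site = {
    "SiteA": {"initiator_groups": {"iscsi_igroup", "fcp_igroup"},
              "target_groups": {"iscsi_tgroup", "fcp_tgroup"}},
    "SiteB": {"initiator_groups": {"iscsi_igroup", "fcp_igroup"},
              "target_groups": {"iscsi_tgroup", "fcp_tgroup"}},
    "SiteC": {"initiator_groups": {"iscsi_igroup", "fcp_igroup"},
              "target_groups": {"iscsi_tgroup", "fcp_tgroup"}},
}

reference_site = "SiteA"
reference = groups_by_site[reference_site]
for site, groups in groups_by_site.items():
    for kind in ("initiator_groups", "target_groups"):
        missing = reference[kind] - groups[kind]
        extra = groups[kind] - reference[kind]
        if missing or extra:
            print("%s %s differ from %s: missing=%s extra=%s"
                  % (site, kind, reference_site, missing, extra))
        else:
            print("%s %s match %s" % (site, kind, reference_site))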

There are a few more things you need to configure on the ZFS appliances other than the group names. Please refer
to white paper SN21004: Implementing Oracle VM Centric DR using Site Guard for specific steps needed to
prepare the ZFS Storage Appliances for your DR environment.

Best Practices for Storage Protocols


Both SAN and NAS are supported
You can use FCP, iSCSI and NFS or a combination of all three storage protocols for any Oracle VM centric disaster
recovery solution.

File level protocol is the most flexible for DR


NFS is a file level storage protocol. If the business systems being hosted on the DR platform will allow it, we
recommend using NFS instead of SAN protocols such as FCP or iSCSI. Use NFS for pool file systems, storage
repositories and storage passed directly to the Oracle VM guests (virtual machines) if at all possible. Use DNFS for
Oracle databases if your design specification supports it.

While you can combine all three storage protocols in a single DR solution, you will find that NFS is the easiest to
work with in terms of transitioning storage between sites. It is easier to forcefully take ownership of NFS storage
repositories during a failover, and there is less challenge for the Oracle VM guests to reestablish access to NFS file
systems at the partner site.

Block level protocols can be challenging


Block level protocols such as FCP and iSCSI are not really an issue for storage repositories. However, it takes a
few more steps to forcefully take ownership during a failover, and even to take ownership non-forcefully during a
switchover.



The additional steps are all built into the Site Guard solution, so they do not affect how you configure Site Guard.
But even though the additional complexities are handled without any effort on your part, the few extra steps will add
a little more time to the recovery process at the alternate site. This additional time may be significant for server pools
with hundreds of LUNs. Our initial testing during development has shown that processing LUNs adds about 20 to 30
seconds per LUN during the discovery and refresh operations of a site transition.

The real challenge with block level protocols comes with SAN LUNs passed directly to Oracle VM guests: you have
to change the device special file worldwide ID (WWID, also known as the page83 ID) in each virtual machine
configuration file after a transition from one site to another.
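
For storage platforms that do not preserve the page83 ID, that rewrite has to happen in every affected vm.cfg before the guests are restarted. The sketch below illustrates the idea with made-up WWIDs and an example repository path; the old-to-new mapping is something you would have to maintain yourself for your array.

"""Sketch: rewrite page83/WWID-based device paths in a vm.cfg after a site
transition. All WWIDs and the example path are made up; the old-to-new
mapping must come from your own records of how each LUN is named at the
two sites."""


def rewrite_wwids(vm_cfg_path, wwid_map):
    with open(vm_cfg_path) as f:
        text = f.read()
    for old_wwid, new_wwid in wwid_map.items():
        # vm.cfg disk entries reference devices such as
        # 'phy:/dev/mapper/<WWID>,xvdb,w'
        text = text.replace("/dev/mapper/%s" % old_wwid,
                            "/dev/mapper/%s" % new_wwid)
    with open(vm_cfg_path, "w") as f:
        f.write(text)


if __name__ == "__main__":
    # Hypothetical SiteA -> SiteB WWID mapping for one guest.
    rewrite_wwids(
        "/OVS/Repositories/<repo-id>/VirtualMachines/<vm-id>/vm.cfg",
        {"3600144f0a1b2c3d4e5f60000000000a1":
         "3600a0b800012345600000000000000b7"},
    )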

Best Practices for Pool File Systems


Do not replicate pool file systems
Pool file systems are unique to each server pool and are of no use to server pools at alternate sites. You should not
waste time, effort or network bandwidth replicating pool file systems to alternate sites.

Back up pool file systems


If you are using clustered server pools, then take regular, periodic backups of the pool file system for each server
pool. Losing a pool file system is a catastrophic event since it will reboot all of the servers and stop all Oracle
VM guests in the pool. Further, the Oracle VM guests cannot be restarted by Oracle VM until the pool file system is
either rebuilt or restored. We feel that restoring is quicker than rebuilding a new pool file system and server pool.

Keep Pool File Systems in separate volumes/projects


Always ensure pool file systems never reside in the same ZFS project (volume) as storage repositories that are part
of a DR plan.

Pool file systems are never replicated to alternate sites and are never part of any disaster recovery solution using
Oracle VM. To illustrate the problem using ZFS as an example, Site Guard reverses storage replication of entire
ZFS projects during a switchover or failover. This action makes an entire ZFS project (volume) and everything
contained in the project completely unavailable to the Oracle VM Manager and servers at the DR site being vacated
during a switchover or failover.

Therefore, all servers in a server pool at the DR site being vacated will reboot and then be unable to run any virtual
machines after rebooting if your pool file system(s) reside in any ZFS projects containing storage repositories that
are transitioned to other sites.

Pool file systems should reside in ZFS projects that are not replicated and do not contain any storage that needs to
be transitioned to other DR sites. Notice in Figure 7 below that there are two server pools represented by the boxes
on the left hand side of the diagram and a ZFS appliance on the right. Each of the server pools has a pool file
system that resides in a dedicated ZFS project with a single share for the pool file system.

You can use any supported storage protocol for pool file systems; in this case SiteA Pool1 uses an NFS share
contained in ZFS project SiteA_Pool1_infra and SiteA Pool2 uses a SAN share contained in ZFS project
SiteA_Pool2_infra. Also notice that neither of the ZFS projects containing pool file systems is replicated to
other DR sites.



[Figure 7 diagram: on the SiteA ZFS appliance, each pool's pool file system lives in its own non-replicated project (SiteA_Pool1_infra holding an NFS pool file system share, SiteA_Pool2_infra holding a SAN pool file system share), while the repository shares for each pool live in separate projects (SiteA_Pool1_DR_group1 and SiteA_Pool2_DR_group1) that are replicated to SiteB.]

Figure 7: Keep pool file systems separate from storage repositories

The ZFS project can contain other shares that are not part of a DR plan, but it is recommended that you do not
include shares for storage repositories in any ZFS projects containing pool file systems for the following two
reasons:
» Keeping pool file systems separate from other shares ensures server pools are not brought down simply because
a runaway process on an Oracle VM server or guest inadvertently fills 100% of the space allocated to a ZFS
project.
» Keeping non-DR storage repositories in their own projects allows you to easily include those projects in a Site
Guard operation plan at any time in the future without disrupting anything or involving outages to make the
change.

Non-clustered server pools


There is no requirement that you deploy only clustered server pools in your DR environment. Non-clustered server
pools do not incorporate a pool file system, so this may be an attractive alternative under the right circumstances.

Non-clustered server pools will reduce the single point of failure represented by the pool file system, reduce the
amount of storage allocated to overhead and eliminate the need to back up the pool file system. However, non-
clustered server pools do not support OCFS2 storage repositories nor any high availability features of Oracle VM.

You may deploy both clustered and non-clustered server pools in the same DR environment. Site Guard doesn’t
care if you are transitioning storage repositories from one server pool type to another as long as an OCFS2
repository is not being transitioned to a non-clustered server pool.



Notice in Figure 8 below that the SiteA Pool2 SAN repositories must be transitioned to the clustered server pool at
SiteB, whereas the NFS repositories from SiteA Pool1 and Pool2 can be transitioned to both the clustered and
non-clustered server pools at SiteB.

[Figure 8 diagram: the NFS repositories from the non-clustered SiteA Pool1 and the clustered SiteA Pool2 can transition to either the non-clustered SiteB Pool1 or the clustered SiteB Pool2, whereas the SAN repositories can only transition to the clustered SiteB Pool2.]

Figure 8: SAN repositories can only be transitioned to clustered server pools

This is another reason we recommend NFS over SAN for storage repositories (see the section on storage protocols
above). The choice to deploy non-clustered server pools is up to you and the unique requirements for each server
pool.

Best Practices for Storage Repositories


A successful disaster recovery environment is highly dependent on the way you design and deploy storage
repositories as well as other storage objects such as NFS exports and physical disks passed directly to Oracle VM
guests. You need to develop a robust storage architecture that is reliable, scalable, easy to maintain and easy to
reallocate or repurpose. The following best practices are essential to achieving a reliable disaster recovery solution
that will allow you to trigger site switchovers and failovers with confidence.

Organizing repositories
It is fine to have many storage repositories, but you should organize your business systems into repositories
dedicated to particular business systems. For example, if you have three different business systems such as BAM,
CRM and SCM, then create repositories meant to contain only the virtual machines for BAM, repositories meant only
for CRM and repositories meant only for SCM.

Use a descriptive naming scheme


Develop a descriptive naming scheme for your storage repositories. We use generic names for repositories
throughout this document that include the DR site and pool name to illustrate concepts. In reality, this is a very poor
naming scheme since it doesn't really identify the purpose or contents of the storage repositories.

However, we highly recommend that you always incorporate the primary site name into each repository name. This
is because the simple name will automatically appear when a repository is discovered at an alternate site, which in
turn makes it immediately obvious to systems administrators that there are foreign Oracle VM guests running at an
alternate site as well as which site they came from; this dramatically reduces the chance for confusion and human
error.



You might also consider including something like "DR" and "nonDR" in the simple name for storage repositories.
This will help system administrators quickly identify storage repositories that are intended to be part of a DR
operation plan, which in turn reduces the chance for human error.
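
For example, a repository holding production CRM guests whose primary home is SiteA might be named something like SiteA_Prod_CRM_DR, while a test repository that never leaves SiteA might be SiteA_UAT_CRM_nonDR; these are purely illustrative names that follow the conventions above.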

Organize repositories by business system


Oracle VM guests should be organized into repositories by business system. For the purposes of this document, a
business system means a collection of Oracle VM guests running databases, middleware and applications for a
single purpose or business value.

For example, perhaps you have a customer sales portal that is comprised of three Oracle VM guests running data
stores, two virtual machines running middleware, three virtual machines as application servers and four VMs acting
as web portals with load balancing. These would all be part of a single business system. That single business
system should be contained in as many repositories as needed, but only Oracle VM guests related to the business
system reside in the repositories.

As shown in Figure 9 below, you might further divide these into development, user acceptance testing and
production systems. There are a few reasons this kind of division into discrete business systems makes a lot of
sense:

» You don't have to transition an entire server pool to another site as a single unit. Notice in Figure 9 below that the
repositories in each of the two server pools are independent of each other since they are organized by business
system; this means each repository can be freely transitioned to a different server pool at a different site.
» You have the complete freedom to transition a single business system while leaving the others running at the
primary site. For example, transition the production sales support system running on SiteA Pool2 to another pool
at SiteD.
» You can promote entire business systems from development into production by using Site Guard to transition the
system to another server pool at the same site or to another server pool at a different site.
» Backups are easier because all of the VMs in the repositories can be quiesced at the same time without capturing
data from other unrelated VMs that can't be put into a transaction consistent state at the same time.
» Restores are easier because it is highly likely that all VMs will need to be restored at the same time; you can restore
everything in a repository without worrying about overwriting unrelated VMs that don't need to be restored.
[Figure 9 diagram: SiteA Pool1 holds the Dev repositories (Corp Procurement and Customer Support fail over to SiteB; Data Center Ops and Sales Support fail over to SiteC) plus UAT repositories that are not part of any DR plan, while SiteA Pool2 holds the matching Prod repositories (Corp Procurement, Customer Support and Sales Support fail over to SiteD; Data Center Ops fails over to SiteC).]

Figure 9: Organize Oracle VM guests into repositories by business system

Perhaps your company provides private and public cloud services. In this case you might organize your Oracle VM
guests into storage repositories by customer and then by business system for each customer as shown in Figure 10
below.



[Figure 10 diagram: SiteA Pool1 holds Dev repositories organized by customer and business system (Customer1 BAM, CRM1 and CRM2 fail over to SiteB; Customer2 CRM1, CRM2 and ERP fail over to SiteC), while SiteA Pool2 holds the matching Prod repositories (Customer1 repositories and Customer2 CRM1 fail over to SiteD; Customer2 CRM2 and ERP fail over to SiteE).]

Figure 10: Organize Oracle VM guests into repositories by customer, then business system

The above two examples are simply starting points to help illustrate the necessity of designing your storage in a
robust, modular fashion for maximum flexibility and easy identification. In this way, you can easily change the
alternate DR site of any single business system, or group of Oracle VM guests, by simply changing a single
parameter in a Site Guard operation plan.

Limitations of simple names with Enterprise Manager


Unlike Oracle VM Manager, Enterprise Manager for Oracle VM does not allow the use of white spaces or special
characters for simple names of objects. If you are using Enterprise Manager to manage the Oracle VM
environments at various sites, then you must use underscores in place of white space and you may not use special
characters at all.

Using Enterprise Manager to manage Oracle VM across the various DR sites is recommended, but optional.

Use a modular approach for storage repositories


Site Guard basically transitions storage repositories from site to site. All Oracle VM guests contained in a storage
repository will be stopped and transitioned to another site as a unit. There are a few requirements in the way you
design and deploy your storage that make this solution successful. The following subsections illustrate the best
practices that need to be followed when designing the deployment architecture for storage.

Site Guard operation plans operate on storage repositories


The first thing that needs to be understood is that different repositories, NFS shares/file systems and LUNs related
to the Oracle VM guests in a single server pool can be transitioned to completely different DR sites. Notice in Figure
11 below that the individual storage repositories in each server pool at SiteA are transitioned to different server pools
residing at completely different DR sites.



[Figure 11 diagram: SiteA Pool1 NFS Repository1 and Repository2 transition to SiteB Pool1, SAN Repository3 to SiteB Pool2 and SAN Repository4 to SiteC Pool1; SiteA Pool2 NFS Repository1 transitions to SiteC Pool1, NFS Repository2 to SiteD Pool1 and SAN Repository3 and Repository4 to SiteD Pool2.]

Figure 11: Design each DR operation plan around storage repositories that will be transitioned as a unit

The example illustrated above is modular in nature, allowing a lot of latitude in the way you organize and execute
Site Guard operation plans. For example, the storage repositories above can be organized into different Site Guard
operation plans in a few different ways.

» You could create a single Site Guard operation plan that transitions all of the repositories to the different sites shown
in Figure 11 above concurrently
» You could create three individual Site Guard operation plans that include:
» Operation plan 1: SiteA Pool1 Repository1, Repository2 and Repository3 transitioned to two different
server pools at SiteB
» Operation plan 2: SiteA Pool1 Repository4 and SiteA Pool2 Repository1 transitioned to SiteC
» Operation plan 3: SiteA Pool2 Repository2, Repository3 and Repository4 transitioned to two different
server pools at SiteD
» Or, you could create five individual Site Guard operation plans that include:
» Operation plan 1: SiteA Pool1 Repository1 and Repository2 transitioned to Pool1 at SiteB
» Operation plan 2: SiteA Pool1 Repository3 transitioned to Pool2 at SiteB
» Operation plan 3: SiteA Pool1 Repository4 and SiteA Pool2 Repository1 transitioned to Pool1 at SiteC
» Operation plan 4: SiteA Pool2 Repository2 transitioned to Pool1 at SiteD
» Operation plan 5: SiteA Pool2 Repository3 and Repository4 transitioned to Pool2 at SiteD
These fictional Site Guard operation plans can be executed independently of each other at different times on
different days, or all at the same time. Individual operation plans can even be combined together in any order to
transition a subset of storage repositories. Using Figure 11 above as an example, Site Guard would allow you to
transition just the two SAN repositories in SiteA Pool1 and the two NFS repositories in SiteA Pool2 while leaving all
the others running at SiteA.
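
One way to picture this modularity is as plain data: each operation plan is just a named list of repository-to-target-pool moves that can be executed alone or combined. The sketch below uses hypothetical plan names and the repository names from Figure 11; it illustrates the grouping only and is not Site Guard's configuration format.

"""Illustration only: modelling the five hypothetical operation plans from
Figure 11 as data, to show how repositories group into independently
executable units."""

operation_plans = {
    "plan1": [("SiteA Pool1 Repository1", "SiteB Pool1"),
              ("SiteA Pool1 Repository2", "SiteB Pool1")],
    "plan2": [("SiteA Pool1 Repository3", "SiteB Pool2")],
    "plan3": [("SiteA Pool1 Repository4", "SiteC Pool1"),
              ("SiteA Pool2 Repository1", "SiteC Pool1")],
    "plan4": [("SiteA Pool2 Repository2", "SiteD Pool1")],
    "plan5": [("SiteA Pool2 Repository3", "SiteD Pool2"),
              ("SiteA Pool2 Repository4", "SiteD Pool2")],
}


def repositories_to_transition(plan_names):
    """Combine any subset of plans into one list of repository moves."""
    moves = []
    for name in plan_names:
        moves.extend(operation_plans[name])
    return moves


# Example: transition only the two SAN repositories in Pool1 and the two
# NFS repositories in Pool2 while everything else keeps running at SiteA.
print(repositories_to_transition(["plan2", "plan3", "plan4"]))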

Group related storage by volume (ZFS project)


The way you group storage repositories on your storage platform is very important in order to achieve the concept
illustrated in Figure 11 above. This includes other storage objects related to the Oracle VM guests; everything must
be grouped within the same ZFS project or volume as shown in Figure 12 below. Notice that the project named
SiteA_Pool1_infra containing the pool file system is not replicated to any other sites.



[Figure 12 diagram: on the SiteA ZFS appliance, the non-replicated SiteA_Pool1_infra project sits alongside SiteA_Pool1_DR_group1 (Repository1 plus NFS shares for myguest1 and myguest2, and Repository2, replicated to SiteB for Pool1), SiteA_Pool1_DR_group2 (SAN Repository3, replicated to SiteB for Pool2) and SiteA_Pool1_DR_group3 (SAN Repository4 plus LUNs for myguest7 and myguest8, replicated to SiteC for Pool1).]

Figure 12: Group related storage objects by ZFS project or volume

Excluding Oracle VM guests from disaster recovery


Not all Oracle VM guests and storage associated with a server pool need to be part of a Site Guard operation plan.
For example, let's assume you want to exclude the Oracle VM guests associated with SiteA Pool1 Repository2
and 3 as shown in Figure 13 below. However, it is very important to understand that you cannot exclude an
individual VM contained in an individual repository; every Oracle VM guest residing in a repository, running or not,
will be transitioned with everything else in the ZFS project containing that storage repository.

[Figure 13 diagram: the same layout as Figure 12, except Repository2 and Repository3 now reside in a non-replicated project named SiteA_Pool1_nonDR_group1, so the guests they contain stay out of the DR plan.]

Figure 13: Keep excluded storage repositories in non-replicated ZFS projects

You simply need to do two things when deploying the solution to exclude Oracle VM guests in a server pool from
being part of a Site Guard operation plan:
» Ensure all storage associated with the Oracle VM guests contained in either of the storage repositories resides in
ZFS projects or volumes separate from other projects that are part of a DR plan. Notice in Figure 13 above that
SiteA_Pool1_repo2 and SiteA_Pool1_repo3 ZFS shares both reside in a project named
SiteA_Pool1_nonDR_group1
» Do not include either storage repository in any Site Guard operation plans
It is very simple to add the Oracle VM guests into a Site Guard operation plan at any point in the future. Just begin
replicating the ZFS project to another site and then add the repositories to a new or existing Site Guard operation
plan.

Don’t deploy single-use repositories


Do not attempt to deploy a storage repository scheme where a single repository is meant to contain only a single
Oracle VM guest. For example, don’t create 50 different repositories for 50 different Oracle VM guests. This is not a
scalable solution and is very hard for systems administrators to maintain correctly.

Such a solution adds a significant amount of unnecessary overhead during discovery at the alternate DR site,
seriously degrading performance of Oracle VM and significantly increasing the overall time to recovery.

Keep Oracle VM guest objects in the same repository


Keep the vm.cfg file and all virtual disks for a single virtual machine within the same storage repository. Do not be
tempted to spread the virtual disks for a single Oracle VM guest between different storage repositories. This is very
hard for system administrators to manage and maintain correctly, very confusing during transitions between partner
sites, will make your DR solution much less flexible and ultimately becomes an almost impossible task during
storage consolidation projects down the road.

Figure 14 below shows an example of a very poor deployment of files associated with Oracle VM guests. Notice
that each of the two Oracle VM guests named myguest1 and myguest2 have configuration files and virtual disks
spread across all four storage repositories.

[Figure 14 diagram: the vm.cfg files and the four virtual disks for myguest1 and myguest2 are scattered across all four repositories in SiteA Pool1, a poor deployment of configuration files and virtual disks.]

Figure 14: Very poor deployment of virtual disks and configuration files for Oracle VM guests

Figure 15 below illustrates how files for Oracle VM guests should be deployed. Notice that the configuration files
and virtual disks all reside in the same repository. As we noted above, you can have many storage repositories for a
single business system comprised of many virtual machines; just ensure that all files associated with each virtual
machine reside in the same repository.



[Figure 15 diagram: the vm.cfg files and all four virtual disks for myguest1 and myguest2 reside together in SiteA Pool1 Repository1.]

Figure 15: Good deployment of virtual disks and configuration files for Oracle VM guests

Virtual disks vs. physical disks for booting guests


There is no requirement that Oracle VM guests use virtual disks at all. Oracle VM guests can boot from physical
disks that are presented to the Oracle VM servers. Essentially, storage repositories in this case would only contain
the vm.cfg files for the Oracle VM guests and absolutely no virtual disks. The choice of disk type for the Oracle VM
guests is entirely up to you and your unique requirements.
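
To illustrate the difference, the hypothetical vm.cfg disk entries below (the repository IDs, image name and WWID are made up) show a guest booting from a virtual disk held in a repository versus a guest booting from a physical SAN LUN passed directly to the guest; in the second case the repository holds only the vm.cfg.

# Hypothetical vm.cfg disk entries; repository IDs, image name and WWID are made up.

# Booting from a virtual disk stored inside the storage repository:
disk = ['file:/OVS/Repositories/<repo-id>/VirtualDisks/myguest1_system.img,xvda,w']

# Booting from a physical SAN LUN passed directly to the guest; the
# repository then contains only this vm.cfg and no virtual disks:
disk = ['phy:/dev/mapper/3600144f0a1b2c3d4e5f60000000000a1,xvda,w']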

However, you need to be consistent in your choice for ease of maintenance and reliability of your solution. Pick one
disk type over the other and then use that disk type throughout your entire DR environment; consistency is the key
to success.



Oracle Corporation, World Headquarters: 500 Oracle Parkway, Redwood Shores, CA 94065, USA
Worldwide Inquiries: Phone +1.650.506.7000 | Fax +1.650.506.7200

CONNECT WITH US

Blogs.oracle.com/virtualization
Facebook.com/OracleVirtualization
Twitter.com/ORCL_Virtualize
oracle.com

Copyright © 2015, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the
contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other
warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or
fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are
formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any
means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and
are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are
trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group. 0915

Oracle VM 3: Planning Storage for Oracle VM Centric DR using Site Guard


September 2015
Author: Gregory King
SN21811-0.4
