
IBM Oracle Center | November 2011

Oracle DB High Availability on IBM Power Systems
Share Best Practices for demanding IT environments

Alain Cyr, IT Architect, cyralain@fr.ibm.com
Frédéric Dubois, IT Specialist, Fred.dubois@fr.ibm.com
Agenda

IBM Power Technologies for Oracle

High-Availability and Disaster Recovery

IBM Live Partition Mobility with Oracle Database Demo

DOAG Conference 2011 (November 15th-17th) © 2011 IBM Corporation


Scalability with IBM POWER Systems and Oracle

Oracle certifies for the O/S, and all Power servers run the same AIX
= complete flexibility for Oracle workload deployment

Server line-up: Power 795, Power 780, Power 770, Power 750,
Power 740 (2S/4U), Power 730 (2S/2U), Power 720 (1S/4U), Power 710 (1S/2U), PS Blades

– Consistency
– Binary compatibility
– Mainframe-inspired reliability
– Support for virtualization
– AIX, Linux and IBM i operating systems

[Chart: corporate enterprise downtime in hours per year for IBM AIX/POWER, Sun Solaris/SPARC,
HP-UX 11/PA-RISC, HP-UX 11/HP Integrity, Apple Mac, Windows Server 2008, Windows Server 2003,
Red Hat Enterprise Linux and open source Linux; Power Systems with AIX deliver 99.997% uptime]
IBM’s Ten-Year March to UNIX Leadership
The largest shift of customer spending in UNIX history

[Chart: UNIX server rolling four-quarter average revenue share, 2000 to 2011 (15% to 45%), for
HP, Sun/Oracle and IBM. Annotations mark POWER4 Dynamic LPARs, POWER5 Micro-Partitioning,
POWER6 PowerVM Lx86, Shared Processor Pools, Active Memory Sharing and Live Partition Mobility,
and POWER7 Shared Storage Pools.]
Source: IDC Server Tracker, Feb 2011



Enablement: Product Roadmap Interlocks – Oracle and IBM

A collaborative, continuous process between Oracle and IBM to ensure Oracle certification of
IBM SWG and STG products, at their most current releases, with Oracle product releases *
Applications Unlimited (PSFT, JDE, Siebel CRM, E-Business Suite)
Fusion Applications
Business Intelligence and EPM (BI Apps, OBI EE, Hyperion EPM)
Retail GBU (Retek, 360Commerce, ProfitLogic)
Communications GBU (Portal Software, MetaSolv)
Insurance GBU (AdminServer, Skywire)
Edge Applications: G-Log OTM, Agile PLM, Demantra
Oracle Technology (DB and RAC, Fusion Middleware, Enterprise Mgr)

Focus on Currency and Parity


IBM Cross-Brand Technology Focus (IBM SWG and STG Products): extended technical
advocates from Dev Labs
* Continuous evaluation as new companies are acquired



Oracle DB Certification History

For 23+ years, IBM has delivered the best infrastructure components available in the market to support
customers who have selected Oracle as their SW provider



Fusion Apps is Available on Power/AIX (4Q’11)

• HUGE NEWS: Fusion Apps is available on AIX 6.1, concurrent with Oracle's base development platform!



Oracle Certifies Active Memory Expansion (AME) on IBM Power/AIX

Another Proof point that Oracle is committed to “red on blue”


[Screenshot: Oracle Virtualization Matrix]
Oracle Power Technology Adoption Roadmap



Mix Workload Types for Best Resource Usage

Reach 70 to 80% average CPU usage

– Consolidate different workload profiles to better optimize the resources
– Development environments can be consolidated into the production infrastructure;
  workloads can be isolated while resources are shared
– CPU resources for development can be given a lower priority than production,
  as required by the business
  • Shared Processor Pool
  • Dynamic LPAR for memory
  • A separate VIO Server to isolate storage I/Os for the development environment

[Diagram: Oracle Applications, Middleware and Database micro-partitions plus Dev & Test sharing
the CPU of one server]


Virtualize Network for Oracle with PowerVM virtualization

Virtual Ethernet
– Partitions can be interconnected over a Virtual Ethernet network.
– Virtual Ethernet is easy, flexible and fast, and it is integrated at the lowest level of the
  system (microcode).
– It does not require any hardware for internal interconnection and can be bridged to the
  outside network infrastructure through a VIO Server.

Host Ethernet Adapter (also called IVE)
– The HEA switch can be configured and logical ports can be assigned to partitions for network
  interconnection.
– The HEA is a hardware switch adapter connected to the internal processor bus (GX bus) and can
  connect to the external network infrastructure.

Oracle supports both Virtual Ethernet and Host Ethernet Adapter technologies.
You can build heterogeneous network configurations and mix virtual and physical infrastructure
at the client LPAR.

[Diagram: Oracle Applications, Middleware, Database and Dev & Test partitions connected through
Virtual Ethernet devices and VLANs, bridged by a VIO Server to the enterprise network]
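As an illustration of the VIO Server bridging mentioned above, a Shared Ethernet Adapter is
typically created with one command on the VIOS (the adapter names ent0/ent2 and the PVID are
assumptions, not values from this deck):

  mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
    (bridges the virtual Ethernet adapter ent2, PVID 1, to the physical adapter ent0 so that
     client LPARs on that VLAN reach the external network through the VIO Server)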


Virtualize Storage for Oracle with PowerVM virtualization

Virtual Fibre Channel (NPIV)
– Partitions can be connected to the SAN with Virtual Fibre Channel adapters, through a VIO
  Server and an NPIV-capable adapter.
– Virtual Fibre Channel is easy, flexible and fast, and it is integrated at the lowest level of
  the system (microcode / Power Hypervisor). The VIO Server maps the virtual to the physical
  Fibre Channel adapters.
– The NPIV protocol does not require any hardware for internal interconnection; SAN disks are
  assigned directly from the SAN.

Virtual SCSI
– Physical storage is assigned to the VIO Server, and virtual mapping of the disks is done at
  the VIO Server level.
– The VIOS provides storage management techniques (multipathing, Logical Volume Manager).

Oracle supports both NPIV and Virtual SCSI protocols.

[Diagram: Oracle Applications, Middleware, Database and Dev & Test partitions accessing SAN
storage through Virtual Fibre Channel and Virtual SCSI devices on a VIO Server]
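For illustration only (the device names vfchost0, fcs0, vhost0 and hdisk4 are assumptions), the
two mappings described above are created on the VIO Server roughly as follows:

  vfcmap -vadapter vfchost0 -fcp fcs0
    (NPIV: maps the virtual Fibre Channel server adapter vfchost0 to the physical FC port fcs0;
     the client LPAR then sees its SAN LUNs directly)
  mkvdev -vdev hdisk4 -vadapter vhost0 -dev ora_data01
    (Virtual SCSI: exports the VIOS-owned disk hdisk4 to the client LPAR behind virtual adapter vhost0)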


The Benefits of PowerVM for Oracle environments

Consolidate and virtualize the Oracle workloads to optimize the IT resources

– Create logical partitions instead of using one server per workload
  • Mix different workload profiles to smooth CPU peaks (Production and Test/Dev)
  • Reduce the global number of CPUs
– Virtualize server resources
  • Optimize CPU usage with the Shared Processor Pool
  • Create a Virtual I/O Server partition to share physical resources and simplify the I/O
    infrastructure

+ many more PowerVM features to reduce costs and improve flexibility, scalability and reliability:
– SMT                           – LPM
– MVSPP                         – AMS
– Dedicated Shared Processor    – AME
– NPIV                          – CUoD

[Diagram: Oracle Applications, Middleware, Database and Test & Dev micro-partitions sharing CPU
through VIO Servers on one Power System connected to the storage network]


Processor cores for Oracle: use DLPAR and micro-partitions

During the day the database partition has workload peaks

– The Node1 database partition is defined in uncapped mode and can get free capacity from the
  Shared Processor Pool (SPP).
  • Uncapped mode grants additional CPU up to the number of Virtual Processors
  • You can change the number of VPs using Dynamic LPAR
  • The weight parameter defines the priority between uncapped partitions

– Test and Development partitions are capped and will not pick up CPU cycles from the Shared
  Processor Pool

Minimize license cost on core usage: define a Virtual Shared Processor Pool with a CPU capacity
entitlement.
– Host the DB partitions in a Virtual Shared Processor Pool
– Core licensing is based on the VSPP capacity
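A minimal sketch of the DLPAR operations mentioned above, run from the HMC command line (the
managed-system and partition names are placeholders, not values from this deck):

  chhwres -m Power780-SN123456 -r proc -o a -p ORA_PROD1 --procs 1
    (dynamically adds one virtual processor to the uncapped database partition)
  chhwres -m Power780-SN123456 -r proc -o a -p ORA_PROD1 --procunits 0.5
    (adds 0.5 processing units of guaranteed entitlement from the shared pool)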


Power Technology for Oracle Building Block:
IBM Power System, PowerVM Virtualization and Oracle Software

The technology building block is based on IBM Power Systems and PowerVM:

– Shared Processor Pool
  • physical pool or multiple virtual pools
  • uncapped partitions for PROD
  • capped partitions for DEV
– VIO Server virtualization
  • VIO Server 2.1
  • 8 Gbps NPIV FC adapters for storage access
  • Ethernet adapters shared in the VIO Servers for virtual network access, and/or Host Ethernet
    Adapters (IVE)
  • 2 VIO Servers for storage and network redundancy
– AIX 7.1, 6.1, 5.3 or 5.2 (WPARs)
– Oracle single instance or Real Application Clusters
  • CRS/ASM/RAC 11g (support for virtualized network and NPIV virtual disks)
– Consolidate the DB server LPARs up to 70 to 80% of the server CPU capacity

[Diagram: Oracle Apps and Oracle DB AIX LPARs in shared processor pool(s), with two VIO Servers
providing Shared Ethernet Adapters and NPIV FC adapters connected to the LAN and SAN]
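A quick way to verify such a dual-VIOS building block is to list the mappings from the VIOS
restricted shell; a hedged example (no specific device names are implied by the slide):

  lsmap -all -npiv
    (lists the NPIV virtual Fibre Channel mappings served by this VIO Server)
  lsmap -all -net
    (lists the Shared Ethernet Adapter / virtual network mappings)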
Agenda

IBM Power Technologies for Oracle

High-Availability and Disaster Recovery

IBM Live Partition Mobility with Oracle Database Demo



The Causes of Unavailability

Availability levels: where are you?
(outage = (1 - availability) x 8,760 hours per year)

– 99.999% = 5 minutes of outage per year     High Availability
– 99.99%  = 1 hour of outage per year
– 99.95%  = 4 hours of outage per year       Superior Availability
– 99.5%   = 43 hours of outage per year
– 99%     = 3 days of outage per year        Standard Availability
– 98%     = 7 days of outage per year
– 97%     = 11 days of outage per year

Causes of unavailability:
– Site problems: flood, electrical problem, air-conditioning, ...  (Live Partition Mobility improves it)
– Hardware: CPU, memory, disk, electrical outage, ...
– Planned operations: backup, upgrade, diagnostics, ...            (Live Partition Mobility improves it)


High Availability: Provide the Right HA/DR Architecture

Architectures: Single | Active/Passive | Active/Active | Extended Solution (MAA)

Application tier
– Single: OAS, WebLogic, IBM WebSphere
– Active/Passive: load-balancing router in fail-over mode, OracleAS Guard, IBM PowerHA
– Active/Active: load-balancing router
– Extended: load balancing (MAA)

Database tier (Oracle Database 10g / 11g)
– Single: backup (RMAN, user backups, TSM) and recovery (RMAN, Oracle Flashback, media recovery, TSM)
– Active/Passive: Oracle Data Guard, IBM PowerHA
– Active/Active: Real Application Clusters (RAC), Oracle Streams, IBM InfoSphere
– Extended: RAC + Data Guard, Extended RAC, RAC + PowerHA

Server clustering: CRS and/or PowerHA for every clustered architecture
Cluster file system: ASM, JFS2 or GPFS depending on the architecture
Infrastructure virtualization: SAN Volume Controller; PowerVM (micro-partitions, VIOS, NPIV, AME, AMS, ...)
Storage: RAID, FlashCopy, VolumeCopy, SnapMirror for a single site; remote replication
(Metro & Global Mirror, PPRC), stretched fabric and Metro Cluster for the DR architectures
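For the "Single" database tier above, backup and recovery rely on RMAN; a minimal sketch (the
connection and any TSM channel configuration are assumptions, not shown in the deck):

  RMAN> CONNECT TARGET /
  RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
    (takes a full backup of the database and its archived redo logs; with Tivoli Storage Manager
     the channel would be allocated with the SBT_TAPE device type instead of disk)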
Provide Different Levels of Service

Cold fail-over infrastructure

Protect all the components:
– Third-party applications
– Oracle Applications
– Oracle Middleware
– Oracle Database

Cold fail-over with downtime:
– PowerHA or Oracle Grid Infrastructure
– PowerHA for any product, or Oracle Clusterware for Oracle products, or Data Guard

[Diagram: two micro-partitioned servers, each hosting Oracle Applications, Middleware and
Database partitions behind a VIO Server]


HA Step 1: Active/Passive is a Cold Fail-over Solution

A cold fail-over architecture is an HA solution with downtime.

IBM PowerHA or Oracle Data Guard
– Requires a minimum downtime of the DB
– Data Guard replays the logs on a standby DB and uses a second copy of the original database
– PowerHA can use the same storage volumes
– CPU and memory resources for the passive server are not wasted: they are reserved and can be
  activated automatically with Capacity on Demand

– Simple process for the switch-over operation to DR and the recovery back to the production server
– Optimize the overall infrastructure by consolidating other workloads (development, test, ...)
  and using Capacity on Demand
– Balance the workload across the servers with PowerVM features: LPM, micro-partitioning, ...

[Diagram: production server running DB instances 1 and 2, passive server with standby LPARs for
instances 1 and 2; CPU% charts]
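As a hedged sketch of the Data Guard switch-over mentioned above (the broker is assumed to be
configured, and prod_db/stby_db are placeholder db_unique_names):

  DGMGRL> CONNECT sys@prod_db
  DGMGRL> SWITCHOVER TO 'stby_db';
    (the broker converts the primary to a standby and opens the former standby as the new
     primary; a PowerHA takeover of the same volumes would instead restart the instance on the
     passive LPAR)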


Provide Different Levels of Service

Use PowerVM and Oracle Grid Infrastructure

Provide highly available and scalable solutions for the entire stack:
– Oracle Applications (cluster implementation)
– Oracle Middleware (cluster implementation)
– Real Application Clusters database on Oracle Grid Infrastructure

– No downtime on node failure
– Rolling upgrade patching
– Increase the workload throughput by adding nodes, with no downtime
– Combine RAC and PowerVM to scale right, in and out of the box!

[Diagram: clustered application, middleware and RAC database layers spread over two
micro-partitioned servers with VIO Servers]


HA Step 2: Active/Active Increases Application/DB Availability

Active/active mode with Real Application Clusters for high availability

Oracle Real Application Clusters (RAC) is a flexible architecture:
– Workload balancing across the nodes (partitions) of the servers
– Easy maintenance, as one node can be stopped without application disruption
– Automated workload balancing

Combine RAC with PowerVM virtualization features:
– Define the RAC nodes with micro-partitioning, uncapped mode and high priority
– Define Test and Development with micro-partitioning and low priority or capped mode
– More resources per application increases the average usage

[Diagram: Oracle Applications, Middleware and RAC database partitions plus Test and Dev on
servers A and B, each with a VIO Server; CPU charts]
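To see the active/active behavior described above, the usual Clusterware commands apply; a small
example (the database name ORCL is an assumption):

  crsctl check cluster -all
    (verifies that Clusterware is running on every node)
  srvctl status database -d ORCL
    (shows which RAC instances are running, e.g. one per micro-partitioned node)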


HA Step 3: Combine IBM Power, Active/Active Oracle RAC + DR

Production site with RAC clusters, DR site with standby databases

Reduce downtime and delay the fail-over process:
– Easy maintenance, as cluster nodes can be stopped with minimum disruption
– Define the RAC nodes with micro-partitioning, uncapped mode and high priority
– Free resources are reserved on the DR server to absorb the additional workload in case of a
  node failure or maintenance

But it is still poor CPU usage:
– Workload peaks occurring at the same time cannot take full benefit of CPU virtualization
– Idle CPU is wasted, multiplied by the number of servers in the cluster
– The infrastructure could be more flexible for provisioning, maintenance and fail-over operations

Do not host only RAC database(s) in the server ...

[Diagram: production clusters 1 and 2 (DB instances with Clusterware, automated workload
balancing) and the DR-site standby DBs for instances 1 and 2; CPU% charts]
HA Step 3: the global picture

Production site and DR site hosting mixed workloads

– Test and Development have different workload profiles than Production
– Mix Production/DR and Test environments to optimize the resources
– Define the Test and Development workloads with a lower priority, without impact on the
  production activities
– Less hardware resources
– Simplified and flexible IT infrastructure
– Less administration and maintenance

[Diagram: Test/Dev cluster 1 and production clusters 1 and 2 (DB instances with Clusterware) on
the production site; standby DBs and another single DB on the DR site; dual VIO Servers per
machine, SAN and LAN, and storage services such as FlashCopy, PPRC and Metro Mirror]


Provide Different Levels of SLA

Combine PowerVM virtualization and Oracle Grid Infrastructure

Provides:
– Cold fail-over
– Live workload migration
– High availability
– Scalability

[Diagram: clustered Oracle Applications, Middleware and RAC database on Oracle Grid
Infrastructure next to single-instance Oracle stacks, all in micro-partitions behind VIO Servers
on several servers]


Storage Virtualization is . . .

Technology that makes one set of resources look and feel like another set of resources,
preferably with more desirable characteristics...

A logical representation of resources not constrained by physical limitations:
– Hides some of the complexity
– Adds or integrates new functions with existing services
– Can be nested or applied to multiple layers of a system

[Diagram: a logical representation layered over physical resources through storage virtualization]


Consolidate & Virtualize Storage

Use the IBM SAN Volume Controller (SVC)

– Make changes to the storage without disrupting host applications
– Combine the capacity from multiple arrays into several storage pools
– Apply common copy services across the storage pools
– Manage the storage pools from a central point

[Diagram: rootvg, Oracle ASM and IBM GPFS virtual disks served over the SAN by the SAN Volume
Controller (Advanced Copy Services), with storage pools built from DS4000, DS5000, DS8000, XIV,
HP, HDS and EMC Symmetrix arrays]
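A hedged example of carving a virtual disk for ASM out of an SVC storage pool (the pool, I/O
group and volume names are assumptions, not values from this deck):

  svctask mkvdisk -mdiskgrp ORA_POOL -iogrp 0 -size 200 -unit gb -name oradata_01
    (creates a 200 GB virtual disk from the ORA_POOL storage pool; it can then be mapped to the
     host and added to an ASM disk group, while SVC copy services such as FlashCopy operate
     underneath without disrupting the host)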


Provide a Flexible and Optimized Architecture

Use Oracle & IBM technologies together:
– Load balancers in front of the clustered Oracle Applications and Oracle Middleware
– Oracle Single Access Name (my.cluster.com) in front of the Real Application Clusters and
  RAC One Node databases on Oracle Grid Infrastructure
– Micro-partitions and VIO Servers on every server
– Oracle ASM or IBM GPFS on virtual disks served by the SAN Volume Controller (Advanced Copy
  Services) over storage pools (DS4000, DS5000, DS8000, XIV, HP, HDS, EMC Symmetrix)

Optimize server resource usage!
– Add/remove server resources on the fly ...
– Add/remove server(s) on the fly ...
– Add/remove storage on the fly ...
– Reallocate resources on the fly
– No disruptions!


Agenda

IBM Power Technologies for Oracle

High-Availability and Disaster Recovery

IBM Live Partition Mobility with Oracle Database Demo



Make a Flexible IT: Manage and Move the Workload

Use IBM Live Partition Mobility (LPM)

– LPM migrates a partition from a source server to a target server without a shutdown
– LPM makes maintenance easy
– LPM is live workload management

Examples of LPM operations:
– The database partition runs a batch and Server A is overloaded (CPU 100% busy) while Server B
  has free CPU capacity: migrate the Test & Dev partition to Server B and free the corresponding
  resources for the database partition on Server A
– You need to maintain Server A: migrate the Oracle database partition to Server B without
  disruption

[Diagram: Oracle Applications, Middleware, Database and Test & Dev partitions on Server A and
another application plus Test & Dev on Server B, each behind a VIO Server; CPU charts]


Provide Live Workload Flexibility with LPM: details

Partition Mobility requires:
– POWER6 or later
– AIX 5.3 / 6.1 or Linux
– All resources must be "virtualized" (no dedicated physical resources)
– A SAN storage environment (SAN boot, temp space, same network)

Partition Mobility steps:
– Validation
– Copy the memory pages from the host to the target system
– Turn off the host resources
– Activate the target resources

The number of Oracle cores needed does not change before and after the migration.

[Diagram: LPARs and their AIX kernels on the source and target hypervisors, with the VIOS
migration controllers transferring the partition definitions and memory pages over Ethernet,
and the boot/data disks shared on the SAN]
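The validation and migration steps above can also be driven from the HMC command line; a sketch
with placeholder system and partition names:

  migrlpar -o v -m SourceSystem -t TargetSystem -p ora_lpar1
    (validation only: checks that all resources of ora_lpar1 are virtualized and that the target
     server can host the partition)
  migrlpar -o m -m SourceSystem -t TargetSystem -p ora_lpar1
    (performs the live migration: memory pages are copied, then the partition resumes on the
     target system)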


Oracle Live Partition Mobility (LPM) demo
(a better explanation than just slides)

– An application server (Swingbench) simulates the users and generates workload against the
  Oracle DB
– The Oracle DB runs as a Real Application Clusters (RAC) database with 3 nodes on the POWER6
  server
– You need to:
  • perform maintenance on the left server/location,
  • or run additional workload on the POWER6 server,
  • or migrate the production to a new POWER7 server
– This is just a few (11) clicks on the HMC GUI and a few minutes of processing

LPM the RAC node and make it fly !!

[Diagram: POWER6 server hosting RAC nodes A, B and C plus other applications; POWER7 target
server hosting another DB, DNS and web servers; dual VIOS on each machine, HMC, SAN and LAN]


The unofficial demo scenario, just as an example

Just try to migrate one RAC node without stopping the cluster on it.
In case of any disruption of the node during the migration:
– the node will drop out of the cluster and will reboot ;-(
– the workload will run on the remaining node(s); this is the regular behavior of the RAC cluster

This scenario is not yet supported and the certification tests are still in progress, so you
must not use it for production purposes (only for test). This demo process, without a cluster
stop on the migrated RAC node, is a non-official disclosure.

The infrastructure is based on PowerVM virtualization using VIO Servers on both the source and
target servers.

It is a basic installation without any customization or tuning:
– The SAN disks of the RAC nodes are mapped and shared to all the VIO Servers. It is done here
  using VSCSI, but it is even easier to set up with NPIV and virtual Fibre Channel adapters.
– ASM is set up to share the disks across the nodes of the Oracle cluster. It could also have
  been GPFS, which is supported as well.
– The RAC interconnect network is trunked with the other networks to the physical network
  infrastructure through a single physical adapter of the VIOS.


LPM the RAC scenario

LPM is certified for Oracle single instance; the certification tests for RAC are in progress.

– Stop the Oracle cluster on the node, keeping AIX alive:
    crsctl stop crs
– Click on the HMC GUI and LPM the node. The remaining RAC nodes run the workload and get the
  CPU cycles (uncapped micro-partitions). You can also add RAM using dynamic LPAR.
– Restart the Oracle cluster on the migrated node:
    crsctl start crs

[Diagram: RAC nodes A, B and C on the POWER6 server, the Swingbench application server, DNS and
web servers, and the POWER7 target server; dual VIOS, HMC, SAN and LAN]

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101965
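The same demo driven from the command line instead of the 11 HMC GUI clicks, as a hedged sketch
(the managed-system and partition names are placeholders, and this remains the unsupported test
scenario described above):

  crsctl stop crs
    (run as root on the RAC node that will be migrated)
  migrlpar -o m -m POWER6-Source -t POWER7-Target -p racnode_c
    (run on the HMC; performs the live migration of that node's LPAR)
  crsctl start crs
    (run on the migrated node once it is active on the target server)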


Thanks for your attention


