ORACLE CAREER
Top 50 Exadata Interview Questions
Getting ready for an Exadata interview? Make sure to refresh your knowledge by reviewing this list of Exadata interview questions.
What environment is a good fit for Exadata?
Exadata was originally designed for data warehouse environments. It was later enhanced for use in OLTP databases as well.
What are the advantages of Exadata?
The Exadata cluster allows for consistent performance while allowing for increased throughput. As load increases on the cluster, performance remains consistent by utilizing inter-instance and intra-instance parallelism.
That said, simply moving to Exadata should not be expected to improve performance by itself, although in most cases it will, especially if the current database host is overloaded.
What is the secret behind Exadata's higher throughput?
Exadata ships less data through the pipes between the storage and the database nodes and other nodes in the RAC cluster.
Its ability to do massive parallelism by running parallel processes across all the nodes in the cluster also gives it a much higher level of throughput.
In addition, it has much bigger pipes in the cluster, using the InfiniBand interconnect for inter-instance data block transfers, with bandwidth as much as 5x that of Fibre Channel networks.
What are the different Exadata configurations?
The Exadata Appliance configuration comes as a Full Rack, Half Rack, Quarter Rack or 1/8th
rack.
The Full Rack X2-2 has two six-core Intel Xeon X5670 processors per node and a total of 8 database server nodes. These servers have 96GB of memory on each node. A total of 14 storage server cells communicate with the storage and push the requested data from the storage to the compute nodes.
The Half Rack has exactly half the capacity. It has the same six-core Intel Xeon X5670 processors and a total of 4 database server nodes. It has 96GB of memory per database server node, with a total of 7 storage server cells.
The Exadata is also available in the 1/8th Rack configuration.
What are the key hardware components?
Database server nodes
Exadata storage server cells
InfiniBand switches
Cisco network switch
What are the key software features?
Smart Scan
Storage Index
Flash Cache
Hybrid Columnar Compression
IORM
What are Cell Disks and Grid Disks?
Cell and Grid Disk are a logical component of the physical Exadata storage. A cell or Exadata
Storage server cell is a combination of Disk Drives put together to store user data. Each Cell
Disk corresponds to a LUN (Logical Unit) which has been formatted by the Exadata Storage
Server Software. Typically, each cell has 12 disk drives mapped to it.
Grid Disks are created on top of Cell Disks and are presented to Oracle ASM as ASM disks.
Space is allocated in chunks from the outer tracks of the Cell disk and moving inwards. One can
have multiple Grid Disks per Cell disk.
What is IORM?
IORM (I/O Resource Manager) is a feature of the Exadata storage server software that manages and prioritizes I/O among databases and workloads, so that each gets its allocated share of the storage bandwidth.
What is Hybrid Columnar Compression?
Hybrid Columnar Compression, also called HCC, is a feature of Exadata which is used for compressing data at the column level for a table.
It creates compression units which consist of logical groupings of column values, typically spanning several data blocks. Each data block has data from columns for multiple rows.
This algorithm has the potential to reduce the storage used by the data and reduce disk I/O, enhancing performance for queries.
The different types of HCC compression include:
Query Low
Query High
Archive Low
Archive High
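As a sketch of how HCC is enabled (the table names here are hypothetical), the compression level is chosen with the COMPRESS FOR clause at the table or partition level:
SQL> CREATE TABLE sales_archive COMPRESS FOR ARCHIVE LOW
     AS SELECT * FROM sales;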
What is Flash Cache?
Four 96GB PCIe flash memory cards are present on each Exadata Storage Server cell, providing very fast access to the data stored on them.
Flash Cache reduces data access latency by retrieving data from flash memory rather than having to read it from disk. A total flash storage of 384GB per cell is available on the Exadata appliance.
What is Smart Scan?
It is a feature of the Exadata software which enhances database performance many times over. It processes queries in an intelligent way, retrieving specific rows rather than complete blocks.
It applies filtering criteria at the storage level based on the selection criteria specified in the
query.
It also performs column projection which is a process of sending only required columns for the
query back to the database host/instance.
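One hedged way to see whether Smart Scan offload is taking place is to check the cell-related statistics (exact statistic names can vary slightly by version):
SQL> SELECT name, value FROM v$sysstat WHERE name LIKE 'cell physical IO%';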
How can you verify the performance of the Exadata cell hardware?
You can use the calibrate command at the CellCLI command line.
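For example, on an otherwise quiet cell (the FORCE option is required if cellsrv is running):
CellCLI> CALIBRATE FORCE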
What are the ways to migrate onto Exadata?
Oracle DataGuard
Traditional Export/Import
Transportable tablespaces
What operations are offloaded from the database host to the cell servers?
Some of the operations that are offloaded are:
Predicate filtering
Join processing
Backups
What is CellCLI?
This is the command line utility used to manage the cell storage.
How do you obtain information on the cell disks?
At the CellCLI command line you can issue the LIST CELLDISK command.
How do you create the grid disks?
At the CellCLI command line you would need to issue the CREATE GRIDDISK ALL .. command.
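A minimal sketch, assuming grid disks prefixed DATA:
CellCLI> CREATE GRIDDISK ALL PREFIX=DATA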
What are the cellinit.ora and the cellip.ora files used for?
These files hold the IP addresses used on the storage network: cellinit.ora contains the IP address of the local cell's interconnect interface, while cellip.ora on the database nodes lists the storage cells accessible to the database. They are used to run commands on remote database and cell server nodes from a local host.
Example:
$ cat /etc/oracle/cell/network-config/cellinit.ora
ipaddress1=192.168.47.21/24
$ cat /etc/oracle/cell/network-config/cellip.ora
cell=192.168.47.21:5042
cell=192.168.47.22:5042
cell=192.168.47.23:5042
What operating systems does Exadata support?
Exadata has traditionally run Oracle Linux OS. Recently, Solaris has also been made available on
this engineered system.
Top 30 RAC Interview Questions That Helped Me. Are You
Prepared?
Getting ready for a RAC interview? Make sure to refresh your knowledge by reviewing this list
of RAC Interview Questions.
What is cache fusion?
In a RAC environment, it is the combining of data blocks, which are shipped across the
interconnect from remote database caches (SGA) to the local node, in order to fulfill the
requirements for a transaction (DML, Query of Data Dictionary).
What is split brain?
When database nodes in a cluster are unable to communicate with each other, they may continue
to process and modify the data blocks independently. If the
same block is modified by more than one instance, synchronization/locking of the data blocks
does not take place and blocks may be overwritten by others in the cluster. This state is called
split brain.
What is instance recovery in RAC?
When an instance crashes in a single-node database, a crash recovery takes place on startup. In a RAC environment, the same recovery for a failed instance is performed by the surviving nodes; this is called instance recovery.
What is the interconnect used for?
It is a private network which is used to ship data blocks from one instance to another for cache
fusion. The physical data blocks as well as data dictionary blocks are shared across this
interconnect.
How do you determine what protocol is being used for Interconnect traffic?
One of the ways is to look at the database alert log for the time period when the database was
started up.
What methods are available to keep the time synchronized on all nodes in the
cluster?
Either the Network Time Protocol(NTP) can be configured or in 11gr2, Cluster Time
Synchronization Service (CTSS) can be used.
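As a quick check, you can see which service is in use with:
$ crsctl check ctss
CTSS runs in observer mode when NTP is configured, and in active mode otherwise.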
What file components in RAC must reside on shared storage?
Spfiles, control files, datafiles and redo log files should be created on shared storage. The OCR and voting disks must also reside on shared storage.
Where does the Clusterware write when there is a network or storage missed heartbeat?
The network and disk heartbeat information is written to the voting disk.
How do you find out the automatic and manual OCR backups that are available?
The ocrconfig -showbackup command can be run to find out the automatic and manually run backups.
If your OCR is corrupted, what options do you have to resolve this?
You can use either the logical or the physical OCR backup copy to restore the repository.
How do you find out which object has its blocks shipped across instances the most?
You can query the GV$CACHE_TRANSFER view to identify the objects whose blocks are transferred across the interconnect most often.
What is the purpose of the VIP in RAC?
The VIP is an alternate Virtual IP address assigned to each node in a cluster. During a node
failure the VIP of the failed node moves to the surviving node and relays to the application that
the node has gone down. Without VIP, the application will wait for TCP timeout and then find
out that the session is no longer live due to the failure.
How do we know which database instances are part of a RAC cluster?
You can query the V$ACTIVE_INSTANCES view to determine the member instances of the
RAC cluster.
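For example:
SQL> SELECT inst_number, inst_name FROM v$active_instances;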
What is the Cluster Health Monitor (CHM)?
The Cluster Health Monitor (CHM) stores operating system metrics in the CHM repository for all nodes in a RAC cluster. It stores information on CPU, memory, process, network and other OS data. This information can later be retrieved and used to troubleshoot and identify any cluster related issues. It is a default component of the 11gR2 Grid install. The data is stored in the master repository and replicated to a standby repository on a different node.
What would be the possible performance impact in a cluster if a less powerful node
(e.g. slower CPUs) is added to the cluster?
All processing will slow down to the speed of the slowest server.
What is the purpose of the OLR?
The Oracle Local Registry (OLR) contains the information that allows the cluster processes to be started up while the OCR is in the ASM storage system. Since ASM is unavailable until the Grid processes are started up, a local copy of the contents of the OCR is required; this is stored in the OLR.
What is the default memory allocation for ASM?
In 10g the default SGA size is 1G; in 11g it is set to 256M; and in 12c ASM it is set back to 1G.
How do you back up ASM metadata?
You can use md_backup to back up the ASM diskgroup metadata and md_restore to restore it in case of ASM diskgroup storage loss.
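A minimal sketch, assuming a diskgroup named DATA and a hypothetical backup file path (exact md_restore options vary by version):
ASMCMD> md_backup /tmp/data_dg.mdb -G DATA
ASMCMD> md_restore /tmp/data_dg.mdb --full -G DATA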
What files can be stored in the ASM diskgroup?
Datafiles
Redo logfiles
Spfiles
In 12c the files below can now also be stored in the ASM diskgroup:
Password file
What is ASM_POWER_LIMIT?
This is the parameter which controls the number of allocation units the ASM instance will try to rebalance at any given time. In ASM versions before 11.2.0.2 the maximum value is 11; in later versions it can be set as high as 1024.
What is a rolling upgrade?
A patch is considered rolling if it can be applied to the cluster binaries without having to shut down the database in a RAC environment. All nodes in the cluster are patched in a rolling manner, one by one, with only the node being patched unavailable while all the other instances remain open.
What are some of the RAC specific parameters?
CLUSTER_DATABASE
CLUSTER_DATABASE_INSTANCES
ACTIVE_INSTANCE_COUNT
UNDO_MANAGEMENT
What is the direction of the Grid software going forward?
The Grid software is becoming more and more capable of supporting HA not just for Oracle databases but also for other applications, including Oracle's applications. With 12c there are more features and functionality built in, and it is easier to deploy these pre-built solutions, available for common Oracle applications.
What components of the Grid should I back up?
The OCR, the OLR and the voting disks.
How do you check the patch inventory on all the nodes in the cluster?
You can run the opatch lsinventory -all_nodes command from a single node to look at the inventory details for all nodes in the cluster.
GoldenGate:
What type of topology does Oracle GoldenGate support?
GoldenGate supports the following topologies:
Unidirectional
Bidirectional
Peer-to-peer
Broadcast
Consolidation
Cascading
What are the main components of GoldenGate?
Manager
Extract
Pump
Replicat
What databases does GoldenGate support for replication?
Oracle Database
TimesTen
MySQL
IBM DB2
Informix
Teradata
Sybase
Enscribe
SQL/MX
Does GoldenGate support DML and DDL replication?
Yes, GoldenGate supports both DML and DDL replication from the source to the target.
What are the considerations for a bidirectional (active-active) replication?
Sequences: These are not supported. The workaround is to use odd/even, range or concatenated sequences.
LAG: This should be minimized. If a customer says that there will not be any lag due to network or heavy load, then we don't need to deploy CDR. But this is not always the case, as there will usually be some lag, and it can cause conflicts.
CDR (Conflict Detection & Resolution): OGG has built-in CDR for all kinds of DML, which can be used to detect and resolve conflicts.
Packaged applications: These are not supported, as they may contain data types which are not supported by OGG, or the application may not allow the modifications needed to work with OGG.
Are OGG binaries supported on the Database File System (DBFS)? What files
can be stored in DBFS?
No, OGG binaries are not supported on DBFS. You can however store parameter files, data files
(trail files), and checkpoint files on DBFS.
What are the differences between the Classic and integrated Capture?
Classic Capture:
The Classic Capture mode is the traditional Extract process that accesses the database
redo logs (optionally archive logs) to capture the DML changes occurring on the objects
specified in the parameter files.
At the OS level, the GoldenGate user must be a part of the same database group which
owns the database redo logs.
There are some data types that are not supported in Classic Capture mode.
Integrated Capture:
In the Integrated Capture mode, GoldenGate works directly with the database log mining
server to receive the data changes in the form of logical change records (LCRs).
IC mode does not require any special setup for the databases using ASM, transparent data
encryption, or Oracle RAC.
This feature is only available for Oracle databases in version 11.2.0.3 or higher.
It also supports various object types which were previously not supported by Classic
Capture.
This Capture mode supports extracting data from source databases using compression.
List the minimum parameters that can be used to create the extract process?
The following are the minimum required parameters which must be defined in the extract parameter file:
EXTRACT NAME
USERID
EXTTRAIL
TABLE
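A minimal example parameter file using just these entries (all names here are hypothetical):
EXTRACT ext1
USERID ogguser, PASSWORD ogg_password
EXTTRAIL ./dirdat/et
TABLE scott.emp;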
What are the different OGG parameter files?
Manager
Extract
Replicat
GLOBALS
What are the parts of an OGG macro definition?
Parameter list
Macro body
Sample:
MACRO #macro_name
PARAMS (#param1, #param2, ...)
BEGIN
< macro_body >
END;
I want to configure multiple Extracts to write to the same exttrail file. Is this possible?
Only one Extract process can write to an exttrail at a time, so you can't configure multiple Extracts to write to the same exttrail.
What are the different encryption options available with OGG?
You can use the following types of encryption in OGG:
Password encryption
Network (trail and TCP/IP) encryption
Is there a way to check the syntax of the commands in the parameter file without actually running the GoldenGate process?
Yes, you can place the CHECKPARAMS parameter in the parameter file and start the process. The process verifies the syntax, writes any errors to the report file, and then stops.
How can you increase the maximum size of the read operation into the buffer
that holds the results of the reads from the transaction log?
If you are using the Classic Extract you may use the TRANSLOGOPTIONS ASMBUFSIZE parameter to control the read size for ASM databases.
What information can you expect when there is data in the discard file?
When data is discarded, the discard file can contain:
1. Discard row details
2. Database Errors
3. Trail file number
What command can be used to switch writing the trail data to a new trail file?
You can use the following command to write the trail data to a new trail file.
SEND EXTRACT ext_name, ROLLOVER
How can you determine if the parameters for a process were recently changed?
Whenever a process is started, the parameters in the .prm file for the process are written to the process report. You can look at the older process reports to view the parameters which were used to start up the process. By comparing the older and the current reports you can identify the changes in the parameters.
What parameters can be used to improve Replicat performance?
BATCHSQL
GROUPTRANSOPS
INSERTAPPEND
What are the most common reasons of an Extract process slowing down?
Some of the possible reasons are:
Insufficient memory on the Extract side. Uncommitted, long running transactions can
cause writing of a transaction to a temporary area (dirtmp) on disk. Once the transaction
is committed it is read from the temporary location on the file system and converted to
trail files.
What are the most common reasons of the Replicat process slowing down?
Some of the possible reasons are:
Blocking sessions on the destination database where non-Goldengate transactions are also
taking place on the same table as the replicat processing.
If using DBFS, writing & reading of trail files may be slow if SGA parameters are not
tuned.
My extract was running fine for a long time. All of a sudden it went down. I
started the extract processes after 1 hour. What will happen to my committed
transactions that occurred in the database during last 1 hour?
The OGG checkpoint mechanism provides fault tolerance and makes sure that each committed transaction is captured, and captured only once. Even if the Extract went down abnormally, when you start the process again it reads the checkpoint file to provide read consistency and transaction recovery.
Why would you segregate the tables in a replication configuration? How would you do it?
In OGG you can configure replication at the schema level or at the table level, using the TABLE parameter of Extract and the MAP parameter of Replicat.
For replicating the entire database you can list all the schemas in the database in the extract/replicat parameter files.
Depending on the amount of redo generation you can split the tables in a schema across multiple Extracts and Replicats to improve the performance of data replication. Alternatively, you can also group a set of tables in the configuration by application functionality.
You may also need to move tables which have long-running transactions into a separate Extract process to eliminate lag on the other tables.
Let's say that you have a schema named SCOTT and it has 100 tables. Out of these hundred tables, 50 tables are heavily utilized by the application.
To improve the overall replication performance you create 3 Extracts and 3 Replicats as follows:
Ext_1/Rep_1 -> 25 tables
Ext_2/Rep_2 -> 25 tables
Ext_3/Rep_3 -> 50 tables
Ext_1/Rep_1 and Ext_2/Rep_2 contain 25 tables each, which are heavily utilized or generate more redo.
Ext_3/Rep_3 contains the other 50 tables, which are least used.
Troubleshooting:
How can we report on long running transactions?
The WARNLONGTRANS parameter can be specified with a threshold time that a transaction
can be open before Extract writes a warning message to the ggs error log.
Example: WARNLONGTRANS 1h, CHECKINTERVAL 10m
What command can be used to view the checkpoint information for the extract process?
Use the following command:
GGSCI> INFO EXTRACT <extract_name>, SHOWCH
What is the RESTARTCOLLISIONS parameter used for?
The RESTARTCOLLISIONS parameter is used to skip ONE transaction only, in a situation where the GoldenGate process crashed after performing an operation (INSERT, UPDATE or DELETE) in the database but could not checkpoint the process information to the checkpoint file/table. On recovery it will skip the transaction and AUTOMATICALLY continue to the next operation in the trail file.
What does the HANDLECOLLISIONS parameter do?
When using HANDLECOLLISIONS, GoldenGate will continue to overwrite and process transactions until the parameter is removed from the parameter files and the processes are restarted.
How do you view the data which has been extracted from the redo logs?
The logdump utility is used to open the trail files and look at the actual records that have been
extracted from the redo or the archive log files.
What does the RMAN-08147 warning signify when your environment has a
GoldenGate Capture Processes configured?
This occurs when RMAN tries to delete archived logs that are still needed by the GoldenGate Capture process, that is, when V$ARCHIVED_LOG.NEXT_CHANGE# is greater than the SCN required by the Capture process. The RMAN-08147 warning is raised when RMAN tries to delete these files.
When the database is open, the DBA_CAPTURE values are used to determine the log files required for mining. However, if the database is in the mount state, the V$ARCHIVED_LOG.NEXT_CHANGE# value is used.
See MetaLink note: 1581365.1
How would you look at a trail file using logdump, if the trail file is Encrypted?
You must use the DECRYPT option before viewing the data in the trail file.
List few useful Logdump commands to view and search data stored in OGG trail
files.
Below are a few logdump commands used on a daily basis for displaying or analyzing data stored in a trail file.
$ ./logdump -- to connect to the logdump prompt
logdump> open /u01/app/oracle/dirdat/et000001 -- to open a trail file in logdump
logdump> fileheader on -- to view the trail file header
logdump> ghdr on -- to view the record header with data
logdump> detail on -- to view column information
logdump> detail data -- to display HEX and ASCII data values with the column list
logdump> reclen 200 -- to control how much record data is displayed
logdump> pos 0 -- to go to the first record
logdump> next (or simply n) -- to move from one record to another in sequence
logdump> count -- to count the records in the trail
MISC:
Why should I upgrade my GoldenGate Extract processes to Integrated Extract?
Oracle is able to provide faster integration of new database features by moving the GoldenGate extraction processes into the database. Because of this, the GoldenGate Integrated Extract supports a number of features, such as compression, which are not supported by the traditional Extract. Going forward, preference should be given to creating new Extracts as Integrated Extracts and to upgrading existing traditional Extracts.
With Integrated Delivery, where can we look for the performance stats?
With 12c, performance statistics are collected in the AWR repository and the data is available via the normal AWR reports.
What are the steps required to add a new table to an existing replication setup?
The steps to be executed would be the following:
Add the table to the Extract and Replicat parameter files.
Obtain the starting database SCN and copy the source table data to the target database.
Start the Replicat for the new table after that SCN.
What does the GoldenGate CSN equate to, in the Oracle Database?
It is the equivalent of the Oracle database SCN (System Change Number).
What are the different initial load methods available in OGG?
Direct Load - faster, but doesn't support LOB data types (12c includes support for LOBs).
Direct Bulk Load - uses the SQL*Loader API for Oracle and SSIS for MS SQL Server.
File to Replicat - fast, but the rmtfile limit is 2GB. If the table can't fit in one rmtfile you can use MAXFILES, but the Replicat needs to be registered on the target OGG home to read the rmtfiles from the source.
File to database utility - depending on the target database, uses SQL*Loader for Oracle, SSIS for MS SQL Server, and so on.
Oracle GoldenGate initial load reads data directly from the source database tables without locking them. So you don't need downtime, but it will use database resources and can cause performance issues. Take extra precaution to perform the initial load during non-peak time so that you don't run into resource contention.
I have a table called TEST on source and target with same name, structure and
data type but in a different column order. How can you setup replication for this
table?
OGG by default assumes that the source and target tables are identical. A table is said to be identical if and only if the table structure, data types and column order are the same on both the source and the target.
If the tables are not identical you must use the parameter SOURCEDEFS pointing to the source
table definition and COLMAP parameter to map the columns from source to target.
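A hedged sketch of such a Replicat configuration (the definitions file path is hypothetical; since the columns share names and only the order differs, USEDEFAULTS is enough here):
SOURCEDEFS ./dirdef/test.def
MAP scott.test, TARGET scott.test, COLMAP (USEDEFAULTS);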
What steps should you take before stopping the GoldenGate processes for maintenance?
Check to make sure that the Extract has processed all the records in the data source (online redo/archive logs):
GGSCI> send extract <extract_name>, logend
(The above command should print YES.)
Make sure that the primary Extract is reading the end of the redo log and that there is no lag at all for the processes.
Source:
1. Stop the primary extract
2. Stop the pump extract
3. Stop the manager process
4. Make sure all the processes are down.
Target:
1. Stop replicat process
2. Stop mgr
3. Make sure that all the processes are down.
4. Proceed with the maintenance
5. After the maintenance, proceed with starting up the processes:
Source:
1. Start the manager process
2. Start the primary extract
3. Start the pump extract
Target:
1. Start the manager process
2. Start the replicat process
What are the basic resources required to configure Oracle GoldenGate high
availability solution with Oracle Clusterware?
There are 3 basic resources required:
Virtual IP
Shared storage
Action script
12c:
What are some of the key features of GoldenGate 12c?
The following are some of the more interesting features of Oracle GoldenGate 12c:
Coordinated Replicat
What are the different data encryption methods available in OGG 12c?
In OGG 12c you can encrypt data with the following 2 methods:
1) Encrypt Data with Master Key and Wallet
2) Encrypt Data with ENCKEYS
What is the difference between classic mode and coordinated mode in Replicat?
The difference between classic mode and coordinated mode is that Replicat is multi-threaded in
coordinated mode. Within a single Replicat instance, multiple threads read the trail
independently and apply transactions in parallel. Each thread handles all of the filtering,
mapping, conversion, SQL construction, and error handling for its assigned workload. A
coordinator thread coordinates the transactions across threads to account for dependencies
among the threads.
What are the changes at the pump level in 12c when using Integrated Delivery?
The trail generated by the Extract process is read by Integrated Delivery and Logical Change Records (LCRs) are created. These LCRs are then shipped over the network to the destination database.
Integrated Delivery is the new 12c mechanism for sending the extract trail to the destination in an Oracle environment. Coordinated Delivery is the new mechanism for sending data between non-Oracle databases.
RMAN:
Interview Questions & Answers on RMAN
What is RMAN?
Recovery Manager (RMAN) is a utility that can manage your entire Oracle backup and recovery
activities.
What is the difference between using recovery catalog and control file?
When a new incarnation happens, the old backup information in the control file is lost, whereas it is preserved in the recovery catalog.
In the recovery catalog we can store scripts.
The recovery catalog is central and can hold information about many databases.
Can we use the same target database as the catalog?
No. The recovery catalog should not reside in the target database (the database being backed up), because if the target database is lost, the catalog needed to recover it would be lost with it.
How do you know how much of an RMAN task has been completed?
By querying V$RMAN_STATUS or V$SESSION_LONGOPS.
From where do the LIST and REPORT commands get their input?
Both commands query the V$ views and the recovery catalog views: for example V$BACKUP_FILES, or recovery catalog views such as RC_DATAFILE_COPY or RC_ARCHIVED_LOG.
Command to delete archive logs older than 7 days?
RMAN> delete archivelog all completed before sysdate-7;
What does the device type of an RMAN channel specify?
The type of I/O device being read or written to, either DISK or SBT_TAPE.
How do you identify block corruption in an RMAN database? How do you fix it?
First check whether any blocks are corrupted by querying the V$DATABASE_BLOCK_CORRUPTION view:
SQL> select file#, block# from v$database_block_corruption;
file# block#
2 507
The output above shows that block 507 of datafile 2 is corrupted. Connect to RMAN and recover the block with:
RMAN> blockrecover datafile 2 block 507;
Alternatively, to recover all the blocks listed in V$DATABASE_BLOCK_CORRUPTION at once:
RMAN> blockrecover corruption list;
When do you use the crosscheck command?
To check whether backup pieces, proxy copies or disk copies still exist.
How do you clone a database using RMAN? Give brief steps.
Two commands are available in RMAN to clone a database:
1) Duplicate
2) Restore
List some of the RMAN catalog view names which contain the catalog information?
RC_DATABASE_INCARNATION RC_BACKUP_COPY_DETAILS
RC_BACKUP_CORRUPTION
RC_BACKUP-DATAFILE_SUMMARY
How do you install the RMAN recovery catalog?
Steps to be followed:
1) Create a connection string (TNS entry) for the catalog database.
2) In the catalog database, create a new user (or use an existing user) and grant that user the RECOVERY_CATALOG_OWNER privilege.
3) Log in to RMAN with the connection string:
a) export ORACLE_SID
b) rman target / catalog <catalog_user>@<connection_string>
4) RMAN> create catalog;
5) RMAN> register database;
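A sample session following these steps, with hypothetical user and TNS names:
$ export ORACLE_SID=targetdb
$ rman target / catalog rcowner@rcatdb
RMAN> create catalog;
RMAN> register database;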
What is the difference between physical and logical backups?
In Oracle, a logical backup is taken using either the traditional Export/Import utilities or the newer Data Pump, whereas a physical backup is a copy of the physical OS-level database files.
What is RAID? What is RAID0? What is RAID1? What is RAID 10?
RAID: It is a redundant array of independent disk
RAID0: Concatenation and stripping
RAID1: Mirroring
How to enable Fast Incremental Backup to backup only those data blocks that have
changed?
SQL> ALTER DATABASE enable BLOCK CHANGE TRACKING;
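You can then verify the tracking status and the tracking file location with:
SQL> SELECT status, filename FROM v$block_change_tracking;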
How do you set the flash recovery area?
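By setting two initialization parameters, the size first and then the location; the values below are only examples:
SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 100G;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/oracle/fra';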