
Oracle Database Administration (DBA) Interview Questions (Technical)

1. What is an Oracle Instance?

The term database describes the physical files used to store information (data files, control files, redo log files).

Oracle defines the term instance as the memory structures (SGA, PGA) and the background processes used to access data from a database.

2. What information is stored in Control File?

A control file contains information about the associated database that is required for access by an instance, both at
startup and during normal operation. Control file information can be modified only by Oracle; no database administrator
or user can edit a control file.

Among other things, a control file contains information such as:

• The database name


• The timestamp of database creation
• The names and locations of associated datafiles and redo log files
• Tablespace information
• Datafile offline ranges
• The log history
• Archived log information
• Backup set and backup piece information
• Backup datafile and redo log information
• Datafile copy information
• The current log sequence number
• Checkpoint information

3. When you start an Oracle DB which file is accessed first?

INIT.ora

When Oracle opens a database, it goes through three distinct stages: nomount, mount, and open.

When you issue the startup command, the first thing the database will do is enter the nomount stage. During the
nomount stage, Oracle first opens and reads the initialization parameter file (init.ora) to see how the database is
configured. After the parameter file is accessed, the memory areas associated with the database instance are allocated.
Also, during the nomount stage, the Oracle background processes are started.

When the startup command enters the mount stage, it opens and reads the control file. The control file is a binary file
that tracks important database information, such as the location of the database datafiles. In the mount stage, Oracle
determines the location of the datafiles, but does not yet open them. Once the datafile locations have been identified,
the database is ready to be opened.

The last startup step for an Oracle database is the open stage. When Oracle opens the database, it accesses all of the
datafiles associated with the database. Once it has accessed the database datafiles, Oracle makes sure that all of the
database datafiles are consistent.
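
These stages can also be stepped through manually from SQL*Plus (connected AS SYSDBA); a minimal sketch:

STARTUP NOMOUNT;              -- read the parameter file, allocate the SGA, start background processes
ALTER DATABASE MOUNT;         -- open and read the control file
ALTER DATABASE OPEN;          -- open the datafiles and online redo logs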

4. What is the Job of SMON, PMON processes?

* SMON - The System Monitor process performs recovery after instance failure and monitors temporary segments and extents. In RAC, SMON in a surviving instance can also perform instance recovery for another failed instance.
* PMON - The Process Monitor process cleans up the resources of failed processes. If MTS (also called Shared Server architecture) is being used, PMON monitors and restarts any failed dispatcher or server processes. In RAC, PMON's role as service registration agent is particularly important.

* DBWR - The Database Writer (or Dirty Buffer Writer) process is responsible for writing dirty buffers from the database buffer cache to the database data files. DBWR does not write on commit (commit durability is provided by LGWR writing the redo); it writes when a checkpoint occurs or when free buffers are needed in the cache. The possible multiple DBWR processes in RAC must be coordinated through the locking and global cache processes to ensure efficient processing is accomplished.

* LGWR - Log Writer process is responsible for writing the log buffers out to the redo logs. In RAC, each RAC instance
has its own LGWR process that maintains that instance’s thread of redo logs.

* ARCH – The optional Archive process writes filled redo logs to the archive log location(s). In RAC, the various ARCH
processes can be utilized to ensure that copies of the archived redo logs for each instance are available to the other
instances in the RAC setup should they be needed for recovery.

5. What is Instance Recovery?

When an Oracle instance fails, Oracle performs an instance recovery when the associated database is re-started.
Instance recovery occurs in two steps:

 Cache recovery: Changes made to a database are recorded in the database buffer cache and, at the same time, in the online redo log files. The contents of the buffer cache are periodically written to the data files. If an Oracle instance fails before the changed data in the buffer cache have been written to the data files, Oracle uses the data recorded in the online redo log files to recover the lost changes when the associated database is restarted. This process is called cache recovery.

 Transaction recovery: When a transaction modifies data in a database, the before image of the modified data is
stored in an undo segment. The data stored in the undo segment is used to restore the original values in case a
transaction is rolled back. At the time of an instance failure, the database may have uncommitted transactions. It is
possible that changes made by these uncommitted transactions have gotten saved in data files. To maintain read
consistency, Oracle rolls back all uncommitted transactions when the associated database is re-started. Oracle uses the
undo data stored in undo segments to accomplish this. This process is called transaction recovery.

6. What is written in Redo Log Files?

The redo log makes it possible to replay SQL statements.


Before Oracle changes data in a datafile, it writes these changes to the redo log. If something happens to one of the datafiles, a backed up datafile can be restored and the redo written since then can be replayed, which brings the datafile back to the state it was in before it became unavailable.

7. How do you control number of Datafiles one can have in an Oracle database?

The number of data files in an Oracle database is controlled by the initialization parameter DB_FILES.

Setting this value too high can cause DBWR issues. Before 9i, the maximum number of datafiles in an Oracle database was 1022; from 9i onward this limit applies to the number of data files in a tablespace.
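
A minimal sketch for checking and raising DB_FILES (it is a static parameter, so the change only takes effect after a restart; the value 200 is just an example):

SHOW PARAMETER db_files
ALTER SYSTEM SET db_files = 200 SCOPE = SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;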

8. How many Maximum Datafiles can there be in an Oracle Database?

No limit from 10g

9. What is a Tablespace?

A tablespace is a logical storage unit of an Oracle database. It is a logical unit because a tablespace is not visible in the
file system of the computer on which the database is present.
SYSTEM is the default tablespace of an Oracle database that stores data dictionary tables and indexes. The tablespace
builds a bridge between the Oracle database and the file system in which the table's or the index's data is stored.

One can use multiple tablespaces that offer flexibility in performing database operations, such as separation of one
application's data from another's data, storing different tablespace data files on separate disk drives to avoid input-
output contentions, maintaining backups for individual tablespaces, and many more.

10. What is the purpose of Redo Log files?

11. Which default Database roles are created when you create a Database?

CONNECT, RESOURCE, DBA, PUBLIC, XDBADMIN, OEM_ADVISOR, IMP_FULL_DATABASE, EXP_FULL_DATABASE

12. What is a Checkpoint?

A checkpoint performs the following three operations:

1. Every dirty block in the buffer cache is written to the data files; that is, it synchronizes the data blocks in
the buffer cache with the datafiles on disk.
It is DBWR that writes the modified database blocks back to the datafiles.
2. The latest SCN is written (updated) into the datafile headers.
3. The latest SCN is also written to the controlfiles.

The update of the datafile headers and the control files is done by CKPT (or by LGWR when CKPT is not enabled). As of
version 8.0, CKPT is enabled by default.
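
A checkpoint can also be forced manually, for example before a planned shutdown or backup:

ALTER SYSTEM CHECKPOINT;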

13. Which Process reads data from Datafiles?

Reading in Oracle is handled entirely by server processes. All read and write requests from client processes first go to
a server process.

Reading the data

• The server process first checks the buffer cache for the presence of the data.
• If it is not found there, the server process copies the data from the datafile into the buffer cache.
• It then sends the data to the client.

14. Which Process writes data in Datafiles?

DBWR process

15. Can you make a Datafile auto extendible. If yes, how?

ALTER DATABASE DATAFILE '/u01/oradata/orcl/users01.dbf' AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED;

(AUTOEXTEND is a datafile attribute; the path above is only an example. ALTER TABLESPACE ... AUTOEXTEND ON applies only to bigfile tablespaces, available from 10g.)

16. What is a Shared Pool?

The Oracle shared pool contains Oracle's library cache, which is responsible for collecting, parsing, interpreting, and
executing all of the SQL statements that go against the Oracle database. Hence, the shared pool is a key component,
so it's necessary for the Oracle database administrator to check for shared pool contention.
The shared pool is like a buffer for SQL statements. Oracle's parsing algorithm ensures that identical SQL statements
do not have to be parsed each time they're executed. The shared pool is used to store SQL statements, and it includes
the following components:

* The library cache
* The dictionary cache
* Control structures

17. What is kept in the Database Buffer Cache?

Database Buffer cache is one of the most important components of System Global Area (SGA). Database Buffer Cache
is the place where data blocks are copied from datafiles to perform SQL operations. Buffer Cache is shared memory
structure and it is concurrently accessed by all server processes.

18. How many maximum Redo Logfiles one can have in a Database?

specified in MAXLOGFILES during database creation

19. What is difference between PFile and SPFile?

When an Oracle Instance is started, the characteristics of the Instance are established by parameters specified within the
initialization parameter file. These initialization parameters are either stored in a PFILE or SPFILE. SPFILEs are available in
Oracle 9i and above. All prior releases of Oracle are using PFILEs.

SPFILEs provide the following advantages over PFILEs:

o An SPFILE can be backed-up with RMAN (RMAN cannot backup PFILEs)


o Reduce human errors. The SPFILE is maintained by the server. Parameters are checked before changes are
accepted.
o Eliminate configuration problems (no need to have a local PFILE if you want to start Oracle from a remote
machine)
o Easy to find - stored in a central location

A PFILE is a static, client-side text file that must be updated with a standard text editor like "notepad" or "vi". This file
normally resides on the server; however, you need a local copy if you want to start Oracle from a remote machine. DBAs
commonly refer to this file as the INIT.ORA file.

An SPFILE (Server Parameter File), on the other hand, is a persistent server-side binary file that can only be modified with
the "ALTER SYSTEM SET" command. This means you no longer need a local copy of the pfile to start the database from a
remote machine. Editing an SPFILE will corrupt it, and you will not be able to start your database anymore.
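
Converting between the two is straightforward; a minimal sketch (the file paths are examples):

CREATE SPFILE FROM PFILE = '/u01/app/oracle/admin/orcl/pfile/init.ora';   -- run as SYSDBA
CREATE PFILE = '/tmp/initorcl.ora' FROM SPFILE;                           -- readable copy, e.g. as a backup
ALTER SYSTEM SET processes = 300 SCOPE = SPFILE;                          -- change a parameter in the SPFILE only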

20. What is the PGA_AGGREGATE_TARGET parameter? (Should be 20% of SGA)

The Program Global Area (PGA) is a memory buffer that contains data and control information for a server process. A
PGA is created by Oracle when a server process is started. The PGA (Program or Process Global Area) is a memory
area (RAM) that stores data and control information for a single process. For example, it typically contains a sort area,
hash area, session cursor cache, etc.

 Automatic PGA Memory Management may be used in place of setting the sort_area_size, hash_area_size,
sort_area_retained_size and other related memory management parameters.

PGA_AGGREGATE_TARGET specifies the target aggregate PGA memory available to all server processes attached to
the instance. You must set this parameter to enable the automatic sizing of SQL working areas used by memory-
intensive SQL operators such as sort, group-by, hash-join, bitmap merge, and bitmap create.

Oracle uses this parameter as a target for PGA memory. Use this parameter to determine the optimal size of each work
area allocated in AUTO mode (in other words, when WORKAREA_SIZE_POLICY is set to AUTO).
Oracle attempts to keep the amount of private memory below the target specified by this parameter by adapting the size
of the work areas to private memory. When you increase the value of this parameter, you indirectly increase the memory
allotted to work areas. Consequently, more memory-intensive operations are able to run fully in memory and fewer of
them spill to disk.

When setting this parameter, you should examine the total memory on your system that is available to the Oracle
instance and subtract the SGA. You can assign the remaining memory to PGA_AGGREGATE_TARGET.
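
A minimal sketch of enabling automatic PGA memory management (the 2G figure is only an example):

ALTER SYSTEM SET workarea_size_policy = 'AUTO' SCOPE = BOTH;
ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE = BOTH;
-- check how the current target is performing
SELECT * FROM v$pgastat;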

21. Large Pool is used for what?

Oracle Large Pool is an optional memory component of the oracle database SGA. This area is used for providing large
memory allocations in many situations that arise during the operations of an oracle database instance.

1. Session memory for the shared server and the Oracle XA Interface when distributed transactions are involved
2. I/O Server Processes
3. Parallel Query Buffers
4. Oracle Backup and Restore Operations using RMAN

Large Pool plays an important role in Oracle Database Tuning since the allocation of the memory for the above
components otherwise is done from the shared pool. Also due to the large memory requirements for I/O and Rman
operations, the large pool is better able to satisfy the requirements instead of depending on the Shared Pool Area.

22. What is PCT Increase setting?

PCTINCREASE refers to the percentage by which each next extent (beginning with the third extent) will grow. The size
of each subsequent extent is equal to the size of the previous extent plus this percentage increase. A PCTINCREASE of 0
or 100 gives you nice round extent sizes that can easily be reused.

23. What is PCTFREE and PCTUSED Setting?

These parameters are ignored with Locally Managed tablespaces

PCTFREE is a block storage parameter used to specify how much space should be left in a database block for future
updates. For example, for PCTFREE=10, Oracle will keep on adding new rows to a block until it is 90% full. This leaves
10% for future updates

PCTUSED is a block storage parameter used to specify when Oracle should consider a database block empty
enough to be added to the freelist. Oracle will only insert new rows into blocks that are on the freelist. For
example, if PCTUSED=40, Oracle will not add new rows to the block until enough rows are deleted that the used
space in the block falls below 40%.
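
A minimal sketch showing both settings (table and column names are hypothetical; the values apply only in manually managed segments, as noted above):

CREATE TABLE emp_history (
  emp_id   NUMBER,
  changed  DATE,
  details  VARCHAR2(400)
)
PCTFREE 10
PCTUSED 40;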

24. What is Row Migration and Row Chaining?

Row migration occurs when an update to a row would make it no longer fit in its block (together with all of the other
data that currently exists there). A migration means that the entire row moves and only a "forwarding address" is left
behind: the original block keeps just the rowid of the new block, and the entire row is moved there.

Row chaining occurs when a row is too large to fit into a single database block. For example, if you use a 4KB blocksize
for your database, and you need to insert a row of 8KB into it, Oracle will use 3 blocks and store the row in pieces.
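
Chained and migrated rows can be detected with ANALYZE; a sketch (the CHAINED_ROWS table is created by the utlchain.sql script shipped with Oracle, and the table name emp is hypothetical):

ANALYZE TABLE emp LIST CHAINED ROWS INTO chained_rows;
SELECT owner_name, table_name, head_rowid FROM chained_rows;
-- CHAIN_CNT in DBA_TABLES is also populated by ANALYZE ... COMPUTE STATISTICS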

25. What is 01555 - Snapshot Too Old error and how do you avoid it?

26. What is a Locally Managed Tablespace?

Using LMT, each tablespace manages its own free and used space within a bitmap structure stored in one of the
tablespace's data files.
 Dictionary contention is reduced
 Space wastage reduced
 No rollback generated
 Fragmentation reduced

CREATE TABLESPACE ts2 DATAFILE '/oradata/ts2_01.dbf' SIZE 50M EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

27. Can you audit SELECT statements?

Yes. SELECT statements can be audited with standard auditing (AUDIT SELECT ON <table>) and, at finer granularity, with fine-grained auditing (DBMS_FGA).

28. What does DBMS_FGA package do?

Fine-Grained Auditing: DBMS_FGA lets you define audit policies on specific tables and columns (for example with DBMS_FGA.ADD_POLICY), recording the statements that access them.

29. What is Cost Based Optimization?

Cost Based Optimizer (CBO) - This method is used if internal statistics are present. The CBO checks several possible
execution plans and selects the one with the lowest cost, where cost relates to system resources.

30. How often should you collect statistics for a table?

31. How do you collect statistics for a table, schema and Database?
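
For question 31, a minimal sketch using the DBMS_STATS package (the schema and table names are examples):

EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP', cascade => TRUE);
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT');
EXEC DBMS_STATS.GATHER_DATABASE_STATS;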

32. Can you make collection of Statistics for tables automatic?

By default Oracle 10g automatically gathers optimizer statistics using a scheduled job called GATHER_STATS_JOB. By
default this job runs within a maintenance window between 10 P.M. and 6 A.M. on week nights and all day on weekends.
The job calls the DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC internal procedure which gathers
statistics for tables with either empty or stale statistics, similar to the DBMS_STATS.GATHER_DATABASE_STATS
procedure using the GATHER AUTO option. The main difference is that the internal job prioritizes the work such that
tables most urgently requiring statistics updates are processed first.

Dynamic sampling enables the server to improve performance by:

 Estimating single-table predicate selectivities where available statistics are missing or may lead to bad
estimates.
 Estimating statistics for tables and indexes with missing statistics.
 Estimating statistics for tables and indexes with out-of-date statistics.

Dynamic sampling is controlled by the OPTIMIZER_DYNAMIC_SAMPLING parameter, which accepts values from "0"
(off) to "10" (aggressive sampling) with a default value of "2". At compile time Oracle determines whether dynamic sampling
would improve query performance; if so, it issues recursive statements to estimate the necessary statistics. Dynamic
sampling can be beneficial when:

 The sample time is small compared to the overall query execution time.
 Dynamic sampling results in a better performing query.
 The query may be executed multiple times.

33. On which columns you should create Indexes?

Columns with a high cardinality of data, such as SSN, date of birth, ID, or sequence numbers.

34. What type of Indexes are available in Oracle?

35. What is B-Tree Index?


36. A table has only a few rows; should you create indexes on this table?

37. A column has many repeated values; which type of index should you create on this column, if you have to?
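
For question 37: a column with many repeated values has low cardinality, so a bitmap index is the usual choice (best suited to reporting workloads rather than heavy concurrent DML); a sketch with hypothetical names:

CREATE BITMAP INDEX emp_gender_bix ON emp (gender);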

38. When should you rebuild indexes?

Rebuild the index when these conditions are true:

- Deleted entries represent 20% or more of the current entries (the data in the index becomes sparse).
- The index depth is more than 4 levels (the BLEVEL column in DBA_INDEXES is greater than 4).

39. Can you build indexes online?

Online Index Rebuild Features:


+ ALTER INDEX REBUILD ONLINE;
+ DML is allowed on the base table
+ It is comparatively slow
+ The new index is built by reading the base table
+ The base table is locked in shared mode and DDL is not possible
+ An intermediate (journal) table stores the changes made to the base table during the rebuild, and is used to bring the new index up to date afterwards

Offline Index Rebuild Features:


+ ALTER INDEX REBUILD; (default)
+ Does not read the base table; the base table is exclusively locked
+ The new index is created from the old index
+ No DML or DDL is possible on the base table
+ Comparatively faster

40. Can you see the execution plan of a statement?

Explain plan for xxxxxx
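
A fuller sketch (the query is an example; DBMS_XPLAN is available from 9i onwards):

EXPLAIN PLAN FOR
  SELECT * FROM emp WHERE deptno = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);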

41. A table is created with the following storage setting:

storage (initial 200k
         next 200k
         minextents 2
         maxextents 100
         pctincrease 40)

What will be the size of the 4th extent?
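
A worked calculation, following the PCTINCREASE rule from question 22 (Oracle rounds actual sizes up to whole database blocks, so these figures are approximate):

Extent 1 = 200 KB (INITIAL)
Extent 2 = 200 KB (NEXT)
Extent 3 = 200 KB * 1.40 = 280 KB
Extent 4 = 280 KB * 1.40 = 392 KB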

42. What is DB Buffer Cache Advisor?

43. What is STATSPACK tool?

44. Can you change SHARED_POOL_SIZE online?

Yes: ALTER SYSTEM SET SHARED_POOL_SIZE=xxxxxx SCOPE=BOTH; (with SCOPE=SPFILE alone the change would only take effect at the next restart).

45. Can you Redefine a table Online?

46. Can you assign Priority to users?

Using the Database Resource Manager: the DBMS_RESOURCE_MANAGER package (for example the SET_CONSUMER_GROUP_MAPPING procedure) assigns users to resource consumer groups with different priorities.

47. You want users to change their passwords every 2 months. How do you enforce this?

Using a profile (PASSWORD_LIFE_TIME).
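
A minimal sketch (the profile and user names are hypothetical; 60 days approximates two months):

CREATE PROFILE pwd_60_days LIMIT
  PASSWORD_LIFE_TIME 60;

ALTER USER scott PROFILE pwd_60_days;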
48. How do you delete duplicate rows in a table?
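
One common approach, shown as a sketch (the table t and its key columns col1 and col2 are hypothetical):

DELETE FROM t
 WHERE rowid NOT IN (SELECT MIN(rowid)
                       FROM t
                      GROUP BY col1, col2);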

49. What is Automatic Management of Segment Space setting?

50. What is the difference between DELETE and TRUNCATE statements?

After performing a DELETE operation you need to COMMIT or ROLLBACK the transaction to make the change
permanent or to undo it.

TRUNCATE removes all rows from a table. The operation cannot be rolled back and no triggers will be fired. As such,
TRUNCATE is faster and doesn't use as much undo space as a DELETE.

51. What is COMPRESS and CONSISTENT setting in EXPORT utility?

compress – When “Y”, export will mark the table to be loaded as one extent for the import utility. If “N”, the current
storage options defined for the table will be used. Although this option is only implemented on import, it can only be
specified on export.

consistent – [N] Specifies the set transaction read only statement for export, ensuring data consistency. This option
should be set to “Y” if activity is anticipated while the exp command is executing.

52. What is the difference between Direct Path and Convention Path loading?

SQL*Loader provides two methods for loading data:

• Conventional Path Load


• Direct Path Load

A conventional path load executes SQL INSERT statement(s) to populate table(s) in an Oracle database. A direct path
load eliminates much of the Oracle database overhead by formatting Oracle data blocks and writing the data blocks
directly to the database files. A direct load, therefore, does not compete with other users for database resources so it
can usually load data at near disk speed. Certain considerations, inherent to this method of access to database files,
such as restrictions, security and backup implications, are discussed in this chapter.

Conventional path load (the default) uses the SQL INSERT statement and a bind array buffer to load data into database
tables. This method is used by all Oracle tools and applications

Instead of filling a bind array buffer and passing it to Oracle with a SQL INSERT command, a direct path load parses the
input data according to the description given in the loader control file, converts the data for each input field to its
corresponding Oracle column datatype, and builds a column array structure (an array of <length, data>
pairs).SQL*Loader then uses the column array structure to format Oracle data blocks and build index keys. The newly
formatted database blocks are then written directly to the database (multiple blocks per I/O request using asynchronous
writes if the host platform supports asynchronous I/O).

When loading a partitioned or subpartitioned table, SQL*Loader partitions the rows and maintains indexes (which can
also be partitioned). Note that a direct path load of a partitioned or subpartitioned table can be quite resource intensive
for tables with many partitions or subpartitions.

53. Can you disable and enable Primary key?

Alter table employee disable constraint employee_pk;

Alter table employee enable constraint employee_pk using index;

54. What is an Index Organized Table?

Index Organized Tables are tables that, unlike heap tables, are organized like B*Tree indexes.
CREATE TABLE admin_docindex(
token char(20),
doc_id NUMBER,
token_frequency NUMBER,
token_offsets VARCHAR2(512),
CONSTRAINT pk_admin_docindex PRIMARY KEY (token, doc_id))
ORGANIZATION INDEX
TABLESPACE admin_tbs
PCTTHRESHOLD 20
OVERFLOW TABLESPACE admin_tbs2;

55. What is a Global Index and Local Index?

A global index covers the entire partitioned table, spanning all of its partitions.

A local index is a separate index for each partition. Local indexes are generally preferred over global indexes for manageability and performance.

56. What is the difference between Range Partitioning and Hash Partitioning?

Range partitioning maps data to partitions based on ranges of partition key values that you establish for each partition.

Hash partitioning maps data to partitions based on a hashing algorithm, evenly distributing data between the
partitions. This is typically used where ranges aren't appropriate, e.g. customer number or product ID.
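
A minimal sketch of both methods (table, column, and partition names are hypothetical):

-- range partitioning on a date column
CREATE TABLE sales_range (
  sale_id   NUMBER,
  sale_date DATE
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2009 VALUES LESS THAN (TO_DATE('01-01-2010','DD-MM-YYYY')),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);

-- hash partitioning on a customer key
CREATE TABLE sales_hash (
  sale_id NUMBER,
  cust_id NUMBER
)
PARTITION BY HASH (cust_id) PARTITIONS 4;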

57. What is difference between Multithreaded/Shared Server and Dedicated Server?

58. Can you import objects from Oracle ver. 7.3 to 9i?

59. How do you move tables from one tablespace to another tablespace?

ALTER TABLE x MOVE TABLESPACE y; (the table's indexes become UNUSABLE after the move and must be rebuilt)

60. How do you see how much space is used and free in a tablespace?
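
For question 60, a minimal sketch using the dictionary views (sizes in MB):

SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS total_mb
  FROM dba_data_files
 GROUP BY tablespace_name;

SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS free_mb
  FROM dba_free_space
 GROUP BY tablespace_name;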

61. How can you see the current DDL statements in the database?

Using Log Miner

62. What is block change tracking?

Used in RMAN for doing incremental backups.

Block change tracking causes the changed database blocks to be flagged in a file. As data blocks change, the
Change Tracking Writer (CTWR) background process tracks the changed blocks in a private area of memory. When a
commit is issued against the data block, the block change tracking information is copied to a shared area in the Large Pool
called the CTWR buffer. During a checkpoint, the CTWR process writes the information from the CTWR buffer to
the change-tracking file. To achieve this, block change tracking must be enabled in the database:
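
A minimal sketch (the file path is an example):

ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/oradata/orcl/change_trk.f';

SELECT status, filename FROM v$block_change_tracking;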

63. What is a wait event?

A wait event is recorded whenever a session has to wait for a resource or for an operation to complete before it can continue. Two common examples:

db file scattered read - The process has issued an I/O request to read a series of contiguous blocks from a data file into
the buffer cache, and is waiting for the operation to complete. This typically happens during a full table scan or full index
scan.

db file sequential read - The process has issued an I/O request to read one block from a data file into the buffer cache,
and is waiting for the operation to complete. This typically happens during an index lookup or a fetch from a table by
ROWID when the required data block is not already in memory. Do not be misled by the confusing name of this wait
event!
1. Which types of backups can you take in Oracle?

Export, cold backup, hot backup and RMAN

2. A database is running in NOARCHIVELOG mode; which type of backup can you take?

Cold

3. Can you take partial backups if the Database is running in NOARCHIVELOG mode?

NO

4. Can you take online backups if the database is running in NOARCHIVELOG mode?

NO

5. How do you bring the database in ARCHIVELOG mode from NOARCHIVELOG mode?

SQL> select log_mode from sys.v$database;

Make the appropriate entries in the init.ora:

log_archive_dest_1='location=/u02/oradata/cuddle/archive'

log_archive_start=TRUE

SQL> shutdown immediate;

SQL> startup mount;

SQL> alter database archivelog;

SQL> alter database open;

6. You cannot shut down the database even for a few minutes; in which mode should you run
the database?

Archive log mode

7. Where should you place archive logfiles: on the same disk where the DB is, or on another disk?

Another disk

8. Can you take an online backup of a control file? If yes, how?

ALTER DATABASE BACKUP CONTROLFILE TO TRACE; (or to a binary copy: ALTER DATABASE BACKUP CONTROLFILE TO '<filename>';)

9. What is a Logical Backup?

Export / import

10. Should you take the backup of Logfiles if the database is running in ARCHIVELOG mode?
YES

11. Why do you take tablespaces in Backup mode?

Putting a tablespace in backup mode (ALTER TABLESPACE ... BEGIN BACKUP) is required for a user-managed hot backup: it freezes the datafile header checkpoint so the datafiles can be copied with operating system tools while the database remains open.
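
A minimal user-managed hot backup sketch (the tablespace name and path are examples; the database must be in ARCHIVELOG mode):

ALTER TABLESPACE users BEGIN BACKUP;
-- copy the tablespace's datafiles with OS tools, e.g. cp /u01/oradata/orcl/users01.dbf /backup/
ALTER TABLESPACE users END BACKUP;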

12. What is the advantage of RMAN utility?

Recovery Manager (or RMAN) is an Oracle provided utility for backing-up, restoring and recovering Oracle Databases.
RMAN ships with the database server and doesn't require a separate installation.

1. Ability to perform INCREMENTAL backups

2. Ability to Recover one block of datafile

3. Ability to automatically backup CONTROLFILE and SPFILE

4. Ability to automatically delete older archived redo log files once they have been backed up.

5. Ability to perform backup and restore with parallelism.

6. Ability to report the files needed for the backup. Recovery Catalog

7. Ability to RESTART the failed backup, without starting from beginning.

8. Much faster when compared to other TRADITIONAL backup strategies.

9. Compression of unused blocks

13. How RMAN improves backup time?

14. Can you take Offline backups using RMAN?

15. How do you see information about backups in RMAN?

Using RMAN command “List backup”

16. What is a Recovery Catalog?

17. Should you place Recovery Catalog in the Same DB?

NO, separate.

18. Can you use RMAN without Recovery catalog?

Yes, by using the control file

19. Can you take Image Backups using RMAN?

20. Can you use Backupsets created by RMAN with any other utility?

no

21. Where RMAN keeps information of backups if you are using RMAN without Catalog?

In the control file


22. You have taken a manual backup of a datafile using the OS. How will RMAN know about it?
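
RMAN has to be told about a user-managed copy explicitly; a sketch (the path is an example):

RMAN> CATALOG DATAFILECOPY '/backup/users01.dbf';
RMAN> CATALOG START WITH '/backup/';        -- catalog everything found under a directory (10g and later)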

23. You want to retain only last 3 backups of datafiles. How do you go for it in RMAN?

Use the CONFIGURE RETENTION POLICY command. The REPORT OBSOLETE and DELETE OBSOLETE commands
can be executed periodically to view obsolete files and to delete them, respectively.

The retention policy is continuous. As the data file, control file, and archived redo log backups are produced over time,
RMAN keeps track of them and decides which to retain and which to mark as obsolete. RMAN does not automatically
delete the backups or copies.

CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 5 DAYS;

24. Which is more efficient Incremental Backups using RMAN or Incremental Export?

25. Can you start and shutdown DB using RMAN?

YES

26. How do you recover from the loss of datafile if the DB is running in NOARCHIVELOG mode?

27. You lose one datafile and it does not contain important objects. The important objects are in other datafiles which are
intact. How do you proceed in this situation?

28. You lost some datafiles, you don't have any full backup, and the database was running in NOARCHIVELOG mode. What
can you do now?

29. How do you recover from the loss of datafile if the DB is running in ARCHIVELOG mode?

Recover datafile '/xxxx/xxxx/xxx.dbf';

30. You lose one datafile and the DB is running in ARCHIVELOG mode. You have a full database backup that is 1 week old and a partial
backup of this datafile which is just 1 day old. From which backup should you restore this file?

31. You lose the controlfile. How do you recover from this?

A controlfile trace backup taken earlier with ALTER DATABASE BACKUP CONTROLFILE TO TRACE; contains a CREATE CONTROLFILE script; run it to recreate the controlfile, then open the database:

ALTER DATABASE OPEN RESETLOGS;

32. The current logfile gets damaged. What can you do now?

33. What is a Complete Recovery?

34. What is Cancel Based, Time based and Change Based Recovery?

These are forms of incomplete (point-in-time) recovery.

35. Some user has accidentally dropped one table and you realize this after two days. Can you recover this table if the DB is
running in ARCHIVELOG mode?

36. Do you have to restore Datafiles manually from backups if you are doing recovery using RMAN?

no

37. A database has been running in ARCHIVELOG mode for the last month. A datafile was added to the database last week and many
objects were created in it. After one week this datafile gets damaged before you can take any backup. Can you
recover this datafile when you don't have any backups?

38. How do you recover from the loss of a controlfile if you have backup of controlfile?

39. Only some blocks are damaged in a datafile. Can you just recover these blocks if you are using RMAN?

Yes. RMAN supports block media recovery (e.g. RMAN> BLOCKRECOVER DATAFILE 7 BLOCK 3;), so only the damaged blocks need to be restored and recovered. The broader restore-and-recover scenarios, for comparison:

Restore and recover the whole database


RMAN> STARTUP FORCE MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;

Restore and recover a tablespace


RMAN> SQL 'ALTER TABLESPACE users OFFLINE';
RMAN> RESTORE TABLESPACE users;
RMAN> RECOVER TABLESPACE users;
RMAN> SQL 'ALTER TABLESPACE users ONLINE';

 Restore and recover a datafile


RMAN> SQL 'ALTER DATABASE DATAFILE 64 OFFLINE';
RMAN> RESTORE DATAFILE 64;
RMAN> RECOVER DATAFILE 64;
RMAN> SQL 'ALTER DATABASE DATAFILE 64 ONLINE';

 Restore a Control File

STARTUP NOMOUNT;
RUN
{
ALLOCATE CHANNEL c1 DEVICE TYPE sbt;
RESTORE CONTROLFILE;
ALTER DATABASE MOUNT;
RESTORE DATABASE;
}

 RMAN> RESTORE DATABASE VALIDATE;

40. Some datafiles were on a secondary disk; that disk has become damaged and it will take some days to get a new
disk. How will you recover from this situation?

41. Have you faced any emergency situation? Tell us how you resolved it?

42. You lost the parameter file accidentally and you don't have any backup. How will you recreate a new parameter file
with the parameters set to their previous values?

1. How do you see how many instances are running?

ps -ef | grep pmon

2. How do you automate starting and shutting down of databases in Unix?

By setting the restart flag to Y (or N) for each database in the oratab file and calling the dbstart and dbshut scripts from the operating system startup and shutdown scripts.

3. You have written a script to take backups. How do you make it run automatically every week?
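
One common approach is cron; a sketch (the script path, day, and time are examples):

# crontab -e as the oracle user: run the backup script every Sunday at 02:00
0 2 * * 0 /home/oracle/scripts/weekly_backup.sh > /home/oracle/scripts/weekly_backup.log 2>&1
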
4. What is OERR utility?

A utility provided by Oracle on Unix platforms to look up the text of Oracle error messages, e.g.:

oerr ora 12544

5. How do you see Virtual Memory Statistics in Linux?

6. How do you see how much hard disk space is free in Linux?

7. What is SAR?

8. What is SHMMAX?

9. Swap partition must be how much the size of RAM?

10. What is DISM in Solaris?

11. How do you see how many memory segments are acquired by Oracle Instances?

12. How do you see which segment belongs to which database instances?

13. What is VMSTAT?

14. How do you set Kernel Parameters in Red Hat Linux, AIX and Solaris?

15. How do you remove Memory segments?

16. What is the difference between Soft Link and Hard Link?

17. What is stored in oratab file?

All the databases running on the server and their oracle homes

18. How do you see how many processes are running in Unix?

19. How do you kill a process in Unix?

kill -9 <pid>

20. Can you change priority of a Process in Unix?

WAIT EVENTS

When Oracle executes an SQL statement, it is not constantly executing; sometimes it has to wait for a specific event to happen
before it can proceed.
For example, if Oracle (or the SQL statement) wants to modify data, and the corresponding database block is not currently in
the SGA, Oracle waits for this block to be available for modification.
All possible wait events can be found in v$event_name. In Oracle 10g R1, there are some 806 different wait events.
What Oracle waits for and how long it has totally waited for these events can be monitored through the following views:

• v$session_event
• v$session_wait
• v$system_event
Important events
Important events are:

• buffer busy waits


• db file scattered read
• db file sequential read
• free buffer waits
• latch free
• log buffer space
• log file sync
• enqueue
• SQL*Net more data from client
• SQL*Net more data to client
• write complete waits

buffer busy waits

If two processes try (almost) simultaneously to read the same block and the block is not resident in the buffer cache, one process
allocates a buffer in the buffer cache, locks it, and reads the block into it. The other process is blocked until the block
has been read. This wait is referred to as a buffer busy wait.

db file scattered read

A process reads multiple blocks (mostly as part of a full table scan or an index fast full scan). It can also indicate a multiblock
read when the process reads parts of a sort segment.

db file sequential read

In most cases, this event means that a foreground process reads a single block (because it reads a block from an index or
because it reads a block by rowid).

direct path read

enqueue

The enqueue wait event can be queried through v$enqueue_stat.


See also enqueue types in x$ksqst

free buffer waits

See also optimal size of block buffer.

latch free

log buffer space

This wait event indicates that the log buffer is too small.

log file sync

SQL*Net more data from client

SQL*Net more data to dblink


write complete waits

Wait classes
Wait events can be categorized by wait classes. These classes are exposed through v$session_wait_class.
The following wait classes exist:

Administrative

Application

Cluster

Concurrency

Configuration

Commit

Idle Waits

Network

Other

System I/O

Scheduler

User I/O

Parameters
The parameters P1, P2 and P3 in v$session_wait depend on the wait event.
P1 sometimes refers to the datafile number.
If this number is greater than db_files, it refers to a temp file.

The name of the datafile for a given number can be retrieved from v$datafile.

Oracle Latch

What is a Latch?

A mechanism to protect shared data structures in the System Global Area.


For Example: latches protect the list of users currently accessing the database and protect the data structures describing the
blocks in the buffer cache.

A server or background process acquires a latch for a very short time while manipulating or looking at one of these structures.

During database performance analysis we will often see latch wait events. So what is a latch event, and what types of latch events are there?

A latch is a low-level internal lock used by Oracle to protect memory structures.

The latch free event is updated when a server process attempts to get a latch, and the latch is unavailable on the first attempt.
The most common latch wait events are:

1. Latch: library cache or Latch: shared pool

Possible causes for both latch events:

1. Lack of statement reuse


2. Statements not using bind variables
3. Insufficient size of application cursor cache
4. Cursors closed explicitly after each execution
5. Frequent logon/logoffs
6. Underlying object structure being modified (for example truncate)
7. Shared pool too small

Possible suggestions to avoid both latch events:

1. Increase SHARED_POOL_SIZE parameter value.


2. Modify the front-end application to use bind variables
3. Use CURSOR_SHARING='force' (as a temporary measure)

2. Latch: cache buffers lru chain

Possible Causes

1. Inefficient SQL that accesses incorrect indexes iteratively (large index range scans) or many full table scans.
2. DBWR not keeping up with the dirty workload; hence, foreground process spends longer holding the latch looking for a free
buffer
3. Cache may be too small

Possible Suggestion

1. Look for: Statements with very high logical I/O or physical I/O, using unselective indexes
2. Increase DB_CACHE_SIZE parameter value.
3. The cache buffers lru chain latches protect the lists of buffers in the cache. When adding, moving, or removing a buffer from a
list, a latch must be obtained.

For symmetric multiprocessor (SMP) systems, Oracle automatically sets the number of LRU latches to a value equal to one half
the number of CPUs on the system. For non-SMP systems, one LRU latch is sufficient.

Contention for the LRU latch can impede performance on SMP machines with a large number of CPUs. LRU latch contention is
detected by querying V$LATCH, V$SESSION_EVENT, and V$SYSTEM_EVENT. To avoid contention, consider tuning the
application, bypassing the buffer cache for DSS jobs, or redesigning the application.

3 - Latch: cache buffers chains

Possible Causes

1. Repeated access to a block (or small number of blocks), known as a hot block
2. From AskTom:

Contention for these latches can be caused by:

- Very long buffer chains.


- very very heavy access to the same blocks.

Possible Suggestion

1. From AskTom:
When I see this, I try to see what SQL the waiters are trying to execute. Many times,
what I find, is they are all running the same query for the same data (hot blocks). If
you find such a query -- typically it indicates a query that might need to be tuned (to
access less blocks hence avoiding the collisions).

If it is long buffer chains, you can use multiple buffer pools to spread things out. You
can use DB_BLOCK_LRU_LATCHES to increase the number of latches. You can use both
together.

The cache buffers chains latches are used to protect a buffer list in the buffer cache. These latches are used when searching
for, adding, or removing a buffer from the buffer cache. Contention on this latch usually means that there is a block that is
greatly contended for (known as a hot block).

To identify the heavily accessed buffer chain, and hence the contended for block, look at latch statistics for the cache buffers
chains latches using the view V$LATCH_CHILDREN. If there is a specific cache buffers chains child latch that has many more
GETS, MISSES, and SLEEPS when compared with the other child latches, then this is the contended for child latch.

This latch has a memory address, identified by the ADDR column. Use the value in the ADDR column joined with the X$BH
table to identify the blocks protected by this latch. For example, given the address (V$LATCH_CHILDREN.ADDR) of a heavily
contended latch, this queries the file and block numbers:

SELECT OBJ data_object_id, FILE#, DBABLK, CLASS, STATE, TCH
  FROM X$BH
 WHERE HLADDR = 'address of latch'
 ORDER BY TCH;

X$BH.TCH is a touch count for the buffer. A high value for X$BH.TCH indicates a hot block.

Many blocks are protected by each latch. One of these buffers will probably be the hot block. Any block with a high TCH value is
a potential hot block. Perform this query a number of times, and identify the block that consistently appears in the output. After
you have identified the hot block, query DBA_EXTENTS using the file number and block number, to identify the segment.

After you have identified the hot block, you can identify the segment it belongs to with the following query:

SELECT OBJECT_NAME, SUBOBJECT_NAME
  FROM DBA_OBJECTS
 WHERE DATA_OBJECT_ID = &obj;

In the query, &obj is the value of the OBJ column in the previous query on X$BH.

4. Latch: row cache objects

The row cache objects latches protect the data dictionary.


Suggestion: Increase SHARED_POOL_SIZE parameter to avoid this latch.
