Buffer Cache Hit Ratio: Useful DBA Monitoring Scripts
select round((1-(pr.value/(bg.value+cg.value)))*100,2)
from v$sysstat pr, v$sysstat bg, v$sysstat cg
where pr.name='physical reads'
and bg.name='db block gets'
and cg.name='consistent gets'
Sorts in Memory
select round((mem.value/(mem.value+dsk.value))*100,2)
from v$sysstat mem, v$sysstat dsk
where mem.name='sorts (memory)'
and dsk.name='sorts (disk)'
Recovery Catalog
The recovery catalog is an optional repository of information about your target databases that RMAN uses and maintains.
We should place the catalog database on a different server than the target database. If we fail to do this, we jeopardize backup and recovery options, because
we make it possible to lose both the catalog and the target database.
The catalog database should be created with the latest version of Oracle in your production environment. This helps to minimize compatibility issues and
complexity when you start to back up your target databases.
Creating a catalog
The examples in this section provide details for creating a catalog database and registering a target database within the catalog. These examples assume that
your catalog database is on a different host than your target database.
Now that we have a tablespace to store our schema objects, we can create the schema
Before we create the catalog objects, we need to grant special privileges to the new schema. These privileges, granted through the
RECOVERY_CATALOG_OWNER role, let the schema manage its catalog objects.
We can now create the catalog objects within our new schema. To perform this step, invoke RMAN, connect to the newly created catalog schema, and
issue the create catalog command. If we don't specify a tablespace with the create catalog command, the catalog objects are created in the default tablespace
assigned to the catalog owner.
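A minimal sketch of these steps (the schema name rman, its password, and the tablespace rman_ts are illustrative, not prescribed by this post):

```sql
-- Create the catalog owner with quota on the catalog tablespace
SQL> create user rman identified by rman
  2  default tablespace rman_ts
  3  quota unlimited on rman_ts;

-- Grant the role that lets the schema manage its catalog objects
SQL> grant recovery_catalog_owner to rman;

-- Connect to the catalog schema and create the catalog objects
$ rman catalog rman/rman
RMAN> create catalog tablespace rman_ts;
```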
RMAN> exit;
After creating a catalog, the next logical step is to register a target database. We won’t be able to back up the target with the catalog unless the target database
is registered.
Invoke RMAN, connect to both the target and the catalog and issue the register database command.
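A sketch of that step (the catalog connect string and TNS alias are illustrative):

```sql
$ rman target / catalog rman/rman@catdb
RMAN> register database;
```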
RMAN> exit;
Configure command
We can configure persistent settings in the RMAN environment. The configuration is done once, and RMAN uses it for all subsequent
operations.
To display the preconfigured settings, type the command SHOW ALL.
There are various parameters that can be used to configure RMAN operations to suit our needs.
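For example, a few commonly used persistent settings (the values shown are illustrative):

```sql
RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
```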
There are a few things we need to have in place before instructing RMAN to connect to the target.
Before you connect to your target database, you must ensure that the standard Unix environment variables are established. These variables include
ORACLE_SID, ORACLE_HOME, PATH, NLS_LANG, and NLS_DATE_FORMAT. They govern the name of the instance, the path to the RMAN
executable, and the behavior of backup, restore, and reporting commands.
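For example, in a Bourne-style shell (all values here are illustrative; substitute your own instance name, paths, and character set):

```shell
# Illustrative environment setup for RMAN; adjust values to your site.
export ORACLE_SID=AKI1
export ORACLE_HOME=/u01/app/oracle/product/9.2.0
export PATH=$ORACLE_HOME/bin:$PATH
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
export NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS'
```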
When using RMAN, NLS_LANG should be set to the character set that your database was created with. If you do not set NLS_LANG, you may encounter
problems when issuing BACKUP, RESTORE, and RECOVER commands.
Once you have the appropriate environment variables set, you then need access to an O/S account or a database schema that has SYSDBA privileges. You
must have access to the SYSDBA privilege before you can connect to the target database using RMAN. There are two methods of administering the
SYSDBA privilege:
For local connections, RMAN automatically connects you to the target database with SYSDBA privileges.
Setting up a password file is the other method by which we can administer the SYSDBA privilege. There are two good reasons to use RMAN with a
password file.
1. Oracle has deprecated the use of CONNECT INTERNAL and Server Manager.
2. We may want to administer RMAN remotely through a network connection.
For example, if you're in an environment where you want to back up all of your target databases from one place and not have to log on to each host and back
up the database, you must do it via a network connection. To remotely administer RMAN through a network connection, you need to do the following:
Create the password file as the Oracle software owner or as a member of the dba group:
$ cd $ORACLE_HOME/dbs
$ orapwd file=sidname password=password entries=n
Example
$ cd $ORACLE_HOME/dbs
$ orapwd file=orapwAKI1 password=goofi entries=30
After we create a password file, we need to enable remote logins. To do this, set the instance’s REMOTE_LOGIN_PASSWORDFILE initialization
parameter to EXCLUSIVE.
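In the init.ora this looks like the line below (the parameter is static, so restart the instance after changing it):

```text
remote_login_passwordfile = EXCLUSIVE
```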
Setting this parameter to EXCLUSIVE signifies that only one database can use the password file and that users other than SYS and INTERNAL can reside in it. We
can now use a network connection to connect to our target database as SYSDBA.
Test the connection, try to connect from a PC to the remote database as SYS with SYSDBA privileges:
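For example, using the password file created above (this assumes a TNS alias AKI1 exists in tnsnames.ora on the PC):

```text
$ sqlplus /nolog
SQL> connect sys/goofi@AKI1 as sysdba
```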
Note that we have to create a password file only for the target database and not for the catalog. This is because when you connect to the target, you need to
connect as an account that has the SYSDBA privilege. When you connect remotely to a target database, the SYSDBA privilege is enabled through the
password file. This is unlike a connection to the catalog, for which SYSDBA is not required, because you log in as the owner of the catalog schema.
When the SYSDBA privilege is granted to a specified user, the user can be queried in the V$PWFILE_USERS view.
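A quick sketch of that check:

```sql
SQL> select username, sysdba, sysoper from v$pwfile_users;
```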
In order to use RMAN, we have to invoke the executable. Once we have invoked the executable, we get an RMAN prompt, from which we can execute
RMAN commands.
The RMAN executable is located with all of the other Oracle executables, in the BIN directory of the Oracle installation.
$ rman
O/S Authentication
We can use O/S authentication only from an O/S account on the database server.
Connecting to the database after RMAN has been invoked prevents any password information from showing up in a process list.
If we are using a catalog, we will typically connect to the target and the catalog at the same time. This is because when we are performing backup and
recovery operations, both the target and the catalog need to be aware of our activities.
O/S authentication
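The single-command form (a sketch; the catalog schema rmancat and TNS alias GEK1 are this post's example names):

```text
$ rman target / catalog rmancat/rmancat@GEK1
```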
This connects us to the target and catalog database at the same time. Alternatively we can invoke RMAN first and then issue connect commands for the
target and catalog separately.
$ rman
RMAN> connect catalog rmancat/rmancat@GEK1
RMAN> connect target /
Password Authentication
TABLESPACE_NAME SUBSTR(FILE_NAME,1,70)
----------------------------------------------------------------------
SYSTEM /home/oracle/ORACLE_HOME/database/OLD_LOCATION/system.dbf
UNDO /home/oracle/ORACLE_HOME/database/OLD_LOCATION/undo.dbf
DATA /home/oracle/ORACLE_HOME/database/OLD_LOCATION/data.dbf
MEMBER
--------------------------------------------------------------------------------
/home/oracle/ORACLE_HOME/database/OLD_LOCATION/redo1.ora
/home/oracle/ORACLE_HOME/database/OLD_LOCATION/redo2.ora
/home/oracle/ORACLE_HOME/database/OLD_LOCATION/redo3.ora
NAME
--------------------------------------------------------------------------------
/home/oracle/ORACLE_HOME/database/OLD_LOCATION/ctl_1.ora
/home/oracle/ORACLE_HOME/database/OLD_LOCATION/ctl_2.ora
/home/oracle/ORACLE_HOME/database/OLD_LOCATION/ctl_3.ora
Now, as the files to be moved are known, the database can be shut down:
SQL> shutdown
$ cp /home/oracle/ORACLE_HOME/database/OLD_LOCATION/system.dbf /home/oracle/ORACLE_HOME/database/NEW_LOCATION/system.dbf
$ cp /home/oracle/ORACLE_HOME/database/OLD_LOCATION/undo.dbf /home/oracle/ORACLE_HOME/database/NEW_LOCATION/undo.dbf
$ cp /home/oracle/ORACLE_HOME/database/OLD_LOCATION/data.dbf /home/oracle/ORACLE_HOME/database/NEW_LOCATION/data.dbf
$
$ cp /home/oracle/ORACLE_HOME/database/OLD_LOCATION/redo1.ora /home/oracle/ORACLE_HOME/database/NEW_LOCATION/redo1.ora
$ cp /home/oracle/ORACLE_HOME/database/OLD_LOCATION/redo2.ora /home/oracle/ORACLE_HOME/database/NEW_LOCATION/redo2.ora
$ cp /home/oracle/ORACLE_HOME/database/OLD_LOCATION/redo3.ora /home/oracle/ORACLE_HOME/database/NEW_LOCATION/redo3.ora
$
$ cp /home/oracle/ORACLE_HOME/database/OLD_LOCATION/ctl_1.ora /home/oracle/ORACLE_HOME/database/NEW_LOCATION/ctl_1.ora
$ cp /home/oracle/ORACLE_HOME/database/OLD_LOCATION/ctl_2.ora /home/oracle/ORACLE_HOME/database/NEW_LOCATION/ctl_2.ora
$ cp /home/oracle/ORACLE_HOME/database/OLD_LOCATION/ctl_3.ora /home/oracle/ORACLE_HOME/database/NEW_LOCATION/ctl_3.ora
The init.ora file is also copied because it references the control files. I name the copied file just init.ora because it is not in a standard place anymore and it
will have to be named explicitly anyway when the database is started up.
$ cp /home/oracle/ORACLE_HOME/dbs/initOLD.ora /home/oracle/ORACLE_HOME/database/NEW_LOCATION/initNEW.ora
The new location for the control files must be written into the (copied) init.ora file:
/home/oracle/ORACLE_HOME/database/NEW_LOCATION/init.ora
control_files = (/home/oracle/ORACLE_HOME/database/NEW_LOCATION/ctl_1.ora,
/home/oracle/ORACLE_HOME/database/NEW_LOCATION/ctl_2.ora,
/home/oracle/ORACLE_HOME/database/NEW_LOCATION/ctl_3.ora)
SQL> shutdown
Thanks :)
Please share your really important comments and views for improvement......
Posted by Real-Core-DBA at 11:31 PM No comments:
Tuesday, August 31, 2010
No 1-> The first time a block is changed in a datafile that is in hot backup mode,
the ENTIRE BLOCK is written to the redo log files, not just the changed bytes.
Normally only the changed bytes (a redo vector) is written. In hot backup mode,
the entire block is logged the FIRST TIME. This is because you can get into a
situation where the process copying the datafile and DBWR are working on the
same block simultaneously.
The datafile headers which contain the SCN of the last completed checkpoint
are NOT updated while a file is in hot backup mode. This lets the recovery
process understand what archive redo log files might be needed to fully recover
this file.
To limit the effect of this additional logging, you should ensure you only place one tablespace at a time in backup mode and bring the
tablespace out of backup mode as soon as you have backed it up. This will reduce the number of blocks that may have to be logged to the
minimum possible.
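A minimal sketch of that practice (the tablespace name and file paths are illustrative):

```sql
SQL> alter tablespace users begin backup;
-- copy the tablespace's datafiles at the OS level, for example:
--   $ cp /u01/oradata/PROD/users01.dbf /backup/users01.dbf
SQL> alter tablespace users end backup;
```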
Posted by Real-Core-DBA at 1:48 AM No comments:
Sunday, August 29, 2010
Step by Step MMR Replication Setup
BACKUP PRIMARY/SECONDARY SITE DB
Take the cold backup from the existing SECONDARY Database Server (O/s Level).
(Remove all the SECONDARY database files from their mount points, to be refreshed with the new database from PRIMARY)
Cross-check that the mount points on the SECONDARY database server have sufficient space
(This will create a backup script of the controlfile at the destination user_dump_dest, with a name ending in .trc; find the file with the
latest timestamp and rename it to ‘create_control_file.sql’)
Restore the backup to the new database server to the relevant mount points.
Note: copy the redo.log files and multiplex the files in different mount points.
Copy the pfile from the source Primary database server to the location
$ sqlplus '/ as sysdba'
[ ] startup nomount pfile='pfile_path/pfilename'; (this sets the parameters that are defined in the parameter file)
e.g. $ cd $ORACLE_HOME/dbs/
[ ] @create_control_file.sql;
[ ] select * from V$RECOVER_FILE;
$ ORACLE_SID=SECONDARY
$ export ORACLE_SID
$ sqlplus '/ as sysdba'
Connected to:
Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
With the Partitioning option
Jserver Release 8.1.7.4.0 - Production
SQL>
Create database link "IAS.QA.SECONDARY.COM"
Using '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(COMMUNITY=TCP)(PROTOCOL=TCP)(Host=10.277.93.169)(Port=1521)))
(CONNECT_DATA=(SID=SECONDARY)))';
Create database link "IAS.QA.PRIMARY.COM"
Using '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(COMMUNITY=TCP)(PROTOCOL=TCP)(Host=10.237.93.141)(Port=1521)))
(CONNECT_DATA=(SID=PRIMARY)))';
EXECUTE Dbms_Defer_Sys.Schedule_Push( -
Destination => 'IAS.QA.PRIMARY.COM', -
Interval => 'sysdate+1/24/60', -
Next_date => sysdate+1/24/60, -
Stop_on_error => FALSE, -
Delay_seconds => 0, -
Parallelism => 1);
10. Schedule job to push transactions to INT master sites with appropriate intervals
EXECUTE Dbms_Defer_Sys.Schedule_Push( -
Destination => 'IAS.QA.SECONDARY.COM', -
Interval => 'sysdate+1/24/60', -
Next_date => sysdate+1/24/60, -
Stop_on_error => FALSE, -
Delay_seconds => 0, -
Parallelism => 1);
EXECUTE Dbms_Defer_Sys.Schedule_Purge( -
Next_date => sysdate+1/24, -
Interval => 'sysdate+1/24');
CONNECT repadmin/repadmin
EXECUTE Dbms_Repcat.Create_Master_Repgroup('IREPLICATION');
BEGIN
DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT (
sname => 'TEST',
oname => 'NEW_CHECKPOINT',
type => 'TABLE',
min_communication => TRUE);
END;
/
BEGIN
DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT (
sname => 'TEST',
oname => 'NEW_CORRELATION',
type => 'TABLE',
min_communication => TRUE);
END;
/
Once replication support has been generated for all relevant objects, replication can be started or stopped as follows:
-- Start Replication
EXECUTE Dbms_Repcat.Resume_Master_Activity(gname => 'REPLICATION');
-- Stop Replication
EXECUTE Dbms_Repcat.Suspend_Master_Activity(gname => 'REPLICATION');
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
SQL> select * from dba_sys_privs where privilege like '%FLASH%';
Grant succeeded.
Two main parameters used with the CREATE FLASHBACK ARCHIVE statement are RETENTION and QUOTA. RETENTION determines the
maximum time the flashback data is held, and QUOTA limits the space the archive may use in the specified tablespace.
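The archive used in the examples below could have been created like this (a sketch; the tablespace name and quota are illustrative, and RETENTION 5 YEAR matches the 1825 days shown later):

```sql
SQL> create flashback archive flash2
  2  tablespace users
  3  quota 10g
  4  retention 5 year;
```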
Now let’s see how to use this flashback data with tables.
SQL> create table test11 ( name varchar2(30), address varchar2(50))
2 flashback archive flash2;
Table created.
Table altered.
FLASHBACK_ARCHIVE_NAME RETENTION_IN_DAYS
------------------------- -----------------
FLASH2 1825
SQL>
Now that the table is created with flashback data, let’s try to use it. Here I will update the table and
try to read the updated column from the flashback data.
SQL> select * from test11;
NAME ADDRESS
------------------------------ --------------------------------------------------
MOHIT A
MOHIT A
MOHIT A
MOHIT A
MOHIT A
SQL>
SQL> !date
Wed Dec 21 05:14:17 EST 2011
SQL> !date
Wed Dec 21 05:15:35 EST 2011
5 rows updated.
SQL> commit;
Commit complete.
NAME ADDRESS
------------------------------ --------------------------------------------------
MOHIT B
MOHIT B
MOHIT B
MOHIT B
MOHIT B
Now the before image should be available in the flashback data, so let's check.
SQL> select * from SYS_FBA_HIST_366239;
no rows selected
SQL> /
no rows selected.
It should be available, but unfortunately it is not! Here is why: the background process FBDA
wakes up at a system-determined interval (the default is 5 minutes) and only then copies the
corresponding undo data into the archive. So it is no surprise if changes are not reflected
immediately.
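The before image can then be read back with a Flashback Query (a sketch; the timestamp is the one captured with !date above):

```sql
SQL> select * from test11
  2  as of timestamp to_timestamp('21-DEC-2011 05:14:17','DD-MON-YYYY HH24:MI:SS');
```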
NAME ADDRESS
------------------------------ --------------------------------------------------
MOHIT A
MOHIT A
MOHIT A
MOHIT A
MOHIT A
NAME ADDRESS
------------------------------ --------------------------------------------------
MOHIT B
MOHIT B
MOHIT B
MOHIT B
MOHIT B
Now the data is available in the flashback archive. One more point to add: we cannot modify the
flashback data; only read-only access is allowed.
In the next section, we will take a short look at some limitations of this feature.
Tables enabled for the flashback archive feature are not allowed to perform the operations below.
Attempting to drop a table that is enabled for flashback data archive:
SQL> drop table test11;
drop table test11
*
ERROR at line 1:
ORA-55610: Invalid DDL statement on history-tracked table
Solution:
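The commands behind the two messages below were presumably the following (first disassociate the table from the archive, then drop it):

```sql
SQL> alter table test11 no flashback archive;
SQL> drop table test11;
```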
Table altered.
Table dropped.
SQL>
Thanks
Posted by Real-Core-DBA at 11:16 PM No comments:
1) Impdp/Expdp are self-tuning. Tuning parameters that were used in original Export and Import, such
as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.
2) Data Pump represents metadata in the dump file set as XML documents rather than as DDL commands.
3)Impdp/Expdp use parallel execution rather than a single stream of execution, for improved performance.
4) In Data Pump, expdp full=y followed by impdp schemas=prod is the same as expdp schemas=prod followed by
impdp full=y, whereas original export/import does not always exhibit this behavior.
6)Expdp/Impdp operate on a group of files called a dump file set rather than on a single sequential dump file.
7) Sequential media, such as tapes and pipes, are not supported in Oracle Data Pump, whereas with original
export/import we could directly compress the dump by using pipes.
8)The Data Pump method for moving data between different database versions is different than the method
used by original Export/Import.
9)When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates
an active constraint, the load is discontinued and no data is loaded. This is different from original Import,
which logs any rows that are in violation and continues with the load.
10)Expdp/Impdp consume more undo tablespace than original Export and Import.
11) If a table has compression enabled, Data Pump Import attempts to compress the data being loaded,
whereas the original Import utility loaded data in such a way that even if a table had compression enabled,
the data was not compressed upon import.
12)Data Pump supports character set conversion for both direct path and external tables. Most of the
restrictions that exist for character set conversions in the original Import utility do not apply to Data Pump.
The one case in which character set conversions are not supported under the Data Pump is when using
transportable tablespaces.
13)There is no option to merge extents when you re-create tables. In original Import, this was provided by
the COMPRESS parameter. Instead, extents are reallocated according to storage parameters for the target
table.
Differences between Data Pump impdp and import utility
The original import utility dates back to the earliest releases of Oracle, and it's quite slow and primitive
compared to Data Pump. While the old import (imp) and Data Pump import (impdp) do the same thing, they
are completely different utilities, with different syntax and characteristics.
Here are the major syntax differences between import and Data Pump impdp:
Data Pump does not use the BUFFERS parameter
Data Pump export represents the data in XML format
A Data Pump schema import will recreate the user and execute all of the associated security privileges
(grants, user password history).
Data Pump's parallel processing feature is dynamic. You can connect to a Data Pump job that is
currently running and dynamically alter the number of parallel processes.
Data Pump will recreate the user, whereas the old imp utility required the DBA to create the user ID
before importing.
Analyze
The only method available for collecting statistics in Oracle 8.0 and lower.
ANALYZE can only run serially.
ANALYZE cannot overwrite or delete certain types of statistics that were generated by DBMS_STATS.
ANALYZE calculates global statistics for partitioned tables and indexes instead of gathering them directly. This
can lead to inaccuracies for some statistics, such as the number of distinct values.
For partitioned tables and indexes, ANALYZE gathers statistics for the individual partitions and then
calculates the global statistics from the partition statistics.
For composite partitioning, ANALYZE gathers statistics for the subpartitions and then calculates the
partition statistics and global statistics from the subpartition statistics.
ANALYZE can gather additional information that is not used by the optimizer, such as information about chained
rows and the structural integrity of indexes, tables, and clusters. DBMS_STATS does not gather this information.
No easy way of knowing which tables or how much data within the tables have changed. The DBA would
generally re-analyze all of their tables on a semi-regular basis.
DBMS_STATS
Only available for Oracle 8i and higher.
Statistics can be generated to a statistics table and can then be imported or exported between databases and re-loaded
into the data dictionary at any time. This allows the DBA to experiment with various statistics.
DBMS_STATS routines have the option to run via parallel query or operate serially.
Can gather statistics for sub-partitions or partitions.
Certain DDL commands (e.g. CREATE INDEX) automatically generate statistics, therefore eliminating the need to
generate statistics explicitly after the DDL command.
DBMS_STATS does not generate information about chained rows and the structural integrity of segments.
The DBA can set a particular table, a whole schema or the entire database to be automatically monitored when a
modification occurs. When enabled, any change (insert, update, delete, direct load, truncate, etc.) that occurs on a table will be
tracked in the SGA. This information is incorporated into the data dictionary by the SMON process at a pre-set interval (every 3
hours in Oracle 8.1.x, and every 15 minutes in Oracle 9i). The information collected by this monitoring can be seen in
the DBA_TAB_MODIFICATIONS view. Oracle 9i introduced a new function in the DBMS_STATS package
called: FLUSH_DATABASE_MONITORING_INFO. The DBA can make use of this function to flush the monitored table data
more frequently. Oracle 9i will also automatically call this procedure prior to executing DBMS_STATS for statistics-gathering
purposes. Note that this function is not included with Oracle 8i.
DBMS_STATS provides a more efficient, scalable solution for statistics gathering and should be used over the
traditional ANALYZE command which does not support features such as parallelism and stale statistics collection.
Use of table monitoring in conjunction with DBMS_STATS stale object statistics generation is highly
recommended for environments with large, random and/or sporadic data changes. These features allow the database to more
efficiently determine which tables should be re-analyzed, versus the DBA having to force statistics collection for all tables, including
those that have not changed enough to merit a re-scan.
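A sketch of the recommended approach (the schema name is illustrative; in 8i/9i monitoring must be enabled explicitly, while in 10g and later it is on by default via STATISTICS_LEVEL):

```sql
-- 8i/9i: enable modification monitoring for the table
SQL> alter table scott.emp monitoring;

-- Gather statistics only for objects whose statistics are stale
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname => 'SCOTT',
    options => 'GATHER STALE',
    cascade => TRUE);
END;
/
```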
Posted by Real-Core-DBA at 1:18 AM No comments:
3) If you are given a database, how will you know how many datafiles each tablespace contains?
Select tablespace_name, count(*) from dba_data_files group by tablespace_name;
4) How will you know which temporary tablespace is allocated to which user?
Select temporary_tablespace from dba_users where username='SCOTT';
5) If you are given a database, how will you know whether it is locally managed or dictionary managed?
Select extent_management from dba_tablespaces where tablespace_name='USERS';
6) How will you list all the tablespaces and their status in a database?
Select tablespace_name,status from dba_tablespaces;
7) How will you find the system wide 1) default permanent tablespace, 2) default temporary tablespace 3) Database time zone?
Select property_name,property_value from database_properties where property_name like ‘%DEFAULT%’;
8) How will you find the current users who are using temporary tablespace segments?
V$TEMPSEG_USAGE
9) How will you convert an existing dictionary managed permanent tablespace to temporary tablespace?
Not possible
11) How will you convert dictionary managed tablespace to locally managed tablespace?
Exec dbms_space_admin.tablespace_migrate_to_local('TABLESPACE_NAME');
12) If you have given the command to make a tablespace offline normal, but it is not happening and it is in transactional read-only mode,
how will you find the transactions that are preventing the conversion?
By identifying the sessions holding active transactions, e.g. by joining V$TRANSACTION and V$SESSION on the session address.
13) If you drop a tablespace containing 4 datafiles, how many datafiles will be droped at a time by giving a single drop tablespace
command?
All datafiles
14) If database is not in OMF,How will you drop all the datafiles of a tablespace without dropping the tablespace itself?
Alter database datafile 'PATH' offline drop;
15) How will you convert the locally managed tablespace to dictionary managed? What are the limitations?
Exec dbms_space_admin.tablespace_migrate_from_local('TABLESPACE_NAME');
19) Write command to make all datafiles of a tablespace offline without making the tablespace offline itself?
Alter database datafile 'PATH' offline;
20) In 10g, How to allocate more than one temporary tablespace as default temporary tablespace to a single user?
By using temporary tablespace group
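A sketch of a tablespace group (names and file paths are illustrative):

```sql
SQL> create temporary tablespace temp1
  2  tempfile '/u01/oradata/temp1.dbf' size 100m
  3  tablespace group temp_grp;
SQL> create temporary tablespace temp2
  2  tempfile '/u01/oradata/temp2.dbf' size 100m
  3  tablespace group temp_grp;
SQL> alter user scott temporary tablespace temp_grp;
```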
24) Write a query to display the names of all dynamic performance views?
Select table_name from dictionary where table_name like 'V$%';
25) Name the script that needs to be executed to create the data dictionary views after database creation?
Catalog.sql
26) Grant to the user SCOTT the RESTRICTED SESSION privilege?
SQL> grant restricted session to scott;
Grant succeeded.
27) How are privileged users being authenticated on the database you are currently working on? Which initialization parameter
would give me this information?
Check the REMOTE_LOGIN_PASSWORDFILE initialization parameter: NONE means privileged users are authenticated by the operating
system; EXCLUSIVE or SHARED means a password file is used.
28) Which dynamic performance view gives you information about all privileged users who have been granted sysdba or sysoper
roles? Query the view?
SQL> desc v$pwfile_users
30) Write a query to display the file# and the status of all datafiles that are offline?
Select file#,status from v$datafile where status='OFFLINE';
31) Write the statement to display the size of the System Global Area (SGA)?
Show parameter sga
Or
Show sga
32) Obtain the information about the current database? What is its name and creation date?
Select name,created from v$database;
33) What is the size of the database buffer cache? Which two initialization Parameters are used to determine this value?
Db_cache_size or db_block_buffers
34) What value should the REMOTE_LOGIN_PASSWORDFILE take if you need to set up Operating System authentication?
NONE (when REMOTE_LOGIN_PASSWORDFILE is set to NONE, Oracle ignores any password file and privileged users are authenticated by the operating system)
35) Which initialization parameter holds this value? What does the shared pool comprise of?
Library cache and data dictionary cache.
Parameter : shared_pool_size
37) Which dynamic performance view displays information about the active transactions in the database? Which view returns
session related information?
V$transaction, v$session
38) Which dynamic performance view is useful for killing user sessions? Which columns of the view will you require to kill a user
session? Write the statement to kill any of the currently active sessions in your database?
V$session (SID, SERIAL#)
Alter system kill session 'SID,SERIAL#';
39) What is the difference between the ALTER SYSTEM and ALTER SESSION commands?
Changes performed using ALTER SYSTEM are permanent for the instance or database, whereas changes made with ALTER SESSION
apply only to that session.
40) Write down the mandatory steps that a DBA would need to perform before the CREATE DATABASE command may be used to
create a database?
Create a pfile or spfile
Create password file
If windows, create instance using ORADIM utility
41) What does the script utlexcpt.sql create? What is this table used for?
It creates the EXCEPTIONS table, which stores the rowids of rows that violate a constraint when the constraint is enabled with the
EXCEPTIONS INTO clause.
42) In which Oracle subdirectory are all the SQL scripts such as catalog.sql/ catproc.sql /utlexcpt.sql etc...? Located?
$ORACLE_HOME/rdbms/admin/
43) Which dynamic performance view would you use to display the OPTIMAL size of the rollback segment RBS2. Write a query to
retrieve the OPTIMAL size and Rollback segment name?
V$ROLLSTAT (OPTSIZE column); join it with V$ROLLNAME to retrieve the OPTIMAL size together with the rollback segment name.
44) During a long-running transaction, you receive an error message indicating you have insufficient space in rollback segment RO4.
Which storage parameter would you modify to solve this problem?
MAXEXTENTS (or the NEXT extent size)
45) How would I start the database if only users with the RESTRICTED SESSION privilege need to access it?
Startup restrict
46) Which data dictionary view would you query to find out information about free extents in your database? Write a query to
display a count of the number of free extents in your database?
Query DBA_FREE_SPACE, for example: Select count(*) from dba_free_space;
47) Write a query to display the tablespace name, datafile name and type of extent management (local or dictionary) from the data
dictionary?
You need to combine dba_data_files and dba_tablespaces
49) When a tablespace is offline can it be made read only? Perform the required steps to confirm your answer?
No; a tablespace must be online before it can be made read only.
50) Which parameter specifies the percentage of space in each data block that is reserved for future updates?
PCTFREE
51) write down two reasons why automatic extent allocation for an extent may fail?
If the disk space reached max limit
If autoextend reached maxsize limit
52) Query the DBA_CONSTRAINTS view and display the names of all the constraints that are created on the CUSTOMER table?
Select constraint_name from dba_constraints where table_name='CUSTOMER';
53) Write a command to display the names of all BITMAP indexes created in the database?
Select index_name from dba_indexes where index_type='BITMAP';
54) Write a command to coalesce the extents of any index of your choice?
Alter index <index_name> coalesce;
55) . What happens to a row that is bigger than a single block? What is this called? Which data dictionary view can be queried to
obtain information about such blocks?
Row will be chained into multiple blocks. CHAINED_ROWS is the view
56) Write a query to retrieve the employee number and ROWIDs of all rows that belong to the EMP table belonging to user SCOTT?
Select rowid,empno from scott.emp;
57) During a long-running transaction, you receive an error message indicating you have insufficient space in rollback segment RO4.
Which storage parameter would you modify to solve this problem?
Repeated question
59) What is the block size of your database and how do you see it?
Db_block_size
60) At one time you lost the parameter file accidentally and you don't have any backup. How will you recreate a new parameter file with
the parameters set to the previous values?
We can recover it from the alert log file, which records the non-default parameter values.
61) You want to retain only last 3 backups of datafiles. How do you go for it in RMAN?
By configuring backup retention policy to redundancy 3
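For example (DELETE OBSOLETE then removes backups no longer needed under the policy):

```sql
RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 3;
RMAN> DELETE OBSOLETE;
```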
Posted by Real-Core-DBA at 1:39 AM 8 comments:
If the system tablespace is missing or corrupted the database cannot be started up so a complete closed database recovery must be
performed.
Pre requisites: A closed or open database backup and archived logs.
1. Use OS commands to restore the missing or corrupted system datafile to its original location, ie:
cp -p /usr/backup/RMAN/system01.dbf /usr/oradata/u01/IASDB/system01.dbf
2. startup mount;
3. recover datafile 1;
4. alter database open;
If a non system tablespace is missing or corrupted while the database is open, recovery can be performed while the database
remains open.
Pre requisites: A closed or open database backup and archived logs.
1. Use OS commands to restore the missing or corrupted datafile to its original location, ie:
cp -p /usr/backup/RMAN/user01.dbf /usr/oradata/u01/IASDB/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. recover tablespace <tablespace_name>;
4. alter tablespace <tablespace_name> online;
Complete Open Database Recovery (when the database is initially closed). Non system tablespace is missing
If a non-system tablespace is missing or corrupted and the database crashed, recovery can be performed after the database is open.
Pre requisites: A closed or open database backup and archived logs.
1. startup; (you will get ora-1157 ora-1110 and the name of the missing datafile, the database will remain mounted)
2. Use OS commands to restore the missing or corrupted datafile to its original location, ie:
cp -p /usr/backup/RMAN/user01.dbf /usr/oradata/u01/IASDB/user01.dbf
3. alter database datafile 6 offline; (the datafile must be taken offline so the database can be opened; ALTER TABLESPACE cannot be used because the database is not open)
4. alter database open;
5. recover datafile 6;
6. alter database datafile 6 online;
If a non-system datafile that was not backed up since the last backup is missing, recovery can be performed if all archived logs since
the creation of the missing datafile exist.
Pre requisites: All relevant archived logs.
1. alter tablespace <tablespace_name> offline immediate;
2. alter database create datafile ‘/user/oradata/u01/IASDB/newdata01.dbf’;
3. recover tablespace <tablespace_name>;
4. alter tablespace <tablespace_name> online;
If the create datafile command needs to be executed to place the datafile on a location different than the original use:
alter database create datafile ‘/user/oradata/u01/IASDB/newdata01.dbf’ as ‘/user/oradata/u02/IASDB/newdata01.dbf’
If a non-system datafile is missing and its original location is not available, it can be restored to a different location and recovery
performed.
Pre requisites: All relevant archived logs.
1. Use OS commands to restore the missing or corrupted datafile to the new location, ie:
cp -p /usr/backup/RMAN/user01.dbf /usr/oradata/u02/IASDB/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. alter tablespace <tablespace_name> rename datafile ‘/user/oradata/u01/IASDB/user01.dbf’ to
‘/user/oradata/u02/IASDB/user01.dbf’;
4. recover tablespace <tablespace_name>;
5. alter tablespace <tablespace_name> online;
Incomplete recovery may be necessary when an archived log is missing, so recovery can only be made up to the previous sequence,
or when an important object was dropped and recovery needs to be made to a point before the object was dropped.
Pre requisites: A closed or open database backup and archived logs, the time or sequence that the ‘until’ recovery needs to be
performed.
1. If the database is open, shutdown abort
2. Use OS commands to restore all datafiles to its original locations:
cp -p /usr/backup/RMAN/u01/*.dbf /usr/oradata/u01/IASDB/
cp -p /usr/backup/RMAN/u02/*.dbf /usr/oradata/u02/IASDB/
cp -p /usr/backup/RMAN/u03/*.dbf /usr/oradata/u03/IASDB/
cp -p /usr/backup/RMAN/u04/*.dbf /usr/oradata/u04/IASDB/
etc…
3. startup mount;
4. recover automatic database until time '2004-03-31:14:40:45';
5. alter database open resetlogs;
6. make a new complete backup, as the database is open in a new incarnation and previous archived logs are no longer relevant.
Alternatively, instead of until time you may use until sequence or until cancel:
recover automatic database until sequence 120 thread 1; OR
recover database until cancel;
RMAN recovery scenarios require that the database is in archivelog mode, and that backups of datafiles, control files and archived
redo log files are made using RMAN. Incremental RMAN backups may also be used.
RMAN can be used with its repository stored in the target database's control file, or with a recovery catalog that may be installed in
the same or another database.
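For example, the two repository modes differ only in how RMAN is invoked (rmuser and catdb are hypothetical names):

```shell
rman target /                              # repository in the target's control file (NOCATALOG, the default)
rman target / catalog rmuser/rmuser@catdb  # repository in a recovery catalog database
```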
Configuration and operation recommendations:
Set controlfile autobackup to ON so that each backup also includes a control file backup:
configure controlfile autobackup on;
Set the retention policy to the recovery window you want; e.g. redundancy 2 keeps the last two backups available after
delete obsolete commands are executed:
configure retention policy to redundancy 2;
Execute your full backups with the option ‘plus archivelogs’ to include your archivelogs with every backup:
backup database plus archivelog;
Perform daily maintenance routines so that your backup directory keeps only the backups you need:
crosscheck backup;
crosscheck archivelog all;
delete noprompt obsolete;
To work with RMAN and a database-based catalog follow these steps:
1. sqlplus / as sysdba
2. create tablespace repcat;
3. create user rmuser identified by rmuser default tablespace repcat temporary tablespace temp;
4. grant connect, resource, recovery_catalog_owner to rmuser;
5. exit
6. rman catalog rmuser/rmuser # connect to the rman catalog as rmuser
7. create catalog # create the catalog
8. connect target / # connect to the target database
9. register database; # register the target database in the catalog
Complete Closed Database Recovery using RMAN. System tablespace is missing
In this case complete recovery is performed; only the system tablespace is missing, so the database can be opened without resetting
the redologs.
1. rman target /
2. startup mount;
3. restore database;
4. recover database;
5. alter database open;
Complete Open Database Recovery (when the database is initially closed). Non system tablespace is missing
A user datafile is reported missing when trying to start up the database. The datafile can be taken offline and the database started
up. Restore and recovery are performed using RMAN. After recovery the datafile can be brought online again.
1. sqlplus /nolog
2. connect / as sysdba
3. startup mount
4. alter database datafile ‘<datafile_name>’ offline;
5. alter database open;
6. exit;
7. rman target /
8. restore datafile ‘<datafile_name>’;
9. recover datafile ‘<datafile_name>’;
10. sql ‘alter database datafile <file#> online’;
If a non-system datafile that was not backed up since the last backup is missing, recovery can be performed if all archived logs since
the creation of the missing datafile exist. Since the database is up, you can check the tablespace name and take it offline. The
offline immediate option is used so that the datafile header is not updated.
Pre requisites: All relevant archived logs.
1. sqlplus ‘/ as sysdba’
2. alter tablespace <tablespace_name> offline immediate;
3. alter database create datafile ‘/user/oradata/u01/IASDB/newdata01.dbf’;
4. exit
5. rman target /
6. recover tablespace <tablespace_name>;
7. sql ‘alter tablespace <tablespace_name> online’;
If the create datafile command needs to be executed to place the datafile on a location different than the original use:
alter database create datafile ‘/user/oradata/u01/IASDB/newdata01.dbf’ as ‘/user/oradata/u02/IASDB/newdata01.dbf’
If a non-system datafile is missing and its original location is not available, it can be restored to a different location and recovery
performed.
Pre requisites: All relevant archived logs, complete cold or hot backup.
1. Use OS commands to restore the missing or corrupted datafile to the new location, ie:
cp -p /usr/backup/RMAN/user01.dbf /usr/oradata/u02/IASDB/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. alter tablespace <tablespace_name> rename datafile ‘/user/oradata/u01/IASDB/user01.dbf’ to
‘/user/oradata/u02/IASDB/user01.dbf’;
4. rman target /
5. recover tablespace <tablespace_name>;
6. sql ‘alter tablespace <tablespace_name> online’;
Always multiplex your control files. If you lose only one control file you can replace it with one of the surviving copies and start up
the database. If both control files are missing, the database will crash.
Pre requisites: A backup of your control file and all relevant archived logs. When using RMAN, always set the controlfile autobackup
configuration parameter to ON. You will need the DBID to restore the control file; get it from the name of the backed-up control file:
it is the number following the ‘c-’ at the start of the name.
1. rman target /
2. set dbid <dbid#>
3. startup nomount;
4. restore controlfile from autobackup;
5. alter database mount;
6. recover database;
7. alter database open resetlogs;
8. make a new complete backup, as the database is open in a new incarnation and previous archived log are not relevant.
Incomplete recovery may be necessary when the database crashes and needs to be recovered, and during the recovery process you
find that an archived log is missing. In this case recovery can only be made up to the sequence before the missing one.
Another scenario for incomplete recovery occurs when an important object was dropped or incorrect data was committed on it.
In this case recovery needs to be performed until before the object was dropped.
Pre requisites: A full closed or open database backup and archived logs, the time or sequence that the ‘until’ recovery needs to be
performed.
1. If the database is open, shut it down to perform a full restore.
2. rman target /
3. startup mount;
4. restore database;
5. recover database until sequence 8 thread 1; # you must pass the thread; for a single instance it is always 1.
6. alter database open resetlogs;
7. make a new complete backup, as the database is open in a new incarnation and previous archived logs are no longer relevant.
Alternatively, instead of until sequence you may use until time, e.g.: '2012-01-04:01:01:10'.