Datapump Scenarios - Frequently Asked
Sharing knowledge is the ultimate key to gaining knowledge… The only two things that stay with you for life are you and your knowledge. – Nikhil Kotak
Tuesday, 9 August 2016
Q. How to split the datapump dump files into multiple files across multiple directories?
The PARALLEL parameter is used to improve the speed of the export. It is even more
effective if you also split the dump files across filesystems using the DUMPFILE
parameter. Create two or three directory objects on different filesystems and
reference them in the command.
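A minimal sketch of the idea (the directory paths, credentials, and schema name here are placeholders, not from the original post):

```shell
# Run once as a DBA in SQL*Plus (paths are examples):
#   CREATE DIRECTORY dp_dir1 AS '/u01/exports';
#   CREATE DIRECTORY dp_dir2 AS '/u02/exports';
#   GRANT READ, WRITE ON DIRECTORY dp_dir1 TO scott;
#   GRANT READ, WRITE ON DIRECTORY dp_dir2 TO scott;

# PARALLEL=4 starts four worker processes; the %U substitution variable
# numbers the files 01, 02, ... and FILESIZE caps each piece at 5 GB.
# Listing both directory:file specs spreads the pieces across filesystems.
expdp scott/tiger schemas=SCOTT parallel=4 filesize=5G \
  dumpfile=dp_dir1:scott_%U.dmp,dp_dir2:scott_%U.dmp \
  logfile=dp_dir1:scott_exp.log
```

On import, pass the same comma-separated DUMPFILE list so impdp can find every piece.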
Q. How to import the dump file if the database versions are not the same?
The VERSION parameter is used during export when you want to create a dump file that
can be imported into a database of a lower version than the source.
Example:
If your source DB is 11g and your target DB is 10g, you cannot use a dump file taken
with the 11g expdp utility (at its default settings) to import into the 10g DB.
If you don't use the VERSION parameter, the import fails with an incompatible dump
file version error (ORA-39142).
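A sketch of both sides of the transfer (credentials, schema, and directory names are placeholders):

```shell
# On the 11g source: write a dump file a 10g impdp can read.
# VERSION=10.2 restricts the dump file to features that exist in 10.2.
expdp scott/tiger schemas=SCOTT version=10.2 \
  directory=dp_dir dumpfile=scott_v102.dmp logfile=scott_v102_exp.log

# On the 10g target: a normal import; VERSION is not needed here.
impdp scott/tiger schemas=SCOTT \
  directory=dp_dir dumpfile=scott_v102.dmp logfile=scott_v102_imp.log
```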
Q. How to improve the performance of a datapump import?
1) EXCLUDE=INDEX,STATISTICS
This skips importing the indexes and statistics, so only the tables are imported,
which improves performance. Indexes can be rebuilt and statistics gathered afterwards.
2) Set the init.ora parameter CURSOR_SHARING to EXACT, which has a good effect on
import performance.
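Both tips can be sketched as follows (credentials, schema, and file names are placeholders):

```shell
# Before the import, from SQL*Plus as a DBA:
#   ALTER SYSTEM SET cursor_sharing = EXACT SCOPE=BOTH;

# Import tables only; index creation and statistics import are skipped.
impdp scott/tiger schemas=SCOTT directory=dp_dir \
  dumpfile=scott_%U.dmp exclude=INDEX,STATISTICS \
  logfile=scott_imp.log
```

The index DDL skipped here can still be pulled out of the dump file later with SQLFILE (see below) and run by hand, or in parallel, once the data is in.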
Running impdp with the SQLFILE parameter writes the DDL contained in the dump file to
a script instead of executing it. With INCLUDE=USER,TABLESPACE it gives all the
CREATE USER and CREATE TABLESPACE statements, which is useful in many cases.
Note: You can get the INDEX and TABLE creation DDL from the dump file the same way.
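A sketch of the SQLFILE technique (credentials, directory, and file names are placeholders):

```shell
# Nothing is imported: SQLFILE writes the DDL to create_users_tbs.sql instead.
# INCLUDE restricts the output to the object types you want.
impdp system/manager full=Y directory=dp_dir dumpfile=full_%U.dmp \
  sqlfile=create_users_tbs.sql include=USER,TABLESPACE

# Same idea for table and index creation DDL:
impdp system/manager full=Y directory=dp_dir dumpfile=full_%U.dmp \
  sqlfile=tab_idx_ddl.sql include=TABLE,INDEX
```

The generated .sql file lands in the same directory object and can be edited (e.g. to change datafile paths) before you run it on the target.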
I’ll keep updating this post whenever I come across things that can help improve the
performance of datapump.