Data Pump
I got a request to import the dump into a UAT database, which I started performing
using the imp utility.
For some of the tables, I received the ORA-01400 error; the detailed message is as
below.
Note that schema, table, and column names have been changed for security reasons.
Immediately, I verified the column C1 in the TEST table and observed that it is
defined as NOT NULL. Initially, I thought this might be the reason for the error,
with Oracle trying to insert a NULL value. However, I checked the production
database and found there are no NULL values at all in the C1 column, which means
the table structure and the data are both perfectly fine.
After referring to MOS Doc ID 826746.1, I got to know that this is a known issue
when you take an export with the exp command using the DIRECT=Y option from 11gR1
onwards. It is caused by a behaviour change in the exp utility. Prior to 11gR1,
when you took an export (exp) with DIRECT=Y and a column contained a NULL value,
Oracle would replace it with the column's default value held in the COL$ table.
From 11gR1 onwards the value is no longer replaced by its default; the NULL value
is stored in the export dump, and this produces ORA-01400 during import.
The solution for this problem is simple: take the export via the conventional path
only, i.e. don't use DIRECT=Y, or use Data Pump for the activity.
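As a sketch (credentials, directory object, and file names here are placeholders),
the conventional-path export and the Data Pump alternative might look like:

```shell
# Conventional-path export: simply drop DIRECT=Y (it defaults to N)
exp system/password file=test.dmp log=test_exp.log tables=SCOTT.TEST

# Or use Data Pump instead; DATA_PUMP_DIR must be an existing directory object
expdp system/password directory=DATA_PUMP_DIR dumpfile=test.dmp \
      logfile=test_expdp.log tables=SCOTT.TEST
```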
Even though this looks like a problem, Oracle has not opened any bug for it because
the exp utility is desupported from 11g onwards and Oracle recommends using Data
Pump only.
Finally, I fixed the issue temporarily by taking the TEST table export with expdp.
I also changed the daily export backup method from exp to expdp in the production
database.
A week back, I faced an error related to a Data Pump export and would like to
present my findings here.
When I executed a Data Pump export (expdp) job on a 10.2.0.4 database, it failed
with the below error message.
Looking at the last line, many people (like me) wonder why the streams pool is
mentioned here, and what the relation is between the streams pool and expdp.
Data Pump jobs (both expdp and impdp), when initiated, use buffer queues to
transfer data directly from the master table to the dump file. Instead of creating
new buffer queues, Data Pump operations try to use the existing queues in the
streams pool memory area.
Normally, the streams pool holds buffer queues used to store in-memory messages
that support capture processes, apply processes, XStream outbound servers, and
XStream inbound servers.
Solution:
Set the STREAMS_POOL_SIZE parameter to a value greater than 0. Usually 48M should
be the minimum size; the value will need to be larger for a big database or one
with a high workload.
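For example (48M is just the floor suggested above; size it for your workload),
the parameter can be checked and set from SQL*Plus:

```shell
sqlplus -s / as sysdba <<'EOF'
-- Check the current value
SHOW PARAMETER streams_pool_size

-- Set a non-zero minimum; with Automatic Shared Memory Management
-- (SGA_TARGET > 0) this acts as a lower bound for the pool
ALTER SYSTEM SET streams_pool_size = 48M SCOPE=BOTH;
EOF
```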
Sometimes, due to internal errors, the Data Pump utility packages may get
corrupted. In such cases, we can rebuild them in the ways below.
After performing the above steps, all the packages will be rebuilt, so it is a
good idea to execute utlrp.sql to recompile any invalid objects.
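Recompiling can be done from SQL*Plus as SYSDBA, for example:

```shell
sqlplus -s / as sysdba <<'EOF'
-- Recompile all invalid objects in the database
@?/rdbms/admin/utlrp.sql

-- Verify nothing is left invalid
SELECT owner, object_name, object_type
  FROM dba_objects
 WHERE status = 'INVALID';
EOF
```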
In Oracle 11g:
Yesterday, I ran an import using impdp and observed that it was taking quite a
long time to complete, even though I had specified the PARALLEL parameter.
I tried to analyze it using various methods and finally, with MOS Doc ID 780784.1,
got to know that the culprit was the TABLE_EXISTS_ACTION=TRUNCATE option in my
impdp command.
We know that Data Pump uses two different load methods during import (impdp):
1. Direct path load – this is the main reason why Data Pump import (impdp) is
faster than the traditional import (imp).
2. External table load.
When the table does not already exist, Oracle will use the direct path load; but
when the table already exists, especially an IOT, Oracle will use the external
table path only.
So, the final observation is: if you have any IOTs, either drop them before
importing or use TABLE_EXISTS_ACTION=REPLACE.
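A hypothetical impdp command (credentials, directory, and table names are
placeholders) using REPLACE, which drops and re-creates the existing table so the
direct path load can be used:

```shell
impdp system/password directory=DATA_PUMP_DIR dumpfile=test.dmp \
      logfile=test_impdp.log tables=SCOTT.TEST \
      table_exists_action=replace parallel=4
```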
Here, I also want to list the occasions when Oracle will not use direct path. The
usual observation is that direct path is always faster than the external table
path, but Data Pump cannot always use direct path due to some restrictions, and
because of this you may sometimes observe impdp running slower than expected.
Now, what are those situations in which Data Pump will not use direct path? A
table exists with:
3. A table is in a cluster.
4. There is an active trigger on a pre-existing table.
10. The table into which data is being imported is a pre-existing table and at
least one of the following conditions exists:
11. Supplemental logging is enabled and the table has at least 1 LOB column.
Note: Data Pump will not load tables with disabled unique indexes. If the data
needs to be loaded into the table, the indexes must be either dropped or re-
enabled.
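To illustrate (owner, table, and index names are placeholders), the unique indexes
on the target table can be checked, and an unusable one rebuilt, before re-running
the import:

```shell
sqlplus -s / as sysdba <<'EOF'
-- Look for unique indexes that are not VALID on the target table
SELECT index_name, status
  FROM dba_indexes
 WHERE table_owner = 'SCOTT'
   AND table_name  = 'TEST'
   AND uniqueness  = 'UNIQUE';

-- Rebuild an unusable index so Data Pump can load the table
ALTER INDEX scott.test_uk REBUILD;
EOF
```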
Solution for ORA-27054: NFS file system where the file is created or resides is
not mounted with correct options in a RAC env
Posted by Pavan DBA on October 2, 2012
We faced an issue with an expdp failure on a RAC database and want to share some
info regarding it. We have ten 10.2.0.4 RAC databases on the same server.
For database files we are using ASM, and for RMAN backups we are using tapes. Only
for the export backup (FULL=Y) did we create an NFS mount point and schedule the
export to that file system.
We observed that, out of the 10 databases, the export was failing for one database
with the below error.
We face this error on the 2nd instance; when we execute the same script from the
1st instance, it works fine. After some analysis, we found that it is a bug in
10.2.0.4, bug # 9115046. More info about this bug is in the MOS note:
ORA-27054 from EXPDP to NFS disk which does not have "noac" set (Doc ID 9115046.8)
As per this note, the above error occurs when the NFS file system is not mounted
with the correct options:
"ORA-27054 error is reported by expdp when run against a RAC enabled database if
the target location for the data pump data file is an NFS disk which does not have
the 'noac' property set against it. 'noac' is required for datafiles and many
other database file types but should not be needed for expdp output files."
This bug is fixed in version 11.2.0.1, so we may see it in 11.1.0.7 as well.
As a workaround, use a different destination for the Data Pump export file, one
which does have the relevant attributes or is a local file system,
OR
use event 10298 level 32 to disable the noac check (but take care, as this
disables all NFS checks for all file types, which can lead to corruption scenarios
if disks are not mounted with the correct options):
ALTER SYSTEM SET EVENTS '10298 trace name context forever, level 32';
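Alternatively, remounting the NFS destination with the options Oracle expects
makes the check pass; a commonly cited mount-option set for Linux (server, export
path, and mount point below are placeholders, and the exact options vary by
platform, so verify against the MOS note for your OS) is:

```shell
# Example /etc/fstab entry for the export dump destination
nfsserver:/export/dumps  /dumps  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,noac  0 0
```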