
How to resolve ORA-01400 during import(imp) in 11g database

Posted by Pavan DBA on July 12, 2014


Posting after a long time…

Recently I was performing a refresh activity using export/import. I don't know why, but the DBA who handled this database in the past had configured a traditional export (exp) backup on an 11.2.0.3 database.

I got a request to import that dump into a UAT database, which I started doing with the imp utility.

For some of the tables, I received an ORA-01400 error; the detailed message was as below:

. . importing table "TEST"

IMP-00019: row rejected due to ORACLE error 1400
IMP-00003: ORACLE error 1400 encountered
ORA-01400: cannot insert NULL into ("ABC"."TEST"."C1")
Column : 1
Column :

Note that schema, table and column names are changed for security reasons.

Immediately I verified the column C1 in the TEST table and observed that it is defined as NOT NULL. Initially, I thought this might be the reason for the error: Oracle is trying to insert a NULL value. I then checked in the production database and found that there are no NULL values at all in the C1 column, which means the table structure and data are perfect.
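If you want to run the same check yourself, queries like the ones below work (ABC and TEST are the masked names used in this post, and the queries are only an illustration, not the exact ones from the original troubleshooting session):

SQL> SELECT column_name, nullable, data_default FROM dba_tab_columns WHERE owner = 'ABC' AND table_name = 'TEST';

SQL> SELECT COUNT(*) FROM abc.test WHERE c1 IS NULL;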

After referring to MOS Doc ID 826746.1, I got to know that this is a known issue when you take an export with the exp command from 11gR1 onwards using the DIRECT=Y option.

This is because of a behaviour change in the exp utility. Prior to 11gR1, when you took an export (exp) with DIRECT=Y and a NULL value was stored in a column, Oracle used to replace it with the default value recorded for that column in the COL$ dictionary table. From 11gR1 onwards, the value is no longer replaced by its default; the NULL itself is stored in the export dump, and that produces ORA-01400 during import.

The solution for this problem is simple: take the export in conventional path only, i.e. don't use DIRECT=Y, or use Data Pump for your activity.
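For illustration only (the schema name, file names, and credentials below are placeholders, not the actual commands used), the two alternatives look like this:

exp system/password OWNER=abc FILE=abc.dmp LOG=abc_exp.log DIRECT=N

expdp system/password SCHEMAS=abc DIRECTORY=dp_dir DUMPFILE=abc.dmp LOGFILE=abc_exp.log

The expdp example assumes a directory object named dp_dir already exists and that the exporting user has read/write privileges on it.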

Even though it seems to be a problem, Oracle has not opened any bug for this, because the exp utility is desupported from 11g onwards and Oracle recommends using Data Pump only.

Finally, I temporarily fixed the issue by taking the TEST table export with expdp. I also changed the daily export backup method from exp to expdp in the production database.

Hope this helps some people…



expdp throwing ORA-04031 (stream pool) error


Posted by Pavan DBA on December 4, 2013


A week back I faced an error related to a Data Pump export and would like to present my findings here.

When I executed a Data Pump export (expdp) job on a 10.2.0.4 database, it failed with the below error message:

ORA-31626: job does not exist
ORA-31637: cannot create job SYS_EXPORT_FULL_01 for user SYS
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 600
ORA-39080: failed to create queues "KUPC$C_1_20070823095248" and
"KUPC$S_1_20070823095248" for Data Pump job
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPC$QUE_INT", line 1580
ORA-04031: unable to allocate 4194344 bytes of shared memory ("streams
pool","unknown object","streams pool","fixed allocation callback

When we look at the last line, many people, like me, wonder why it mentions the streams pool here and what the relation is between the streams pool and expdp.

Here is the background…

Data Pump jobs (either expdp or impdp), when initiated, use buffer queues in order to transfer the data directly from the master table to the dump file. Instead of creating new buffer queues, Data Pump operations try to use the existing queues in the streams pool memory area.
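As an aside, while a Data Pump job is running you can see those control and status queues in the dictionary. The query below is only an illustration; the KUPC$ queues exist just for the lifetime of the job:

SQL> SELECT owner, name, queue_table FROM dba_queues WHERE name LIKE 'KUPC$%';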

Normally, the streams pool holds buffer queues that store messages in memory to support capture processes, apply processes, XStream outbound servers, and XStream inbound servers.

Ideally, the streams pool size is managed (increased/decreased) automatically when ASMM or AMM is enabled. But sometimes, even with automatic memory management, it can happen that STREAMS_POOL_SIZE has not been set and the pool is sitting at a size of 0. This is the cause of the error mentioned above.
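You can confirm the current allocation before changing anything; this query is just an illustrative check:

SQL> SELECT component, current_size FROM v$sga_dynamic_components WHERE component = 'streams pool';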

Solution:

Set the STREAMS_POOL_SIZE parameter to a value greater than 0. Usually 48M should be the minimum size, and the value will need to be larger for a big database or a database with a high workload.
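A minimal example of the fix (48M is just the minimum mentioned above, so size it for your workload; SCOPE=BOTH assumes the instance uses an spfile):

SQL> ALTER SYSTEM SET streams_pool_size = 48M SCOPE=BOTH;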

So, friends, beware of the streams pool when doing expdp/impdp 🙂




How to rebuild datapump metadata? (if corrupted)


Posted by Pavan DBA on March 29, 2013


Sometimes, due to internal errors, the Data Pump metadata (its dictionary packages and objects) may get corrupted. In such cases, we can rebuild it using the steps below.

In Oracle Database 10.1:

SQL> connect / as sysdba

SQL> @$ORACLE_HOME/rdbms/admin/catdp.sql – this will install the metadata

SQL> @$ORACLE_HOME/rdbms/admin/dbmspump.sql – this will create the DBMS procedures for Data Pump

In Oracle Database 10.2:

SQL> @$ORACLE_HOME/rdbms/admin/catdph.sql

SQL> @$ORACLE_HOME/rdbms/admin/prvtdtde.plb

SQL> @$ORACLE_HOME/rdbms/admin/catdpb.sql

SQL> @$ORACLE_HOME/rdbms/admin/dbmspump.sql

After performing the above steps, the Data Pump objects are rebuilt. It is then better to execute utlrp.sql to recompile any invalid objects:

SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql

In Oracle 11g:

SQL> @$ORACLE_HOME/rdbms/admin/catproc.sql (catproc.sql will take care of all the other script execution)

To recompile invalid objects, if any:

SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
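After running the scripts, a quick sanity check (illustrative only, not part of the official procedure) is to look for any Data Pump related objects that are still invalid:

SQL> SELECT owner, object_name, object_type FROM dba_objects WHERE status = 'INVALID' AND (object_name LIKE 'KUP%' OR object_name LIKE 'DBMS_DATAPUMP%' OR object_name LIKE 'DBMS_METADATA%');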

Note: If you encounter this Data Pump corruption issue on a production database, before executing the above steps it is strongly recommended to test them first, and then implement them in production based on the results.



impdp slow with TABLE_EXISTS_ACTION=TRUNCATE


Posted by Pavan DBA on March 22, 2013


Yesterday I ran an import using impdp and observed that it was taking quite a long time to complete, even after specifying the PARALLEL parameter.

I tried to analyze it using various methods and finally, with MOS Doc ID 780784.1, got to know that the culprit was TABLE_EXISTS_ACTION=TRUNCATE in my impdp command.

Why did this happen?

We know that Data Pump uses two different load methods during import (impdp):

1. Direct path load – this is the main reason why a Data Pump import (impdp) is faster than a traditional import (imp)

2. External table path

When the table does not already exist, Oracle will use the direct path load; but when the table already exists, especially an IOT, Oracle will use the external table path only.

I changed my syntax to TABLE_EXISTS_ACTION=REPLACE and then it got imported quickly.

So, the final observation is: if you have any IOT, please either drop it before importing or use TABLE_EXISTS_ACTION=REPLACE.
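For illustration only (the directory, dump file, and table names below are placeholders, not my actual job), the change was along these lines:

impdp system/password DIRECTORY=dp_dir DUMPFILE=test.dmp TABLES=abc.test TABLE_EXISTS_ACTION=REPLACE PARALLEL=4

Keep in mind that REPLACE drops and recreates the table from the dump, so any grants or structural changes made only on the target side will be lost.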

Here, I also want to list the occasions when Oracle will not use the direct path.

The usual observation is that the direct path is always faster than the external table path. But Data Pump cannot always use the direct path due to some restrictions, and for this reason you may sometimes observe impdp running slower than expected.

Now, what are those situations when Data Pump will not use the direct path? Direct path is skipped if any of the following is true:

1. A global index on multipartition tables exists during a single-partition load. This includes object tables that are partitioned.

2. A domain index exists for a LOB column.

3. A table is in a cluster.
4. There is an active trigger on a pre-existing table.

5. Fine-grained access control is enabled in insert mode on a pre-existing table.

6. A table contains BFILE columns or columns of opaque types.

7. A referential integrity constraint is present on a pre-existing table.

8. A table contains VARRAY columns with an embedded opaque type.

9. The table has encrypted columns.

10. The table into which data is being imported is a pre-existing table and at
least one of the following conditions exists:

– There is an active trigger

– The table is partitioned

– A referential integrity constraint exists

– A unique index exists

11. Supplemental logging is enabled and the table has at least 1 LOB column.

Note: Data Pump will not load tables with disabled unique indexes. If the data
needs to be loaded into the table, the indexes must be either dropped or re-
enabled.

12. TABLE_EXISTS_ACTION=TRUNCATE is used on an IOT.
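If you want to check which of these restrictions applies to a pre-existing table before importing, dictionary queries like the ones below can help (ABC and TEST are placeholder names, and these checks are only an illustration covering two of the conditions above):

SQL> SELECT trigger_name, status FROM dba_triggers WHERE table_owner = 'ABC' AND table_name = 'TEST';

SQL> SELECT constraint_name, status FROM dba_constraints WHERE owner = 'ABC' AND table_name = 'TEST' AND constraint_type = 'R';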

Have a HAPPY LEARNING 🙂



Solution for ORA-27054: NFS file system where the file is created or resides is not mounted with correct options in RAC env
Posted by Pavan DBA on October 2, 2012


We faced an issue with expdp failing on a RAC database, and I want to share some information regarding that. We have ten 10.2.0.4 RAC databases on the same server.

For database files we are using ASM, and for RMAN backup we are using tapes. Only for the export backup (FULL=Y) did we create an NFS mount point and schedule the export to that file system.
We observed that, out of the 10 databases, the export is failing for one database with the below error:

ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file
"/dbexports/EXP/proddb2/proddb2_120807225312.dmp"
ORA-27054: NFS file system where the file is created or resides is not mounted with
correct options
Additional information: 2

This is the error we are facing on the 2nd instance; when we execute the same script from the 1st instance, it works fine. After some analysis, I found that it is a bug in 10.2.0.4, bug # 9115046. More info about this bug is in the MOS note:

ORA-27054 from EXPDP to NFS disk which does not have "noac" set (Doc ID 9115046.8)

As per this note, the above error will occur when the NFS file system is not mounted with the correct options.

"ORA-027054 error is reported by expdp when run against a RAC enabled database if the target location for the data pump data file is an NFS disk which does not have the "noac" property set against it. "noac" is required for datafiles and many other database file types but should not be needed for expdp output files."

This bug is fixed in the 11.2.0.1 version, so we may hit this in 11.1.0.7 also.

As a workaround, either use a different destination for the Data Pump export file, one which does have the relevant attributes or is a local file system,
OR
use event 10298 at level 32 to disable the noac check (but take care, as this disables all NFS checks for all file types, which can lead to corruption scenarios if disks are not mounted with the correct options):

ALTER SYSTEM SET EVENTS '10298 trace name context forever, level 32';
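For reference, a destination with the "relevant attributes" means an NFS mount that includes the noac option. The line below is only an example of the kind of mount entry involved; the server, path, and full option list are placeholders, so confirm the exact options for your platform against the MOS note:

mount -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,noac nfs-server:/exports/dbexports /dbexports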

As a permanent fix, we need to apply the patch against that bug.
