10g Data Pump
Oracle 10g Data Pump: Say goodbye to exp and imp (or not)!
Simon Pane
Agenda
Data Pump Overview
Using Data Pump
Demonstration
Data Pump Test Cases
Export/Import (exp/imp)
Pros
Easy to use: most DBAs have years of experience using these utilities
Various options available; can independently specify what to include
Cons
Comparatively slow
Can be network intensive
Limited filtering options (for example, cannot exclude just VIEWS)
Limited remapping options (i.e. from one tablespace to another)
Transportable Tablespaces
Pros
Undoubtedly the fastest way to move data
Cross-platform support if the platform byte-order is the same
Cons
Not selective (must move the entire tablespace)
Flashback is not possible (tablespace is read only when copied)
No physical reorganization is performed
Must use RMAN to convert the datafile if migrating to a platform with a different byte-order (check V$TRANSPORTABLE_PLATFORM)
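A quick way to make the byte-order check is to query the 10g data dictionary view mentioned above (view and column names are from the data dictionary; this is a sketch, not part of the original demo):

```sql
-- Endian format of every platform Oracle can transport to/from
SELECT platform_name, endian_format
  FROM v$transportable_platform;
```

If the endian formats of source and target differ, RMAN's CONVERT TABLESPACE ... TO PLATFORM command performs the datafile conversion.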
Extraction to a flat file and loading using SQL*Loader
Direct copy using database links (SQL*Plus COPY command)
Oracle Streams
3rd party ETL or reorg tools
Operation Fundamentals
Export/Import
These utilities basically connect to the Oracle database; all processing is done on the client
Data Pump
The executables call PL/SQL APIs; therefore processing is done on the database server
This can be an advantage or a disadvantage depending on the situation
Self-tuning: no longer need to use BUFFER or RECORDLENGTH
Export Operation (exp)
[Diagram: exp.exe on the client connects over the network to the Oracle Database; the export file(s) are written on the client]
Data Pump Export Operation (expdp)
[Diagram: expdp.exe on the client connects over the network to the Oracle Database; the export file(s) are written on the database server]
Key Differences
Dump and log files are on the server, not the client
Must have a DIRECTORY object created in the Oracle database for I/O
Directory permissions are for the userid connecting to the instance, not the client operating system user
Doesn't automatically overwrite a dump file if it already exists; returns an error instead
Parameters (command line) are reported in the log file
Multiple Interfaces
1. Command line
2. Parameter file (PARFILE)
3. Interactive command mode
Unload Mechanisms
A lot of tables fall under this situation; see the Oracle documentation for a complete list
Multiple Processes
Master Control Process
At the end of an export, the master control table is written to the dump file
Worker Processes
Perform the loading/unloading
The number of processes depends on the degree of parallelism
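With a degree of parallelism above one, each worker needs its own dump file, so the %U substitution variable is typically used to generate unique file names. A sketch (connect string and schema reused from the demo later in this deck; the degree of parallelism is illustrative):

```shell
expdp system/oracle@ORA1020 schemas=scott directory=dpump_demo dumpfile=scott_%U.dmp parallel=4
```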
New Views
DBA_DATAPUMP_SESSIONS
Identify user sessions that are attached to a job
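For example, attached sessions can be mapped to their jobs by joining to V$SESSION on the session address; a sketch using the 10g data dictionary columns:

```sql
-- Which sessions are attached to which Data Pump jobs
SELECT s.sid, s.serial#, d.owner_name, d.job_name
  FROM dba_datapump_sessions d
  JOIN v$session s ON s.saddr = d.saddr;
```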
Security Considerations
Still uses the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles
A privileged user will have these two roles
A privileged user can:
- Export/import objects owned by other schemas
- Export non-schema objects (metadata)
- Attach to, monitor, and control jobs initiated by others
- Perform schema, datafile, and tablespace remapping
Object Statistics
Can still use a parameter file via the PARFILE command line option
Fully supports Automatic Storage Management (ASM)
Can still flash back to a specified time or SCN
Can still extract (or back up) DDL (metadata) using the SQLFILE option instead of the traditional approach
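For example, SQLFILE writes out the DDL that an import would execute without actually importing anything (file names here are illustrative; the directory object is the one created in the demo):

```shell
impdp system/oracle@ORA1020 schemas=scott dumpfile=scott.dmp directory=dpump_demo sqlfile=scott_ddl.sql
```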
Must first create an Oracle directory object and give the user who will be performing the Data Pump activities permission to use it (or rely on defaults):
SQL> create or replace directory dpump_demo as 'C:\temp';
Directory created.

SQL> grant read, write on directory dpump_demo to simon;
Grant succeeded.
CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
DIRECTORY=directory_object (default=DATA_PUMP_DIR)
DUMPFILE=[directory_object:]file_name [,...]
ESTIMATE={BLOCKS | STATISTICS}
ESTIMATE_ONLY={Y | N}
EXCLUDE=object_type[:name_clause] [,...]
TABLESPACES=tablespace_name [,...]
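For instance, ESTIMATE_ONLY allows a size estimate without writing a dump file (a sketch; the connect string and schema are the ones used in the demo):

```shell
expdp system/oracle@ORA1020 schemas=scott estimate_only=y estimate=blocks
```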
CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
DIRECTORY=directory_object (default=DATA_PUMP_DIR)
DUMPFILE=[directory_object:]file_name [,...]
EXCLUDE=object_type[:name_clause] [,...]
FULL={Y | N}
INCLUDE=object_type[:name_clause] [,...]
JOBNAME=jobname_string
LOGFILE=[directory_object:]file_name
NOLOGFILE={Y | N}
PARALLEL=integer (default=1)
QUERY=[schema.][table_name:]query_clause
SQLFILE=[directory_object:]file_name
TABLE_EXISTS_ACTION={SKIP | APPEND | TRUNCATE | REPLACE}
TABLES=[schema_name.]table_name[:partition_name] [,...]
TABLESPACES=tablespace_name [,...]
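As a sketch of TABLE_EXISTS_ACTION (table and file names are illustrative, reusing the demo's connect string and directory), re-importing a single table into a schema where it already exists:

```shell
impdp system/oracle@ORA1020 tables=scott.emp dumpfile=scott.dmp directory=dpump_demo table_exists_action=replace
```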
START_JOB
STATUS
STOP_JOB
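These commands are issued from interactive mode after attaching to a running job. A sketch (the job name below follows Oracle's default SYS_EXPORT_SCHEMA_NN pattern and is illustrative):

```
expdp system/oracle@ORA1020 attach=SYS_EXPORT_SCHEMA_01
Export> status
Export> stop_job
```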
Demonstration
expdp system/oracle@ORA1020 dumpfile=scott.dmp schemas=scott

impdp system/oracle@ORA1020 dumpfile=scott.dmp schemas=SCOTT remap_schema=SCOTT:LARRY

expdp system/oracle@ORA1020 dumpfile=larry.dmp schemas=larry

SELECT * FROM DBA_DATAPUMP_JOBS;
Export> status
Export> stop_job
Test Scenario #1
Schema objects
Tables: 218 (many empty tables)
Indexes: 1180 (SIEBEL is a heavily indexed application)
Export Scripts
EXP: sqlplus timestamp call, then exp.exe, then another sqlplus timestamp call
EXPDP: erase the old dump file, then sqlplus timestamp call, expdp.exe, and another sqlplus timestamp call
Import Scripts
IMP
sqlplus -s system/oracle @timestamp.sql >> exp.log
imp.exe userid=system/oracle@ORA1020 file=SIEBEL.dmp log=SIEBEL.log fromuser='SIEBEL' touser='SCOTT' commit=y
sqlplus -s system/oracle @timestamp.sql >> exp.log

IMPDP
sqlplus -s system/oracle @timestamp.sql >> expdp.log
impdp.exe userid=system/oracle@ORA1020 dumpfile=SIEBEL.dmp logfile=SIEBEL.log schemas='SIEBEL' directory=test_dir remap_schema=SIEBEL:SCOTT remap_tablespace=TOOLS:SCOTT_DATA
sqlplus -s system/oracle @timestamp.sql >> expdp.log
Destination tablespace and archived log destination were both on ASM drives
Machine performance was degraded much more by the impdp import
No import tuning performed (only COMMIT=Y)
Test Scenario #2
Export Scripts
EXP: sqlplus timestamp call, then exp.exe, then another sqlplus timestamp call
EXPDP: erase the old dump file, then sqlplus timestamp call, expdp.exe, and another sqlplus timestamp call
Import Scripts
IMP: sqlplus timestamp call, then imp.exe, then another sqlplus timestamp call
IMPDP: sqlplus timestamp call, then impdp.exe, then another sqlplus timestamp call
Conclusions
Data Pump is an exciting new Oracle 10g tool that provides many benefits over the traditional export and import utilities
Whether to use Data Pump, Transportable Tablespaces, or even the traditional exp/imp will depend on the situation
Since the command-line interface is easy to use and so similar to the traditional exp/imp, DBAs and developers should spend the time to learn how to use it
Final thought: since Data Pump dump files and traditional export dump files are not compatible/interchangeable, should a new file extension be used? (.dmp vs .dpd)
The End