Different options in Export and Import
Schema export and import using REMAP_SCHEMA.
1) create directory dpump as '/u02/dpump';
2) grant read,write on directory dpump to system;
expdp user/password directory=dpump dumpfile=export.dmp logfile=export.log schemas=scott
For importing the schema:
1) create directory dpump as '/u02/DATAPUMP';
2) grant read,write on directory dpump to system;
impdp directory=dpump dumpfile=export.dmp logfile=import.log remap_schema=SCOTT:HR table_exists_action=APPEND
APPEND – Loads rows from the source and leaves existing rows unchanged.
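Besides APPEND, TABLE_EXISTS_ACTION also accepts SKIP (the default: leave the existing table untouched), TRUNCATE (delete existing rows, then load) and REPLACE (drop and recreate the table). As a sketch, the same import with REPLACE would look like this (all names carried over from the example above):
impdp directory=dpump dumpfile=export.dmp logfile=import.log remap_schema=SCOTT:HR table_exists_action=REPLACE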
Export and Import using the QUERY option:
The QUERY parameter can be used in two ways:
1. EXPDP:
a. Using a parameter file:
bash-3.2$ cat test.par
TABLES=AMIT
directory=datapump
DUMPFILE=expdp_test.dmp
logfile=expdp_test.log
query=amit:"where DATE > '01-DEC-2013'"
bash-3.2$ expdp user/user parfile=test.par
b. On the command line (quotes, parentheses and operators in QUERY must be escaped with backslashes here):
expdp user/user directory=datapump dumpfile=expdp_taa_01.dmp logfile=expdp_taa_01.log query=amit:\"where DATE \> \'01-DEC-2013\'\" tables=AMIT

Export: Release 11.2.0.1.0 - Production on Fri Dec 13 12:24:50 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "USER"."SYS_EXPORT_TABLE_01":  user/******** directory=datapump dumpfile=expdp_taa_01.dmp logfile=expdp_taa_01.log query=amit:"where DATE > '01-DEC-2013'" tables=AMIT
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 384 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "USER"."AMIT"  30.67 KB  75 rows
Master table "USER"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for USER.SYS_EXPORT_TABLE_01 is:
  /disk1/datapump/expdp_taa_01.dmp
Job "USER"."SYS_EXPORT_TABLE_01" successfully completed at 12:24:56
2. IMPDP:
a. Using a parameter file:
bash-3.2$ cat test.par
schemas=DATA
include=TABLE:"IN('TEST')"
directory=datapump
DUMPFILE=expdp_fullDB_12_Dec_13.dmp
logfile=expdp_test.log
query=test:"where DATE > '01-DEC-2013'"
remap_schema=DATA:AMIT
bash-3.2$ impdp amit/amit parfile=test.par
b. On the command line:
impdp amit/amit directory=datapump dumpfile=expdp_fullDB_12_Dec_13.dmp logfile=test.log query=TEST:\"where DATE \> \'01-DEC-2013\'\" schemas=DATA include=TABLE:\"IN\(\'TEST\'\)\" remap_schema=DATA:AMIT

Import: Release 11.2.0.1.0 - Production on Fri Dec 13 12:38:29 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "AMIT"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "AMIT"."SYS_IMPORT_SCHEMA_01":  amit/******** directory=datapump dumpfile=expdp_fullDB_12_Dec_13.dmp logfile=test.log query=test:"where DATE > '01-DEC-2013'" schemas=DATA include=TABLE:"IN('TEST')" remap_schema=DATA:AMIT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
. . imported "AMIT"."TEST"  51.17 KB  75 out of 1614 rows
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
Job "AMIT"."SYS_IMPORT_SCHEMA_01" successfully completed at 12:38:55
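Per the log above, the QUERY filter limited the load to 75 of the table's 1614 rows. As a quick sanity check after the import (schema and table names taken from the example above), the row count should match:
SQL> select count(*) from amit.test;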
Export using the INCLUDE parameter:
expdp DADM/passwd DIRECTORY=DMP_DIR SCHEMAS=DADM DUMPFILE=dadm_tables_data.dmp CONTENT=DATA_ONLY INCLUDE=TABLE:"IN ('CBTRFCC','CBTFCCN','CGTGCDD','CGDSALC','CGTRSGR','CBTFCCE','CGTREPD','CPDORPG')"
Import using the INCLUDE parameter:
impdp DADM/passwd DIRECTORY=DMP_DIR SCHEMAS=DADM DUMPFILE=dadm_tables_data.dmp logfile=dadm_tables_data.log INCLUDE=TABLE:"IN ('CBTRFCC','CBTFCCN','CGTGCDD','CGDSALC','CGTRSGR','CBTFCCE')"
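The name clause in INCLUDE is ordinary SQL, so operators such as LIKE work as well. A minimal sketch (the dump file name here is illustrative; escape the quotes if running directly from a shell):
expdp DADM/passwd DIRECTORY=DMP_DIR SCHEMAS=DADM DUMPFILE=dadm_cb_tables.dmp INCLUDE=TABLE:"LIKE 'CB%'"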
Using the EXCLUDE parameter:
expdp exclude=SEQUENCE,PROCEDURE,INDEX,TABLE:"IN ('EMP1','DEPT')" directory=exp_dir dumpfile=scott.dmp logfile=exp_scott.log
impdp exclude=SEQUENCE,PROCEDURE,INDEX,TABLE:"IN ('EMP1','DEPT')" directory=exp_dir dumpfile=scott.dmp logfile=imp_scott.log
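Because the name clause contains quotes and parentheses that the shell tends to mangle, EXCLUDE is often easier to manage in a parameter file, where it can also be repeated one object type per line. A sketch equivalent to the export command above (the parfile name is illustrative):
$ cat exclude.par
directory=exp_dir
dumpfile=scott.dmp
logfile=exp_scott.log
exclude=SEQUENCE
exclude=PROCEDURE
exclude=INDEX
exclude=TABLE:"IN ('EMP1','DEPT')"

$ expdp parfile=exclude.par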
Export using the FLASHBACK_TIME parameter:
$ cat mypar.par
directory=DMP_DIR
dumpfile=expdp.dmp
flashback_time="to_timestamp('13-02-2014 12:21:00', 'DD-MM-YYYY HH24:MI:SS')"

$ expdp parfile=mypar.par

You can use the same par file to import (note that impdp honours FLASHBACK_TIME only when importing over a NETWORK_LINK):

$ impdp parfile=mypar.par
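If you would rather pin the export to an SCN than a timestamp, the related FLASHBACK_SCN parameter does the same job; the two parameters are mutually exclusive. A minimal sketch (credentials and the SCN value are placeholders):
SQL> select current_scn from v$database;

expdp system/password directory=DMP_DIR dumpfile=expdp_scn.dmp flashback_scn=1234567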
Using the PARALLEL parameter:
DBAs work on databases holding gigabytes or terabytes of data, so exporting can be very time consuming when a large volume has to be moved in a short window. Data Pump expdp can write the dump in compressed format, which achieves faster write times but uses more processor time.
The best way to speed this up is the PARALLEL parameter, which launches several dump processes and therefore exports data much faster. Specify the number of processes with parallel=n. You can name each process's dump file individually, or put a substitution variable (%U) in the filename so that multiple dump files are generated automatically.
Here is an example that exports data using 10 parallel processes:
expdp scott/password parallel=10 DUMPFILE=expdata%U.dmp
This command creates 10 files in the default Data Pump export directory; the export can then be imported by using the %U substitution variable again.
To import the dump files in parallel, we can use the following example:
impdp scott/password parallel=10 DUMPFILE=expdata%U.dmp
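The degree of parallelism can also be changed while a job is running, from Data Pump's interactive mode. A sketch, assuming a system-generated job name (SYS_EXPORT_SCHEMA_01 here is illustrative; check DBA_DATAPUMP_JOBS for the real name):
expdp scott/password attach=SYS_EXPORT_SCHEMA_01
Export> parallel=5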
In a RAC environment, a Data Pump job's processes may by default run on any instance in the cluster, or be restricted to a subset of instances by specifying a service name.
But if you want the export to run on node 1 alone, specifying CLUSTER=N forces the Data Pump processes (the worker processes in particular) to run only on the instance where the job is started.
Here is an example:
expdp scott/password parallel=10 DUMPFILE=expdata%U.dmp cluster=n
To import:
impdp scott/password parallel=10 DUMPFILE=expdata%U.dmp
In the example below, compression brings an 800 GB dump down to about 100 GB. If the NFS mount point is available on only one node of the RAC cluster, CLUSTER=N must be used so that all worker processes write on that node.
Expdp:
$ cat expdp_parfile_SCOTT_data.par
DIRECTORY=DBA_EXPORT
EXCLUDE=STATISTICS
CONSISTENT=Y
PARALLEL=20
CLUSTER=N
LOGFILE=exp_DISK_SCOTT_DATA_11022017.log
DUMPFILE=exp_DISK_SCOTT_DATA_11022017_%U.dmp
schemas=SCOTT
COMPRESSION_ALGORITHM=HIGH
compression=all

nohup expdp \"/ as sysdba\" parfile=expdp_parfile_SCOTT_data.par &
Impdp:
$ cat impdp_parfile_SCOTT_data.par
dumpfile=exp_DISK_SCOTT_DATA_11022017_%U.dmp
logfile=ORCL1_SCOTT_PERF_IMP.log
job_name=ORCL1_SCOTT_PERF_IMP
PARALLEL=20
transform=disable_archive_logging:Y
cluster=n
remap_schema=SCOTT:SCOTT_PERF
directory=DBA_EXPORT
table_exists_action=replace

nohup impdp \"/ as sysdba\" parfile=impdp_parfile_SCOTT_data.par > impdp_log.out &
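Once the import completes, a quick sanity check is to compare object counts in the remapped schema against the source. A minimal sketch (schema name taken from the parfile above):
SQL> select object_type, count(*) from dba_objects where owner = 'SCOTT_PERF' group by object_type;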