Let us discuss the topic: Data Pump import (impdp) in Oracle 11g.

S-1: We need to check the available directory objects with this command:

SQL> desc dba_directories;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OWNER                                     NOT NULL VARCHAR2(30)
 DIRECTORY_NAME                            NOT NULL VARCHAR2(30)
 DIRECTORY_PATH                                     VARCHAR2(4000)

If we want to see who created those directories:

SQL> select owner,directory_name from dba_directories;

OWNER                          DIRECTORY_NAME
------------------------------ ------------------------------
SYS                            RAM
SYS                            XMLDIR
SYS                            DATA_PUMP_DIR
SYS                            ORACLE_OCM_CONFIG_DIR

To view the physical path of each directory:

SQL> select directory_name,directory_path from dba_directories;


DIRECTORY_NAME            DIRECTORY_PATH
------------------------- --------------------------------------------------
XMLDIR                    /ade/b/1191423112/oracle/rdbms/xml
RAM                       /u02/backup/datapump
DATA_PUMP_DIR             /u01/app/oracle/product/11.2.0.3/admin/CA/dpdump/
ORACLE_OCM_CONFIG_DIR     /u01/app/oracle/product/11.2.0.3/db_1/ccr/state
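
If the directory object you need does not exist yet, it can be created and granted as a privileged user. A minimal sketch, reusing the RAM directory name and path from this article (adjust both for your environment):

```sql
-- Create a directory object pointing at an existing OS path
CREATE OR REPLACE DIRECTORY ram AS '/u02/backup/datapump';

-- Grant read/write on the directory to the importing user
GRANT READ, WRITE ON DIRECTORY ram TO scott;
```

The Oracle software owner must have OS-level read/write permission on that path; the directory object itself only stores the string.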

Import utility parameters:

*ATTACH - Attach to an existing job. For example, ATTACH=job_name.

*CONTENT - Specifies data to load. Valid keywords are: [ALL], DATA_ONLY and METADATA_ONLY.

*DATA_OPTIONS - Data layer option flags. Valid keywords are SKIP_CONSTRAINT_ERRORS.

*DIRECTORY - Directory object to be used for the dump, log, and SQL files.

*DUMPFILE - List of dump files to import from [expdat.dmp]. For example, DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.

*ENCRYPTION_PASSWORD -  Password key for accessing encrypted data within a dump file. Not valid for network import jobs.

*ESTIMATE - Calculate job estimates. Valid keywords are: [BLOCKS] and STATISTICS.

*EXCLUDE -  Exclude specific object types. For example, EXCLUDE=SCHEMA:"='HR'".

*FLASHBACK_SCN - SCN used to reset session snapshot.

*FLASHBACK_TIME -  Time used to find the closest corresponding SCN value.

*FULL - Import everything from source [Y].

*HELP - Display help messages [N].

*INCLUDE - Include specific object types. For example, INCLUDE=TABLE_DATA.

*JOB_NAME -  Name of import job to create.

*LOGFILE -  Log file name [import.log].

*NETWORK_LINK - Name of remote database links to the source system.

*NOLOGFILE - Do not write log file [N].

*PARALLEL - Change the number of active workers for the current job.

*PARFILE - Specify parameter file.

*PARTITION_OPTIONS - Specify how partitions should be transformed. Valid keywords are: DEPARTITION, MERGE and [NONE].

*QUERY -  Predicate clause used to import a subset of a table. For example, QUERY=employees:"WHERE department_id > 10".

*REMAP_DATA -  Specify a data conversion function. For example, REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.

*REMAP_DATAFILE - Redefine datafile references in all DDL statements.

*REMAP_SCHEMA - Objects from one schema are loaded into another schema.

*REMAP_TABLE - Table names are remapped to another table. For example, REMAP_TABLE=HR.EMPLOYEES:EMPS.

*REMAP_TABLESPACE - Tablespace object is remapped to another tablespace.

*REUSE_DATAFILES - Tablespace will be initialized if it already exists [N].

*SCHEMAS - List of schemas to import.

*SKIP_UNUSABLE_INDEXES - Skip indexes that were set to the Index Unusable state.

*SOURCE_EDITION - Edition to be used for extracting metadata.

*SQLFILE - Write all the SQL DDL to a specified file.

*STATUS - Frequency (secs) job status is to be monitored where the default [0] will show new status when available.

*STREAMS_CONFIGURATION - Enable the loading of Streams metadata

*TABLE_EXISTS_ACTION - Action to take if imported object already exists. Valid keywords are   APPEND, REPLACE, [SKIP] and TRUNCATE.

*TABLES - Identifies a list of tables to import. For example, TABLES=HR.EMPLOYEES,SH.SALES:SALES_1995.

*TABLESPACES - Identifies a list of tablespaces to import.

*TARGET_EDITION - Edition to be used for loading metadata.

*TRANSFORM - Metadata transform to apply to applicable objects. Valid keywords are OID, PCTSPACE, SEGMENT_ATTRIBUTES and STORAGE.

*TRANSPORTABLE - Options for choosing transportable data movement. Valid keywords are: ALWAYS and [NEVER]. Only valid in NETWORK_LINK mode import operations.

*TRANSPORT_DATAFILES - List of data files to be imported by transportable mode.

*TRANSPORT_FULL_CHECK - Verify storage segments of all tables [N].

*TRANSPORT_TABLESPACES - List of tablespaces from which metadata will be loaded. Only valid in NETWORK_LINK mode import operations.

*VERSION - A version of objects to import. Valid keywords are: [COMPATIBLE], LATEST or any valid database version. Only valid for NETWORK_LINK and SQLFILE.
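
Long impdp command lines can be collected into a parameter file and passed with PARFILE. A minimal sketch (the file name and parameter values are illustrative, reusing the RAM directory and the scott dump from this article):

```shell
# Write the import parameters, one per line, to a parameter file
cat > scott_imp.par <<'EOF'
directory=ram
dumpfile=scott.dmp
logfile=scott_imp.log
remap_schema=scott:varon
table_exists_action=skip
EOF

# Then run:  impdp system/oracle parfile=scott_imp.par
cat scott_imp.par
```

A parameter file also avoids shell quoting problems with parameters like QUERY and EXCLUDE, whose values contain quotes.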

The following commands are valid while in interactive mode.

Note: abbreviations are allowed.

*CONTINUE_CLIENT - Return to logging mode. The job will be restarted if idle.

*EXIT_CLIENT - Quit client session and leave job running.

*HELP - Summarize interactive commands.

*KILL_JOB - Detach and delete a job.

*PARALLEL - Change the number of active workers for the current job.

*START_JOB - Start or resume the current job. Valid keywords are SKIP_CURRENT.

*STATUS - Frequency (secs) job status is to be monitored where the default [0] will show new status when available.

*STOP_JOB - An orderly shutdown of job execution and exits the client. Valid keywords are IMMEDIATE.
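
For example, a session can reattach to a running import job and use these commands. A sketch (the job name SYS_IMPORT_FULL_01 is illustrative; query DBA_DATAPUMP_JOBS for the actual name):

```
[oracle@goldengatesource ~]$ impdp system/oracle attach=SYS_IMPORT_FULL_01

Import> STATUS
Import> PARALLEL=4
Import> STOP_JOB=IMMEDIATE
```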

*Full import - This option imports everything in the dump file into the target database.

[oracle@goldengatesource ~]$ impdp system/oracle full=y directory=ram dumpfile=full_dp.dmp logfile=Full_DP.log

Remap_schema - This option in the Data Pump import utility reassigns the ownership of exported objects to another user account. That is, objects exported from an existing user's schema are imported into (and recreated under) a different user's schema.

[oracle@goldengatesource ~]$ impdp system/oracle  directory=ram remap_schema=scott:Varon dumpfile=scott.dmp logfile=scott.log
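
After the import, one way to confirm that the objects landed under the new owner is to count them by type (assuming the target user is VARON, as in the command above):

```sql
-- Objects now owned by the remapped schema
SELECT object_type, COUNT(*)
FROM   dba_objects
WHERE  owner = 'VARON'
GROUP  BY object_type;
```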

Table_exists_action - This import parameter controls what happens when a table being imported already exists. Its options are [SKIP], APPEND, TRUNCATE and REPLACE.

*SKIP = leaves the existing table untouched; it is similar to the IGNORE=Y option in the conventional import utility.

*APPEND = loads the rows from the dump file, adding extra rows to the table while leaving the existing rows unchanged.

*TRUNCATE = truncates the existing rows and inserts the rows from the dump file.

*REPLACE = drops the current table and recreates it from the dump file.

Note: the SKIP and REPLACE options are not valid if you set CONTENT=DATA_ONLY.

I) Table_exists_action=skip

SQL> select * from city;


CITY                 COUNTRY_ID  LAST_UPDATE
-------------------- ----------  -------------------
A Coruña (La Coruña)         87  2006-02-15 04:45:25
Abhal                        82  2006-02-15 04:45:25
Abu Dhabi                   101  2006-02-15 04:45:25
Acuña                        60  2006-02-15 04:45:25
Adana                        97  2006-02-15 04:45:25

S-1: Take an export of that table:

[oracle@goldengatesource ~]$ expdp scott/tiger directory=ram dumpfile=city.dmp tables=city logfile=city.log

S-2: Try the skip option, table_exists_action=skip:

[oracle@goldengatesource ~]$ impdp scott/tiger directory=ram dumpfile=city.dmp logfile=imp0.log tables=city table_exists_action=skip

II) Table_exists_action=append. I have deleted the existing rows and inserted 4 new rows into the city table, so the dump and the table now differ; I am going to import the dump with the APPEND option.

SQL>delete from  city;

5 rows deleted.

SQL>insert into city (select * from abc);

4 rows created.

SQL>commit;

SQL>select * from city;

CITY                 COUNTRY_ID  LAST_UPDATE
-------------------- ----------  -------------------
A Coruña (La Coruña)         87  2006-02-15 04:45:25
ABHa                         82  2006-02-15 04:45:25
AFRICA                      107  2006-02-15 04:45:25
BUMA                         62  2006-02-15 04:45:25

Let us import the table:

[oracle@goldengatesource ~]$ impdp scott/tiger directory=ram dumpfile=city.dmp logfile=imp1.log tables=city table_exists_action=append

The rows from the dump file are appended to the table:

SQL>select * from city;

CITY                 COUNTRY_ID  LAST_UPDATE
-------------------- ----------  -------------------
A Coruña (La Coruña)         87  2006-02-15 04:45:25
ABHa                         82  2006-02-15 04:45:25
AFRICA                      107  2006-02-15 04:45:25
BUMA                         62  2006-02-15 04:45:25
Abhal                        82  2006-02-15 04:45:25
Abu Dhabi                   101  2006-02-15 04:45:25
Acuña                        60  2006-02-15 04:45:25

III) Table_exists_action=truncate: truncates the existing rows and inserts the rows from the dump file.

[oracle@goldengatesource ~]$ impdp scott/tiger directory=ram dumpfile=city.dmp logfile=imp2.log tables=city table_exists_action=truncate

IV) Table_exists_action=replace: drops the current table in the database and recreates it from the dump file.

[oracle@goldengatesource ~]$ impdp scott/tiger directory=ram dumpfile=city.dmp logfile=imp3.log tables=city table_exists_action=replace

To verify the table's creation time:

SQL>select OBJECT_NAME, to_char(LAST_DDL_TIME,'dd-mm-yyyy hh:mi:ss') created from dba_objects where OBJECT_NAME='CITY' and owner='SCOTT';
 

OBJECT_NAME        CREATED
------------------ -------------------
CITY               28-04-2013 11:22:13

*Remap_table - Allows a table to be renamed during an import operation.

 
[oracle@goldengatesource ~]$ impdp system/oracle directory=ram remap_table=scott.emp:hari dumpfile=hari.dmp logfile=hari.log tables=scott.emp

*Remap_tablespace - This option in the import utility remaps objects from one tablespace into another.

[oracle@goldengatesource ~]$ impdp system/oracle directory=ram remap_tablespace=users:data logfile=users01.log

Multiple REMAP_TABLESPACE parameters can be specified, but no two can have the same source tablespace. The target schema must have sufficient quota in the target tablespace.
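
One way to check the quota before the import, and grant it if needed (the user and tablespace names follow the example above and are illustrative):

```sql
-- Check the importing schema's quota on the target tablespace
SELECT tablespace_name, bytes, max_bytes
FROM   dba_ts_quotas
WHERE  username = 'SCOTT';

-- Grant quota if it is missing or insufficient
ALTER USER scott QUOTA UNLIMITED ON data;
```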