New features are broken out into separate articles, but the help section at the bottom is up to date with the latest versions. For the examples to work, we must first unlock the SCOTT account and create a directory object it can access; a minimal setup sketch is shown below. The directory object is only a pointer to a physical directory; creating it does not actually create the physical directory on the file system of the database server.
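The following is a sketch of that setup, run from SQL*Plus as a privileged user. It assumes the classic SCOTT/TIGER demo account and a path of /u01/app/oracle/oradata that already exists on the database server; the directory object name TEST_DIR is an arbitrary choice.

CONN / AS SYSDBA
ALTER USER scott IDENTIFIED BY tiger ACCOUNT UNLOCK;

-- Create the directory object and let SCOTT read and write through it.
CREATE OR REPLACE DIRECTORY test_dir AS '/u01/app/oracle/oradata/';
GRANT READ, WRITE ON DIRECTORY test_dir TO scott;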
Data Pump is a server-based technology, so it typically deals with directory objects pointing to physical directories on the database server. It does not write to the local file system on your client PC. The following are examples of the table, schema, and full database export and import syntax; the FULL parameter indicates that a complete database export is required. Sketches of typical commands are shown below.
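These commands are sketches only; the connect strings, dump file names, and the TEST_DIR directory object are assumptions carried over from the setup above.

# Table export and import.
expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log
impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log

# Schema export and import.
expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=impdpSCOTT.log

# Full database export and import (requires appropriate privileges).
expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log
impdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=impdpDB10G.log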
The INCLUDE and EXCLUDE parameters can be used to limit the export or import to specific objects. The two parameters are mutually exclusive, so use the one that requires the fewest entries to give you the result you require. The basic syntax for both parameters is the same. If the parameter is used from the command line then, depending on your OS, the special characters in the clause may need to be escaped; because of this, it is easier to use a parameter file. The sketch below shows both forms.
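A sketch, again assuming the SCOTT schema and the TEST_DIR directory object; the first form suits a parameter file, the second shows one possible command-line escaping (the exact escaping varies by shell).

# In a parameter file.
include=TABLE:"IN ('EMP', 'DEPT')"

# Escaped on a Unix-style command line.
expdp scott/tiger@db10g schemas=SCOTT include=TABLE:\"IN \(\'EMP\', \'DEPT\'\)\" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log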
The way you handle quotes on the command line will vary depending on what you are trying to achieve; the sketch below includes examples that work for single tables and multiple tables directly from the command line. The NETWORK_LINK parameter identifies a database link to be used as the source for a network export or import, and the database link created below will be used to demonstrate it.
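Both sketches below are assumptions rather than the article's original listings. The first shows a single table and then several tables specified directly on a Unix-style command line.

expdp scott/tiger@db10g tables=EMP directory=TEST_DIR dumpfile=EMP.dmp logfile=expdpEMP.log
expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log

The second creates the database link from a local TEST user to the remote SCOTT schema; the user names, passwords, and the DEV TNS alias are placeholders.

-- Run as the local TEST user (which needs the CREATE DATABASE LINK privilege).
CREATE DATABASE LINK remote_scott
  CONNECT TO scott IDENTIFIED BY tiger USING 'DEV';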
For a network export, the objects are exported from the source server in the normal manner, but written to a directory object on the local server, rather than one on the source server. For a network import, the difference is that the objects are imported directly from the source into the local server without being written to a dump file at all, as in the sketch below.
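A hedged sketch of both operations, reusing the remote_scott link and TEST_DIR object assumed above; the REMAP_SCHEMA clause on the import is just one common option.

# Network export: data is read across the link and the dump file is written locally.
expdp test/test@db10g tables=SCOTT.EMP network_link=REMOTE_SCOTT directory=TEST_DIR dumpfile=EMP.dmp logfile=expdpEMP.log

# Network import: no dump file at all; rows are pulled straight across the link.
impdp test/test@db10g tables=SCOTT.EMP network_link=REMOTE_SCOTT directory=TEST_DIR logfile=impdpEMP.log remap_schema=SCOTT:TEST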
By default, expdp exports are only consistent on a per-table basis; if you want all tables in the export to be consistent to the same point in time, you need to use the FLASHBACK_SCN or FLASHBACK_TIME parameter. Not surprisingly, you can also make exports consistent to an earlier point in time by specifying an earlier time or SCN, provided you have enough UNDO space to keep a read-consistent view of the data during the export operation. A sketch follows.
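A sketch of the two parameters; the timestamp value, its format mask, and the SCN are arbitrary examples, and on the command line the quotes and parentheses would need escaping as discussed above.

# Consistent as of "now".
expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp flashback_time=systimestamp

# In a parameter file: consistent as of a specific earlier time.
flashback_time="to_timestamp('09-05-2011 09:00:00', 'DD-MM-YYYY HH24:MI:SS')"

# Or pin the export to a specific SCN.
flashback_scn=948554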
Unlike the original exp and imp utilities, all Data Pump ".dmp" and ".log" files are created on the Oracle server, not the client machine. When the PARALLEL parameter is used, the "%U" wildcard in the DUMPFILE parameter allows multiple dump files to be created or read. The same wildcard can be used during the import to allow you to reference multiple files, as in the sketch below.
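A sketch assuming four parallel workers; the %U wildcard expands to a two-digit sequence number, so the export below can produce SCOTT_01.dmp through SCOTT_04.dmp.

expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR parallel=4 dumpfile=SCOTT_%U.dmp logfile=expdpSCOTT.log
impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR parallel=4 dumpfile=SCOTT_%U.dmp logfile=impdpSCOTT.log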
You can see more examples of this here. In addition to the command-line utilities, Data Pump exposes a PL/SQL API, DBMS_DATAPUMP, which can be used to perform a schema export, or a schema import with a schema remap operation, as in the sketch below.
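A sketch of a schema export through DBMS_DATAPUMP, assuming the SCOTT schema and the TEST_DIR directory object used earlier; error handling is omitted for brevity.

DECLARE
  l_dp_handle  NUMBER;
BEGIN
  -- Open a schema-mode export job.
  l_dp_handle := DBMS_DATAPUMP.open(
    operation => 'EXPORT',
    job_mode  => 'SCHEMA',
    job_name  => 'SCOTT_EXPORT');

  -- Dump file and log file, both written via the TEST_DIR directory object.
  DBMS_DATAPUMP.add_file(
    handle    => l_dp_handle,
    filename  => 'SCOTT.dmp',
    directory => 'TEST_DIR');

  DBMS_DATAPUMP.add_file(
    handle    => l_dp_handle,
    filename  => 'expdpSCOTT.log',
    directory => 'TEST_DIR',
    filetype  => DBMS_DATAPUMP.ku$_file_type_log_file);

  -- Restrict the job to the SCOTT schema.
  DBMS_DATAPUMP.metadata_filter(
    handle => l_dp_handle,
    name   => 'SCHEMA_EXPR',
    value  => '= ''SCOTT''');

  DBMS_DATAPUMP.start_job(l_dp_handle);
  DBMS_DATAPUMP.detach(l_dp_handle);
END;
/

A corresponding schema import with a remap would open the job with operation => 'IMPORT' and add a call such as DBMS_DATAPUMP.metadata_remap(l_dp_handle, 'REMAP_SCHEMA', 'SCOTT', 'TEST') before starting it.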
Data Pump also provides the following capabilities:
- The ability, in an import job, to change the name of the source datafile to a different name in all DDL statements where the source datafile is referenced.
- Enhanced support for remapping tablespaces during an import operation.
- Support for filtering the metadata that is exported and imported, based upon objects and object types.
- Support for an interactive-command mode that allows monitoring of and interaction with ongoing jobs.
- The ability to estimate how much space an export job would consume, without actually performing the export.
- The ability to specify the version of database objects to be moved.
Most Data Pump export and import operations occur on the Oracle database server. This contrasts with original export and import, which were primarily client-based.
The remainder of this chapter discusses Data Pump technology as it is implemented in the Data Pump Export and Import utilities. To make full use of Data Pump technology, you must be a privileged user; privileged users have the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles, and nonprivileged users have neither. Privileged users can, for example, export and import database objects owned by other schemas, and export and import nonschema-based objects such as tablespace and schema definitions, system privilege grants, resource plans, and so forth.
Data Pump supports two access methods to load and unload table row data: direct path and external tables. Because both methods support the same external data representation, data that is unloaded with one method can be loaded using the other method. Data Pump automatically chooses the fastest method appropriate for each table.
The Oracle database has provided direct path unload capability for export operations since Oracle release 7. Data Pump technology enhances direct path technology in several ways, including improved performance through the elimination of unnecessary conversions; this is possible because the direct path internal stream format is used as the format stored in the Data Pump dump files.
The default method that Data Pump uses for loading and unloading data is direct path, when the structure of a table allows it. Note that if the table has any columns of datatype LONG, then direct path must be used. The following sections describe situations in which direct path cannot be used for loading and unloading.
If any of the following conditions exist for a table, Data Pump uses external tables rather than direct path to load the data for that table:
- A global index on multipartition tables exists during a single-partition load. This includes object tables that are partitioned.
- The table into which data is being imported is a pre-existing table and certain further conditions exist, such as an active trigger or an enabled referential integrity constraint.
Similarly, Data Pump uses the external table method, rather than direct path, to unload data when certain conditions exist for a table, for example when the table contains one or more columns of type BFILE or opaque, or an object type containing opaque columns. The Oracle database has provided an external tables capability since Oracle9i that allows reading of data sources external to the database.
As of Oracle Database 10g, the external tables feature also supports writing database data to destinations external to the database. The format of the files is the same format used with the direct path method. This allows for high-speed loading and unloading of database tables.
Data Pump uses external tables as the data access mechanism in the following situations:
- Loading and unloading very large tables and partitions in situations where parallel SQL can be used to advantage.
- Loading tables with global or domain indexes defined on them, including partitioned object tables.
When you perform an export over a database link, the data from the source database instance is written to dump files on the connected database instance. In addition, the source database can be a read-only database.
When you perform an import over a database link, the import source is a database, not a dump file set, and the data is imported to the connected database instance. Because the link can identify a remotely networked database, the terms database link and network link are used interchangeably. Data Pump jobs use a master table, a master process, and worker processes to perform the work and keep track of progress.
The master process controls the entire job, including communicating with the clients, creating and controlling a pool of worker processes, and performing logging operations. While the data and metadata are being transferred, a master table is used to track the progress within a job.
The master table is implemented as a user table within the database. The specific function of the master table for export and import jobs is as follows: for export jobs, the master table records the location of database objects within a dump file set. Export builds and maintains the master table for the duration of the job. At the end of an export job, the content of the master table is written to a file in the dump file set.
For import jobs, the master table is loaded from the dump file set and is used to control the sequence of operations for locating objects that need to be imported into the target database.
The master table is created in the schema of the current user performing the export or import operation. Therefore, that user must have sufficient tablespace quota for its creation. The name of the master table is the same as the name of the job that created it. Therefore, you cannot explicitly give a Data Pump job the same name as a preexisting table or view.
If a job terminates unexpectedly, the master table is retained. You can delete it if you do not intend to restart the job. If a job stops before it starts running (that is, it is in the Defining state), the master table is dropped. Within the master table, specific objects are assigned attributes such as name or owning schema.
The class of an object is called its object type. Metadata filtering during export and import can be based upon the name of the object or the name of the schema that owns the object.
You can also specify data-specific filters to restrict the rows that are exported and imported. When you are moving data from one database to another, it is often useful to perform transformations on the metadata, for example remapping storage between tablespaces or redefining the owner of a particular set of objects; a sketch is shown below.
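A hedged example of such transformations on import, using the REMAP_SCHEMA and REMAP_TABLESPACE parameters; the schema, tablespace, and file names are assumptions.

impdp system/password@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=impdpSCOTT.log remap_schema=SCOTT:TEST remap_tablespace=USERS:TEST_DATA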
The degree of parallelism for a job is controlled with the PARALLEL parameter. For example, to limit the effect of a job on a production system, the database administrator (DBA) might wish to restrict the parallelism. The degree of parallelism can be reset at any time during a job: PARALLEL could be set to 2 during production hours to restrict a particular job to only two degrees of parallelism, and during nonproduction hours it could be reset to 8, as in the sketch below.
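A sketch of adjusting parallelism on a running job; the job name shown is the typical system-generated default for a full export started by SYSTEM, but the actual name may differ and can be set explicitly with the JOB_NAME parameter.

# Start the job with modest parallelism during production hours.
expdp system/password full=Y directory=TEST_DIR parallel=2 dumpfile=FULL_%U.dmp logfile=expdpFULL.log

# Later, attach to the running job and raise the degree of parallelism.
expdp system/password attach=SYS_EXPORT_FULL_01
Export> parallel=8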
The parallelism setting is enforced by the master process, which allocates work to be executed to worker processes that perform the data and metadata processing within an operation. These worker processes operate in parallel. In general, the degree of parallelism should be set to no more than twice the number of CPUs on an instance. The worker processes are the ones that actually unload and load metadata and table data in parallel. The number of active worker processes can be reset throughout the life of a job. When a worker process is assigned the task of loading or unloading a very large table or partition, it may choose to use the external tables access method to make maximum use of parallel execution.
In such a case, the worker process becomes a parallel execution coordinator. The Data Pump Export and Import utilities can be attached to a job in either interactive-command mode or logging mode. In logging mode, real-time detailed status about the job is automatically displayed during job execution. The information displayed can include the job and parameter descriptions, an estimate of the amount of data to be exported, a description of the current operation or item being processed, files used during the job, any errors encountered, and the final job state (Stopped or Completed).
Job status can be displayed on request in interactive-command mode. The information displayed can include the job description and state, a description of the current operation or item being processed, files being written, and a cumulative status.
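As a sketch of interactive-command mode, a second client can attach to a running job by name and query or control it; the job name below is an assumed default, and the annotations are comments rather than literal input.

expdp system/password attach=SYS_EXPORT_SCHEMA_01

Export> status         # display the job state, the current item being processed, and cumulative status
Export> stop_job       # stop the job; the master table is kept so the job can be restarted
Export> start_job      # after reattaching to a stopped job, resume it
Export> exit_client    # leave interactive mode; the job keeps running on the server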
A log file can also be optionally written during the execution of a job. The log file summarizes the progress of the job, lists any errors that were encountered along the way, and records the completion status of the job. In addition, operations that transfer table data maintain an entry in the V$SESSION_LONGOPS dynamic performance view; the entry contains the estimated transfer size and is periodically updated to reflect the actual amount of data transferred.
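A sketch of monitoring that view; the OPNAME filter assumes the default job-name pattern for export jobs and would need adjusting for import jobs or explicitly named jobs.

SELECT sid, serial#, opname, sofar, totalwork, units
FROM   v$session_longops
WHERE  opname LIKE 'SYS_EXPORT%'
AND    sofar <> totalwork;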
An understanding of how Data Pump allocates and handles dump files, log files, and SQL files will help you to use Export and Import to their fullest advantage. For export operations, you can specify dump files at the time the job is defined, as well as at a later time during the operation.
Log files and SQL files will overwrite previously existing files. Dump files will never overwrite previously existing files. Instead, an error will be generated. Because Data Pump is server-based, rather than client-based, dump files, log files, and SQL files are accessed relative to server-based directory paths. Data Pump requires you to specify directory paths as directory objects.
A directory object maps a name to a directory path on the file system. The reason that a directory object is required is to ensure data security and integrity.
For example:
- If you were allowed to specify a directory path location for an input file, you might be able to read data that the server has access to, but to which you should not have access.
- If you were allowed to specify a directory path location for an output file, the server might overwrite a file that you might not normally have privileges to delete.
A default directory object, DATA_PUMP_DIR, is created at database creation or whenever the database dictionary is upgraded; by default, it is available only to privileged users. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Data Pump Export and Import use the following order of precedence to determine a file's location: if a directory object is specified as part of the file specification, then the location specified by that directory object is used.
The directory object must be separated from the filename by a colon. Otherwise, the location named by the DIRECTORY parameter is used, and if that is not supplied either, the value of the client-based DATA_PUMP_DIR environment variable is consulted. This environment variable is defined using operating system commands on the client system where the Data Pump Export and Import utilities are run, and the value assigned to it must be the name of a server-based directory object, which must first be created on the server system by a DBA. A sketch is shown below.
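A sketch of the first and third cases; the directory object names are assumptions and must already exist on the server, and the environment variable syntax shown is for a Unix-style client.

# Directory object embedded in the file specification overrides the DIRECTORY parameter.
expdp hr/hr schemas=HR directory=dpump_dir1 dumpfile=dpump_dir2:hr.dmp logfile=exphr.log

# Client-side environment variable naming a server-based directory object.
export DATA_PUMP_DIR=DPUMP_DIR1
expdp hr/hr schemas=HR dumpfile=hr.dmp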
For example, the first SQL statement below creates a directory object on the server system; a dump file such as employees.dmp would then be written to the operating system path that the directory object points to. (The default DATA_PUMP_DIR directory object, by contrast, is created automatically at database creation or when the database dictionary is upgraded.) If the dump file is to be written to Automatic Storage Management (ASM) storage, you would create a directory object for the ASM dump file, and a separate directory object, which points to an operating system directory path, should be used for the log file, as shown below.
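A sketch; the directory object names, file system paths, and ASM disk group name are assumptions.

-- Directory object on a normal file system.
CREATE OR REPLACE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

-- Directory object pointing into an ASM disk group, for the dump file only.
CREATE OR REPLACE DIRECTORY dpump_asm_dir AS '+DATAFILES/';

-- Separate directory object on the file system for the log file.
CREATE OR REPLACE DIRECTORY dpump_log_dir AS '/homedir/user1/';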
To enable user hr to have access to these directory objects, you would assign the necessary privileges, for example as shown below.
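A sketch, assuming the directory object names created above:

GRANT READ, WRITE ON DIRECTORY dpump_asm_dir TO hr;
GRANT READ, WRITE ON DIRECTORY dpump_log_dir TO hr;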