Apr 17, 2011

Transportable tablespace - Fastest way to move a large volume of data

In data warehouse environments, we often move data from one system to another. There are a few ways to transport data: flat files (the most common and simplest), distributed queries, or transportable tablespaces. Today, I'd like to talk about how to use a transportable tablespace.

Using a transportable tablespace
- This is the fastest way to move large volumes of data between two Oracle databases.
- Limitations: the source and target databases must use compatible character sets. Before Oracle 10g they also had to run on the same platform; from 10g on, RMAN can convert datafiles between platforms with different endian formats.
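Before deciding on this approach, we can compare the character sets and platforms of the two databases. A quick check, using standard dictionary views that exist in Oracle 10g and later (run on each database):

 SQL>  SELECT value FROM nls_database_parameters
           WHERE parameter = 'NLS_CHARACTERSET';

 SQL>  SELECT tp.platform_name, tp.endian_format
           FROM v$database d
           JOIN v$transportable_platform tp
             ON tp.platform_name = d.platform_name;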

< Example >
Let's say we have a table called 'sales' in database1 (SH schema), and we want to move all the sales records since the year 2000 to database2 (SH schema). The number of records we have to move is very large, so we decided to use a transportable tablespace.


> In the source system

1. Create a tablespace that will hold the table we will create in the next step.
 SQL>    CREATE TABLESPACE temp_sales_tbls
               DATAFILE '/tmp/temp_sales.dbf'
               SIZE 30M REUSE AUTOEXTEND ON;

2. Create a table in the new tablespace; it will hold the sales records that we need to transport. (We use TO_DATE so the date comparison does not depend on the session's NLS settings.)
 SQL>    CREATE TABLE temp_sales_table NOLOGGING
               TABLESPACE temp_sales_tbls
               AS
               SELECT * FROM sh.sales
               WHERE time_id > TO_DATE('31-DEC-1999', 'DD-MON-YYYY');

3. Alter the tablespace to read-only to prevent any changes to it.
 SQL>   ALTER TABLESPACE temp_sales_tbls READ ONLY;
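Before exporting, it is also a good idea to confirm that the tablespace is self-contained (nothing in it references objects outside of it). The standard DBMS_TTS package does this check; run it as a DBA or a user with EXECUTE_CATALOG_ROLE:

 SQL>   EXEC DBMS_TTS.TRANSPORT_SET_CHECK('temp_sales_tbls', TRUE);
 SQL>   SELECT * FROM transport_set_violations;

If the second query returns no rows, the tablespace can be transported.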

4. Create a directory object that points to the location where the dump file for the Data Pump export will be written.
  SQL >   CREATE DIRECTORY dump_dir AS '/temp/dump/';

5. Run the Data Pump export utility, expdp, to create a dump file that contains the metadata for the tablespace temp_sales_tbls.
  $>   expdp sh/sh DIRECTORY=dump_dir DUMPFILE=sales.dmp
         TRANSPORT_TABLESPACES=temp_sales_tbls
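If you are not sure where the datafile of the tablespace lives, look it up before copying it in the next step (DBA_DATA_FILES is a standard dictionary view):

  SQL>   SELECT file_name FROM dba_data_files
             WHERE tablespace_name = 'TEMP_SALES_TBLS';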


> In the target system

6. Now we copy the dump file (sales.dmp) and the datafile (temp_sales.dbf) to the target system. (A copy command is sketched after this list.)
  - copy the dump file to the location a new directory object will point to
     (we are going to create this new directory in step 7)
     e.g.)  /temp/dump_import_loc/
  - copy the datafile to where the datafiles of database2 are located
      e.g.)  /home/oracle/oradata/
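A minimal sketch of the copy itself, assuming the target host is reachable as db2 and the oracle O/S account can write to the destination directories (the host name is made up for this example):

   $>   scp /temp/dump/sales.dmp  oracle@db2:/temp/dump_import_loc/
   $>   scp /tmp/temp_sales.dbf   oracle@db2:/home/oracle/oradata/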

7. Create a directory object pointing to the location of the copied dump file.
  SQL> CREATE DIRECTORY dump_import_dir AS '/temp/dump_import_loc/';

8. Run the Data Pump import utility, impdp, to plug the datafile (temp_sales.dbf) into database2.
$> impdp sh/sh DIRECTORY=dump_import_dir
      DUMPFILE=sales.dmp LOGFILE=imp.log
      TRANSPORT_DATAFILES='/home/oracle/oradata/temp_sales.dbf'
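After the import, the tablespace should show up in database2, plugged in as read-only:

 SQL>  SELECT tablespace_name, status FROM dba_tablespaces
           WHERE tablespace_name = 'TEMP_SALES_TBLS';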

9. We can verify our work so far.
 SQL>  CONNECT sh/sh
 SQL>  SELECT COUNT(*) FROM temp_sales_table;
 SQL>  SELECT * FROM temp_sales_table;

10. Finally, don't forget that we already have a 'sales' table in the target database. Our original goal is to move the sales records since the year 2000 from the source database to the target database, so we need to insert the imported sales records into the 'sales' table. In the target database,
  SQL>  CONNECT sh/sh
  SQL>  INSERT /*+ APPEND */ INTO sales
              SELECT * FROM temp_sales_table;
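Once the rows are in 'sales', the transported tablespace is no longer needed. A possible cleanup, assuming nothing else was created in it (the source database can drop its copy the same way):

  SQL>  DROP TABLESPACE temp_sales_tbls
             INCLUDING CONTENTS AND DATAFILES;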

--------------------------------------
*** This is a simple example. There may be more steps involved in a real-life situation.
