Tuesday, May 20, 2008

Oracle Flashback Drop

Summary

Oracle 10g introduced the Flashback Drop feature, which lets you bring back a dropped table without having to execute any type of recovery operation. The feature is built on the recycle bin: a dropped table is kept in the recycle bin and can be restored from there. This is very useful for those times when a developer drops a table by mistake. Before Flashback Drop you either had to have an export of the table, or recover the database into an auxiliary instance and then move the table from the auxiliary instance back into the database where it was dropped. You can also drop a table and bypass the recycle bin entirely by using the PURGE option of the DROP TABLE command.


Some important things to note about the flashback drop feature

1. Think of each schema as getting its own recycle bin.
2. The recycle bin is a logical collection of previously DROPPED objects.
3. In Oracle 10g the default action of a DROP TABLE command is to RENAME the table, essentially placing the table in the RECYCLE BIN.
4. Objects in the RECYCLE BIN are not purged on any schedule; they must be explicitly purged by a user or DBA action, although Oracle can reclaim their space under tablespace space pressure.
5. When a table is dropped, the segments that table consumed in the tablespace are still present, so dropping a table does not free space (see the query after this list).
6. If an object is dropped and recreated multiple times, all dropped versions are kept in the recycle bin, subject to space.
7. Where multiple versions are present, it is best to reference a dropped table by the recycle-bin name shown in the RECYCLEBIN view (the RECYCLEBIN NAME / OBJECT_NAME value).
8. Any reference to the ORIGINAL_NAME is assumed to mean the most recently dropped version of the object.
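
Because dropped tables keep their segments, the space they still hold is easy to see from the recycle-bin view itself. A minimal sketch (the SPACE column of USER_RECYCLEBIN reports the blocks still held by each dropped object):

select original_name, object_name, ts_name, space
from   user_recyclebin
order  by droptime ;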


Showing the contents of the RECYCLEBIN

Show the current schema's recyclebin
show recyclebin
select * from recyclebin ;

Show the DBA recyclebin, which shows all schemas' recyclebins
select * from dba_recyclebin ;


Purging the RECYCLEBIN

Purge current schema RECYCLEBIN
PURGE RECYCLEBIN ;

Purge All RECYCLEBINs
PURGE DBA_RECYCLEBIN ;

Purge Objects from RECYCLEBIN
PURGE TABLE tablename ;
PURGE INDEX indexname ;

Purge Objects from RECYCLEBIN for a tablespace
PURGE TABLESPACE ts_name ;
PURGE TABLESPACE ts_name USER username ;


Dropping a table and bypassing the RECYCLEBIN

DROP TABLE tablename PURGE ;


Recovering a Table from the RECYCLEBIN

Recover last dropped version of the table
FLASHBACK TABLE tablename TO BEFORE DROP;

Recover the table, but give it a new name
FLASHBACK TABLE tablename TO BEFORE DROP RENAME TO newtablename ;

Recover the table from a prior dropped version of the table
FLASHBACK TABLE "recyclebin name" TO BEFORE DROP ;
FLASHBACK TABLE "recyclebin name" TO BEFORE DROP RENAME TO newtablename ;


Examples Using Flashback Drop

1. Create a table for our testing
SQL> CREATE TABLE flashback_drop_test ( id NUMBER(10) );

Table created.


2. Insert a record so that we have data
SQL> INSERT INTO flashback_drop_test (id) VALUES (1);

1 row created.

SQL> COMMIT;

Commit complete.


3. Drop the table
SQL> DROP TABLE flashback_drop_test;

Table dropped.


4. Show the RECYCLEBIN
SQL> SHOW RECYCLEBIN
ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
------------------- ------------------------------ ------------ -------------------
FLASHBACK_DROP_TEST BIN$qLfTuNLERa6PzKkR4l1luA==$0 TABLE 2008-05-20:16:03:36


5. Show that we can no longer select from the table.
SQL> select * from flashback_drop_test;
select * from flashback_drop_test
*
ERROR at line 1:
ORA-00942: table or view does not exist


6. Bring back the last dropped version of the table
SQL> FLASHBACK TABLE flashback_drop_test TO BEFORE DROP;

Flashback complete.


7. Show that the table is back and we can select from it
SQL> select * from flashback_drop_test;

ID
----------
1


8. Show that the RECYCLEBIN is now empty
SQL> show recyclebin
SQL>


9. Drop the table again using the purge option.
SQL> drop table flashback_drop_test PURGE ;

Table dropped.

10. Show that the recyclebin is empty as we used the PURGE option.
SQL> show recyclebin
SQL>


11. Recreate the table for our further testing.
SQL> CREATE TABLE flashback_drop_test ( id NUMBER(10) );

Table created.


12. Drop the table again, this time without the PURGE option, so it goes to the RECYCLEBIN.
SQL> drop table flashback_drop_test ;

Table dropped.


13. Recreate the table so we will have multiple versions of the table.
SQL> CREATE TABLE flashback_drop_test ( id NUMBER(10) );

Table created.


14. Insert a record so that we have data; this way we will have 2 versions of the table, one with a record and one empty.
SQL> INSERT INTO flashback_drop_test (id) VALUES (1);

1 row created.

SQL> COMMIT;


15. Show that we have the table in the RECYCLEBIN again.
SQL> show recyclebin
ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
------------------- ------------------------------ ------------ -------------------
FLASHBACK_DROP_TEST BIN$XzZkNWkNQBiwl9WSVWk49w==$0 TABLE 2008-05-20:16:46:32


16. Show that we can select directly from the dropped table in the RECYCLEBIN using the recycle-bin name, and that this version of the table has no rows.
SQL> select * from "BIN$XzZkNWkNQBiwl9WSVWk49w==$0" ;

no rows selected


17. Show that the current table still exists and has rows
SQL> select * from flashback_drop_test ;

ID
----------
1


18. Drop the table
SQL> drop table flashback_drop_test ;

Table dropped.


19. Show that we now have 2 versions of the table in the recyclebin.
SQL> show recyclebin
ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
------------------- ------------------------------ ------------ -------------------
FLASHBACK_DROP_TEST BIN$3FoAb84NSMmiVnhYco2e5w==$0 TABLE 2008-05-20:16:50:25
FLASHBACK_DROP_TEST BIN$XzZkNWkNQBiwl9WSVWk49w==$0 TABLE 2008-05-20:16:46:32


20. Bring back the latest version of the table but with another table name.
SQL> FLASHBACK TABLE flashback_drop_test TO BEFORE DROP RENAME TO flashback_drop_test_old;

Flashback complete.


21. Show that we can select from the new table name.
SQL> select * from flashback_drop_test_old;

ID
----------
1


22. Show that we still do not see the original table.
SQL> select * from flashback_drop_test;
select * from flashback_drop_test
*
ERROR at line 1:
ORA-00942: table or view does not exist


23. Show that we still have a version of the table in the RECYCLEBIN
SQL> show recyclebin
ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
------------------- ------------------------------ ------------ -------------------
FLASHBACK_DROP_TEST BIN$XzZkNWkNQBiwl9WSVWk49w==$0 TABLE 2008-05-20:16:46:32


24. Create a new version of the table.
SQL> CREATE TABLE flashback_drop_test ( id NUMBER(10) );

Table created.


25. Insert a row so this version has a record.
SQL> INSERT INTO flashback_drop_test (id) VALUES (1);

1 row created.

SQL> COMMIT;


26. Drop the table
SQL> drop table flashback_drop_test ;

Table dropped.


27. Show that the 2 versions are in the RECYCLEBIN
SQL> show recyclebin
ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
------------------- ------------------------------ ------------ -------------------
FLASHBACK_DROP_TEST BIN$MMV1E9cRRZCvUkC8HM3WhQ==$0 TABLE 2008-05-20:16:58:45
FLASHBACK_DROP_TEST BIN$XzZkNWkNQBiwl9WSVWk49w==$0 TABLE 2008-05-20:16:46:32


28. Restore the older version of the table from the RECYCLEBIN; this is the version with no record in it.
SQL> FLASHBACK TABLE "BIN$XzZkNWkNQBiwl9WSVWk49w==$0" TO BEFORE DROP ;

Flashback complete.


29. Show that the restored table is the correct version, as it has no record in it.
SQL> select * from flashback_drop_test ;

no rows selected

Thursday, May 8, 2008

Oracle Transparent Data Encryption

Summary

Oracle provides a way to automatically encrypt the data in a column of a table and to decrypt it automatically on read, allowing an application to function as if the column were not encrypted. Transparent data encryption protects the data in a column through the use of a wallet. By default, transparent data encryption uses the AES algorithm with a 192-bit key for data encryption. The following is a short test showing the use of transparent data encryption.
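
If an algorithm other than the default AES 192-bit key is wanted, it can be named when the column is encrypted. A minimal sketch (this reuses the emp/ssn example from the test below; AES256 is just one of the supported choices):

alter table emp modify (ssn encrypt using 'AES256') ;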

There is overhead to transparent data encryption that cannot be ignored: encryption during inserts and updates, and decryption during selects, consume CPU cycles. While this overhead is fairly low, it is safer to apply encryption selectively, on a column basis, only where it is truly needed. Proper load tests should therefore be performed incrementally as column encryption is rolled out so that the full impact on the system can be measured.

There are three significant things to mention about encryption. The first is that Streams does not support the use of encrypted columns. The other two have to do with the use of indexes. First, you cannot index columns that are encrypted with salt; a column that requires an index because it will be used in searches must therefore be encrypted without salt. By default transparent data encryption uses salt, and to encrypt a column without salt the NO SALT option must be specified. Second, indexes on encrypted columns behave differently in a single respect, but one that can have a large impact. Take table emp for example: add column ssn, encrypt it and build an index on it. We will use 2 query examples:


select ename from emp where ssn = '123-45-6789' ;

This query will use the index whether or not the column is encrypted.

select ename from emp where ssn like '123-45-%' ;

Whether this query uses the index depends on whether the column is encrypted. If the column is encrypted the index is ignored; if the column is not encrypted the index is indeed used. The reason the index is not used is that the index entries now contain encrypted values, and matching a leading substring against an encrypted value is not possible. We will show that the index is indeed not used in the test below.
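
When a column is known up front to need an index, the no-salt choice can also be made at table creation time rather than with a later ALTER TABLE. A minimal sketch, assuming the wallet is already open (the table and column names are only illustrative):

create table emp_tde
( empno number(4),
  ename varchar2(10),
  ssn   varchar2(11) encrypt no salt
) ;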


10g (10.2.0.3 Windows) and 11g (11.1.0.6) Test

1. Set the Wallet location for encryption in the sqlnet.ora file.

10g

ENCRYPTION_WALLET_LOCATION =
(SOURCE=
(METHOD=file)
(METHOD_DATA=
(DIRECTORY=c:\oracle\product\10.2.0\db_1\network\admin\ENCRYPT)))

11g

ENCRYPTION_WALLET_LOCATION =
(SOURCE=
(METHOD=file)
(METHOD_DATA=
(DIRECTORY=c:\oracle\product\10.2.0\db_1\network\admin\ENCRYPT)))


2. Create the wallet; the initial creation of the wallet will automatically open it.
alter system set encryption key authenticated by "xxxxxxxxx" ;

SQL> alter system set encryption key authenticated by "xxxxxxxxx" ;
System altered.


3. Each time you need to use the wallet it must be explicitly opened. The open wallet allows the automatic decryption of data as it is read; otherwise the data in encrypted columns remains encrypted and cannot be viewed. Columns that are not encrypted work as normal even when the wallet is not open. In this case, since the wallet was just created, it is already open.

alter system set encryption wallet open authenticated by "xxxxxxxxx" ;

SQL> alter system set encryption wallet open authenticated by "xxxxxxxxx" ;
alter system set encryption wallet open authenticated by "xxxxxxxxx"
*
ERROR at line 1:
ORA-28354: wallet already open

Note: The wallet can be closed explicitly with the following: alter system set encryption wallet close ;

SQL> alter system set encryption wallet close ;

System altered.

4. Let's open the wallet so we can proceed with the test.

SQL> alter system set encryption wallet open authenticated by "xxxxxxxxx" ;

System altered.


5. To test transparent data encryption we will use the emp table; first add an ssn column.

SQL> alter table emp add ssn varchar2(11) ;

Table altered.

SQL> update emp set ssn = '222-22-2222' ;

51 rows updated.

SQL> commit ;


6. Encrypt the ssn column

SQL> alter table emp modify (ssn encrypt) ;

Table altered.

Note: The column encryption can be turned off with the following:

SQL> alter table emp modify (ssn decrypt) ;

Table altered.


7. Check the emp table structure; we can see the SSN column now shows ENCRYPT.

SQL> desc emp
Name Null? Type
----------- -------- ---------------------------
EMPNO NOT NULL NUMBER(4)
ENAME VARCHAR2(10)
JOB VARCHAR2(9)
MGR NUMBER(4)
HIREDATE DATE
SAL NUMBER(7,2)
COMM NUMBER(7,2)
DEPTNO NUMBER(2)
SSN VARCHAR2(11) ENCRYPT


8. Let's close the wallet and verify it is indeed closed.

SQL> alter system set encryption wallet close;

System altered.

SQL> alter system set encryption wallet open authenticated by "nsfjsnfjksfa" ;
alter system set encryption wallet open authenticated by "nsfjsnfjksfa"
*
ERROR at line 1:
ORA-28353: failed to open wallet

SQL> alter system set encryption wallet close;
alter system set encryption wallet close
*
ERROR at line 1:
ORA-28365: wallet is not open


9. We will run the query as a user called TEMP; the only permissions this user has are SELECT on the emp table and CREATE SESSION.

SQL> select user from dual ;

USER
------------------------------
TEMP


10. If we query the SSN column with the wallet closed, the query fails; 10g restricts access to the encrypted column as expected.

SQL> select ssn from emp ;
select ssn from emp
*
ERROR at line 1:
ORA-28365: wallet is not open


11. Now open the wallet for some more testing.

SQL> alter system set encryption wallet open authenticated by "xxxxxxxxx" ;

System altered.


12. See if an index can be built; as the summary indicates, an index cannot be built on a column encrypted with salt.

SQL> create index emp_ssn on emp (ssn) ;
create index emp_ssn on emp (ssn)
*
ERROR at line 1:
ORA-28338: cannot encrypt indexed column(s) with salt


13. Turn off the salt using the alter table statement so that the index can be built.

SQL> alter table emp modify ssn encrypt no salt ;

Table altered.


14. Now build the index

SQL> create index emp_ssn on emp (ssn) ;

Index created.


15. Now check the effect the encryption has on index usage.

A: Check using the column in the where clause with an =

SQL> explain plan for select ename from emp where ssn = '123-45-6789' ;

Explained.

SQL> @plan

Plan hash value: 1582005447

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |     1 |    38 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| EMP     |     1 |    38 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | EMP_SSN |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

B: Check the index use with LIKE; as we can see from the explain plan, the index is not used.

SQL> explain plan for select ename from emp where ssn like '123-45-%' ;

Explained.

SQL> @plan

Plan hash value: 3956160932

---------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     2 |    76 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| EMP  |     2 |    76 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------


16. Now remove the encryption from the ssn column and see whether the LIKE query uses the index; as expected, the index is indeed used.

SQL> alter table emp modify (ssn decrypt) ;

Table altered.

SQL> explain plan for select ename from emp where ssn like '123-45-%' ;

Explained.

SQL> @plan

Plan hash value: 1582005447

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |     2 |    28 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| EMP     |     2 |    28 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | EMP_SSN |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Oracle 11g Read-Only Tables

Summary

Prior to 11g you could make an entire tablespace read-only, which meant that you had to group all the tables you wanted to be read-only into a common tablespace or set of tablespaces, or you had to create a trigger that raised an error to prevent changes. 11g now allows a single table to be made read-only without affecting the other tables in the tablespace. This saves the overhead of having to move tables into read-only tablespaces when you are ready to make a table read-only. Not only can we make a table read-only, we can also change it back to read-write to allow inserts and updates when needed.

The read-only designation for a table stops all DML operations (truncate/insert/update/delete, etc.) and certain DDL operations as well, such as ALTER TABLE to add/modify/rename/drop columns, ALTER TABLE drop/truncate/exchange (SUB)PARTITION, and ALTER TABLE upgrade. Keep in mind, though, that some DDL can still be performed against a read-only table, such as DROP TABLE, ALTER TABLE add/coalesce/merge/modify/move/rename/split (SUB)PARTITION, and ALTER TABLE rename/move/add supplemental log/drop supplemental log/deallocate unused.
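
For example, based on the lists above, renaming a read-only table still works while adding a column does not. A small sketch against the big_emp2 test table used below (big_emp2_hist is just an illustrative new name):

alter table big_emp2 rename to big_emp2_hist ;        -- allowed: no table data is modified
alter table big_emp2_hist add (note varchar2(30)) ;   -- fails with ORA-12081
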
To identify whether a table is read-only, a new column called READ_ONLY has been added to DBA_TABLES, USER_TABLES and ALL_TABLES. This column contains YES if the table is read-only and NO when it is not.

Unfortunately, at this time Oracle does not provide the ability to make just a single partition of a table read-only; to do so, the partition must be moved to a tablespace that can be marked read-only. One can only hope Oracle will provide that ability in a future release, saving the overhead of moving partitions or having to place partitions in separate tablespaces to make them read-only.
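
Until then, the workaround looks something like the following sketch (using the y1980 partition of the test table below and a hypothetical dedicated tablespace y1980_ro). Note that moving a partition marks the affected indexes unusable unless they are rebuilt afterward or the move is done with UPDATE INDEXES.

alter table big_emp2 move partition y1980 tablespace y1980_ro ;
alter tablespace y1980_ro read only ;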

Putting a table into read-only:

alter table big_emp2 read only ;

Taking a table out of read-only:

alter table big_emp2 read write ;

11g (11.1.0.6 Windows) Test

Create the table for the test and load with dummy data.

SQL> -- create the table for the test
SQL> CREATE TABLE big_emp2
2 (EMPNO NUMBER(4) CONSTRAINT big_emp2_pk primary key disable,
3 ENAME VARCHAR2(10),
4 JOB VARCHAR2(9),
5 MGR NUMBER(4),
6 HIREDATE DATE,
7 SAL NUMBER(7,2),
8 COMM NUMBER(7,2),
9 DEPTNO NUMBER(2)
10 )
11 PARTITION BY RANGE (HIREDATE)
12 (PARTITION y1980 VALUES LESS THAN ( TO_DATE('01-JAN-1981','DD-MON-YYYY') ) TABLESPACE USERS,
13 PARTITION y1981 VALUES LESS THAN ( TO_DATE('01-JAN-1982','DD-MON-YYYY') ) TABLESPACE USERS,
14 PARTITION y1982 VALUES LESS THAN ( TO_DATE('01-JAN-1983','DD-MON-YYYY') ) TABLESPACE USERS,
15 PARTITION y1983 VALUES LESS THAN ( TO_DATE('01-JAN-1984','DD-MON-YYYY') ) TABLESPACE USERS,
16 PARTITION y1984 VALUES LESS THAN ( TO_DATE('01-JAN-1985','DD-MON-YYYY') ) TABLESPACE USERS,
17 PARTITION y1985 VALUES LESS THAN ( TO_DATE('01-JAN-1986','DD-MON-YYYY') ) TABLESPACE USERS,
18 PARTITION y1986 VALUES LESS THAN ( TO_DATE('01-JAN-1987','DD-MON-YYYY') ) TABLESPACE USERS,
19 PARTITION y1987 VALUES LESS THAN ( MAXVALUE ) TABLESPACE USERS
20 )
21 /

Table created.

SQL>
SQL> -- create the unique index for the primary key
SQL> create unique index big_emp2_pk on big_emp2 (empno) ;

Index created.

SQL>
SQL> -- enable the primary key using the unique index
SQL> alter table big_emp2 enable primary key using index ;

Table altered.

SQL> -- load data
SQL> insert into big_emp2 select * from big_emp ;

9254 rows created.


Make the Table Read Only

SQL> alter table big_emp2 read only ;

Table altered.


Check that the table shows read-only

SQL> select table_name, read_only from user_tables where table_name = 'BIG_EMP2';

TABLE_NAME REA
------------------------------ ---
BIG_EMP2 YES


Attempt to do DML operations against the table

SQL> update big_emp2 set sal = sal + 200 ;
update big_emp2 set sal = sal + 200
*
ERROR at line 1:
ORA-12081: update operation not allowed on table "MRMESSIN"."BIG_EMP2"

SQL> insert into big_emp2 values (0,'My Name','MGR','1000','01-JAN-08',5000,1000,10) ;
insert into big_emp2 values (0,'My Name','MGR','1000','01-JAN-08',5000,1000,10)
*
ERROR at line 1:
ORA-12081: update operation not allowed on table "MRMESSIN"."BIG_EMP2"

SQL> delete from big_emp2 ;
delete from big_emp2
*
ERROR at line 1:
ORA-12081: update operation not allowed on table "MRMESSIN"."BIG_EMP2"

SQL> truncate table big_emp2 ;
truncate table big_emp2
*
ERROR at line 1:
ORA-12081: update operation not allowed on table "MRMESSIN"."BIG_EMP2"


Attempt to do DDL operations against the table

SQL> alter table big_emp2 add ssn varchar2(11) ;
alter table big_emp2 add ssn varchar2(11)
*
ERROR at line 1:
ORA-12081: update operation not allowed on table "MRMESSIN"."BIG_EMP2"

SQL> ALTER TABLE big_emp2 drop partition y1980 ;
ALTER TABLE big_emp2 drop partition y1980
*
ERROR at line 1:
ORA-12081: update operation not allowed on table "MRMESSIN"."BIG_EMP2"

Monday, May 5, 2008

Moving Cost Based Optimizer Statistics

We have found that over time test databases do not match production databases in data, or even in data volume, unless we refresh them frequently. The problem is getting the development and test staff to allow the downtime necessary to do a database refresh from production, especially for larger databases. So how do we make sure the execution plans in our test environment match what the production execution plans would be? The answer is to take the cost based optimizer statistics from production and put them into the test database.
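
At a high level the process is just a handful of DBMS_STATS calls with a Data Pump transfer of the stats table in between. A compressed sketch of the steps detailed below (SCOTT stands in for whichever schema's statistics are being moved):

-- On the source (production) database
EXEC DBMS_STATS.create_stat_table(ownname => 'SYSTEM', stattab => 'CBO_STATS', tblspace => 'USERS') ;
EXEC DBMS_STATS.export_schema_stats(ownname => 'SCOTT', stattab => 'CBO_STATS', statown => 'SYSTEM') ;

-- ... export SYSTEM.CBO_STATS with Data Pump and move the dump file to the target ...

-- On the target (test) database, after importing the CBO_STATS table
EXEC DBMS_STATS.import_schema_stats(ownname => 'SCOTT', stattab => 'CBO_STATS', statown => 'SYSTEM') ;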

1. To move the cost based optimizer statistics we first create a stats table and put the CBO stats in that stats table. In our example we will use the SYSTEM schema for the stats table.

----------------------------------------------------------------------
-- Setup our stats table and export our stats for the passed schema
-- into the stats table
----------------------------------------------------------------------
DECLARE
v_tbl_cnt NUMBER ;

BEGIN
-- enable DBMS_OUTPUT
dbms_output.enable (1000000) ;

-- Check if our stats table exists
SELECT count(*)
INTO v_tbl_cnt
FROM dba_tables
WHERE owner = 'SYSTEM'
AND table_name = 'CBO_STATS' ;

-- If stats table exists then we must get rid of it to ensure
-- that we always recreate it to ensure structure is correct
IF v_tbl_cnt > 0 THEN
dbms_output.put_line('Having to Remove SYSTEM.CBO_STATS as it already Exists.');

-- Remove our stats table now that we have exported it
DBMS_STATS.drop_stat_table('SYSTEM','CBO_STATS') ;
END IF ;

-- Create our stats table
dbms_output.put_line ('Creating SYSTEM.CBO_STATS Table to Store CBO Stats for Schema &1.') ;
dbms_stats.create_stat_table(ownname => 'SYSTEM', stattab => 'CBO_STATS', tblspace => 'USERS') ;

-- put our schema stats into our newly created stats table
dbms_output.put_line ('Exporting Schema Stats for &1 into SYSTEM.CBO_STATS table.') ;
dbms_stats.export_schema_stats (ownname => '&1', stattab => 'CBO_STATS', statown => 'SYSTEM') ;
END ;
/


2. Next we will need to export the stats table we just put the cost based optimizer statistics into

----------------------------------------------------------------------
-- export our stats table so it can be imported to another database
-- We will use the DBMS_DATAPUMP Package to do our export to simplify
-- the process
----------------------------------------------------------------------
DECLARE
v_handle NUMBER ;
ind NUMBER ; -- Loop index
spos NUMBER ; -- String starting position
slen NUMBER ; -- String length for output
percent_done NUMBER ; -- Percentage of job complete
job_state VARCHAR2(30) ; -- To keep track of job state
le ku$_LogEntry ; -- For WIP and error messages
js ku$_JobStatus ; -- The job status from get_status
jd ku$_JobDesc ; -- The job description from get_status
sts ku$_Status ; -- The status object returned by get_status

BEGIN
dbms_output.enable (1000000) ;

-- create our datapump export job
dbms_output.put_line ('Opening Export Job') ;
v_handle := DBMS_DATAPUMP.OPEN (operation => 'EXPORT',
job_mode => 'TABLE',
job_name => 'EXPORT_CBO_STATS_JOB',
version => 'COMPATIBLE');

-- set our logfile
dbms_output.put_line ('Setting Log file for Export Job') ;
DBMS_DATAPUMP.ADD_FILE (handle => v_handle,
filename => 'expdp_export_cbo_stats.log',
directory => 'DMPDIR',
filetype => 3) ;

-- set our dump file
dbms_output.put_line ('Setting Dump file for Export Job') ;
DBMS_DATAPUMP.ADD_FILE (handle => v_handle,
filename => 'expdp_export_cbo_stats.dmp',
directory => 'DMPDIR',
filetype => 1) ;

-- Add the schema filter
DBMS_DATAPUMP.METADATA_FILTER(v_handle,'SCHEMA_EXPR','IN (''SYSTEM'')');

-- set the filter for datapump to be the schema we want to export.
dbms_output.put_line ('Adding filter to only get SYSTEM.CBO_STATS') ;
DBMS_DATAPUMP.METADATA_FILTER (handle => v_handle,
name => 'NAME_EXPR',
value => '= ''CBO_STATS''',
object_type => 'TABLE') ;

-- Start the datapump export job
dbms_output.put_line ('Starting Datapump Export Job to dump SYSTEM.CBO_STATS table.') ;
DBMS_DATAPUMP.START_JOB (v_handle) ;

-- May have to use WAIT_FOR_JOB

-- initialize percent_done and job_state values
percent_done := 0 ;
job_state := 'UNDEFINED' ;

-- Loop while the job_state is not completed or stopped
while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
dbms_datapump.get_status(v_handle,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip,-1,job_state,sts) ;

js := sts.job_status ;

-- If the percentage done changed, display the new value.
if js.percent_done != percent_done then
dbms_output.put_line('*** Job percent done = ' || to_char(js.percent_done)) ;

percent_done := js.percent_done ;
end if ;

-- If any work-in-progress (WIP) or Error messages were received for the job,
-- display them.
if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0) then
le := sts.wip ;
else
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0) then
le := sts.error ;
else
le := null ;
end if ;
end if ;

if le is not null then
ind := le.FIRST ;

-- loop while loop indicator is not null
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end loop;

-- Indicate that the job finished and gracefully detach from it.
dbms_output.put_line('Data Pump Export Job has completed') ;
dbms_output.put_line('Final job state = ' || job_state) ;
dbms_datapump.detach(v_handle) ;

-- Handle exceptions from our export
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('Exception in Data Pump job') ;
dbms_datapump.get_status(v_handle,dbms_datapump.ku$_status_job_error, 0, job_state, sts) ;

IF (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0) THEN
le := sts.error;

if le is not null then
ind := le.FIRST ;

while ind is not null loop
spos := 1;
slen := length(le(ind).LogText) ;

if slen > 255 then
slen := 255;
end if;

while slen > 0 loop
dbms_output.put_line(substr(le(ind).LogText, spos, slen));
spos := spos + 255;
slen := length(le(ind).LogText) + 1 - spos;
end loop;

ind := le.NEXT(ind);
end loop;
end if;
END IF ;
END ;
/


3. If we use a database link (TARGET) to the database we are moving the stats to and it is on a different host then we can transfer the file across the database link with a DMPDIR defined on the other side. Otherwise we can copy the file through OS FTP, SCP or copy commands.

----------------------------------------------------------------------
-- Transfer the file via DBMS_FILE_TRANSFER over network link
-- Can change process to just do import over network link
----------------------------------------------------------------------
EXEC DBMS_FILE_TRANSFER.PUT_FILE ('DMPDIR', 'expdp_export_cbo_stats.dmp', 'DMPDIR', 'expdp_export_cbo_stats.dmp', 'TARGET') ;


4. The stats table on the source can now be removed, since we have exported it and transferred the dump file to the destination.

----------------------------------------------------------------------
-- Clean up by Removing our stats table now that we have exported it
----------------------------------------------------------------------
EXEC DBMS_STATS.drop_stat_table('SYSTEM','CBO_STATS') ;



5. Connect to the destination database so that we can import the stats table.

-- In our example the 2 databases are on the same host so the same directory object still works
-- Import our stats table into the other database for stats import use
DECLARE
ind NUMBER ; -- Loop index
spos NUMBER ; -- String starting position
slen NUMBER ; -- String length for output
v_handle NUMBER ; -- Data Pump job handle
percent_done NUMBER ; -- Percentage of job complete
job_state VARCHAR2(30) ; -- To keep track of job state
le ku$_LogEntry ; -- For WIP and error messages
js ku$_JobStatus ; -- The job status from get_status
jd ku$_JobDesc ; -- The job description from get_status
sts ku$_Status ; -- The status object returned by get_status

BEGIN
dbms_output.enable (1000000) ;

-- Create a Data Pump job to do a "full" import as only our stats table is there
-- create our datapump export job
v_handle := DBMS_DATAPUMP.OPEN (operation => 'IMPORT',
job_mode => 'FULL',
job_name => 'IMPORT_CBO_STATS_JOB',
version => 'COMPATIBLE');

-- set our logfile for import
DBMS_DATAPUMP.ADD_FILE (handle => v_handle,
filename => 'expdp_import_cbo_stats.log',
directory => 'DMPDIR',
filetype => 3) ;

-- set our dump file
DBMS_DATAPUMP.ADD_FILE (handle => v_handle,
filename => 'expdp_export_cbo_stats.dmp',
directory => 'DMPDIR',
filetype => 1) ;

-- If a table already exists in the destination schema, replace it before loading
-- to ensure the structure is proper as the data prior to is not important here.
DBMS_DATAPUMP.SET_PARAMETER(v_handle,'TABLE_EXISTS_ACTION','REPLACE');

-- Start the job. An exception is returned if something is not set up properly.
DBMS_DATAPUMP.START_JOB(v_handle);

-- The import job should now be running. In the following loop, the job is
-- monitored until it completes. In the meantime, progress information is
-- displayed. Note: this is identical to the export example.

-- initialize percent_done and job_state values
percent_done := 0 ;
job_state := 'UNDEFINED' ;

-- Loop while the job_state is not completed or stopped
while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
dbms_datapump.get_status(v_handle,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip,-1,job_state,sts) ;

js := sts.job_status ;

-- If the percentage done changed, display the new value.
if js.percent_done != percent_done then
dbms_output.put_line('*** Job percent done = ' || to_char(js.percent_done)) ;

percent_done := js.percent_done ;
end if ;

-- If any work-in-progress (WIP) or Error messages were received for the job,
-- display them.
if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0) then
le := sts.wip ;
else
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0) then
le := sts.error ;
else
le := null ;
end if ;
end if ;

if le is not null then
ind := le.FIRST ;

-- loop while loop indicator is not null
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end loop;

-- Indicate that the job finished and gracefully detach from it.
dbms_output.put_line('Data Pump Import Job has completed') ;
dbms_output.put_line('Final job state = ' || job_state) ;
dbms_datapump.detach(v_handle) ;


-- Handle exceptions from our import
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('Exception in Data Pump job') ;
dbms_datapump.get_status(v_handle,dbms_datapump.ku$_status_job_error, 0, job_state, sts) ;

IF (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0) THEN
le := sts.error;

if le is not null then
ind := le.FIRST ;

while ind is not null loop
spos := 1;
slen := length(le(ind).LogText);

if slen > 255 then
slen := 255;
end if;

while slen > 0 loop
dbms_output.put_line(substr(le(ind).LogText, spos, slen));
spos := spos + 255;
slen := length(le(ind).LogText) + 1 - spos;
end loop;

ind := le.NEXT(ind);
end loop;
end if;
END IF ;
END ;
/

6. If bringing the stats table from a prior version of Oracle into a newer version of Oracle you will need to upgrade the stats table.

------------------------------------------------------------------
-- If moving stats from a previous version of database need to upgrade stats table
------------------------------------------------------------------
exec dbms_stats.upgrade_stat_table ('SYSTEM', 'CBO_STATS') ;


7. Lastly import the statistics into the schema.

------------------------------------------------------------------
-- Import the schema stats exported from the other database
------------------------------------------------------------------

EXEC DBMS_STATS.import_schema_stats('&1','CBO_STATS',NULL,'SYSTEM');