Sunday, December 14, 2008

Oracle Heterogeneous Connectivity to MySQL with a Database Link in 10g

1. We need to install the ODBC driver for the non-Oracle database we are going to connect to.


2. Once the ODBC driver is installed we need to configure a data source for the non-Oracle database. On Windows, open the Control Panel, go to Administrative Tools, and double click Data Sources (ODBC).
Select the System DSN tab, as the HS service requires a System DSN, then click the Add button.
Select the driver for the non-Oracle data source. In this case we will be using the MySQL 3.51 driver for a local MySQL database.
We will give this data source the name mysql, configured on localhost, and we will connect as the root user. Since we are on the default port we don't have to configure the connect options.


3. After configuring the connection settings we can test whether the connection works by pressing the Test button near the bottom left of the window. A dialog will indicate whether the connection was successful.


4. Configure the init.ora for the HS service that will be used for the non-Oracle database connectivity.

initmysql.ora

# This is a sample agent init file that contains the HS parameters that are
# needed for an ODBC Agent.

#
# HS init parameters
#
#HS_DB_NAME = mysql
HS_FDS_CONNECT_INFO = mysql
HS_FDS_TRACE_LEVEL = ON

#
# Environment variables required for the non-Oracle system
#
#set =


5. Add the non-Oracle data source to the listener.ora.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = C:\oracle\product\10.2.0\db_1)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = orcl)
      (ORACLE_HOME = C:\oracle\product\10.2.0\db_1)
    )
    (SID_DESC =
      (SID_NAME = mysql)
      (ORACLE_HOME = C:\oracle\product\10.2.0\db_1)
      (PROGRAM = hsodbc)
      (ENVS = LD_LIBRARY_PATH=C:\oracle\product\10.2.0\db_1\lib32)
    )
  )


6. Reload the listener and check that we have a service for the non-Oracle data source.

C:\Oracle\product\10.2.0\db_1\BIN> lsnrctl reload

LSNRCTL for 32-bit Windows: Version 10.2.0.4.0 - Production on 10-AUG-2008 19:14:21

Copyright (c) 1991, 2007, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
The command completed successfully


7. Check the listener services to make sure the mysql service shows.

C:\Oracle\product\10.2.0\db_1\BIN> lsnrctl services

LSNRCTL for 32-bit Windows: Version 10.2.0.4.0 - Production on 10-AUG-2008 19:14:33

Copyright (c) 1991, 2007, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
Services Summary...
Service "ORCLXDB" has 1 instance(s).
Instance "orcl", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1002 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=xxxxxxxxx)(PORT=2323))
Service "ORCL_XPT" has 1 instance(s).
Instance "orcl", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0
LOCAL SERVER
Service "mysql" has 1 instance(s).
Instance "mysql", status UNKNOWN, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0
LOCAL SERVER
Service "orcl" has 2 instance(s).
Instance "orcl", status UNKNOWN, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0
LOCAL SERVER
Instance "orcl", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully


8. Now add the non-Oracle data source to the tnsnames.ora.

mysql =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = xxxxxxxx)(PORT = 1521))
    (CONNECT_DATA =
      (SID = MYSQL)
    )
    (HS = OK)
  )


9. Check that the non-Oracle data source is reachable via a tnsping.

C:\Oracle\product\10.2.0\db_1\BIN>tnsping mysql

TNS Ping Utility for 32-bit Windows: Version 10.2.0.4.0 - Production on 14-DEC-2008 18:12:46
Copyright (c) 1997, 2007, Oracle. All rights reserved.

Used parameter files:
C:\oracle\product\10.2.0\db_1\network\admin\sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = xxxxxxxx)(PORT = 1521)) (CONNECT_DATA = (SID = MYSQL)) (HS=OK))
OK (30 msec)


10. Create a public database link and check that it works.

C:\Oracle\product\10.2.0\db_1\BIN>sqlplus /@orcl

SQL*Plus: Release 10.2.0.4.0 - Production on Sun Aug 10 20:10:00 2008

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create public database link mysql connect to "root" identified by "xxxxxxxxxx"
using 'mysql' ;

Database link created.

SQL> select count(*) from alfresco.alf_node@mysql ;

COUNT(*)
----------
1469

SQL>
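
To confirm the link definition later, you can query the data dictionary (standard views, nothing HS specific):

SQL> select owner, db_link, username, host from dba_db_links ;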

Friday, September 12, 2008

RMAN Tracing

RMAN tracing gives us some insight into what RMAN is doing behind the scenes, so to speak.

Trace an RMAN Session

Trace an RMAN session to a trace file by adding trace=/tracefilename.trc to the rman command line. Then, inside RMAN, turn debugging on with debug on.

Example:

rman target username/password@target catalog username/password@catalog trace=/tmp/rman_test.trc

RMAN> run
{debug on;
report obsolete;
debug off;
}


Trace via the RMAN Channel

Another way to trace RMAN activity would be to configure the trace when configuring the RMAN channel.

1. Make sure timed_statistics = true for all performance testing.

2. Set the NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS'.

3. Configure the Channel with Trace/Debug options

trace 1 - general trace info for skg and sbt
trace 5 - verbose channel tracing of skg and sbt calls

CONFIGURE CHANNEL DEVICE TYPE DISK DEBUG=5 TRACE 1;

or

CONFIGURE CHANNEL DEVICE TYPE SBT PARMS "ENV=(....)" DEBUG=5 TRACE 5;

Note: TRACE is only useful for tape backup and restore tracing.
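
For a one-off test, the same options can also be supplied when allocating a channel inside a run block; a sketch assuming a tape (SBT) channel, with the media manager ENV settings elided as in the example above:

run {
allocate channel t1 device type sbt parms 'ENV=(....)' trace 5;
backup tablespace users;
release channel t1;
}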

Thursday, September 11, 2008

Oracle 11g SecureFiles

Summary

Oracle SecureFiles, also known as Fast Files, is Oracle's improvement to the storage of large object types (LOBs). SecureFiles gives comparable performance to a file system for LOBs, and LOBs can store many types of data, from images, large amounts of text, Word documents, Excel spreadsheets, XML and HTML, to DICOM formatted medical images.

This is a step forward in helping manage unstructured data within the Oracle database by boosting performance, but also by improving security. SecureFiles extends Transparent Data Encryption to LOBs; this not only makes storing and managing unstructured content easier, but improves the security of that content.

If that were not enough, SecureFiles also provides advanced file system features such as compression and data de-duplication. De-duplication means that a LOB value duplicated across many records within the database is stored only once, rather than a copy for each record. This not only saves storage space, but can also offer performance improvements. Compression, as the name indicates, transparently compresses LOB data, offering storage savings and a performance boost; Oracle takes it a step further and automatically determines whether the data can be compressed and, if so, whether the space savings are worthwhile.

By default the normal LOB storage, called BASICFILE, is used. To use SecureFiles for LOB storage the SECUREFILE LOB storage keyword must be used.
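
For reference, a minimal sketch of creating a table with a SecureFile LOB (table and column names are made up for illustration; SecureFiles require an ASSM-managed tablespace):

CREATE TABLE docs ( id NUMBER, body CLOB )
LOB(body) STORE AS SECUREFILE ;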


The default behavior for SecureFile usage can be changed via the db_securefile initialization parameter.

db_securefile = {ALWAYS | FORCE | PERMITTED | NEVER | IGNORE}

ALWAYS – Always attempts to create all LOBs as SECUREFILE LOBs.
FORCE – All LOBs created in the system will be created as SECUREFILE LOBs.
PERMITTED – Allows LOBs that use SecureFiles (the default).
NEVER – Never allows LOBs that use SecureFiles; they will always be created as BASICFILE LOBs.
IGNORE – Ignores errors that would otherwise be raised by forcing LOBs as SECUREFILE LOBs.

It can be set dynamically via ALTER SYSTEM:

SQL> ALTER SYSTEM SET db_securefile = 'ALWAYS' ;
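
To verify the current setting, a simple check in SQL*Plus:

SQL> show parameter db_securefile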

Securefiles offer a lot of benefits over the old LOB storage method, such as deduplication capability, compression and encryption.

Compression has two forms: MEDIUM, which is the default, and HIGH. Keep in mind that the higher the level of compression, the larger the resource impact on the database you should expect.

CREATE TABLE t1 ( a CLOB)
LOB(a) STORE AS SECUREFILE
(COMPRESS
CACHE
NOLOGGING );

CREATE TABLE t1 ( a CLOB)
LOB(a) STORE AS SECUREFILE
( COMPRESS HIGH
CACHE
NOLOGGING );


Encryption for SecureFiles is implemented via Transparent Data Encryption (TDE); SecureFiles extends TDE to LOB data types. Encryption is performed at the block level and the valid encryption levels are 3DES168, AES128, AES192 (the default) and AES256. Keep in mind that for SecureFiles the NO SALT option is not supported.

CREATE TABLE t1 ( a CLOB ENCRYPT USING 'AES128')
LOB(a) STORE AS SECUREFILE
( CACHE );

De-duplication can be a powerful feature that reduces the amount of storage space needed for LOBs, as all duplicate LOBs are stored only once. With the constant desire to reduce database disk space, compression and de-duplication together can provide significant storage cost savings.

CREATE TABLE t1 ( REGION VARCHAR2(20), ID NUMBER, a BLOB)
LOB(a) STORE AS SECUREFILE
( DEDUPLICATE
CACHE) ;

CREATE TABLE t1 ( a CLOB)
LOB(a) STORE AS SECUREFILE
( COMPRESS HIGH
DEDUPLICATE
CACHE ) ;

Oracle SecureFiles utilizes the Shared IO Pool. The Shared IO Pool is allocated from the SGA, and allocations are always for a specific session, therefore the data is specific to the session. We can look at the Shared IO Pool via the v$sga_dynamic_components and v$sgainfo views. If we examine the memory structures of the SGA we can see what the Shared IO Pool max size is in relationship to the other memory structures of the database.

SQL> select name, bytes from v$sgainfo ;

NAME BYTES
-------------------------------- ----------
Fixed SGA Size 1334380
Redo Buffers 5844992
Buffer Cache Size 268435456
Shared Pool Size 239075328
Large Pool Size 4194304
Java Pool Size 12582912
Streams Pool Size 4194304
Shared IO Pool Size 0
Granule Size 4194304
Maximum SGA Size 535662592
Startup overhead in Shared Pool 46137344
Free SGA Memory Available 0


SQL> select * from v$sga_dynamic_components where component='Shared IO Pool' ;

COMPONENT CURRENT_SIZE
---------------------------------------------------------------- ------------
MIN_SIZE MAX_SIZE USER_SPECIFIED_SIZE OPER_COUNT LAST_OPER_TYP LAST_OPER
---------- ---------- ------------------- ---------- ------------- ---------
LAST_OPER GRANULE_SIZE
--------- ------------
Shared IO Pool 0
0 0 0 0 STATIC
4194304


When a session is unable to find free memory in the Shared IO Pool, PGA memory is used instead. To see PGA memory allocations you can use the V$SECUREFILE_TIMER view, which gets an entry each time memory is allocated out of the PGA.

SQL> select * from v$securefile_timer where name like '%PGA%';

NAME LAYER_ID OWNTIME MAXTIME MINTIME INVOCATIONS LAYER_NAME
------------------------- ---------- ------- ------- ------- ----------- --------------------------------------------------
kdlw kcbi PGA alloc timer 2 0 0 0 0
Write gather cache
kdlw kcbi PGA free timer 2 0 0 0 0
Write gather cache
kdlw kcb PGA borrow timer 2 0 0 0 0
Write gather cache
kdlw kcb PGA free timer 2 0 0 0 0
Write gather cache

Oracle 11g Results Cache

Summary

Oracle 11g has many new features; one of particular interest is the results cache. The results cache allows a query's result set to be cached, allowing large performance improvements for queries that are executed multiple times. The results cache works best for mostly static tables queried with the same predicates over and over.

One particular point of interest is the fact that the result cache does not work when logged in as sysdba.
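
Beyond the explain plan, the result cache can also be inspected directly; a quick sketch using the DBMS_RESULT_CACHE package (output will vary by system):

SQL> set serveroutput on
SQL> execute dbms_result_cache.memory_report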

We can see in the explain plan of a query when the RESULT CACHE is being used as shown below:

----------------------------------------------------------------------------------------------------------------
Id Operation Name Rows Bytes TempSpc Cost (%CPU) Time
----------------------------------------------------------------------------------------------------------------
0 SELECT STATEMENT 85120 1745K 660 (1) 00:00:08
1 RESULT CACHE 5g4sqrbmnk7tw4ct7s8ak9vdgd
2 SORT ORDER BY 85120 1745K 6696K 660 (1) 00:00:08
3 PARTITION REFERENCE ALL 85120 1745K 107 (1) 00:00:02
4 TABLE ACCESS FULL ORDER_ITEMS 85120 1745K 107 (1) 00:00:02
----------------------------------------------------------------------------------------------------------------


Test Results

SQL> set linesize 150
SQL> set timing on
SQL> set echo on
SQL>
SQL> drop table results_cache_test ;

Table dropped.

Elapsed: 00:00:00.10
SQL>
SQL> -- run create a table for our testing
SQL> -- so not to skew results
SQL> create table results_cache_test
2 as
3 select *
4 from dba_tables ;

Table created.

Elapsed: 00:00:00.56
SQL>
SQL> create unique index results_cache_test_pk
2 on results_cache_test (owner, table_name) ;

Index created.

Elapsed: 00:00:00.04
SQL>
SQL> execute dbms_stats.gather_table_stats ('MRMESSIN', 'RESULTS_CACHE_TEST') ;

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.95
SQL>
SQL> -- set autotrace on for first round with index
SQL> set autotrace on ;
SQL>
SQL> -- run to see the results
SQL> select table_name, cache, read_only
2 from results_cache_test
3 where owner = 'MRMESSIN' ;

TABLE_NAME CACHE REA
------------------------------ ----- ---
DEPT N NO
EMP N NO
DEMO_USERS N NO
DEMO_CUSTOMERS N NO
DEMO_ORDERS N NO
DEMO_ORDER_ITEMS N NO
DEMO_PRODUCT_INFO N NO
DEMO_STATES N NO
DEMO_PAGE_HIERARCHY N NO
EAGLE N NO
HELP_TABLE N NO

TABLE_NAME CACHE REA
------------------------------ ----- ---
BIG_EMP N NO
DEMO_IMAGES N NO
BIG_EMP2 N YES

14 rows selected.

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 2738702378

----------------------------------------------------------------------------------------
Id Operation Name Rows Bytes Cost (%CPU) Time
----------------------------------------------------------------------------------------
0 SELECT STATEMENT 95 3515 25 (0) 00:00:01
* 1 TABLE ACCESS FULL RESULTS_CACHE_TEST 95 3515 25 (0) 00:00:01
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"='MRMESSIN')


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
85 consistent gets
0 physical reads
0 redo size
800 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
14 rows processed

SQL>
SQL> -- run a second time to cache the results
SQL> select /*+ result_cache */ table_name, cache, read_only
2 from results_cache_test
3 where owner = 'MRMESSIN' ;

TABLE_NAME CACHE REA
------------------------------ ----- ---
DEPT N NO
EMP N NO
DEMO_USERS N NO
DEMO_CUSTOMERS N NO
DEMO_ORDERS N NO
DEMO_ORDER_ITEMS N NO
DEMO_PRODUCT_INFO N NO
DEMO_STATES N NO
DEMO_PAGE_HIERARCHY N NO
EAGLE N NO
HELP_TABLE N NO

TABLE_NAME CACHE REA
------------------------------ ----- ---
BIG_EMP N NO
DEMO_IMAGES N NO
BIG_EMP2 N YES

14 rows selected.

Elapsed: 00:00:00.04

Execution Plan
----------------------------------------------------------
Plan hash value: 2738702378

-------------------------------------------------------------------------------------------------
Id Operation Name Rows Bytes Cost (%CPU) Time
-------------------------------------------------------------------------------------------------
0 SELECT STATEMENT 95 3515 25 (0) 00:00:01
1 RESULT CACHE 6qrad2hzvx6xm6rwrx23ncuwwg
* 2 TABLE ACCESS FULL RESULTS_CACHE_TEST 95 3515 25 (0) 00:00:01
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("OWNER"='MRMESSIN')

Result Cache Information (identified by operation id):
------------------------------------------------------

1 - column-count=3; dependencies=(MRMESSIN.RESULTS_CACHE_TEST); parameters=(nls); name="select /*+ result_cache */ table_name, cache, read_only
from results_cache_test
where owner = 'MRMESSIN' "



Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
85 consistent gets
0 physical reads
0 redo size
800 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
14 rows processed

SQL>
SQL> -- run the third time to see query with results cache in place
SQL> select /*+ result_cache */ table_name, cache, read_only
2 from results_cache_test
3 where owner = 'MRMESSIN' ;

TABLE_NAME CACHE REA
------------------------------ ----- ---
DEPT N NO
EMP N NO
DEMO_USERS N NO
DEMO_CUSTOMERS N NO
DEMO_ORDERS N NO
DEMO_ORDER_ITEMS N NO
DEMO_PRODUCT_INFO N NO
DEMO_STATES N NO
DEMO_PAGE_HIERARCHY N NO
EAGLE N NO
HELP_TABLE N NO

TABLE_NAME CACHE REA
------------------------------ ----- ---
BIG_EMP N NO
DEMO_IMAGES N NO
BIG_EMP2 N YES

14 rows selected.

Elapsed: 00:00:00.04

Execution Plan
----------------------------------------------------------
Plan hash value: 2738702378

-------------------------------------------------------------------------------------------------
Id Operation Name Rows Bytes Cost (%CPU) Time
-------------------------------------------------------------------------------------------------
0 SELECT STATEMENT 95 3515 25 (0) 00:00:01
1 RESULT CACHE 6qrad2hzvx6xm6rwrx23ncuwwg
* 2 TABLE ACCESS FULL RESULTS_CACHE_TEST 95 3515 25 (0) 00:00:01
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("OWNER"='MRMESSIN')

Result Cache Information (identified by operation id):
------------------------------------------------------

1 - column-count=3; dependencies=(MRMESSIN.RESULTS_CACHE_TEST); parameters=(nls); name="select /*+ result_cache */ table_name, cache, read_only
from results_cache_test
where owner = 'MRMESSIN' "



Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
800 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
14 rows processed

SQL>
SQL> set autotrace off
SQL>
SQL> -- drop the index to start second round without index
SQL> drop index results_cache_test_pk ;

Index dropped.

Elapsed: 00:00:00.01
SQL>
SQL> set autotrace on ;
SQL>
SQL> -- run to see the results
SQL> select table_name, cache, read_only
2 from results_cache_test
3 where owner = 'MRMESSIN' ;

TABLE_NAME CACHE REA
------------------------------ ----- ---
DEPT N NO
EMP N NO
DEMO_USERS N NO
DEMO_CUSTOMERS N NO
DEMO_ORDERS N NO
DEMO_ORDER_ITEMS N NO
DEMO_PRODUCT_INFO N NO
DEMO_STATES N NO
DEMO_PAGE_HIERARCHY N NO
EAGLE N NO
HELP_TABLE N NO

TABLE_NAME CACHE REA
------------------------------ ----- ---
BIG_EMP N NO
DEMO_IMAGES N NO
BIG_EMP2 N YES

14 rows selected.

Elapsed: 00:00:00.03

Execution Plan
----------------------------------------------------------
Plan hash value: 2738702378

----------------------------------------------------------------------------------------
Id Operation Name Rows Bytes Cost (%CPU) Time
----------------------------------------------------------------------------------------
0 SELECT STATEMENT 95 3515 25 (0) 00:00:01
* 1 TABLE ACCESS FULL RESULTS_CACHE_TEST 95 3515 25 (0) 00:00:01
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"='MRMESSIN')


Statistics
----------------------------------------------------------
234 recursive calls
0 db block gets
116 consistent gets
0 physical reads
0 redo size
800 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
5 sorts (memory)
0 sorts (disk)
14 rows processed

SQL>
SQL> -- run a second time to cache the results
SQL> select /*+ result_cache */ table_name, cache, read_only
2 from results_cache_test
3 where owner = 'MRMESSIN' ;

TABLE_NAME CACHE REA
------------------------------ ----- ---
DEPT N NO
EMP N NO
DEMO_USERS N NO
DEMO_CUSTOMERS N NO
DEMO_ORDERS N NO
DEMO_ORDER_ITEMS N NO
DEMO_PRODUCT_INFO N NO
DEMO_STATES N NO
DEMO_PAGE_HIERARCHY N NO
EAGLE N NO
HELP_TABLE N NO

TABLE_NAME CACHE REA

------------------------------ ----- ---
BIG_EMP N NO
DEMO_IMAGES N NO
BIG_EMP2 N YES

14 rows selected.

Elapsed: 00:00:00.04

Execution Plan
----------------------------------------------------------

Plan hash value: 2738702378

-------------------------------------------------------------------------------------------------
Id Operation Name Rows Bytes Cost (%CPU) Time
-------------------------------------------------------------------------------------------------
0 SELECT STATEMENT 95 3515 25 (0) 00:00:01
1 RESULT CACHE 6qrad2hzvx6xm6rwrx23ncuwwg
* 2 TABLE ACCESS FULL RESULTS_CACHE_TEST 95 3515 25 (0) 00:00:01
-------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("OWNER"='MRMESSIN')

Result Cache Information (identified by operation id):
------------------------------------------------------
1 - column-count=3; dependencies=(MRMESSIN.RESULTS_CACHE_TEST); parameters=(nls); name="select /*+ result_cache */ table_name, cache, read_only
from results_cache_test
where owner = 'MRMESSIN' "



Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
85 consistent gets
0 physical reads
0 redo size
800 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
14 rows processed

SQL>
SQL> -- run the third time to see query with results cache in place
SQL> select /*+ result_cache */ table_name, cache, read_only
2 from results_cache_test
3 where owner = 'MRMESSIN' ;

TABLE_NAME CACHE REA
------------------------------ ----- ---
DEPT N NO
EMP N NO
DEMO_USERS N NO
DEMO_CUSTOMERS N NO
DEMO_ORDERS N NO
DEMO_ORDER_ITEMS N NO
DEMO_PRODUCT_INFO N NO
DEMO_STATES N NO
DEMO_PAGE_HIERARCHY N NO
EAGLE N NO
HELP_TABLE N NO
BIG_EMP N NO
DEMO_IMAGES N NO
BIG_EMP2 N YES

14 rows selected.

Elapsed: 00:00:00.04

Execution Plan
--------------------------------------------------------
Plan hash value: 2738702378

-------------------------------------------------------------------------------------------------
Id Operation Name Rows Bytes Cost (%CPU) Time
-------------------------------------------------------------------------------------------------
0 SELECT STATEMENT 95 3515 25 (0) 00:00:01
1 RESULT CACHE 6qrad2hzvx6xm6rwrx23ncuwwg
* 2 TABLE ACCESS FULL RESULTS_CACHE_TEST 95 3515 25 (0) 00:00:01
-------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
-----------------------------------------------------

2 - filter("OWNER"='MRMESSIN')

Result Cache Information (identified by operation id):
------------------------------------------------------

1 - column-count=3; dependencies=(MRMESSIN.RESULTS_CACHE_TEST); parameters=(nls); name="select /*+ result_cache */ table_name, cache, read_only
from results_cache_test
where owner = 'MRMESSIN'"

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
800 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
14 rows processed

SQL>
SQL> spool off

Oracle 11g Invisible Indexes

Summary

Oracle has now provided what is, in my opinion, a huge step forward in query tuning. It now allows us to hide indexes from the optimizer without having to drop them. If you have ever tuned queries on very large tables and had to drop and recreate indexes during your tuning tests, then you will realize right away how useful this is.

Another huge advantage of this feature is batch process tuning. Ever had a situation where you needed an index that would really help your batch process but hurt OLTP query performance? Well, now we can hide the index from OLTP, but allow our batch processes to see it.

To use an invisible index you can set the parameter optimizer_use_invisible_indexes to true. The default of this parameter is false, and it can be set via an alter session or alter system command. Otherwise invisible indexes are completely ignored by the optimizer, even through hints.

A special thing to note about invisible indexes is that they cannot have optimizer statistics gathered on them while in an invisible state. Therefore, to maintain statistics, the index will have to be made visible by altering the index or by setting optimizer_use_invisible_indexes to true.
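
Visibility can also be toggled on an existing index with ALTER INDEX; a quick sketch using the index created in the example below:

SQL> alter index my_index visible ;
SQL> alter index my_index invisible ;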


Example

1. Create a table for use in our example:

SQL> create table my_table as select * from scott.emp ;

Table created.


2. Create an index on our table and make it invisible.

SQL> create index my_index on my_table (ename) invisible ;

Index created.


3. Check that the index is invisible using the new VISIBILITY column of the user_indexes, all_indexes or dba_indexes views.

SQL> select index_name, visibility from user_indexes where index_name = 'MY_INDEX';

INDEX_NAME VISIBILIT
------------------------------ ---------
MY_INDEX INVISIBLE

1 row selected.


4. Attempt to gather statistics on our table, cascading to its indexes. Note that since the index is invisible this fails.

SQL> execute dbms_stats.gather_table_stats ('MRMESSIN', 'MY_TABLE') ;
BEGIN dbms_stats.gather_table_stats ('MRMESSIN', 'MY_TABLE') ; END;

*
ERROR at line 1:
ORA-00904: : invalid identifier
ORA-06512: at "SYS.DBMS_STATS", line 17806
ORA-06512: at "SYS.DBMS_STATS", line 17827
ORA-06512: at line 1


5. Make the invisible index visible to the session so statistics can be gathered.

SQL> alter session set optimizer_use_invisible_indexes = true ;

Session altered.


6. Now gather our statistics.

SQL> execute dbms_stats.gather_table_stats ('MRMESSIN', 'MY_TABLE') ;

PL/SQL procedure successfully completed.


7. Set the optimizer_use_invisible_indexes parameter back to false for the start of our testing.

SQL> alter session set optimizer_use_invisible_indexes = false ;


8. Set autotrace so we can see our explain plan and statistics.

SQL> set autotrace traceonly


9. Select from the table using the indexed column in the where clause; under normal circumstances the optimizer would choose to use the index. Note, however, that the index is not used because it is invisible.

SQL> select empno, ename from my_table where ename = 'BLAKE';

1 row selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 3804444429

----------------------------------------------------------------------------
Id Operation Name Rows Bytes Cost (%CPU) Time
----------------------------------------------------------------------------
0 SELECT STATEMENT 1 10 3 (0) 00:00:01
* 1 TABLE ACCESS FULL MY_TABLE 1 10 3 (0) 00:00:01
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("ENAME"='BLAKE')


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
4 consistent gets
0 physical reads
0 redo size
477 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed


10. Now let's see if an INDEX hint will make the optimizer use the index. Note the index is still not used.

SQL> select /*+ INDEX(my_table my_index) */ empno, ename from my_table where ename = 'BLAKE' ;

1 row selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 3804444429

----------------------------------------------------------------------------
Id Operation Name Rows Bytes Cost (%CPU) Time
----------------------------------------------------------------------------
0 SELECT STATEMENT 1 10 3 (0) 00:00:01
* 1 TABLE ACCESS FULL MY_TABLE 1 10 3 (0) 00:00:01
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("ENAME"='BLAKE')


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
4 consistent gets
0 physical reads
0 redo size
477 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed


11. Now let's see if the index is used when we set the parameter optimizer_use_invisible_indexes to true.

SQL> alter session set optimizer_use_invisible_indexes = true ;

Session altered.


12. Note that the index is now seen and is being used.

SQL> select empno, ename from my_table where ename = 'BLAKE';

1 row selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 332163487

-----------------------------------------------------------------------------------------
Id   Operation                     Name      Rows  Bytes  Cost (%CPU)  Time
-----------------------------------------------------------------------------------------
  0  SELECT STATEMENT                           1     10      2   (0)  00:00:01
  1   TABLE ACCESS BY INDEX ROWID  MY_TABLE     1     10      2   (0)  00:00:01
* 2    INDEX RANGE SCAN            MY_INDEX     1              1   (0)  00:00:01
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ENAME"='BLAKE')


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
3 consistent gets
0 physical reads
0 redo size
477 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

Tuesday, June 10, 2008

Oracle Stored Outlines

Summary

Stored outlines, also known as plan stability, can be thought of as a stored representation of a SQL statement's execution plan that gives the ability to change a SQL statement's execution behavior without having to change the SQL statement in the application code. This allows the tuning of SQL statements for vendor systems where access to the code is usually restricted or unavailable. It also provides a better way of tuning SQL, allowing for better testing of upgrades. Oracle improves the cost based optimizer with every release, and when SQL statements have been changed in the application code by adding hints, undoing that tuning work can be difficult and time consuming. Stored outlines can be removed when testing an upgrade to see if Oracle makes better decisions in the newer version, therefore allowing Oracle to make the decisions without the use of outlines. If the outlines are still needed, they can be put back in and/or tuned for the new Oracle version.



Requirements for Stored Outlines

Database Parameters
To utilize stored outlines, certain database parameters must be set consistently in the init.ora or spfile:

query_rewrite_enabled
star_transformation_enabled
optimizer_features_enabled


System Level Parameter Setting
Also, a system level parameter must be set to allow the use of stored outlines. This cannot be set in the init.ora or spfile, therefore a database startup trigger is a great way to ensure it is set each time the database is started.

ALTER SYSTEM SET USE_STORED_OUTLINES = true ;

CREATE OR REPLACE TRIGGER enable_outline
AFTER STARTUP ON DATABASE
BEGIN
execute immediate 'alter system set use_stored_outlines = true';
END ;
/


Implementing a Stored Outline
1. Create the stored outline for the SQL statement that needs improvement, as found via v$sql.

CREATE OR REPLACE OUTLINE SCHEDULING_QUERY on
select :"SYS_B_00" ,sit_code ,sit_division ,sh_serial ,sh_schstat ,sh_estlen,
TO_CHAR(sh_date,:"SYS_B_01")...
/

Or utilize a SQL statement already in the shared pool. Keep in mind this will generate the outline name for you, therefore you will need to find out what name the system gave it in order to manipulate it later.

-- Identify the SQL statement in the V$SQL view.
SELECT hash_value, child_number, sql_text
FROM v$sql
WHERE sql_text LIKE 'select :"SYS_B_00" ,sit_code ,sit_division%';

HASH_VALUE CHILD_NUMBER
---------- ------------
SQL_TEXT
----------------------------------------------------------------------------------------------------
1119283366 0
select :"SYS_B_00" ,sit_code ,sit_division ,sh_serial ,sh_schstat ,sh_estlen, TO_CHAR(sh_date,:"SYS_B_01")...

1 row selected.

-- Create an outline for the statement.
BEGIN
DBMS_OUTLN.create_outline(
hash_value => 1119283366,
child_number => 0,
category => 'MY_OUTLINES');
END;
/



2. Develop a tuned SQL Statement


3. Create the stored outline for the tuned SQL statement

CREATE OR REPLACE OUTLINE SCHEDULING_QUERY_HINTED on
select /*+ leading(INTSTAMP) */
:"SYS_B_00" ,sit_code ,sit_division ,sh_serial ,sh_schstat ,sh_estlen ,
TO_CHAR(sh_date,:"SYS_B_01") ...;


4. Update OUTLN.OL$HINTS and put the plan for the tuned SQL
Statement into the untuned stored outline.

UPDATE OUTLN.OL$HINTS
SET OL_NAME = DECODE(OL_NAME, 'SCHEDULING_QUERY_HINTED', 'SCHEDULING_QUERY',
'SCHEDULING_QUERY', 'SCHEDULING_QUERY_HINTED')
WHERE OL_NAME IN ('SCHEDULING_QUERY','SCHEDULING_QUERY_HINTED');
commit;


5. Execute an explain plan for the original query so the stored outline is recognized. This will update the outline to USED once Oracle sees the outline and is using it.

EXPLAIN PLAN FOR
select :"SYS_B_00" ,sit_code ,sit_division ,sh_serial
,sh_schstat ,sh_estlen ,TO_CHAR(sh_date,:"SYS_B_01")...


6. Check to see that the stored outline is now used.

select name, category, used from dba_outlines;

NAME CATEGORY USED
---------------- -------- ---------
SCHEDULING_QUERY DEFAULT USED
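
When an outline is no longer wanted, for example when retesting after an upgrade as described in the summary, it can be removed; a short sketch (the outline and category names are from the examples above):

DROP OUTLINE scheduling_query ;

-- or drop all outlines in a category via the DBMS_OUTLN package
execute dbms_outln.drop_by_cat('MY_OUTLINES')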


Tuesday, May 20, 2008

Oracle Flashback Drop

Summary

Oracle 10g introduced a flashback drop feature that allows you to bring back a dropped table without having to execute any type of recovery operation. This is due to the recycle bin, which is what makes the flashback drop feature work. This feature is very useful for those times when a developer drops a table by mistake. Before flashback drop you either had to have an export of the table, or recover the database using an auxiliary instance to get the table back and then move the table from the auxiliary instance back into the database where the table was dropped. You can also drop a table and bypass the recycle bin by using the PURGE option of DROP TABLE.


Some important things to note about the flashback drop feature

1. Think of each schema getting its own Recycle Bin
2. The recycle bin is a logical collection of previously DROPPED objects
3. In Oracle 10g the default action of a DROP TABLE command is to RENAME the table essentially placing the table in the RECYCLE BIN.
4. Objects in the RECYCLE BIN are NOT automatically purged and must be explicitly purged by a user or DBA action.
5. When a table is dropped, the segments that the table consumed in the tablespace are still present; space is not freed until the dropped table is purged.
6. If an object is dropped and recreated multiple times all dropped versions will be kept in the recycle bin, subject to space.
7. Where multiple versions are present, it's best to reference the tables via the RECYCLEBIN_NAME shown in the RECYCLEBIN view (see the query sketch after this list).
8. For any references to the ORIGINAL_NAME, it is assumed that the most recent dropped version is the one being referenced.
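
Where multiple versions exist, a query like the following (a minimal sketch against the RECYCLEBIN view) maps recycle bin names to drop times:

select original_name, object_name, type, droptime
from recyclebin
order by droptime ;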


Showing the contents of the RECYCLEBIN

Show the current schema's recyclebin
show recyclebin
select * from recyclebin ;

Show the DBA recyclebin, which shows all schemas' recycle bins
select * from dba_recyclebin ;


Purging the RECYCLEBIN

Purge current schema RECYCLEBIN
PURGE RECYCLEBIN ;

Purge All RECYCLEBINs
PURGE DBA_RECYCLEBIN ;

Purge Objects from RECYCLEBIN
PURGE TABLE tablename ;
PURGE INDEX indexname ;

Purge Objects from RECYCLEBIN for a tablespace
PURGE TABLESPACE ts_name ;
PURGE TABLESPACE ts_name USER username ;


Dropping a table and bypassing the RECYCLEBIN

DROP TABLE tablename PURGE ;


Recovering a Table from the RECYCLEBIN

Recover last dropped version of the table
FLASHBACK TABLE tablename TO BEFORE DROP;

Recover the table, but give it a new name
FLASHBACK TABLE tablename TO BEFORE DROP RENAME TO newtablename ;

Recover the table from a prior dropped version of the table
FLASHBACK TABLE "recyclebin name" TO BEFORE DROP ;
FLASHBACK TABLE "recyclebin name" TO BEFORE DROP RENAME TO newtablename ;


Examples Using Flashback Drop

1. Create a table for our testing
SQL> CREATE TABLE flashback_drop_test ( id NUMBER(10) );

Table created.


2. Insert a record so that we have data
SQL> INSERT INTO flashback_drop_test (id) VALUES (1);

1 row created.

SQL> COMMIT;

Commit complete.


3. Drop the table
SQL> DROP TABLE flashback_drop_test;

Table dropped.


4. Show the RECYCLEBIN
SQL> SHOW RECYCLEBIN
ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
------------------- ------------------------------ ------------ -------------------
FLASHBACK_DROP_TEST BIN$qLfTuNLERa6PzKkR4l1luA==$0 TABLE 2008-05-20:16:03:36


5. Show that we can no longer select from the table.
SQL> select * from flashback_drop_test;
select * from flashback_drop_test
*
ERROR at line 1:
ORA-00942: table or view does not exist


6. Bring back the last dropped version of the table
SQL> FLASHBACK TABLE flashback_drop_test TO BEFORE DROP;

Flashback complete.


7. Show that the table is back and we can select from it
SQL> select * from flashback_drop_test;

ID
----------
1


8. Show that the RECYCLEBIN is now empty
SQL> show recyclebin
SQL>


9. Drop the table again using the purge option.
SQL> drop table flashback_drop_test PURGE ;

Table dropped.

10. Show that the recyclebin is empty as we used the PURGE option.
SQL> show recyclebin
SQL>


11. Recreate the table for our further testing.
SQL> CREATE TABLE flashback_drop_test ( id NUMBER(10) );

Table created.


12. Drop the table again, this time without the purge option.
SQL> drop table flashback_drop_test ;

Table dropped.


13. Recreate the table so we will have multiple versions of it.
SQL> CREATE TABLE flashback_drop_test ( id NUMBER(10) );

Table created.


14. Insert a record so this version has data; we will then have 2 versions of the table, one with a record and one empty.
SQL> INSERT INTO flashback_drop_test (id) VALUES (1);

1 row created.

SQL> COMMIT;


15. Show that we have the table in the RECYCLEBIN again.
SQL> show recyclebin
ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
------------------- ------------------------------ ------------ -------------------
FLASHBACK_DROP_TEST BIN$4/+GRpvFSl6cIWL21YkgPA==$0 TABLE 2008-05-20:16:03:39


16. Show that we can select directly from the table in the RECYCLEBIN using the RECYCLEBIN name, and that the dropped version in the RECYCLEBIN has no rows.
SQL> select * from "BIN$XzZkNWkNQBiwl9WSVWk49w==$0" ;

no rows selected


17. Show that the current table still exists and has rows
SQL> select * from flashback_drop_test ;

ID
----------
1


18. Drop the table
SQL> drop table flashback_drop_test ;

Table dropped.


19. Show that we now have 2 versions of the table in the recyclebin.
SQL> show recyclebin
ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
------------------- ------------------------------ ------------ -------------------
FLASHBACK_DROP_TEST BIN$3FoAb84NSMmiVnhYco2e5w==$0 TABLE 2008-05-20:16:50:25
FLASHBACK_DROP_TEST BIN$XzZkNWkNQBiwl9WSVWk49w==$0 TABLE 2008-05-20:16:46:32


20. Bring back the latest version of the table but with another table name.
SQL> FLASHBACK TABLE flashback_drop_test TO BEFORE DROP RENAME TO flashback_drop_test_old;

Flashback complete.


21. Show that we can select from the new table name.
SQL> select * from flashback_drop_test_old;

ID
----------
1


22. Show that we still do not see the original table.
SQL> select * from flashback_drop_test;
select * from flashback_drop_test
*
ERROR at line 1:
ORA-00942: table or view does not exist


23. Show that we still have a version of the table in the RECYCLEBIN
SQL> show recyclebin
ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
------------------- ------------------------------ ------------ -------------------
FLASHBACK_DROP_TEST BIN$XzZkNWkNQBiwl9WSVWk49w==$0 TABLE 2008-05-20:16:46:32


24. Create a new version of the table.
SQL> CREATE TABLE flashback_drop_test ( id NUMBER(10) );

Table created.


25. Insert a row so this version has a record.
SQL> INSERT INTO flashback_drop_test (id) VALUES (1);

1 row created.

SQL> COMMIT;


26. Drop the table
SQL> drop table flashback_drop_test ;

Table dropped.


27. Show that the 2 versions are in the RECYCLEBIN
SQL> show recyclebin
ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
------------------- ------------------------------ ------------ -------------------
FLASHBACK_DROP_TEST BIN$MMV1E9cRRZCvUkC8HM3WhQ==$0 TABLE 2008-05-20:16:58:45
FLASHBACK_DROP_TEST BIN$XzZkNWkNQBiwl9WSVWk49w==$0 TABLE 2008-05-20:16:46:32


28. Restore the older version of the table from the RECYCLEBIN, this one does not have a record in it.
SQL> FLASHBACK TABLE "BIN$XzZkNWkNQBiwl9WSVWk49w==$0" TO BEFORE DROP ;

Flashback complete.


29. Show that the table is the correct version as it will have no record in it.
SQL> select * from flashback_drop_test ;

no rows selected

Thursday, May 8, 2008

Oracle Transparent Data Encryption

Summary

Oracle has provided a way to automatically encrypt the data in a column of a table and decrypt it automatically, therefore allowing an application to function as if the column were not encrypted. Transparent data encryption protects the data in a table column with the use of a wallet. By default transparent data encryption uses the AES algorithm with a 192-bit key. The following is a short test showing the use of transparent data encryption.

There is overhead to transparent data encryption that cannot be ignored, as encryption during inserts and updates and decryption during selects consume CPU cycles. While this overhead is fairly low, it is safer to selectively apply encryption on a column basis only where it is truly needed. Proper load tests should be performed incrementally as column encryption is rolled out so that the full impact to the system can be measured.

There are three significant things to mention about encryption: one is that Streams does not support the use of encrypted columns; the other two have to do with indexes. First, you cannot index columns that are encrypted with salt, therefore encrypted columns that require an index (because the column will be used in searches) will need to be encrypted without salt. By default transparent data encryption is done with salt, and to encrypt a column without salt the NO SALT option must be specified. Second, indexes on encrypted columns behave differently in a single respect that can have a large impact. Take the table emp for example: add a column ssn, encrypt it and build an index on it. We will use 2 query examples:


select ename from emp where ssn = '123-45-6789' ;

This query will use the index whether the column is encrypted or not.

select ename from emp where ssn like '123-45-%' ;

This query, however, will use the index only if the column is not encrypted. If the column is encrypted the index will be ignored; if the column is not encrypted the index would indeed be used. The reason the index is not used is that the index now contains encrypted values, and a substring match against an encrypted value is not possible. We will show that the index is indeed not used in the test below.
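
For completeness, a column can also be declared encrypted at table creation time; a minimal sketch (table and column names are made up; the algorithm and NO SALT options follow the syntax discussed above, and the wallet must be open as in the test below):

CREATE TABLE customers
( id NUMBER,
ssn VARCHAR2(11) ENCRYPT USING 'AES256' NO SALT
) ;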


10g (10.2.0.3 Windows) and 11g (11.1.0.6) Test

1. Set the Wallet location for encryption in the sqlnet.ora file.

10g

ENCRYPTION_WALLET_LOCATION =
(SOURCE=
(METHOD=file)
(METHOD_DATA=
(DIRECTORY=c:\oracle\product\10.2.0\db_1\network\admin\ENCRYPT)))

11g

ENCRYPTION_WALLET_LOCATION =
(SOURCE=
(METHOD=file)
(METHOD_DATA=
(DIRECTORY=c:\oracle\product\11.1.0\db_1\network\admin\ENCRYPT)))


2. Create the wallet, initial creation of the wallet will automatically open it.
alter system set encryption key authenticated by "xxxxxxxxx" ;

SQL> alter system set encryption key authenticated by "xxxxxxxxx" ;
System altered.


3. Each time you need to use the wallet it will need to be explicitly opened. The wallet allows the automatic decryption of data as it is read; otherwise the data in encrypted columns remains encrypted and cannot be viewed. All other columns that are not encrypted work as normal even when the wallet is not open. In this case, since it was just created, the wallet is already open.

alter system set encryption wallet open authenticated by "kim11ball" ;

SQL> alter system set encryption wallet open authenticated by "xxxxxxxxx" ;
alter system set encryption wallet open authenticated by "xxxxxxxxx"
*
ERROR at line 1:
ORA-28354: wallet already open

Note: You can close the wallet explicitly with the following: alter system set encryption wallet close ;

SQL> alter system set encryption wallet close ;

System altered.

4. Let's open the wallet so we can proceed with the test.

SQL> alter system set encryption wallet open authenticated by "xxxxxxxxx" ;

System altered.


5. To test transparent data encryption using the emp table, first add an ssn column.

SQL> alter table emp add ssn varchar2(11) ;

Table altered.

SQL> update emp set ssn = '222-22-2222' ;

51 rows updated.

SQL> commit ;


6. Encrypt the ssn column

SQL> alter table emp modify (ssn encrypt) ;

Table altered.

Note: You can turn the column encryption off with the following:

SQL> alter table emp modify (ssn decrypt) ;

Table altered.


7. Check the emp table structure; we can see the SSN column shows ENCRYPT.

SQL> desc emp
Name Null? Type
----------- -------- ---------------------------
EMPNO NOT NULL NUMBER(4)
ENAME VARCHAR2(10)
JOB VARCHAR2(9)
MGR NUMBER(4)
HIREDATE DATE
SAL NUMBER(7,2)
COMM NUMBER(7,2)
DEPTNO NUMBER(2)
SSN VARCHAR2(11) ENCRYPT


8. Let's verify the wallet behavior: close the wallet, show that a bad password will not open it, and that closing an already closed wallet raises an error.

SQL> alter system set encryption wallet close;

System altered.

SQL> alter system set encryption wallet open authenticated by "nsfjsnfjksfa" ;
alter system set encryption wallet open authenticated by "nsfjsnfjksfa"
*
ERROR at line 1:
ORA-28353: failed to open wallet

SQL> alter system set encryption wallet close;
alter system set encryption wallet close
*
ERROR at line 1:
ORA-28365: wallet is not open


9. We will run the query as a user called temp; the only permissions this user has are select on the emp table and create session.

SQL> select user from dual ;

USER
------------------------------
TEMP


10. If we query the SSN column we see that the column cannot be read; 10g is restricting access to the encrypted column while the wallet is closed, as expected.

SQL> select ssn from emp ;
select ssn from emp
*
ERROR at line 1:
ORA-28365: wallet is not open


11. Now open the wallet for some more testing.

SQL> alter system set encryption wallet open authenticated by "xxxxxxxxx" ;

System altered.


12. See if an index can be built. As the summary indicated, an index cannot be built on a column encrypted with salt.

SQL> create index emp_ssn on emp (ssn) ;
create index emp_ssn on emp (ssn)
*
ERROR at line 1:
ORA-28338: cannot encrypt indexed column(s) with salt


13. Turn off the salt using an alter table statement so that the index can be built.

SQL> alter table emp modify ssn encrypt no salt ;

Table altered.


14. Now build the index

SQL> create index emp_ssn on emp (ssn) ;

Index created.


15. Now check the effect the encryption has on index usage.

A: Check using the column in the where clause with an =

SQL> explain plan for select ename from emp where ssn = '123-45-6789' ;

Explained.

SQL> @plan

Plan hash value: 1582005447

---------------------------------------------------------------------------
Id   Operation                     Name     Rows  Bytes  Cost (%CPU)  Time
---------------------------------------------------------------------------
  0  SELECT STATEMENT                          1     38      2   (0)  00:00:01
  1   TABLE ACCESS BY INDEX ROWID  EMP         1     38      2   (0)  00:00:01
* 2    INDEX RANGE SCAN            EMP_SSN     1             1   (0)  00:00:01
---------------------------------------------------------------------------

B: Check the index use with LIKE; as we can see from the explain plan, the index is suppressed.

SQL> explain plan for select ename from emp where ssn like '123-45-%' ;

Explained.

SQL> @plan

Plan hash value: 3956160932

---------------------------------------------------------------
Id   Operation          Name  Rows  Bytes  Cost (%CPU)  Time
---------------------------------------------------------------
  0  SELECT STATEMENT           2     76      3   (0)  00:00:01
* 1   TABLE ACCESS FULL  EMP    2     76      3   (0)  00:00:01
---------------------------------------------------------------


16. Now remove the encryption from the ssn column and see if the LIKE uses the index as expected; indeed, the index is used.

SQL> alter table emp modify (ssn decrypt) ;

Table altered.

SQL> explain plan for select ename from emp where ssn like '123-45-%' ;

Explained.

SQL> @plan

Plan hash value: 1582005447

---------------------------------------------------------------------------
Id   Operation                     Name     Rows  Bytes  Cost (%CPU)  Time
---------------------------------------------------------------------------
  0  SELECT STATEMENT                          2     28      2   (0)  00:00:01
  1   TABLE ACCESS BY INDEX ROWID  EMP         2     28      2   (0)  00:00:01
* 2    INDEX RANGE SCAN            EMP_SSN     1             1   (0)  00:00:01
---------------------------------------------------------------------------

Oracle 11g Read-Only Tables

Summary

Prior to 11g you could make an entire tablespace read-only, which meant you had to group all the tables you wanted read-only into a common tablespace or set of tablespaces, or you had to create a trigger that would raise an error. 11g now allows a single table to be made read-only without affecting the other tables in the tablespace. This saves the overhead of having to move tables into read-only tablespaces when you are ready to make a table read-only. Not only can we place a table in read-only mode, we can change it back to read-write to allow inserts and updates when needed.

The read-only designation for a table stops all DML (truncate/insert/update/delete, etc.) operations and certain DDL operations as well, like ALTER TABLE to add/modify/rename/drop columns, ALTER TABLE drop/truncate/exchange (SUB)PARTITION, and ALTER TABLE upgrade. Keep in mind that some DDL can still be done against a read-only table, like DROP TABLE, ALTER TABLE add/coalesce/merge/modify/move/rename/split (SUB)PARTITION, and ALTER TABLE rename/move/add supplemental log/drop supplemental log/deallocate unused.

To identify whether a table is read-only, a new column called READ_ONLY has been added to dba_tables, user_tables and all_tables. This column contains YES if the table is read-only and NO when it is not.
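
For example, to list the tables currently marked read-only in a schema (a minimal sketch):

select table_name, read_only
from user_tables
where read_only = 'YES' ;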

Unfortunately, at this time Oracle does not allow just a partition of a table to be made read-only; to make a partition read-only it must be placed in a tablespace that can be marked read-only. One can only hope Oracle will provide that ability in a future release, saving the overhead of moving partitions or having to place partitions in separate tablespaces to make them read-only.
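
A sketch of that workaround, assuming a dedicated tablespace y1980_ts (hypothetical name) holds the partition:

ALTER TABLE big_emp2 MOVE PARTITION y1980 TABLESPACE y1980_ts ;
ALTER TABLESPACE y1980_ts READ ONLY ;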

Putting table into read-only:

alter table big_emp2 read only ;

Taking table out of read-only

alter table big_emp2 read write ;

11g (11.1.0.6 Windows) Test

Create the table for the test and load with dummy data.

SQL> -- create the table for the test
SQL> CREATE TABLE big_emp2
2 (EMPNO NUMBER(4) CONSTRAINT big_emp2_pk primary key disable,
3 ENAME VARCHAR2(10),
4 JOB VARCHAR2(9),
5 MGR NUMBER(4),
6 HIREDATE DATE,
7 SAL NUMBER(7,2),
8 COMM NUMBER(7,2),
9 DEPTNO NUMBER(2)
10 )
11 PARTITION BY RANGE (HIREDATE)
12 (PARTITION y1980 VALUES LESS THAN ( TO_DATE('01-JAN-1981','DD-MON-YYYY') ) TABLESPACE USERS,
13 PARTITION y1981 VALUES LESS THAN ( TO_DATE('01-JAN-1982','DD-MON-YYYY') ) TABLESPACE USERS,
14 PARTITION y1982 VALUES LESS THAN ( TO_DATE('01-JAN-1983','DD-MON-YYYY') ) TABLESPACE USERS,
15 PARTITION y1983 VALUES LESS THAN ( TO_DATE('01-JAN-1984','DD-MON-YYYY') ) TABLESPACE USERS,
16 PARTITION y1984 VALUES LESS THAN ( TO_DATE('01-JAN-1985','DD-MON-YYYY') ) TABLESPACE USERS,
17 PARTITION y1985 VALUES LESS THAN ( TO_DATE('01-JAN-1986','DD-MON-YYYY') ) TABLESPACE USERS,
18 PARTITION y1986 VALUES LESS THAN ( TO_DATE('01-JAN-1987','DD-MON-YYYY') ) TABLESPACE USERS,
19 PARTITION y1987 VALUES LESS THAN ( MAXVALUE ) TABLESPACE USERS
20 )
21 /

Table created.

SQL>
SQL> -- create the unique index for the primary key
SQL> create unique index beig_emp2_pk on big_emp2 (empno) ;

Index created.

SQL>
SQL> -- enable the primary key using the unique index
SQL> alter table big_emp2 enable primary key using index ;

Table altered.

SQL> -- load data
SQL> insert into big_emp2 select * from big_emp ;

9254 rows created.


Make the Table Read Only

SQL> alter table big_emp2 read only ;

Table altered.


Check that the table shows read-only

SQL> select table_name, read_only from user_tables where table_name = 'BIG_EMP2';

TABLE_NAME REA
------------------------------ ---
BIG_EMP2 YES


Attempt to do DML operations against the table

SQL> update big_emp2 set salary = salary + 200 ;
update big_emp2 set salary = salary + 200
*
ERROR at line 1:
ORA-12081: update operation not allowed on table "MRMESSIN"."BIG_EMP2"

SQL> insert into big_emp2 values (0,'My Name','MGR','1000','01-JAN-08',5000,1000,10) ;
insert into big_emp2 values (0,'My Name','MGR','1000','01-JAN-08',5000,1000,10)
*
ERROR at line 1:
ORA-12081: update operation not allowed on table "MRMESSIN"."BIG_EMP2"

SQL> delete from big_emp2 ;
delete from big_emp2
*
ERROR at line 1:
ORA-12081: update operation not allowed on table "MRMESSIN"."BIG_EMP2"

SQL> truncate table big_emp2 ;
truncate table big_emp2
*
ERROR at line 1:
ORA-12081: update operation not allowed on table "MRMESSIN"."BIG_EMP2"


Attempt to do DDL operations against the table

SQL> alter table big_emp2 add ssn varchar2(11) ;
alter table big_emp2 add ssn varchar2(11)
*
ERROR at line 1:
ORA-12081: update operation not allowed on table "MRMESSIN"."BIG_EMP2"

SQL> ALTER TABLE big_emp2 drop partition y1980 ;
ALTER TABLE big_emp2 drop partition y1980
*
ERROR at line 1:
ORA-12081: update operation not allowed on table "MRMESSIN"."BIG_EMP2"

Monday, May 5, 2008

Moving Cost Based Optimizer Statistics

We have found that over time test databases do not match production databases in data or even in data volume unless we refresh them frequently. The problem is getting the development and test staff to allow the downtime necessary to do a database refresh from production, especially for larger databases. So how do we make sure the execution plans in our test environment match what the production execution plans would be? The answer is to take the cost based optimizer statistics from production and put them in the test database.

1. To move the cost based optimizer statistics we first create a stats table and put the CBO stats in that stats table. In our example we will use the SYSTEM schema for the stats table.

----------------------------------------------------------------------
-- Setup our stats table and export our stats for the passed schema
-- into the stats table
----------------------------------------------------------------------
DECLARE
v_tbl_cnt NUMBER ;

BEGIN
-- enable DBMS_OUTPUT
dbms_output.enable (1000000) ;

-- Check if our stats table exists
SELECT count(*)
INTO v_tbl_cnt
FROM dba_tables
WHERE owner = 'SYSTEM'
AND table_name = 'CBO_STATS' ;

-- If the stats table already exists, drop it so it is always
-- recreated with the correct structure
IF v_tbl_cnt > 0 THEN
dbms_output.put_line('Having to Remove SYSTEM.CBO_STATS as it already Exists.');

-- Drop the existing stats table
DBMS_STATS.drop_stat_table('SYSTEM','CBO_STATS') ;
END IF ;

-- Create our stats table
dbms_output.put_line ('Creating SYSTEM.CBO_STATS Table to Store CBO Stats for Schema &1.') ;
dbms_stats.create_stat_table(ownname => 'SYSTEM', stattab => 'CBO_STATS', tblspace => 'USERS') ;

-- put our schema stats into our newly created stats table
dbms_output.put_line ('Exporting Schema Stats for &1 into SYSTEM.CBO_STATS table.') ;
dbms_stats.export_schema_stats (ownname => '&1', stattab => 'CBO_STATS', statown => 'SYSTEM') ;
END ;
/
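
The block uses the SQL*Plus substitution variable &1 for the schema name, so it is typically saved as a script and invoked with the schema passed as an argument; the script name here is just an assumption:

SQL> @export_cbo_stats.sql SCOTT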


2. Next we need to export the stats table we just loaded the cost based optimizer statistics into.
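
The Data Pump calls below assume a directory object named DMPDIR already exists. If it does not, it can be created along these lines, where the OS path is a placeholder assumption:

-- Point the path at a directory the database server OS user can write to
CREATE OR REPLACE DIRECTORY DMPDIR AS 'C:\oracle\dmpdir';
GRANT READ, WRITE ON DIRECTORY DMPDIR TO SYSTEM;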

----------------------------------------------------------------------
-- export our stats table so it can be imported to another database
-- We will use the DBMS_DATAPUMP Package to do our export to simplify
-- the process
----------------------------------------------------------------------
DECLARE
v_handle NUMBER ;
ind NUMBER ; -- Loop index
spos NUMBER ; -- String starting position
slen NUMBER ; -- String length for output
percent_done NUMBER ; -- Percentage of job complete
job_state VARCHAR2(30) ; -- To keep track of job state
le ku$_LogEntry ; -- For WIP and error messages
js ku$_JobStatus ; -- The job status from get_status
jd ku$_JobDesc ; -- The job description from get_status
sts ku$_Status ; -- The status object returned by get_status

BEGIN
dbms_output.enable (1000000) ;

-- create our datapump export job
dbms_output.put_line ('Opening Export Job') ;
v_handle := DBMS_DATAPUMP.OPEN (operation => 'EXPORT',
job_mode => 'TABLE',
job_name => 'EXPORT_CBO_STATS_JOB',
version => 'COMPATIBLE');

-- set our logfile
dbms_output.put_line ('Setting Log file for Export Job') ;
DBMS_DATAPUMP.ADD_FILE (handle => v_handle,
filename => 'expdp_export_cbo_stats.log',
directory => 'DMPDIR',
filetype => 3) ;

-- set our dump file
dbms_output.put_line ('Setting Dump file for Export Job') ;
DBMS_DATAPUMP.ADD_FILE (handle => v_handle,
filename => 'expdp_export_cbo_stats.dmp',
directory => 'DMPDIR',
filetype => 1) ;

-- Add the schema filter
DBMS_DATAPUMP.METADATA_FILTER(v_handle,'SCHEMA_EXPR','IN (''SYSTEM'')');

-- set the filter for datapump to be the schema we want to export.
dbms_output.put_line ('Adding filter to only get SYSTEM.CBO_STATS') ;
DBMS_DATAPUMP.METADATA_FILTER (handle => v_handle,
name => 'NAME_EXPR',
value => '= ''CBO_STATS''',
object_type => 'TABLE') ;

-- Start the datapump export job
dbms_output.put_line ('Starting Datapump Export Job to dump SYSTEM.CBO_STATS table.') ;
DBMS_DATAPUMP.START_JOB (v_handle) ;

-- DBMS_DATAPUMP.WAIT_FOR_JOB could be used in place of the status loop below

-- initialize percent_done and job_state values
percent_done := 0 ;
job_state := 'UNDEFINED' ;

-- Loop while the job_state is not completed or stopped
while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
dbms_datapump.get_status(v_handle,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip,-1,job_state,sts) ;

js := sts.job_status ;

-- If the percentage done changed, display the new value.
if js.percent_done != percent_done then
dbms_output.put_line('*** Job percent done = ' || to_char(js.percent_done)) ;

percent_done := js.percent_done ;
end if ;

-- If any work-in-progress (WIP) or Error messages were received for the job,
-- display them.
if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0) then
le := sts.wip ;
else
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0) then
le := sts.error ;
else
le := null ;
end if ;
end if ;

if le is not null then
ind := le.FIRST ;

-- loop while loop indicator is not null
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end loop;

-- Indicate that the job finished and gracefully detach from it.
dbms_output.put_line('Data Pump Export Job has completed') ;
dbms_output.put_line('Final job state = ' || job_state) ;
dbms_datapump.detach(v_handle) ;

-- Handle exceptions from our export
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('Exception in Data Pump job') ;
dbms_datapump.get_status(v_handle,dbms_datapump.ku$_status_job_error, 0, job_state, sts) ;

IF (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0) THEN
le := sts.error;

if le is not null then
ind := le.FIRST ;

while ind is not null loop
spos := 1;
slen := length(le(ind).LogText) ;

if slen > 255 then
slen := 255;
end if;

while slen > 0 loop
dbms_output.put_line(substr(le(ind).LogText, spos, slen));
spos := spos + 255;
slen := length(le(ind).LogText) + 1 - spos;
end loop;

ind := le.NEXT(ind);
end loop;
end if;
END IF ;
END ;
/
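
For reference, roughly the same export can be done from the command line with the expdp utility; the password is a placeholder:

expdp system/<password> directory=DMPDIR dumpfile=expdp_export_cbo_stats.dmp logfile=expdp_export_cbo_stats.log tables=SYSTEM.CBO_STATS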


3. If we have a database link (TARGET) to the database we are moving the stats to and it is on a different host, we can transfer the dump file across the database link to a DMPDIR directory defined on the other side. Otherwise we can copy the file with OS-level FTP, SCP or copy commands.

----------------------------------------------------------------------
-- Transfer the file via DBMS_FILE_TRANSFER over network link
-- Can change process to just do import over network link
----------------------------------------------------------------------
EXEC DBMS_FILE_TRANSFER.PUT_FILE ('DMPDIR', 'expdp_export_cbo_stats.dmp', 'DMPDIR', 'expdp_export_cbo_stats.dmp', 'TARGET') ;
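
This assumes a database link named TARGET pointing at the destination database already exists; a minimal sketch of creating one follows, where the user, password and service name are placeholder assumptions:

-- user, password and service name below are placeholders
CREATE DATABASE LINK TARGET
CONNECT TO system IDENTIFIED BY change_me
USING 'TARGETDB';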


4. The stats table can now be removed, since we have exported it and transferred the dump file to the destination.

----------------------------------------------------------------------
-- Clean up by Removing our stats table now that we have exported it
----------------------------------------------------------------------
EXEC DBMS_STATS.drop_stat_table('SYSTEM','CBO_STATS') ;



5. Connect to the destination database so that we can import the stats table.
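
For example, from SQL*Plus, where the connect string is an assumption:

SQL> connect system@targetdb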

-- In our example the two databases are on the same host, so the same DMPDIR directory still works
-- Import our stats table into the other database for stats import use
DECLARE
ind NUMBER ; -- Loop index
spos NUMBER ; -- String starting position
slen NUMBER ; -- String length for output
v_handle NUMBER ; -- Data Pump job handle
percent_done NUMBER ; -- Percentage of job complete
job_state VARCHAR2(30) ; -- To keep track of job state
le ku$_LogEntry ; -- For WIP and error messages
js ku$_JobStatus ; -- The job status from get_status
jd ku$_JobDesc ; -- The job description from get_status
sts ku$_Status ; -- The status object returned by get_status

BEGIN
dbms_output.enable (1000000) ;

-- Create a Data Pump job to do a "full" import as only our stats table is there
-- create our datapump export job
v_handle := DBMS_DATAPUMP.OPEN (operation => 'IMPORT',
job_mode => 'FULL',
job_name => 'IMPORT_CBO_STATS_JOB',
version => 'COMPATIBLE');

-- set our logfile for import
DBMS_DATAPUMP.ADD_FILE (handle => v_handle,
filename => 'expdp_import_cbo_stats.log',
directory => 'DMPDIR',
filetype => 3) ;

-- set our dump file
DBMS_DATAPUMP.ADD_FILE (handle => v_handle,
filename => 'expdp_export_cbo_stats.dmp',
directory => 'DMPDIR',
filetype => 1) ;

-- If a table already exists in the destination schema, replace it before loading
-- to ensure the structure is correct; the data already there is not important here.
DBMS_DATAPUMP.SET_PARAMETER(v_handle,'TABLE_EXISTS_ACTION','REPLACE');

-- Start the job. An exception is returned if something is not set up properly.
DBMS_DATAPUMP.START_JOB(v_handle);

-- The import job should now be running. In the following loop, the job is
-- monitored until it completes. In the meantime, progress information is
-- displayed. Note: this is identical to the export example.

-- initialize percent_done and job_state values
percent_done := 0 ;
job_state := 'UNDEFINED' ;

-- Loop while the job_state is not completed or stopped
while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
dbms_datapump.get_status(v_handle,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip,-1,job_state,sts) ;

js := sts.job_status ;

-- If the percentage done changed, display the new value.
if js.percent_done != percent_done then
dbms_output.put_line('*** Job percent done = ' || to_char(js.percent_done)) ;

percent_done := js.percent_done ;
end if ;

-- If any work-in-progress (WIP) or Error messages were received for the job,
-- display them.
if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0) then
le := sts.wip ;
else
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0) then
le := sts.error ;
else
le := null ;
end if ;
end if ;

if le is not null then
ind := le.FIRST ;

-- loop while loop indicator is not null
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end loop;

-- Indicate that the job finished and gracefully detach from it.
dbms_output.put_line('Data Pump Import Job has completed') ;
dbms_output.put_line('Final job state = ' || job_state) ;
dbms_datapump.detach(v_handle) ;


-- Handle exceptions from our import
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('Exception in Data Pump job') ;
dbms_datapump.get_status(v_handle,dbms_datapump.ku$_status_job_error, 0, job_state, sts) ;

IF (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0) THEN
le := sts.error;

if le is not null then
ind := le.FIRST ;

while ind is not null loop
spos := 1;
slen := length(le(ind).LogText);

if slen > 255 then
slen := 255;
end if;

while slen > 0 loop
dbms_output.put_line(substr(le(ind).LogText, spos, slen));
spos := spos + 255;
slen := length(le(ind).LogText) + 1 - spos;
end loop;

ind := le.NEXT(ind);
end loop;
end if;
END IF ;
END ;
/
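
For reference, the equivalent command line import with the impdp utility would look roughly like this; the password is a placeholder:

impdp system/<password> directory=DMPDIR dumpfile=expdp_export_cbo_stats.dmp logfile=expdp_import_cbo_stats.log full=y table_exists_action=replace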

6. If bringing the stats table from a prior version of Oracle into a newer version, you will need to upgrade the stats table before importing the statistics from it.

------------------------------------------------------------------
-- If moving stats from a previous database version, upgrade the stats table first
------------------------------------------------------------------
exec dbms_stats.upgrade_stat_table ('SYSTEM', 'CBO_STATS') ;


7. Lastly import the statistics into the schema.

------------------------------------------------------------------
-- Import the schema stats exported from the other database
------------------------------------------------------------------

EXEC DBMS_STATS.import_schema_stats('&1','CBO_STATS',NULL,'SYSTEM');
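
A quick way to confirm the statistics landed is to check LAST_ANALYZED for the schema's tables, which after the import should reflect when the stats were gathered on the source database:

-- run as a privileged user; &1 is the same schema name used above
SELECT table_name, num_rows, last_analyzed
FROM dba_tables
WHERE owner = UPPER('&1')
ORDER BY table_name ;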