Wednesday, December 26, 2007

Add/Remove Disks from ASM disk group on live database

ASM allows adding and removing volumes from a disk group while the disk group is still in use by a database. This allows replacing disk volumes, moving from old disks to new, without having to bring the database down.

Steps:

1. Find out our disk groups

SQL> select group_number, name from v$asm_diskgroup ;
GROUP_NUMBER NAME
------------ ------------------------------
1 P1CSM2_ASM_DG1

SQL>


2. Find the disk volumes that make up the ASM diskgroup that we want to work with. In this example we will be replacing the 3 old disk volumes with 3 new ones.

SQL> select disk_number, mode_status, name, path from v$asm_disk where group_number = 1 order by disk_number, name ;

DISK_NUMBER MODE_ST NAME
----------- ------- ------------------------------
PATH
-----------------------------------------------------------
0 ONLINE P1CSM2_ASM_DG1_0000
/dev/rhdisk2
1 ONLINE P1CSM2_ASM_DG1_0001
/dev/rhdisk3
2 ONLINE P1CSM2_ASM_DG1_0002
/dev/rhdisk4

SQL>

3. Make sure our new volumes can be seen by ASM, as our disk string will be different.

SQL> alter system set ASM_DISKSTRING='/dev/rhdisk*','/dev/rrawlv*' scope=both ;

System altered.

SQL>

4. Add new disks to our disk group

SQL> ALTER DISKGROUP P1CSM2_ASM_DG1 ADD DISK '/dev/rrawlv01' NAME P1CSM2_ASM_DG1_0010 NOFORCE ;

Diskgroup altered.

SQL> select * from v$asm_operation ;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK
------------ ----- ---- ----- ------ ----- ----------

EST_RATE EST_MINUTES
-------- -----------

1 REBAL RUN 1 1 910 1167
1577 0

SQL> select * from v$asm_operation ;

no rows selected

SQL> ALTER DISKGROUP P1CSM2_ASM_DG1 ADD DISK '/dev/rrawlv02' NAME P1CSM2_ASM_DG1_0011 NOFORCE ;

Diskgroup altered.

SQL> select * from v$asm_operation ;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK
------------ ----- ---- ----- ------ ----- --------
EST_RATE EST_MINUTES
-------- -----------
1 REBAL RUN 1 1 413 1881
1498 0

SQL> select * from v$asm_operation ;

no rows selected

SQL> ALTER DISKGROUP P1CSM2_ASM_DG1 ADD DISK '/dev/rrawlv03' NAME P1CSM2_ASM_DG1_0012 NOFORCE ;

Diskgroup altered.

SQL> select * from v$asm_operation ;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK
------------ ----- ---- ----- ------ ----- --------
EST_RATE EST_MINUTES
-------- -----------
1 REBAL RUN 1 1 254 1786
1514 1

SQL> select * from v$asm_operation ;

no rows selected

SQL>
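Side note: the three disks can also be added in a single statement so ASM performs one rebalance instead of three. A hedged sketch, optionally raising the rebalance power for the operation:

ALTER DISKGROUP P1CSM2_ASM_DG1 ADD DISK
'/dev/rrawlv01' NAME P1CSM2_ASM_DG1_0010,
'/dev/rrawlv02' NAME P1CSM2_ASM_DG1_0011,
'/dev/rrawlv03' NAME P1CSM2_ASM_DG1_0012
REBALANCE POWER 4 ;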


5. Check that our new disk volumes are now part of our disk group.

SQL> select disk_number, mode_status, name, path from v$asm_disk where group_number = 1 order by disk_number, name ;

DISK_NUMBER MODE_ST NAME
----------- ------- ------------------------------
PATH
-----------------------------------------------------------
0 ONLINE P1CSM2_ASM_DG1_0000
/dev/rhdisk2
1 ONLINE P1CSM2_ASM_DG1_0001
/dev/rhdisk3
2 ONLINE P1CSM2_ASM_DG1_0002
/dev/rhdisk4
3 ONLINE P1CSM2_ASM_DG1_0010
/dev/rrawlv01
4 ONLINE P1CSM2_ASM_DG1_0011
/dev/rrawlv02
5 ONLINE P1CSM2_ASM_DG1_0012
/dev/rrawlv03

6 rows selected.

SQL>


6. Remove the old disks from the disk group
SQL> ALTER DISKGROUP P1CSM2_ASM_DG1 DROP DISK P1CSM2_ASM_DG1_0000 ;

Diskgroup altered.

SQL> select * from v$asm_operation ;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK
------------ ----- ---- ----- ------ ----- --------

EST_RATE EST_MINUTES
-------- -----------
1 REBAL RUN 1 1 548 2884

1588 1

SQL> select * from v$asm_operation ;


no rows selected

SQL> ALTER DISKGROUP P1CSM2_ASM_DG1 DROP DISK P1CSM2_ASM_DG1_0001 ;

Diskgroup altered.

SQL> select * from v$asm_operation ;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK
------------ ----- ---- ----- ------ ----- --------

EST_RATE EST_MINUTES
-------- -----------
1 REBAL RUN 1 1 906 3274

1731 1


SQL> select * from v$asm_operation ;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK
------------ ----- ---- ----- ------ ----- --------

EST_RATE EST_MINUTES
-------- -----------
1 REBAL RUN 1 1 3020 3226

1972 0


SQL> select * from v$asm_operation ;

no rows selected

SQL> ALTER DISKGROUP P1CSM2_ASM_DG1 DROP DISK P1CSM2_ASM_DG1_0002 ;

Diskgroup altered.

SQL> select * from v$asm_operation ;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK
------------ ----- ---- ----- ------ ----- --------

EST_RATE EST_MINUTES
-------- -----------
1 REBAL RUN 1 1 1513 3197

1957 0

SQL> select * from v$asm_operation ;

no rows selected

SQL>



7. Check the removal of the disk volumes.
SQL> select disk_number, mode_status, name, path from v$asm_disk where group_number = 1 order by disk_number, name ;


DISK_NUMBER MODE_ST NAME
----------- ------- ------------------------------
PATH
------------------------------------------------------------------
3 ONLINE P1CSM2_ASM_DG1_0010
/dev/rrawlv01
4 ONLINE P1CSM2_ASM_DG1_0011
/dev/rrawlv02
5 ONLINE P1CSM2_ASM_DG1_0012
/dev/rrawlv03

SQL>
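As a hedged follow-up check, the dropped volumes should now report a FORMER header status and no disk group assignment, assuming the disk string still covers the old devices:

select path, header_status from v$asm_disk where group_number = 0 ;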

Thursday, December 13, 2007

Relocating or Increasing the Size of Oracle Online Redo Logs on a Live Database

1. Get a list of the existing Online Redo Log Members

SQL> select group#, member from v$logfile ;

GROUP#
----------
MEMBER
--------------------------------------------------------------------------------
3
/ora_ENV/app/oracle/product/DB/10g_ocs/oradata/CSP2/onlinelog/o1_mf_3_3j3cth7n_.
log

3
/ora_ENV/app/oracle/product/DB/10g_ocs/flash_recovery_area/CSP2/onlinelog/o1_mf_
3_3j3cthgv_.log

2
/ora_ENV/app/oracle/product/DB/10g_ocs/oradata/CSP2/onlinelog/o1_mf_2_3j3ctgrc_.
log

2
/ora_ENV/app/oracle/product/DB/10g_ocs/flash_recovery_area/CSP2/onlinelog/o1_mf_
2_3j3ctgz6_.log

1
/ora_ENV/app/oracle/product/DB/10g_ocs/oradata/CSP2/onlinelog/o1_mf_1_3j3ctg9k_.
log

1
/ora_ENV/app/oracle/product/DB/10g_ocs/flash_recovery_area/CSP2/onlinelog/o1_mf_
1_3j3ctgjp_.log


6 rows selected.

SQL>


2. Find the number of Online Redo Log groups and their current size.

SQL> select group#, bytes/1024 from v$log ;

GROUP# BYTES/1024
---------- ----------
1 10240
2 10240
3 10240

SQL>


3. Create new Redo Log Groups and Members for the database; in our example we will also increase the size.

$sqlplus "/ AS SYSDBA"

SQL*Plus: Release 10.1.0.4.2 - Production on Wed Dec 12 19:05:24 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.4.2 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> ALTER DATABASE ADD LOGFILE GROUP 4 ('/ora_DB/REDO1/log/CSP2/redo4a.log', '/ora_DB/REDO2/log/CSP2/redo4b.log')
SIZE 30M ;

Database altered.

SQL> ALTER DATABASE ADD LOGFILE GROUP 5 ('/ora_DB/REDO1/log/CSP2/redo5a.log', '/ora_DB/REDO2/log/CSP2/redo5b.log')
SIZE 30M ;

Database altered.

SQL> ALTER DATABASE ADD LOGFILE GROUP 6 ('/ora_DB/REDO1/log/CSP2/redo6a.log', '/ora_DB/REDO2/log/CSP2/redo6b.log')
SIZE 30M ;

Database altered.

SQL>
4. Now that we have created our new log files, we need to determine the current log group so we can switch past our original log files and remove them.

SQL> select group#, status from v$log ;

GROUP# STATUS
---------- ----------------
1 INACTIVE
2 CURRENT
3 INACTIVE
4 UNUSED
5 UNUSED
6 UNUSED

6 rows selected.

SQL>

5. Switch the logfile until one of the new groups is current and the old redo logs are no longer current. In this example we will switch 2 times, which should get us into our new logs.

SQL> alter system switch logfile ;

System altered.

SQL> alter system switch logfile ;

System altered.

SQL> select group#, status from v$log ;

GROUP# STATUS
---------- ----------------
1 INACTIVE
2 INACTIVE
3 INACTIVE
4 INACTIVE
5 CURRENT
6 UNUSED

6 rows selected.

SQL>
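Side note: if a group to be dropped still shows ACTIVE rather than INACTIVE, the drop will fail (ORA-01624, log needed for crash recovery). A hedged way to push ACTIVE groups to INACTIVE before dropping is to force a checkpoint:

alter system checkpoint ;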



6. Now remove the old Online Redo Logs; we will remove them by group.

SQL> ALTER DATABASE DROP LOGFILE GROUP 1 ;

Database altered.

SQL> ALTER DATABASE DROP LOGFILE GROUP 2 ;

Database altered.

SQL> ALTER DATABASE DROP LOGFILE GROUP 3 ;

Database altered.


7. Check that our Online Redo Logs all show correctly as we expect.

SQL> select group#, member from v$logfile ;

GROUP#
----------
MEMBER
--------------------------------------------------------------------------------
4
/ora_DB/REDO1/log/CSP2/redo4a.log

4
/ora_DB/REDO2/log/CSP2/redo4b.log

5
/ora_DB/REDO1/log/CSP2/redo5a.log

5
/ora_DB/REDO2/log/CSP2/redo5b.log

6
/ora_DB/REDO1/log/CSP2/redo6a.log

6
/ora_DB/REDO2/log/CSP2/redo6b.log


6 rows selected.

SQL>

Wednesday, December 12, 2007

Oracle Table/Data Compression

Description/Overview

Oracle Table/Data Compression was introduced with the 9i Release 2 version of the database.

This feature was a huge step in reducing the size of our large tables and helping reduce physical I/O resulting from queries against those large tables.

Table data compression works by eliminating duplicate values within a block. The more duplicate values a block contains, the better the compression. Each duplicate value for the block is stored once, in a symbol table at the beginning of the block, and the occurrences of those values in the block reference the symbol table entries. This type of compression can achieve better ratios when data is loaded in a specific order, for example primary key order where the primary key is made up of several columns. Larger block sizes can also yield greater compression ratios, mainly because more data can go into a block, so the chance of duplication within the block is greater. Also keep in mind that pctfree is set to 0 on all tables created with the compress keyword.

Table data compression works on partitioned and non-partitioned tables as well as partitioned and non-partitioned materialized views (snapshots). When a table is partitioned, compression can be applied to all partitions or to selected individual partitions. This allows the flexibility, for instance, to let a partition become static before compressing it. However, table data compression is not maintained on later DML, so a table, tablespace, or table partition can contain both compressed and uncompressed data. Compression is not maintained because it only works with direct-load operations such as direct-path SQL*Loader, create table ... as select, and parallel inserts or inserts using the /*+ APPEND */ hint. Since compression is not maintained, modifying compressed data can cause fragmentation through row chaining and wasted disk space.

Oracle indicates the impact of compression on read operations is so minimal that a negative impact on queries does not need to be taken into account when considering table data compression. The cost of table data compression is paid at the time the data is bulk loaded into the table.

There are several ways to apply data compression to a table (hedged example statements follow the list):

1. Set it at the tablespace level, so that all tables created in the tablespace are compressed by default.
2. Set it with the compress keyword at table creation.
3. Rebuild an existing table compressed via alter table ... move.
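A sketch of each approach; the tablespace, datafile and table names here are illustrative only:

-- 1. Tablespace-level default: tables created here compress on direct loads.
create tablespace data_comp datafile '/u01/oradata/data_comp01.dbf' size 100m
default compress ;

-- 2. Declare compression at table creation (direct load via CTAS):
create table sales_hist compress as select * from sales ;

-- 3. Rebuild an existing table compressed:
alter table sales_hist move compress ;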


Benefits

1. Can reduce the physical I/O required, especially for full table scans, and also for index-based access when many rows are returned, since each block read contains more data.
2. Improves performance of queries with higher physical I/O activity; some query testing has shown a 50+% reduction in physical I/O in many cases.
3. Most tables during testing showed a 50% or better disk space savings when compressed, so compressing large tables can significantly reduce physical disk space usage.
4. Improved buffer cache utilization, since blocks remain compressed in buffer cache memory.

Limitations

1. CPU hit; the impact seems minimal in most cases. Large full table scans appear to have the largest CPU impact. Be sure to test larger queries against large tables so the CPU impact does not reach the limits of the host hardware, trading an I/O bottleneck for a CPU one.
2. Table data compression does not compress new rows; compression only happens on direct-load operations. This creates a situation where periodic maintenance may be required to maintain the compression.
3. Direct-load operation performance is impacted by the work of compressing the data.
4. Table data compression does not work on LOB data type columns (CLOB, BLOB, etc.) or data types derived from LOBs, such as VARRAYs or XML data stored in a CLOB column.



Calculating the Compression Ratio for an existing table

Below is a function that calculates the compression ratio for an existing table; it accepts the table name as a parameter. The function comes from "Table Compression in Oracle9i Release 2", an Oracle White Paper, May 2002.

create function compression_ratio (tabname varchar2)
return number
is

-- sample percentage
pct number := 0.000099;
-- original block count (should be less than 10k)
blkcnt number := 0;
-- compressed block count
blkcntc number;

begin
-- create an empty work table with the same shape as the source table
execute immediate 'create table TEMP_UNCOMPRESSED pctfree 0 as select * from ' || tabname || ' where rownum < 1';

-- sample progressively larger percentages until the block count is large enough
while ((pct < 100) and (blkcnt < 1000)) loop

execute immediate 'truncate table TEMP_UNCOMPRESSED';

execute immediate 'insert into TEMP_UNCOMPRESSED select * from ' || tabname || ' sample block (' || pct || ',10)';

execute immediate 'select count(distinct(dbms_rowid.rowid_block_number(rowid))) from TEMP_UNCOMPRESSED' into blkcnt;

pct := pct * 10;

end loop;

execute immediate 'create table TEMP_COMPRESSED compress as select * from TEMP_UNCOMPRESSED';

execute immediate 'select count(distinct(dbms_rowid.rowid_block_number(rowid))) from TEMP_COMPRESSED' into blkcntc;

execute immediate 'drop table TEMP_COMPRESSED';

execute immediate 'drop table TEMP_UNCOMPRESSED';

return (blkcnt/blkcntc);

end;

/
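A hedged usage example; MY_BIG_TABLE is an illustrative table name, and the caller needs privileges to create and drop the TEMP_UNCOMPRESSED/TEMP_COMPRESSED work tables:

select compression_ratio('MY_BIG_TABLE') from dual ;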

Summary

Table data compression does indeed shrink the amount of space a table takes in the database. The small test shows there is an impact on create table ... as select: the compressed table create took 2.01 seconds while the uncompressed table took 0.40 seconds. Not a large difference in time here, but one that shows the DDL is indeed impacted. On very large loads the time could be significant; more testing with larger data sets appears to be needed.

Compressed loads using the /*+ APPEND */ hint on an insert ... select had much closer timings to a normal insert: the compressed load test took 4.85 seconds while the uncompressed table insert took 3.26 seconds.

Reads from the compressed table showed a slight performance gain in elapsed time, as expected. The read from the compressed table had fewer buffer gets and fewer physical reads, and a better elapsed time, as we would expect from doing less physical and logical I/O: 4121 logical reads, 303 physical reads, and an elapsed time of 2.25 seconds, versus 5436 logical reads, 1724 physical reads, and an elapsed time of 3.12 seconds for the uncompressed table. This shows there is a level of compression for blocks in the buffer cache as well as for physical disk reads, though the reduction in gets does not match the 50% reduction in the size of the table.

Conclusion: this test was done on a Windows XP laptop running an Oracle 10.2.0.1 database, so further testing on a more robust system with varying table sizes is needed. Based on the physical and logical read differences in these tests, and knowing the CPU overhead compression can have, I would conclude that table data compression can offer significant space savings and potentially good I/O read savings for queries, which can result in lower elapsed times. However, for the lower elapsed times there needs to be sufficient CPU capacity to handle the decompression on larger physical read operations.


Test Results Summary

SQL>
SQL> -- let see how long it takes to create the tables
SQL> -- and load the data for compressed and
SQL> -- uncompressed versions.
SQL> set timing on
SQL>
SQL> -- create a table compressed and load with data
SQL> create table test_compression_compressed
2 COMPRESS
3 as
4 select * from dba_tables ;

Table created.

Elapsed: 00:00:02.01
SQL>
SQL> commit ;

Commit complete.

Elapsed: 00:00:00.00
SQL>
SQL> -- create a table and load with data uncompressed
SQL> create table test_compression_uncompressed
2 as
3 select * from dba_tables ;

Table created.

Elapsed: 00:00:00.40
SQL>
SQL> commit ;

Commit complete.

Elapsed: 00:00:00.00
SQL>
SQL> set timing off
SQL>
SQL> -- Lets see how large our compressed table is
SQL> select sum(bytes)/1024 from dba_segments where owner = 'MRMESSIN' and segment_name = 'TEST_COMPRESSION_COMPRESSED' ;

SUM(BYTES)/1024
---------------
128

SQL>
SQL> -- Lets see how large out uncompressed table is
SQL> select sum(bytes)/1024 from dba_segments where owner = 'MRMESSIN' and segment_name = 'TEST_COMPRESSION_UNCOMPRESSED' ;

SUM(BYTES)/1024
---------------
512

SQL>
SQL> -----------------------------------------------------------------------------------------------------------------------
SQL> set timing on
SQL>
SQL> -- Now lets insert a ton more records into the compressed table and show timing
SQL> declare
2 l number := 0 ;
3
4 begin
5
6 loop
7 if l > 30 then
8 exit ;
9 end if ;
10
11 l := l + 1 ;
12
13 insert /*+ APPEND */ into test_compression_compressed select * from dba_tables ;
14 -- we have to commit after each insert append operation or get an error.
15 commit ;
16 end loop ;
17 end ;
18 /

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.85
SQL>
SQL>
SQL> -- Now lets insert a ton more records into the uncompressed table and show timing
SQL> declare
2 l number := 0 ;
3
4 begin
5
6 loop
7 if l > 30 then
8 exit ;
9 end if ;
10
11 l := l + 1 ;
12
13 insert into test_compression_uncompressed select * from dba_tables ;
14 commit ;
15 end loop ;
16 end ;
17 /

PL/SQL procedure successfully completed.

Elapsed: 00:00:03.26
SQL>
SQL> set timing off
SQL>
SQL> -- Lets see how large our compressed table is
SQL> select sum(bytes)/1024 from dba_segments where owner = 'MRMESSIN' and segment_name = 'TEST_COMPRESSION_COMPRESSED' ;

SUM(BYTES)/1024
---------------
3072

SQL>
SQL> -- Lets see how large out uncompressed table is
SQL> select sum(bytes)/1024 from dba_segments where owner = 'MRMESSIN' and segment_name = 'TEST_COMPRESSION_UNCOMPRESSED' ;

SUM(BYTES)/1024
---------------
14336

SQL>
SQL> -- flush buffer cache
SQL> alter system flush buffer_cache ;

System altered.

SQL>
SQL> -- Check timing on compressed table with full table scan
SQL> set autotrace traceonly
SQL> set timing on
SQL> select * from test_compression_compressed ;

56318 rows selected.

Elapsed: 00:00:02.25

Execution Plan
---------------------------------------------------------
Plan hash value: 1423006034

-------------------------------------------------------------------------------------------------
| Id  | Operation         | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                             | 44034 |    19M|   100   (6)| 00:00:02 |
|   1 |  TABLE ACCESS FULL| TEST_COMPRESSION_COMPRESSED | 44034 |    19M|   100   (6)| 00:00:02 |
-------------------------------------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement


Statistics
----------------------------------------------------------
28 recursive calls
1 db block gets
4121 consistent gets
303 physical reads
4432 redo size
3161898 bytes sent via SQL*Net to client
41675 bytes received via SQL*Net from client
3756 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts(disk)
56318 rows processed

SQL> set autotrace off
SQL> set timing off
SQL>
SQL> -- Flush buffer cache
SQL> alter system flush buffer_cache ;

System altered.

SQL>
SQL> -- Check timing on uncompressed table with full table scan
SQL> set autotrace traceonly
SQL> set timing on
SQL> select * from test_compression_uncompressed ;

56319 rows selected.

Elapsed: 00:00:03.12


Execution Plan
----------------------------------------------------------
Plan hash value: 3640389637

---------------------------------------------------------------------------------------------------
| Id  | Operation         | Name                          | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                               | 59403 |    26M|   530   (2)| 00:00:06 |
|   1 |  TABLE ACCESS FULL| TEST_COMPRESSION_UNCOMPRESSED | 59403 |    26M|   530   (2)| 00:00:06 |
---------------------------------------------------------------------------------------------------

Note
----
- dynamic sampling used for this statement


Statistics
-------------------------------------------------------
29 recursive calls
1 db block gets
5436 consistent gets
1724 physical reads
132 redo size
12545290 bytes sent via SQL*Net to client
41675 bytes received via SQL*Net from client
3756 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
56319 rows processed

SQL> set autotrace off
SQL> set timing off
SQL>
SQL> set echo off


Monday, December 10, 2007

Oracle Index Key Compression

Description

Oracle introduced Index Key Compression in 8i. Though it has been around since 8i, more recent versions appear to have improved it, and it has become a more noted feature for improving space utilization as well as performance.

Index Key Compression applies to B-Tree indexes and index organized tables (IOTs); Bitmap indexes have a form of compression by default. Index Key Compression works a lot like table/data compression in that duplicated values are removed from index entries. Due to the way indexes are structured compared to tables, it is a little more complicated and has more restrictions and considerations than table compression. Index compression works at the column level, compressing repeating groups of column values, whereas table compression does something similar at the block level.

Compression only happens with new data being loaded, therefore it is possible for an object to have compressed and uncompressed data at the same time. To maintain compression you must perform maintenance activities over time.

Index compression can be turned on and off for an index at any time with a rebuild of the index.

Index Key compression can be used when:
- You have a non-unique index, single or multi-column.
- You have a unique multi-column index.

Generally the larger the block size the better the compression; for example a 16k block size will give better compression than an 8k block size.

With Index Key Compression, in non-unique indexes all index columns are stored compressed, while in unique indexes at least one column must be stored uncompressed. For a unique index, compression splits the key into a grouping piece, the column or columns that form a leading subset of the columns in the index, and a suffix (also called the unique) piece. The grouping piece is stored once and shared by multiple suffix pieces. The cardinality of the compressed columns determines the level of compression that can occur. For example, if a unique index consists of four columns and provides uniqueness mostly by the last two columns, it is best to choose the two leading columns to be stored compressed. If you choose to compress three columns, the repetitiveness will be very low, and the compression will not be as good.



Using Index Compression

There are 3 ways to compress an index.

1. Compress the index when it is created with the compress keyword
a. create index scott.dept_dname on scott.dept (dname) compress ;
2. Rebuild the index compressed with alter index rebuild and the compress keyword.
a. alter index scott.dept_dname rebuild compress ;
3. Rebuild the index compressed online with alter index rebuild online compress (tested 10g and 11g)
a. alter index scott.dept_dname rebuild online compress ;

To rebuild a compressed index as an uncompressed index we can use the same 3 ways, just without compression.

1. Un-compress the index by dropping and recreating it
a. drop index scott.dept_dname ;
b. create index scott.dept_dname on scott.dept (dname) ;
2. Rebuild the index uncompressed with alter index rebuild and the nocompress keyword.
a. alter index scott.dept_dname rebuild nocompress ;
3. Rebuild the index uncompressed online with alter index rebuild online and the nocompress keyword.
a. alter index scott.dept_dname rebuild online nocompress ;

The compress keyword of create index or alter index accepts an integer parameter that tells Oracle how many leading columns make up the grouping piece. This allows us to control the number of columns in the index that form the grouping piece. The default grouping piece is all but the last column.
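A hedged sketch, reusing the test table from later in this post; compress 1 stores only the leading OWNER column in the grouping piece rather than the default of all but the last column:

create index test_idx_prefix1
on test_compression_uncompressed (owner, table_name)
compress 1 ;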



Benefits

1. Increase in performance by reducing the # of blocks the index is stored in. Therefore reducing the number of blocks that have to be read when reading from the index.
2. Index uses less space therefore saving valuable disk space.
3. The implementation of Index Key Compression is transparent to all applications as the compressed indexes are used just like all other B-Tree indexes.



Limitations

1. Performance may suffer during index scans as the burden on the CPU is increased in order to rebuild the key values.
2. Index compression cannot be used on a unique index that has only one attribute as a single column unique index by its very nature will not have a repeating group for the grouping piece.

Example:

SQL> alter index scott.pk_dept rebuild compress ;
alter index scott.pk_dept rebuild compress
*
ERROR at line 1:
ORA-25193: cannot use COMPRESS option for a single column key


SQL>



Summary

Like with any feature of Oracle, you must test it in your environment and evaluate its value for yourself. Based on the small test performed, index compression does indeed shrink the size of the index: the test showed our compressed index at 2048 KB and the uncompressed index at 4096 KB, a significant reduction in index size. The reduction in index size reduces I/O, which can help the performance of queries that use the compressed index. The small test also indicates query runtime can be faster: a query using the compressed index ran in .01 seconds, whereas the same query on the same columns with the index uncompressed took .06 seconds.

The test is too small to draw conclusive results, but it certainly shows the feature is worth exploring further with much larger data sets. Future tests should be performed on very large tables with very large indexes, examining 10046 traces before and after compression to give the best comparison of the full impact of Index Key Compression.

** Note: For specifics of tests and the results step by step see Specific Test Results Section



Specific Test Results

Built a small test using index compression against an uncompressed table. Though the table is not very large, we can draw some preliminary conclusions from the test.

SQL>
SQL> ---------------------------------------------
SQL> -- Working with the uncompressed table.
SQL> ---------------------------------------------
SQL>
SQL> -----------------------------------------------------------------
SQL> -- set our timing on so we get timings on index build
SQL> set timing on
SQL>
SQL> -- Create new compressed index on our uncompressed table
SQL> create index test_index_compressed
2 on TEST_COMPRESSION_UNCOMPRESSED (OWNER, TABLE_NAME)
3 compress ;

Index created.

Elapsed: 00:00:00.20
SQL>
SQL> -- turn the timing off as we are in just a query for size
SQL> set timing off
SQL>
SQL> -- Show size of the new compressed index
SQL> select sum(bytes)/1024 from dba_extents where segment_name = 'TEST_INDEX_COMPRESSED' ;

SUM(BYTES)/1024
---------------
2048

SQL>
SQL> -- set autotrace on so we see I/O stats
SQL> -- when we query using the index
SQL> set autotrace traceonly
SQL>
SQL> -- set timing on to test time it takes to complete the query using the index
SQL> set timing on
SQL>
SQL> -- Check our access times, plan and resources with the new compressed index
SQL> select owner, table_name, tablespace_name, num_rows
2 from TEST_COMPRESSION_UNCOMPRESSED
3 where owner = 'MRMESSIN'
4 and table_name = 'TEST_COMPRESSION_UNCOMPRESSED' ;

31 rows selected.

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 461570243

------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name                          | Rows | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                               |   31 |  1984 |    34   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST_COMPRESSION_UNCOMPRESSED |   31 |  1984 |    34   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | TEST_INDEX_COMPRESSED         |   31 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("OWNER"='MRMESSIN' AND "TABLE_NAME"='TEST_COMPRESSION_UNCOMPRESSED')

Note
-----
- dynamic sampling used for this statement


Statistics
----------------------------------------------------------
61 recursive calls
0 db block gets
122 consistent gets
1 physical reads
0 redo size
1038 bytes sent via SQL*Net to client
403 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
31 rows processed

SQL>
SQL> -- Turn the autotrace back off
SQL> set autotrace off
SQL>
SQL> -- Turn timing off again as we are just dropping the index
SQL> set timing off
SQL>
SQL> -- drop the index so we can create a compressed one using the exact same columns
SQL> drop index test_index_compressed ;

Index dropped.

SQL>
SQL> -----------------------------------------------------------------
SQL> -- set timing on to get timing on index build
SQL> set timing on
SQL>
SQL> -- create a new uncompressed index just like the compressed one from before
SQL> create index test_index_uncompressed
2 on TEST_COMPRESSION_UNCOMPRESSED (OWNER, TABLE_NAME) ;

Index created.

Elapsed: 00:00:00.51
SQL>
SQL> -- set timing off as we are just checking size
SQL> set timing off
SQL>
SQL> -- Show size of the new uncompressed index
SQL> select sum(bytes)/1024 from dba_extents where segment_name = 'TEST_INDEX_UNCOMPRESSED' ;

SUM(BYTES)/1024
---------------
4096

SQL>
SQL> -- set autotrace on so we see I/O stats
SQL> -- when we query using the index
SQL> set autotrace traceonly
SQL>
SQL> -- set timing on to check for time for query to complete
SQL> -- so we can compare compressed and uncompressed at a high level
SQL> set timing on
SQL>
SQL> -- Check our access times, plan and resources with the new uncompressed index
SQL> select owner, table_name, tablespace_name, num_rows
2 from TEST_COMPRESSION_UNCOMPRESSED
3 where owner = 'MRMESSIN'
4 and table_name = 'TEST_COMPRESSION_UNCOMPRESSED' ;

31 rows selected.

Elapsed: 00:00:00.06

Execution Plan
----------------------------------------------------------
Plan hash value: 2607798795

------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name                          | Rows | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                               |   31 |  1984 |    36   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST_COMPRESSION_UNCOMPRESSED |   31 |  1984 |    36   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | TEST_INDEX_UNCOMPRESSED       |   31 |       |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("OWNER"='MRMESSIN' AND "TABLE_NAME"='TEST_COMPRESSION_UNCOMPRESSED')

Note
-----
- dynamic sampling used for this statement


Statistics
----------------------------------------------------------
61 recursive calls
0 db block gets
124 consistent gets
2 physical reads
0 redo size
1038 bytes sent via SQL*Net to client
403 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
31 rows processed

SQL>
SQL> -- Turn the autotrace back off
SQL> set autotrace off
SQL>
SQL> -- Can we rebuild the uncompressed index to a compressed index using alter index rebuild online?
SQL> alter index test_index_uncompressed rebuild online compress ;

Index altered.

Elapsed: 00:00:00.89
SQL>
SQL> -- set timing off only checking size of index
SQL> set timing off
SQL>
SQL> -- Show size of the index now that it has been rebuilt compressed
SQL> select sum(bytes)/1024 from dba_extents where segment_name = 'TEST_INDEX_UNCOMPRESSED' ;

SUM(BYTES)/1024
---------------
2048

SQL> -- Drop the newly rebuilt compressed index
SQL> drop index test_index_uncompressed ;

Index dropped.

SQL>
SQL> -- turn the spool to log file off
SQL> spool off

Tuesday, November 27, 2007

Oracle 11g RMAN Data Recovery Advisor

The data recovery advisor helps simplify recoveries. There are 2 flavors to the Data Recovery Advisor, command line and Oracle Enterprise Manager Database Control. This blog will cover the command line.

The command line offers improvements for scripting a recovery and scheduling it via cron or another scheduling utility.

The command line utility is used via rman command line.
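Since the advisor can run non-interactively, here is a hedged sketch of what a cron-driven repair might look like on a Unix-style host (the NOPROMPT keyword suppresses the confirmation question; whether to automate repairs at all is a judgment call):

rman target / <<EOF
list failure ;
advise failure ;
repair failure noprompt ;
EOF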

Let's use a missing file for our example:

SQL*Plus: Release 11.1.0.6.0 - Production on Tue Nov 27 14:45:28 2007

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Connected to an idle instance.

SQL*Plus: Release 11.1.0.6.0 - Production on Tue Nov 27 15:13:53 2007

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> startup
ORACLE instance started.

Total System Global Area 535662592 bytes
Fixed Size 1334380 bytes
Variable Size 209716116 bytes
Database Buffers 318767104 bytes
Redo Buffers 5844992 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\ORCL11G\USERS01.DBF'

rman target=/

Recovery Manager: Release 11.1.0.6.0 - Production on Tue Nov 27 15:17:51 2007

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: ORCL11G (DBID=744414708, not open)



RMAN can list any failures

RMAN> list failure ;

using target database control file instead of recovery catalog
List of Database Failures
=========================

Failure ID Priority Status Time Detected Summary
---------- -------- --------- ------------- -------
222 HIGH OPEN 27-NOV-07 One or more non-system datafiles are
missing

If there are no failures then Oracle will return a message:
using target database control file instead of recovery catalog
no failures found that match specification

The message indicates that I have a non-system tablespace datafile missing. A missing datafile is considered critical and therefore the priority is HIGH.

Since it is a non-system tablespace datafile, the database can stay up with the tablespace the datafile belongs to offline.
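Side note: the same failure data is exposed through SQL in the 11g Data Recovery Advisor views; a hedged example (column list from memory, so verify against your release):

select failure_id, priority, status, description from v$ir_failure ;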


We can list the detail on the failure.

RMAN> list failure 222 detail ;

List of Database Failures
=========================

Failure ID Priority Status Time Detected Summary
---------- -------- --------- ------------- -------
222 HIGH OPEN 27-NOV-07 One or more non-system datafiles are
missing
Impact: See impact for individual child failures
List of child failures for parent failure ID 222
Failure ID Priority Status Time Detected Summary
---------- -------- --------- ------------- -------
225 HIGH OPEN 27-NOV-07 Datafile 4: 'C:\ORACLE\ORADATA\ORC
L11G\USERS01.DBF' is missing
Impact: Some objects in tablespace USERS might be unavailable



RMAN will advise on automated repair options.

RMAN> advise failure ;

List of Database Failures
=========================

Failure ID Priority Status Time Detected Summary
---------- -------- --------- ------------- -------
222 HIGH OPEN 27-NOV-07 One or more non-system datafiles are
missing
Impact: See impact for individual child failures
List of child failures for parent failure ID 222
Failure ID Priority Status Time Detected Summary
---------- -------- --------- ------------- -------
225 HIGH OPEN 27-NOV-07 Datafile 4: 'C:\ORACLE\ORADATA\ORC
L11G\USERS01.DBF' is missing
Impact: Some objects in tablespace USERS might be unavailable

analyzing automatic repair options; this may take some time
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=152 device type=DISK
analyzing automatic repair options complete

Mandatory Manual Actions
========================
no manual actions available

Optional Manual Actions
=======================
1. If file C:\ORACLE\ORADATA\ORCL11G\USERS01.DBF was unintentionally renamed or
moved, restore it

Automated Repair Options
========================
Option Repair Description
------ ------------------
1 Restore and recover datafile 4
Strategy: The repair includes complete media recovery with no data loss
Repair script: c:\oracle\diag\rdbms\orcl11g\orcl11g\hm\reco_1022222764.hm



Now we can have RMAN give us a preview of the failure repair.

RMAN> repair failure preview ;

Strategy: The repair includes complete media recovery with no data loss
Repair script: c:\oracle\diag\rdbms\orcl11g\orcl11g\hm\reco_1022222764.hm

contents of repair script:
# restore and recover datafile
restore datafile 4;
recover datafile 4;




RMAN has the ability to automatically repair the failure.

RMAN> repair failure ;

Strategy: The repair includes complete media recovery with no data loss
Repair script: c:\oracle\diag\rdbms\orcl11g\orcl11g\hm\reco_1022222764.hm

contents of repair script:
# restore and recover datafile
restore datafile 4;
recover datafile 4;

Do you really want to execute the above repair (enter YES or NO)? YES
executing repair script

Starting restore at 27-NOV-07
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to C:\ORACLE\ORADATA\ORCL11G\USERS0
1.DBF
channel ORA_DISK_1: reading from backup piece C:\ORACLE\FLASH_RECOVERY_AREA\ORCL
11G\BACKUPSET\2007_11_27\O1_MF_NNNDF_TAG20071127T144712_3NRX264W_.BKP
channel ORA_DISK_1: piece handle=C:\ORACLE\FLASH_RECOVERY_AREA\ORCL11G\BACKUPSET
\2007_11_27\O1_MF_NNNDF_TAG20071127T144712_3NRX264W_.BKP tag=TAG20071127T144712
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
Finished restore at 27-NOV-07

Starting recover at 27-NOV-07
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 27-NOV-07
repair failure complete

Do you want to open the database (enter YES or NO)? YES
database opened

Oracle 11g RMAN Proactive Health Check

I. Database

RMAN> validate database ;

Starting validate at 27-NOV-07
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=138 device type=DISK
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
input datafile file number=00001 name=C:\ORACLE\ORADATA\ORCL11G\SYSTEM01.DBF
input datafile file number=00002 name=C:\ORACLE\ORADATA\ORCL11G\SYSAUX01.DBF
input datafile file number=00004 name=C:\ORACLE\ORADATA\ORCL11G\USERS01.DBF
input datafile file number=00006 name=C:\ORACLE\ORADATA\ORCL11G\UNDOTBS02.DBF
input datafile file number=00005 name=C:\ORACLE\ORADATA\ORCL11G\EXAMPLE01.DBF
channel ORA_DISK_1: validation complete, elapsed time: 00:03:25
List of Datafiles
=================

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
1 OK 0 11278 512000 13548003
File Name: C:\ORACLE\ORADATA\ORCL11G\SYSTEM01.DBF
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 487268
Index 0 11166
Other 0 2288

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
2 OK 0 26072 75512 13548017
File Name: C:\ORACLE\ORADATA\ORCL11G\SYSAUX01.DBF
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 12195
Index 0 10225
Other 0 27020

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
4 OK 0 51199 64000 13457224
File Name: C:\ORACLE\ORADATA\ORCL11G\USERS01.DBF
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 12357
Index 0 33
Other 0 411

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
5 OK 0 1719 12800 13345172
File Name: C:\ORACLE\ORADATA\ORCL11G\EXAMPLE01.DBF
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 4406
Index 0 1264
Other 0 5411

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
6 OK 0 23908 64000 13548017
File Name: C:\ORACLE\ORADATA\ORCL11G\UNDOTBS02.DBF
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 0
Index 0 0
Other 0 40092

channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
including current control file for validation
including current SPFILE in backup set
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
List of Control File and SPFILE
===============================

File Type Status Blocks Failing Blocks Examined
------------ ------ -------------- ---------------
SPFILE OK 0 2
Control File OK 0 624
Finished validate at 27-NOV-07



II. Tablespace

RMAN> validate tablespace users ;

Starting validate at 27-NOV-07
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=128 device type=DISK
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
input datafile file number=00004 name=C:\ORACLE\ORADATA\ORCL11G\USERS01.DBF
channel ORA_DISK_1: validation complete, elapsed time: 00:00:15
List of Datafiles
=================

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
4 OK 0 51199 64000 13457224
File Name: C:\ORACLE\ORADATA\ORCL11G\USERS01.DBF
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 12357
Index 0 33
Other 0 411

Finished validate at 27-NOV-07



III. Datafile

RMAN> validate datafile 4 block 1 ;

Starting validate at 27-NOV-07
using channel ORA_DISK_1
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
input datafile file number=00004 name=C:\ORACLE\ORADATA\ORCL11G\USERS01.DBF
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
List of Datafiles
=================

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
4 OK 0 0 1 0
File Name: C:\ORACLE\ORADATA\ORCL11G\USERS01.DBF
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 0
Index 0 0
Other 0 1

Finished validate at 27-NOV-07

Wednesday, November 14, 2007

Oracle Enterprise Manager Agent Upload Troubleshooting

Ever find the Oracle Enterprise Manager Grid Control agents not uploading?
There are many reasons for an agent not being able to upload.

Examine the agent


1. First I always check the status of the agent
emctl status agent

2. Examine the status output; you may see that the agent is not uploading, indicated by output like:
Last successful upload : (none)
Last attempted upload : (none)
Total Megabytes of XML files uploaded so far : 0.00
Number of XML files pending upload : 828
Size of XML files pending upload(MB) : 56.61
Available disk space on upload filesystem : 8.50%
Last successful heartbeat to OMS : 2007-10-21 10:55:39

3. Next I always attempt a manual upload for the agent to check/verify the upload problem
emctl upload agent

4. Check the following log file for errors
$AGENT_HOME/sysman/log/emagent.trc

In most cases it is due to a bad .xml file or being unable to contact the management service. If it is due to contacting the management service, you will need to fix the OMS service or connectivity to it. If it is due to a bad xml file, you can try removing that file from the upload location. I usually save the file to another location first so I can send it to Oracle Support, where hopefully they can tell me why the file would not upload.

Once you have removed the file from the upload location, attempt a manual upload again:

emctl upload agent

Check the log file again for errors. You can repeat this for subsequent bad xml files, but if all else fails we can always clear the agent completely. Keep in mind you will lose the data from the pending xml files, so that data will not make it to the Grid Control Repository.


Clear the agent up


1. Stop the agent on the target node
emctl stop agent

2. Delete any pending upload files from the agent home
rm -r $ORACLE_HOME/sysman/emd/state/*

rm -r $ORACLE_HOME/sysman/emd/collection/*
rm -r $ORACLE_HOME/sysman/emd/upload/*
rm $ORACLE_HOME/sysman/emd/lastupld.xml
rm $ORACLE_HOME/sysman/emd/agntstmp.txt
rm $ORACLE_HOME/sysman/emd/blackouts.xml


3. Clear the agent state
emctl clearstate agent


4. Start the agent again
emctl start agent

5. Force an upload to the Oracle Management Server/Service (OMS)
emctl upload agent

Monday, November 12, 2007

Oracle 11g Advanced Compression

Oracle 11g introduces a new feature that offers organizations the promise of tables that take less space, and therefore a smaller database. A smaller database takes less disk space, lowering the cost of disk storage for databases.

With database sizes continually growing at an alarming rate, the ability to increase the amount of data stored per gigabyte is exciting.

Oracle first introduced compression in 9i, where you could compress tables upon creation via operations like create table as select or direct load operations. This is well suited for initial loads of data, but remaining compressed required maintenance. With the pressure to increase database availability, that compression was not well suited to OLTP systems, since most data is not direct loaded. Oracle's introduction of Advanced Compression changes that. Keep in mind, though, that Advanced Compression is a new option with 11g and therefore has additional license requirements.
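As a hedged illustration of the difference (the table name is illustrative; the 11gR1 keyword is COMPRESS FOR ALL OPERATIONS):

-- Attribute change only: rows inserted or updated from now on are compressed.
alter table sales compress for all operations ;

-- MOVE rebuilds the segment so the existing rows are compressed as well.
alter table sales move compress for all operations ;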

Oracle boasted 2-3x less disk space and minimal DML overhead.

Is that really true? Can we expect a table to be half the size it is today uncompressed? If so, will it remain half the size over time?

Also, how badly does the compression affect my read and write operations? Oracle boasts that read operations are not impacted and claims that write operations have a minimal impact.

I did a small test to see if I could at least at some level quickly get some answers to those questions.

In my test I created a compressed table with a create table as select, much like creating and initially loading a compressed table in 9i, and created an uncompressed table with the same initial data load. Then I loaded more data with an insert ... select, looping through the insert upwards of 30 times, each time measuring the table size and how long the load took.

I have included the script I used and the output results from the execution with the load operation looping 30 times. Keep in mind that this is a small test to get familiar with the feature and gather some initial thoughts.

Script used in testing:

-------------------------------------------------
-- script: test_11g_compression.sql
-- author: Michael Messina
--
-- description: Script that will create tables
-- with and without compression
-- load tables with data and
-- show sizes and timings
-------------------------------------------------
-- spool to a log file for us
spool test_11g_compression.log

-- Lets drop the tables for our test
drop table test_compression_compressed ;
drop table test_compression_uncompressed ;

-- let see how long it takes to create the tables
-- and load the data for compressed and
-- uncompressed versions.
set timing on

-- create a table compressed and load with data
create table test_compression_compressed
COMPRESS FOR ALL OPERATIONS
as
select * from dba_tables ;

-- create a table and load with data uncompressed
create table test_compression_uncompressed
as
select * from dba_tables ;

set timing off

-- Lets see how large our compressed table is
select sum(bytes)/1024 from dba_segments where owner = 'SYSTEM' and segment_name = 'TEST_COMPRESSION_COMPRESSED' ;

-- Lets see how large out uncompressed table is
select sum(bytes)/1024 from dba_segments where owner = 'SYSTEM' and segment_name = 'TEST_COMPRESSION_UNCOMPRESSED' ;

-----------------------------------------------------------------------------------------------------------------------
set timing on

-- Now lets insert a ton more records into the compressed table and show timing
declare
l number := 0 ;

begin

loop
if l > 30 then
exit ;
end if ;

l := l + 1 ;

insert into test_compression_compressed select * from dba_tables ;
end loop ;
end ;
/


-- Now lets insert a ton more records into the uncompressed table and show timing
declare
l number := 0 ;

begin

loop
if l > 30 then
exit ;
end if ;

l := l + 1 ;

insert into test_compression_uncompressed select * from dba_tables ;
end loop ;
end ;
/

set timing off

-- Lets see how large our compressed table is
select sum(bytes)/1024 from dba_segments where owner = 'SYSTEM' and segment_name = 'TEST_COMPRESSION_COMPRESSED' ;

-- Lets see how large out uncompressed table is
select sum(bytes)/1024 from dba_segments where owner = 'SYSTEM' and segment_name = 'TEST_COMPRESSION_UNCOMPRESSED' ;

-- Turn the spool off
spool off


Results from script execution:

SQL> @test_11g_compression.sql

Table dropped.


Table dropped.


Table created.

Elapsed: 00:00:00.48

Table created.

Elapsed: 00:00:00.39

SUM(BYTES)/1024
---------------
192


SUM(BYTES)/1024
---------------
704


PL/SQL procedure successfully completed.

Elapsed: 00:00:09.17

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.79

SUM(BYTES)/1024
---------------
9216


SUM(BYTES)/1024
---------------
20480

SQL>



Query performance results:

UnCompressed table

SQL> alter system flush buffer_cache ;

System altered.

Elapsed: 00:00:00.40
SQL> select count(*) from test_compression_uncompressed ;

COUNT(*)
----------
80767

Elapsed: 00:00:00.60
SQL> alter system flush buffer_cache ;

System altered.

Elapsed: 00:00:00.07
SQL> select count(*) from test_compression_uncompressed ;

COUNT(*)
----------
80767

Elapsed: 00:00:00.57
SQL> alter system flush buffer_cache ;

System altered.

Elapsed: 00:00:00.07
SQL> select count(*) from test_compression_uncompressed ;

COUNT(*)
----------
80767

Elapsed: 00:00:00.61
SQL>


Compressed table:

SQL> alter system flush buffer_cache ;

System altered.

Elapsed: 00:00:00.06
SQL> select count(*) from test_compression_compressed ;

COUNT(*)
----------
80766

Elapsed: 00:00:00.40
SQL> alter system flush buffer_cache ;

System altered.

Elapsed: 00:00:00.09
SQL> select count(*) from test_compression_compressed ;

COUNT(*)
----------
80766

Elapsed: 00:00:00.39
SQL>


Trace Results from full table scan reads:

SQL> set autotrace traceonly
SQL> set timing on
SQL>
SQL> alter system flush buffer_cache ;

System altered.

Elapsed: 00:00:00.03
SQL> select * from test_compression_compressed ;

80766 rows selected.

Elapsed: 00:00:01.93

Execution Plan
----------------------------------------------------------
Plan hash value: 1423006034

-------------------------------------------------------------------------------------------------
| Id  | Operation         | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                             | 57433 |    26M|   311   (1)| 00:00:04 |
|   1 |  TABLE ACCESS FULL| TEST_COMPRESSION_COMPRESSED | 57433 |    26M|   311   (1)| 00:00:04 |
-------------------------------------------------------------------------------------------------

Note
-----
- dynamic sampling used for this statement


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
1156 consistent gets
1114 physical reads
0 redo size
3523277 bytes sent via SQL*Net to client
821 bytes received via SQL*Net from client
42 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
80766 rows processed

SQL>
SQL> alter system flush buffer_cache ;

System altered.

Elapsed: 00:00:00.04
SQL> select * from test_compression_uncompressed ;

80767 rows selected.

Elapsed: 00:00:01.84

Execution Plan
----------------------------------------------------------
Plan hash value: 3640389637

---------------------------------------------------------------------------------------------------
| Id  | Operation         | Name                          | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                               | 92920 |    43M|   690   (1)| 00:00:09 |
|   1 |  TABLE ACCESS FULL| TEST_COMPRESSION_UNCOMPRESSED | 92920 |    43M|   690   (1)| 00:00:09 |
---------------------------------------------------------------------------------------------------

Note
-----
- dynamic sampling used for this statement


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
2559 consistent gets
2513 physical reads
0 redo size
3557685 bytes sent via SQL*Net to client
821 bytes received via SQL*Net from client
42 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
80767 rows processed

SQL>
SQL> set timing off
SQL>


Summary:

As you can see from my very small test, the compression is a little more than 50% and appeared consistent with loads at 10, 20 and 30 iterations. This shows that Oracle is maintaining the compression consistently.

If you look at the load time, however, you can clearly see the impact the compression is having. At 10, 20 and 30 iterations the compressed load consistently took about 2x as long as the load into the uncompressed table.

At very low volumes this impact may not be overly noticeable, but for a large load the runtime impact may be significant, especially if you have short windows to get your data loaded.

Overall, the compression write overhead is light enough that small transactional activity can handle it; larger loads, however, might pose a problem.

The disk space savings can be huge: a savings of 50% is big when you are talking about very large tables and very large databases.

A quick look at query performance with full table scans shows little impact to overall performance. On the compressed table, about half the total number of blocks are visited compared to the non-compressed table. With half the block visits I would expect a slight performance advantage for the compressed table, however the results do not back that up.

Another interesting difference is in the explain plan for the full table scan: the cost of the query is lower for the compressed table than for the uncompressed table. The cost difference may be entirely related to the difference in the number of I/O operations. This would also lead you to believe you might see a slight performance advantage on the compressed-table scan, yet the results do not back that assumption up.

My small test only showed me that I need to do more extensive testing, but the initial results do show that there are advantages to this new feature, and further testing will be needed to best apply it.

Check for Blocking Locks

-------------------------------------------------------------
-- script: locks_blocking.sql
-- author: Mike Messina
-- description: Shows the blocking locks in a database
-- to trouble shoot sessions that appear to
-- be hung or possibly showing poor performance
-------------------------------------------------------------
SET PAGESIZE 500
SET FEEDBACK ON
SET LINESIZE 200
SET TRIMSPOOL ON
SET ECHO OFF
set feedback off

-- Get Database Instance Name
column database_name noprint new_value i_name

SELECT UPPER(name) || '_locks_blocking.log' database_name
FROM v$database ;

set feedback on

-- Spool to database instance file
SPOOL &i_name
select /*+ ORDERED */
l.sid,
s.serial#,
p.spid os_pid,
SUBSTR(s.username,1,15) username,
DECODE(l.request, 0, 'NO', 'YES') waiter,
SUBSTR(decode(l.type,
'MR', 'Media Recovery',
'RT', 'Redo Thread',
'UN', 'User Name',
'TX', 'Transaction',
'TM', 'DML',
'UL', 'PL/SQL User Lock',
'DX', 'Distributed Xaction',
'CF', 'Control File',
'IS', 'Instance State',
'FS', 'File Set',
'IR', 'Instance Recovery',
'ST', 'Disk Space Transaction',
'TS', 'Temp Segment',
'IV', 'Library Cache Invalidation',
'LS', 'Log Start or Switch',
'RW', 'Row Wait',
'SQ', 'Sequence Number',
'TE', 'Extend Table',
'TT', 'Temp Table',
l.type),1,30) lock_type,
SUBSTR(decode(l.lmode,
0, 'None',
1, 'Null',
2, 'Row-S (SS)',
3, 'Row-X (SX)',
4, 'Share',
5, 'S/Row-X (SSX)',
6, 'Exclusive',
to_char(l.lmode)),1,15) mode_held,
SUBSTR(decode(l.block,
0, 'Not Blocking', /* Not blocking any other processes */
1, 'Blocking', /* This lock blocks other processes */
2, 'Global', /* This lock is global, so we can't tell */
to_char(l.block)),1,15) blocking,
SUBSTR(do.owner || '.' || do.object_name,1,50) object
from v$lock l,
v$session s,
v$process p,
v$locked_object lo,
dba_objects do
where l.sid = s.sid
and s.username is not null
and s.sid = lo.session_id (+)
and s.paddr = p.addr (+)
and lo.object_id = do.object_id (+)
and l.block <> 0 ;

SPOOL off
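Once a blocker is identified in the report, a common follow-up is to terminate the blocking session. The sid and serial# below are placeholders; substitute the values the script reports:

-- terminate the blocking session (placeholder sid,serial# values)
alter system kill session '139,4242' ;

Keep in mind that killing a session rolls back its open transaction, so confirm the blocker can safely be removed first.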

See SQL Being Executed for SID

----------------------------------------------------
-- Script: see_sql_for_sid.sql
-- Author: Michael Messina
-- Parameters: SID
-- Description: See SQL Text executing for SID
--------------------------------------------------
set long 4000

-- 10g and above
select sql_fulltext
from v$sql
WHERE (address, hash_value) IN
(SELECT sql_address, sql_hash_value
FROM v$session WHERE sid = &1);

Example:

SQL> @see_sql_for_sid.sql 139
old 6: WHERE sid = &1)
new 6: WHERE sid = 139)

SQL_FULLTEXT
--------------------------------------------------------------------------------
select sql_fulltext
from v$sql
WHERE (address, hash_value) IN
(SELECT sql_address, sql_hash_value
FROM v$session
WHERE sid = 139)

1 row selected.
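The script above relies on the SQL_FULLTEXT column, hence the 10g-and-above comment. For earlier releases, a similar sketch can reassemble the statement from v$sqltext, which stores the text in 64-byte pieces (same &1 SID parameter assumed):

-- pre-10g variant: reassemble the SQL text from v$sqltext
select sql_text
from v$sqltext
where (address, hash_value) in
(select sql_address, sql_hash_value
from v$session where sid = &1)
order by piece ;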

Thursday, November 8, 2007

OEM Grid Control Rediscover targets on host

Ever found the need to rediscover targets for a host with Oracle Enterprise Manager Grid Control?

We will need to work with the agent on the host where the targets need rediscovering.

Here are some steps that may help:

1. Check that the ENV is set for the Management Agent
- ORACLE_HOME set
- $ORACLE_HOME/bin in $PATH

The agentca utility expects oraInst.loc to be located in the agent's ORACLE_HOME. Please check that oraInst.loc is present in the agent ORACLE_HOME location.

** Special Note for the Solaris Platform: The Solaris Operating System puts the oraInst.loc in /var/opt/oracle. This means the agentca command will not find the oraInst.loc file. You need to create a soft link $ORACLE_HOME/oraInst.loc pointing to /var/opt/oracle/oraInst.loc for agentca to work properly, as shown below.
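For example, the link can be created like this (assuming ORACLE_HOME is set to the agent home):

ln -s /var/opt/oracle/oraInst.loc $ORACLE_HOME/oraInst.loc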

2. Copy the file $ORACLE_HOME/sysman/emd/targets.xml to targets.xml.orig
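For example (a simple backup copy; adjust the path if your agent home differs):

cp $ORACLE_HOME/sysman/emd/targets.xml $ORACLE_HOME/sysman/emd/targets.xml.orig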

3. Run agentca -d

4. Check that the targets.xml has been created in the $ORACLE_HOME/sysman/emd directory

Check that the Management Agent uploads successfully to the Management Service:

emctl upload
emctl status agent

5. For any agentca issues consult the following log files:
- $ORACLE_HOME/cfgtoollogs/oui/configActions.log
- $ORACLE_HOME/cfgtoollogs/oui/configActions.err
- $ORACLE_HOME/cfgtoollogs/cfgfw/CfmLogger__.log

Oracle Audit Failed Logon Attempts

Ever need to audit failed login attempts to your database? Ever have accounts become locked due to failed login attempts and you are not sure why?

Here is a solution.

You must set audit_trail=DB in the init.ora/spfile, then enable auditing with: audit session whenever not successful ;

Here is a complete walk-through of the setup and a script that will help query the audit trail for the failed login attempts. Be sure to clean up your audit trail so it does not grow out of control (a purge sketch follows the script below).

The Example Walk Through for Setup:

SQL> connect sys as sysdba
Enter password: ******
Connected.
SQL> alter system set audit_trail=DB scope=spfile ;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup ;
ORACLE instance started.

Total System Global Area 612368384 bytes
Fixed Size 1250428 bytes
Variable Size 234883972 bytes
Database Buffers 369098752 bytes
Redo Buffers 7135232 bytes
Database mounted.
Database opened.
SQL> audit session whenever not successful ;

Audit succeeded.

SQL> connect dummy/dummy
ERROR:
ORA-01017: invalid username/password; logon denied
Warning: You are no longer connected to ORACLE.
SQL> connect sys as sysdba
Enter password: ******
Connected.
SQL> select os_username,
username,
userhost,
to_char(timestamp,'mm/dd/yyyy hh24:mi:ss') timestamp,
returncode
from dba_audit_session
where action_name = 'LOGON'
and returncode > 0
order by timestamp ;

OS_USERNAME
--------------------------------------------------------------------------------
USERNAME
------------------------------
USERHOST
--------------------------------------------------------------------------------
TIMESTAMP           RETURNCODE
------------------- ----------
MRMESSIN\Mike Messina
DUMMY
WORKGROUP\MRMESSIN
11/08/2007 09:07:54       1017
SQL>
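If repeated failed attempts have already locked an account, it can be unlocked once the source of the failures is understood. A minimal example; dummy stands in for the affected username:

alter user dummy account unlock ;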

Here is a script that will show you the failed login attempts made to an Oracle database once the setup above is in place.

-----------------------------------------------
-- see_failed_login_attempts.sql
--
-- Michael Messina
--
-- Queries the Oracle Audit Trail and
-- writes a log file of the failed
-- login attempts for the database.
--
-- Requires:
-- audit_trail=DB in init.ora/spfile
-- audit session whenever not successful ;
-----------------------------------------------
set pagesize 200
set linesize 150
column os_username format a15
column username format a15
column userhost format a40
column timestamp format a20
column returncode format 9999999999

spool failed_login_attempts.log

select os_username,
username,
userhost,
to_char(timestamp,'mm/dd/yyyy hh24:mi:ss') timestamp,
-- action_name,
returncode
from dba_audit_session
where action_name = 'LOGON'
and returncode > 0
order by timestamp ;

spool off
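With audit_trail=DB the audit records accumulate in SYS.AUD$, so the trail needs periodic cleanup. A minimal sketch, assuming a 90-day retention and the 10g AUD$ layout (NTIMESTAMP# is the 10g timestamp column; adjust the retention and column for your version and policy):

-- purge audit trail records older than 90 days (placeholder retention)
delete from sys.aud$ where ntimestamp# < sysdate - 90 ;
commit ;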

Oracle Security: Resolving the OEM Grid Control Execute to PUBLIC Security Violations

Oracle Enterprise Manager flags many execute-to-PUBLIC permissions on packages like UTL_FILE, UTL_HTTP, UTL_TCP, UTL_SMTP, DBMS_LOB, DBMS_JOB and DBMS_RANDOM.

The following is a script that will help take care of that.

Be sure to check for invalid objects after script execution (a check query follows the script below). This script covers Oracle internal accounts, but your database may have other accounts that need permissions as well.

----------------------------------------
-- fix_database_security.sql
--
-- Michael Messina
--
----------------------------------------
-- Script will resolve the most common
-- security policy violations OEM looks
-- for in an Oracle database
----------------------------------------
grant execute on DBMS_LOB to CONTENT ;
grant execute on DBMS_LOB to CWSYS ;
grant execute on DBMS_LOB to CTX_SUBSTR ;
grant execute on dbms_lob to rtc ;
grant execute on utl_smtp to es_mail ;
grant execute on utl_file to es_mail ;
grant execute on dbms_lob to es_mail ;
-- Oracle AS Metadata Repository Needs
grant execute on dbms_lob to ORASSO ;
grant execute on dbms_job to ORASSO ;
grant execute on dbms_random to orasso ;
grant execute on utl_file to orasso ;
grant execute on utl_http to orasso ;
grant execute on utl_smtp to orasso ;
grant execute on utl_tcp to bam ;
grant execute on utl_smtp to bam ;
grant execute on dbms_lob to bam ;
grant execute on utl_file to ods ;
grant execute on dbms_job to ods ;
grant execute on dbms_lob to OWF_MGR ;
grant execute on dbms_job to OWF_MGR ;
grant execute on dbms_random to OWF_MGR ;
grant execute on utl_http to OWF_MGR ;
grant execute on utl_file to OWF_MGR ;
grant execute on dbms_lob to exfsys ;
grant execute on dbms_lob to PORTAL ;
grant execute on dbms_job to PORTAL ;
grant execute on dbms_random to PORTAL ;
grant execute on utl_http to PORTAL ;
grant execute on utl_file to PORTAL ;
grant execute on utl_SMTP to PORTAL ;
grant execute on dbms_lob to WIRELESS ;
grant execute on dbms_job to WIRELESS ;
grant execute on dbms_random to WIRELESS ;
grant execute on dbms_job to B2B ;
-- Enterprise Manager Repository Owner Needs
grant execute on utl_file to sysman ;
grant execute on dbms_random to sysman ;
grant execute on utl_http to sysman ;
grant execute on utl_smtp to sysman ;
grant execute on utl_tcp to sysman ;
grant execute on dbms_lob to sysman ;
grant execute on dbms_job to sysman ;
grant execute on utl_file to ordplugins ;
grant execute on dbms_random to ordplugins ;
grant execute on utl_http to ordplugins ;
grant execute on utl_smtp to ordplugins ;
grant execute on utl_tcp to ordplugins ;
grant execute on dbms_lob to ordplugins ;
grant execute on dbms_job to ordplugins ;
grant execute on utl_file to ordsys ;
grant execute on dbms_random to ordsys ;
grant execute on utl_http to ordsys ;
grant execute on utl_smtp to ordsys ;
grant execute on utl_tcp to ordsys ;
grant execute on dbms_lob to ordsys ;
grant execute on dbms_job to ordsys ;
grant execute on utl_file to mdsys ;
grant execute on dbms_random to mdsys ;
grant execute on utl_http to mdsys ;
grant execute on utl_smtp to mdsys ;
grant execute on utl_tcp to mdsys ;
grant execute on dbms_lob to mdsys ;
grant execute on dbms_job to mdsys ;
grant execute on utl_file to dmsys ;
grant execute on dbms_random to dmsys ;
grant execute on utl_http to dmsys ;
grant execute on utl_smtp to dmsys ;
grant execute on utl_tcp to dmsys ;
grant execute on dbms_lob to dmsys ;
grant execute on dbms_job to dmsys ;
grant execute on utl_file to xdb ;
grant execute on dbms_random to xdb ;
grant execute on utl_http to xdb ;
grant execute on utl_smtp to xdb ;
grant execute on utl_tcp to xdb ;
grant execute on dbms_lob to xdb ;
grant execute on dbms_job to xdb ;
grant execute on utl_file to system ;
grant execute on dbms_random to system ;
grant execute on utl_http to system ;
grant execute on utl_smtp to system ;
grant execute on utl_tcp to system ;
grant execute on dbms_lob to system ;
grant execute on dbms_job to system ;
grant execute on utl_file to outln ;
grant execute on dbms_random to outln ;
grant execute on utl_http to outln ;
grant execute on utl_smtp to outln ;
grant execute on utl_tcp to outln ;
grant execute on dbms_lob to outln ;
grant execute on dbms_job to outln ;
grant execute on utl_file to dbsnmp ;
grant execute on dbms_random to dbsnmp ;
grant execute on utl_http to dbsnmp ;
grant execute on utl_smtp to dbsnmp ;
grant execute on utl_tcp to dbsnmp ;
grant execute on dbms_lob to dbsnmp ;
grant execute on dbms_job to dbsnmp ;
grant execute on utl_file to olapsys ;
grant execute on dbms_random to olapsys ;
grant execute on utl_http to olapsys ;
grant execute on utl_smtp to olapsys ;
grant execute on utl_tcp to olapsys ;
grant execute on dbms_lob to olapsys ;
grant execute on dbms_job to olapsys ;
grant execute on dbms_lob to ctxsys ;
grant execute on dbms_job to ctxsys ;
grant execute on dbms_job to exfsys ;
grant execute on dbms_job to wksys ;
grant execute on dbms_lob to wksys ;
revoke execute on utl_file from public ;
revoke execute on dbms_random from public ;
revoke execute on utl_http from public ;
revoke execute on utl_smtp from public ;
revoke execute on utl_tcp from public ;
revoke execute on dbms_lob from public ;
revoke execute on dbms_job from public ;
@$ORACLE_HOME/rdbms/admin/utlrp
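After utlrp completes, a quick query confirms nothing was left invalid; this minimal check uses only the standard dictionary views:

-- list any objects still invalid after the recompile
select owner, object_name, object_type
from dba_objects
where status = 'INVALID'
order by owner, object_name ;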