Wednesday, December 26, 2007

Add/Remove Disks from ASM disk group on live database

ASM allows volumes to be added to and removed from a disk group while the disk group is still in use by a database. This makes it possible to replace disk volumes, moving from old disks to new ones, without having to bring the database down.
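
As a quick preview of where we are heading: the add and the drop can also be combined into a single ALTER DISKGROUP statement so that ASM performs only one rebalance instead of two. Below is a minimal sketch using hypothetical names (disk group DATA, path /dev/rdisk10, and disk names DATA_0010 and DATA_0000 are placeholders, not the names from the steps that follow):

SQL> ALTER DISKGROUP DATA
  2  ADD DISK '/dev/rdisk10' NAME DATA_0010
  3  DROP DISK DATA_0000
  4  REBALANCE POWER 4 ;

A higher REBALANCE POWER finishes the data movement faster at the cost of more I/O load on the live database; progress can be watched in v$asm_operation just as in the steps below.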

Steps:

1. Find our disk groups.

SQL> select group_number, name from v$asm_diskgroup ;
GROUP_NUMBER NAME
------------ ------------------------------
1 P1CSM2_ASM_DG1

SQL>


2. Find the disk volumes that make up the ASM diskgroup that we want to work with. In this example we will be replacing the 3 old disk volumes with 3 new ones.

SQL> select disk_number, mode_status, name, path from v$asm_disk where group_number = 1 order by disk_number, name ;

DISK_NUMBER MODE_ST NAME                           PATH
----------- ------- ------------------------------ -------------
          0 ONLINE  P1CSM2_ASM_DG1_0000            /dev/rhdisk2
          1 ONLINE  P1CSM2_ASM_DG1_0001            /dev/rhdisk3
          2 ONLINE  P1CSM2_ASM_DG1_0002            /dev/rhdisk4

SQL>

3. Make sure our new volumes can be seen by ASM, since the new device paths do not match the current disk string.

SQL> alter system set ASM_DISKSTRING='/dev/rhdisk*','/dev/rrawlv*' scope=both ;

System altered.

SQL>

4. Add new disks to our disk group

SQL> ALTER DISKGROUP P1CSM2_ASM_DG1 ADD DISK '/dev/rrawlv01' NAME P1CSM2_ASM_DG1_0010 NOFORCE ;

Diskgroup altered.

SQL> select * from v$asm_operation ;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES
------------ ----- ---- ----- ------ ----- -------- -------- -----------
           1 REBAL RUN      1      1   910     1167     1577           0

SQL> select * from v$asm_operation ;

no rows selected

SQL> ALTER DISKGROUP P1CSM2_ASM_DG1 ADD DISK '/dev/rrawlv02' NAME P1CSM2_ASM_DG1_0011 NOFORCE ;

Diskgroup altered.

SQL> select * from v$asm_operation ;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES
------------ ----- ---- ----- ------ ----- -------- -------- -----------
           1 REBAL RUN      1      1   413     1881     1498           0

SQL> select * from v$asm_operation ;

no rows selected

SQL> ALTER DISKGROUP P1CSM2_ASM_DG1 ADD DISK '/dev/rrawlv03' NAME P1CSM2_ASM_DG1_0012 NOFORCE ;

Diskgroup altered.

SQL> select * from v$asm_operation ;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES
------------ ----- ---- ----- ------ ----- -------- -------- -----------
           1 REBAL RUN      1      1   254     1786     1514           1

SQL> select * from v$asm_operation ;

no rows selected

SQL>


5. Check that our new disk volumes are now part of our disk group.

SQL> select disk_number, mode_status, name, path from v$asm_disk where group_number = 1 order by disk_number, name ;

DISK_NUMBER MODE_ST NAME                           PATH
----------- ------- ------------------------------ -------------
          0 ONLINE  P1CSM2_ASM_DG1_0000            /dev/rhdisk2
          1 ONLINE  P1CSM2_ASM_DG1_0001            /dev/rhdisk3
          2 ONLINE  P1CSM2_ASM_DG1_0002            /dev/rhdisk4
          3 ONLINE  P1CSM2_ASM_DG1_0010            /dev/rrawlv01
          4 ONLINE  P1CSM2_ASM_DG1_0011            /dev/rrawlv02
          5 ONLINE  P1CSM2_ASM_DG1_0012            /dev/rrawlv03

6 rows selected.

SQL>


6. Drop the old disks from the disk group.

SQL> ALTER DISKGROUP P1CSM2_ASM_DG1 DROP DISK P1CSM2_ASM_DG1_0000 ;

Diskgroup altered.

SQL> select * from v$asm_operation ;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES
------------ ----- ---- ----- ------ ----- -------- -------- -----------
           1 REBAL RUN      1      1   548     2884     1588           1

SQL> select * from v$asm_operation ;


no rows selected

SQL> ALTER DISKGROUP P1CSM2_ASM_DG1 DROP DISK P1CSM2_ASM_DG1_0001 ;

Diskgroup altered.

SQL> select * from v$asm_operation ;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES
------------ ----- ---- ----- ------ ----- -------- -------- -----------
           1 REBAL RUN      1      1   906     3274     1731           1


SQL> select * from v$asm_operation ;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES
------------ ----- ---- ----- ------ ----- -------- -------- -----------
           1 REBAL RUN      1      1   3020     3226     1972          0


SQL> select * from v$asm_operation ;

no rows selected

SQL> ALTER DISKGROUP P1CSM2_ASM_DG1 DROP DISK P1CSM2_ASM_DG1_0002 ;

Diskgroup altered.

SQL> select * from v$asm_operation ;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES
------------ ----- ---- ----- ------ ----- -------- -------- -----------
           1 REBAL RUN      1      1   1513     3197     1957          0

SQL> select * from v$asm_operation ;

no rows selected

SQL>



7. Verify that the old disk volumes have been removed from the disk group.

SQL> select disk_number, mode_status, name, path from v$asm_disk where group_number = 1 order by disk_number, name ;


DISK_NUMBER MODE_ST NAME                           PATH
----------- ------- ------------------------------ -------------
          3 ONLINE  P1CSM2_ASM_DG1_0010            /dev/rrawlv01
          4 ONLINE  P1CSM2_ASM_DG1_0011            /dev/rrawlv02
          5 ONLINE  P1CSM2_ASM_DG1_0012            /dev/rrawlv03

SQL>
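
One extra check before the old LUNs are unmapped or reused: after a successful drop and rebalance, ASM marks the old volumes as former members of the group. This can be verified with the standard header_status column of v$asm_disk; the dropped volumes should show FORMER before the devices are taken away.

SQL> select path, header_status from v$asm_disk where path like '/dev/rhdisk%' ;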

Thursday, December 13, 2007

Relocating or Increasing the Size of Oracle Online Redo Logs on a Live Database

1. Get a list of the existing Online Redo Log Members

SQL> select group#, member from v$logfile ;

GROUP# MEMBER
------ --------------------------------------------------------------------------------
     3 /ora_ENV/app/oracle/product/DB/10g_ocs/oradata/CSP2/onlinelog/o1_mf_3_3j3cth7n_.log
     3 /ora_ENV/app/oracle/product/DB/10g_ocs/flash_recovery_area/CSP2/onlinelog/o1_mf_3_3j3cthgv_.log
     2 /ora_ENV/app/oracle/product/DB/10g_ocs/oradata/CSP2/onlinelog/o1_mf_2_3j3ctgrc_.log
     2 /ora_ENV/app/oracle/product/DB/10g_ocs/flash_recovery_area/CSP2/onlinelog/o1_mf_2_3j3ctgz6_.log
     1 /ora_ENV/app/oracle/product/DB/10g_ocs/oradata/CSP2/onlinelog/o1_mf_1_3j3ctg9k_.log
     1 /ora_ENV/app/oracle/product/DB/10g_ocs/flash_recovery_area/CSP2/onlinelog/o1_mf_1_3j3ctgjp_.log


6 rows selected.

SQL>


2. Find the number of redo log groups and the current size of each.

SQL> select group#, bytes/1024 from v$log ;

GROUP# BYTES/1024
---------- ----------
1 10240
2 10240
3 10240

SQL>


3. Create new redo log groups and members; in our example we also increase the size, from 10 MB to 30 MB.

$sqlplus "/ AS SYSDBA"

SQL*Plus: Release 10.1.0.4.2 - Production on Wed Dec 12 19:05:24 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.4.2 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> ALTER DATABASE ADD LOGFILE GROUP 4 ('/ora_DB/REDO1/log/CSP2/redo4a.log', '/ora_DB/REDO2/log/CSP2/redo4b.log')
SIZE 30M ;

Database altered.

SQL> ALTER DATABASE ADD LOGFILE GROUP 5 ('/ora_DB/REDO1/log/CSP2/redo5a.log', '/ora_DB/REDO2/log/CSP2/redo5b.log')
SIZE 30M ;

Database altered.

SQL> ALTER DATABASE ADD LOGFILE GROUP 6 ('/ora_DB/REDO1/log/CSP2/redo6a.log', '/ora_DB/REDO2/log/CSP2/redo6b.log')
SIZE 30M ;

Database altered.

SQL>

4. Now that we have created our new log files, we need to determine which group is current so we can switch past the original groups and then remove them.

SQL> select group#, status from v$log ;

GROUP# STATUS
---------- ----------------
1 INACTIVE
2 CURRENT
3 INACTIVE
4 UNUSED
5 UNUSED
6 UNUSED

6 rows selected.

SQL>

5. Switch the logfile until one of the new groups is current and none of the old groups is current. In this example we will switch twice, which should get us into our new logs.

SQL> alter system switch logfile ;

System altered.

SQL> alter system switch logfile ;

System altered.

SQL> select group#, status from v$log ;

GROUP# STATUS
---------- ----------------
1 INACTIVE
2 INACTIVE
3 INACTIVE
4 INACTIVE
5 CURRENT
6 UNUSED

6 rows selected.

SQL>



6. Now remove the old online redo logs; we will remove them by group.

SQL> ALTER DATABASE DROP LOGFILE GROUP 1 ;

Database altered.

SQL> ALTER DATABASE DROP LOGFILE GROUP 2 ;

Database altered.

SQL> ALTER DATABASE DROP LOGFILE GROUP 3 ;

Database altered.
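
Note: if one of the old groups still shows ACTIVE, the drop will fail with ORA-01624 until the changes it protects are checkpointed. A simple remedy is to force a checkpoint and then retry the drop:

SQL> alter system checkpoint ;

Also note that dropping a group does not delete non-OMF member files from the filesystem; those can be removed at the OS level afterwards.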


7. Check that our online redo logs now show only the new groups, as we expect.

SQL> select group#, member from v$logfile ;

GROUP# MEMBER
------ --------------------------------------------------------------------------------
     4 /ora_DB/REDO1/log/CSP2/redo4a.log
     4 /ora_DB/REDO2/log/CSP2/redo4b.log
     5 /ora_DB/REDO1/log/CSP2/redo5a.log
     5 /ora_DB/REDO2/log/CSP2/redo5b.log
     6 /ora_DB/REDO1/log/CSP2/redo6a.log
     6 /ora_DB/REDO2/log/CSP2/redo6b.log


6 rows selected.

SQL>

Wednesday, December 12, 2007

Oracle Table/Data Compression

Description/Overview

Oracle Tables/Data Compression was introduced with the 9i version of the database.

This feature was a huge step in reducing the size of our large tables and helping reduce physical I/O resulting from queries against those large tables.

Table data compression works by eliminating duplicate values within a block. The more duplicate values a block contains, the better the compression. Each distinct duplicated value is stored once, in a symbol table at the beginning of the block, and the occurrences of that value in the block reference the symbol table entry. Because of this, loading data in a specific order, such as the order of a multi-column primary key, can achieve better compression ratios. Larger block sizes can also yield greater compression, mainly because more data fits into each block, making duplication within a block more likely. Also keep in mind that pctfree is set to 0 on all tables created with the compress keyword.
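
For example, here is a minimal sketch of a sorted load (the table and column names are hypothetical): ordering by the columns that contain repeating values clusters the duplicates into the same blocks, which gives the per-block symbol table more to work with.

create table sales_history_compressed
compress
as
select *
from sales_history
order by region, product_id ;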

Table data compression works on partitioned and non-partitioned tables, as well as partitioned and non-partitioned materialized views (snapshots). When a table is partitioned, compression can be applied to all partitions or to selected individual partitions; this allows the flexibility, for instance, to wait for a partition to become static before compressing it. However, table data compression is not maintained automatically, so a table, tablespace, or table partition can contain both compressed and uncompressed data. Compression is not maintained because it only works with direct-load operations such as direct-path SQL*Loader, create table ... as select, and parallel inserts or inserts using the /*+ APPEND */ hint. Since compression is not maintained, modifying compressed data can cause fragmentation through row chaining as well as wasted disk space.

Oracle indicates the impact of compression on read operations is so minimal that a negative impact on queries does not need to be taken into account when considering table data compression. The cost of table data compression is paid at the time the data is bulk loaded into the table.

There are several ways to apply data compression to a table, each sketched below:

1. Set it at the tablespace level, so that all tables created in the tablespace are compressed by default.
2. Set it with the compress keyword at table creation.
3. Move the table via alter table … move compress.
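
A minimal sketch of all three approaches, using hypothetical names (tablespace comp_data, tables big_table and big_table_c):

-- 1. Tablespace-level default: tables created here are compressed by default
create tablespace comp_data
datafile '/u01/oradata/comp_data01.dbf' size 100m
default compress ;

-- 2. Compression declared at table creation
create table big_table_c
compress
as select * from big_table ;

-- 3. Compress an existing table in place
alter table big_table move compress ;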


Benefits

1. Can reduce the physical I/O required, especially for full table scans, and also for index-based access that returns many rows, since each block read contains more data.
2. Improves performance of queries with high physical I/O activity. Query testing showed, in many cases, a 50+% reduction in physical I/O.
3. Most tables tested showed a 50% or better disk space savings when compressed, so compressing large tables can significantly reduce physical disk space usage.
4. Improved buffer cache utilization, since blocks remain compressed in buffer cache memory.

Limitations

1. CPU hit; the impact seems minimal in most cases. Large full table scans appear to have the largest CPU impact. Be sure to test large queries against large tables to confirm the CPU impact does not reach the limits of the host hardware, trading an I/O bottleneck for a CPU one.
2. Table data compression does not compress new rows; compression only happens on direct-load DML. To maintain compression, periodic maintenance may therefore be required (see the sketch after this list).
3. Direct-load DML performance is impacted by the work of compressing the data.
4. Table data compression does not work on LOB data type columns (CLOB, BLOB, etc.) or on data types derived from LOBs, such as VARRAYs or XML data stored in a CLOB column.
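
A minimal sketch of that periodic maintenance (the table and index names are hypothetical). Note that moving a table changes its rowids, which invalidates its indexes, so they must be rebuilt afterward:

-- re-compress rows that were inserted through conventional DML
alter table big_table move compress ;

-- the move invalidates the table's indexes; rebuild them
alter index big_table_pk rebuild ;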



Calculating the Compression Ratio for an existing table

Below is a function that calculates the compression ratio for an existing table; it accepts the table name as a parameter. The function comes from the Oracle white paper Table Compression in Oracle9i Release 2 (May 2002).

create function compression_ratio (tabname varchar2)
return number
is
   -- sample percentage
   pct     number := 0.000099;
   -- original block count (should be less than 10k)
   blkcnt  number := 0;
   -- compressed block count
   blkcntc number;
begin
   execute immediate 'create table TEMP_UNCOMPRESSED pctfree 0 as select * from ' || tabname || ' where rownum < 1';

   while ((pct < 100) and (blkcnt < 1000)) loop

      execute immediate 'truncate table TEMP_UNCOMPRESSED';

      execute immediate 'insert into TEMP_UNCOMPRESSED select * from ' || tabname || ' sample block (' || pct || ',10)';

      execute immediate 'select count(distinct(dbms_rowid.rowid_block_number(rowid))) from TEMP_UNCOMPRESSED' into blkcnt;

      pct := pct * 10;

   end loop;

   execute immediate 'create table TEMP_COMPRESSED compress as select * from TEMP_UNCOMPRESSED';

   execute immediate 'select count(distinct(dbms_rowid.rowid_block_number(rowid))) from TEMP_COMPRESSED' into blkcntc;

   execute immediate 'drop table TEMP_COMPRESSED';

   execute immediate 'drop table TEMP_UNCOMPRESSED';

   return (blkcnt/blkcntc);

end;
/
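
A quick usage sketch (the table name BIG_TABLE is hypothetical). Because the function issues DDL internally, it cannot be called from a SQL query; invoke it from PL/SQL instead, and note the caller needs privileges to create and drop the TEMP_UNCOMPRESSED and TEMP_COMPRESSED work tables:

set serveroutput on
declare
   ratio number;
begin
   ratio := compression_ratio('BIG_TABLE');
   dbms_output.put_line('Compression ratio: ' || round(ratio, 2));
end;
/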

Summary

Table data compression does indeed shrink the amount of space a table takes in the database. The small test also shows an impact on create table … as select: the compressed create took 2.01 seconds while the uncompressed create took 0.40 seconds. Not a large difference here, but enough to show that the DDL does carry a cost; on very large loads the time could be significant. More testing with larger data sets appears to be needed.

The compressed load using the /*+ APPEND */ hint on an insert … select was much closer to the normal insert: the compressed load test took 4.85 seconds while the uncompressed insert took 3.26 seconds.

Reads from the compressed table did show a slight performance gain in elapsed time, as expected. The read from the compressed table showed fewer buffer gets and fewer physical reads, and a better elapsed time, consistent with doing less logical and physical I/O: 4121 logical reads, 303 physical reads, and an elapsed time of 2.25 seconds, versus 5436 logical reads, 1724 physical reads, and an elapsed time of 3.12 seconds for the uncompressed table. This shows that blocks stay compressed in the buffer cache as well as on disk, although the reduction in gets does not match the 50% reduction in the size of the table.

In conclusion, this test was run on a Windows XP laptop against an Oracle 10.2.0.1 database, so further testing on a more robust system with varying table sizes is needed. Based on the physical and logical read differences in these tests, and knowing the overhead compression can put on the CPU of a system, I would conclude that compressing the data in a table can offer significant space savings and potentially good I/O read savings for queries, which can result in lower elapsed times. For those lower elapsed times, however, there must be sufficient CPU capacity to handle the decompression on larger physical read operations.


Test Results Summary

SQL>
SQL> -- let see how long it takes to create the tables
SQL> -- and load the data for compressed and
SQL> -- uncompressed versions.
SQL> set timing on
SQL>
SQL> -- create a table compressed and load with data
SQL> create table test_compression_compressed
2 COMPRESS
3 as
4 select * from dba_tables ;

Table created.

Elapsed: 00:00:02.01
SQL>
SQL> commit ;

Commit complete.

Elapsed: 00:00:00.00
SQL>
SQL> -- create a table and load with data uncompressed
SQL> create table test_compression_uncompressed
2 as
3 select * from dba_tables ;

Table created.

Elapsed: 00:00:00.40
SQL>
SQL> commit ;

Commit complete.

Elapsed: 00:00:00.00
SQL>
SQL> set timing off
SQL>
SQL> -- Lets see how large our compressed table is
SQL> select sum(bytes)/1024 from dba_segments where owner = 'MRMESSIN' and segment_name = 'TEST_COMPRESSION_COMPRESSED' ;

SUM(BYTES)/1024
---------------
128

SQL>
SQL> -- Lets see how large our uncompressed table is
SQL> select sum(bytes)/1024 from dba_segments where owner = 'MRMESSIN' and segment_name = 'TEST_COMPRESSION_UNCOMPRESSED' ;

SUM(BYTES)/1024
---------------
512

SQL>
SQL> -----------------------------------------------------------------------------------------------------------------------
SQL> set timing on
SQL>
SQL> -- Now lets insert a ton more records into the compressed table and show timing
SQL> declare
2 l number := 0 ;
3
4 begin
5
6 loop
7 if l > 30 then
8 exit ;
9 end if ;
10
11 l := l + 1 ;
12
13 insert /*+ APPEND */ into test_compression_compressed select * from dba_tables ;
14 -- we have to commit after each insert append operation or get an error.
15 commit ;
16 end loop ;
17 end ;
18 /

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.85
SQL>
SQL>
SQL> -- Now lets insert a ton more records into the uncompressed table and show timing
SQL> declare
2 l number := 0 ;
3
4 begin
5
6 loop
7 if l > 30 then
8 exit ;
9 end if ;
10
11 l := l + 1 ;
12
13 insert into test_compression_uncompressed select * from dba_tables ;
14 commit ;
15 end loop ;
16 end ;
17 /

PL/SQL procedure successfully completed.

Elapsed: 00:00:03.26
SQL>
SQL> set timing off
SQL>
SQL> -- Lets see how large our compressed table is
SQL> select sum(bytes)/1024 from dba_segments where owner = 'MRMESSIN' and segment_name = 'TEST_COMPRESSION_COMPRESSED' ;

SUM(BYTES)/1024
---------------
3072

SQL>
SQL> -- Lets see how large our uncompressed table is
SQL> select sum(bytes)/1024 from dba_segments where owner = 'MRMESSIN' and segment_name = 'TEST_COMPRESSION_UNCOMPRESSED' ;

SUM(BYTES)/1024
---------------
14336

SQL>
SQL> -- flush buffer cache
SQL> alter system flush buffer_cache ;

System altered.

SQL>
SQL> -- Check timing on compressed table with full table scan
SQL> set autotrace traceonly
SQL> set timing on
SQL> select * from test_compression_compressed ;

56318 rows selected.

Elapsed: 00:00:02.25

Execution Plan
---------------------------------------------------------
Plan hash value: 1423006034

------------------------------------------------------------------------------------------------
| Id | Operation         | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |                             | 44034 |    19M|   100   (6)| 00:00:02 |
|  1 |  TABLE ACCESS FULL| TEST_COMPRESSION_COMPRESSED | 44034 |    19M|   100   (6)| 00:00:02 |
------------------------------------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement


Statistics
----------------------------------------------------------
28 recursive calls
1 db block gets
4121 consistent gets
303 physical reads
4432 redo size
3161898 bytes sent via SQL*Net to client
41675 bytes received via SQL*Net from client
3756 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
56318 rows processed

SQL> set autotrace off
SQL> set timing off
SQL>
SQL> -- Flush buffer cache
SQL> alter system flush buffer_cache ;

System altered.

SQL>
SQL> -- Check timing on uncompressed table with full table scan
SQL> set autotrace traceonly
SQL> set timing on
SQL> select * from test_compression_uncompressed ;

56319 rows selected.

Elapsed: 00:00:03.12


Execution Plan
----------------------------------------------------------
Plan hash value: 3640389637

---------------------------------------------------------------------------------------------------
| Id | Operation         | Name                          | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |                               | 59403 |    26M|   530   (2)| 00:00:06 |
|  1 |  TABLE ACCESS FULL| TEST_COMPRESSION_UNCOMPRESSED | 59403 |    26M|   530   (2)| 00:00:06 |
---------------------------------------------------------------------------------------------------

Note
----
- dynamic sampling used for this statement


Statistics
-------------------------------------------------------
29 recursive calls
1 db block gets
5436 consistent gets
1724 physical reads
132 redo size
12545290 bytes sent via SQL*Net to client
41675 bytes received via SQL*Net from client
3756 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
56319 rows processed

SQL> set autotrace off
SQL> set timing off
SQL>
SQL> set echo off


Monday, December 10, 2007

Oracle Index Key Compression

Description

Oracle introduced index key compression in 8i. Though it has been around since then, more recent versions appear to have improved it, and it has become a more noted feature for improving space utilization as well as performance.

Index key compression applies to B-tree indexes and index-organized tables (IOTs); bitmap indexes have a form of compression by default. Index key compression works much like table/data compression in that duplicated values are removed from index entries. Because of the way indexes are structured compared to tables, it is a little more complicated and has more restrictions and considerations than table compression. Index compression works at the column level, compressing repeating groups of column values, whereas table compression does something similar at the block level.

Compression only happens as new data is loaded, so it is possible for an object to contain compressed and uncompressed data at the same time. To maintain compression, you must therefore perform maintenance activities over time.

Index compression can be turned on and off for an index at any time with a rebuild of the index.

Index key compression can be used when:

1. The index is non-unique, whether single or multi-column.
2. The index is unique and has multiple columns.

Generally, the larger the block size, the better the compression; for example, a 16k block size will give better compression than an 8k block size.

In a non-unique index, index key compression stores all index columns compressed; in a unique index, at least one column must be stored uncompressed. With a unique index, index key compression splits each entry into two pieces: a grouping piece, which is the column or columns that form a leading subset of the columns in the index, and a suffix, also called the unique piece. The grouping piece is stored once, in a way that lets it be shared by multiple suffix pieces. The cardinality of the compressed columns determines the level of compression that can occur. For example, if a unique index consists of four columns and provides uniqueness mostly through the last two, it is best to store the two leading columns compressed. If you chose to compress three columns, the repetitiveness would be very low and the compression would not be as good.



Using Index Compression

There are 3 ways to compress an index.

1. Compress the index when it is created, with the compress keyword.
a. create index scott.dept_dname on scott.dept (dname) compress ;
2. Rebuild the index compressed, with alter index … rebuild and the compress keyword.
a. alter index scott.dept_dname rebuild compress ;
3. Rebuild the index compressed online, with alter index … rebuild online compress (tested 10g and 11g).
a. alter index scott.dept_dname rebuild online compress ;

To convert a compressed index back to an uncompressed index, we can use the same 3 ways, just without compression.

1. Un-compress the index by dropping and recreating it.
a. drop index scott.dept_dname ;
b. create index scott.dept_dname on scott.dept (dname) ;
2. Rebuild the index with alter index … rebuild and the nocompress keyword.
a. alter index scott.dept_dname rebuild nocompress ;
3. Rebuild the index online with alter index … rebuild online and the nocompress keyword.
a. alter index scott.dept_dname rebuild online nocompress ;

The compress keyword of create index or alter index accepts an integer parameter that tells Oracle how many leading columns make up the grouping piece. This lets us control the number of columns in the index that form the grouping piece. By default the grouping piece is all columns for a non-unique index and all but the last column for a unique index.
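
A minimal sketch of the integer parameter (the index, table, and column names are hypothetical): for a unique index on four columns where uniqueness comes mostly from the last two, compressing just the two leading columns keeps the grouping piece highly repetitive.

create unique index app.orders_uq
on app.orders (region, customer_id, order_date, order_seq)
compress 2 ;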



Benefits

1. An increase in performance from reducing the number of blocks the index is stored in, which in turn reduces the number of blocks that have to be read when reading from the index.
2. The index uses less space, saving valuable disk space.
3. The implementation of index key compression is transparent to applications, as compressed indexes are used just like any other B-tree index.



Limitations

1. Performance may suffer during index scans as the burden on the CPU is increased in order to rebuild the key values.
2. Index compression cannot be used on a single-column unique index, since by its very nature such an index will not have a repeating group to serve as the grouping piece.

Example:

SQL> alter index scott.pk_dept rebuild compress ;
alter index scott.pk_dept rebuild compress
*
ERROR at line 1:
ORA-25193: cannot use COMPRESS option for a single column key


SQL>



Summary

Like any feature of Oracle, you must test it in your environment and evaluate its value for yourself. Based on the small test performed, index compression does indeed shrink the size of the index: the compressed index came out at 2048 KB versus 4096 KB uncompressed, a significant reduction. The smaller index reduces I/O, which can improve the performance of queries that use it. The small test also suggests queries can run faster: a query using the compressed index ran in 0.01 seconds, whereas the same query on the same index columns, with the index uncompressed, took 0.06 seconds.

The test is too small to draw conclusive results, but it certainly shows the feature is worth exploring further with much larger data sets. Future tests should be performed on very large tables with very large indexes, and I would like to examine 10046 traces before and after compression to give the best comparison of the full impact of index key compression.

** Note: For the step-by-step tests and results, see the Specific Test Results section.



Specific Test Results

I built a small test using index compression against an uncompressed table. Though the table is not very large, we can draw some preliminary conclusions from the test.

SQL>
SQL> ---------------------------------------------
SQL> -- Working with the uncompressed table.
SQL> ---------------------------------------------
SQL>
SQL> -----------------------------------------------------------------
SQL> -- set our timing on so we get timings on index build
SQL> set timing on
SQL>
SQL> -- Create new compressed index on our uncompressed table
SQL> create index test_index_compressed
2 on TEST_COMPRESSION_UNCOMPRESSED (OWNER, TABLE_NAME)
3 compress ;

Index created.

Elapsed: 00:00:00.20
SQL>
SQL> -- turn the timing off as we are in just a query for size
SQL> set timing off
SQL>
SQL> -- Show size of the new compressed index
SQL> select sum(bytes)/1024 from dba_extents where segment_name = 'TEST_INDEX_COMPRESSED' ;

SUM(BYTES)/1024
---------------
2048

SQL>
SQL> -- set autotrace on so we see I/O stats
SQL> -- when we query using the index
SQL> set autotrace traceonly
SQL>
SQL> -- set timing on to test time it takes to complete the query using the index
SQL> set timing on
SQL>
SQL> -- Check our access times, plan and resources with the new compressed index
SQL> select owner, table_name, tablespace_name, num_rows
2 from TEST_COMPRESSION_UNCOMPRESSED
3 where owner = 'MRMESSIN'
4 and table_name = 'TEST_COMPRESSION_UNCOMPRESSED' ;

31 rows selected.

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 461570243

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name                          | Rows | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                               |   31 |  1984 |    34   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST_COMPRESSION_UNCOMPRESSED |   31 |  1984 |    34   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | TEST_INDEX_COMPRESSED         |   31 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("OWNER"='MRMESSIN' AND "TABLE_NAME"='TEST_COMPRESSION_UNCOMPRESSED')

Note
-----
- dynamic sampling used for this statement


Statistics
----------------------------------------------------------
61 recursive calls
0 db block gets
122 consistent gets
1 physical reads
0 redo size
1038 bytes sent via SQL*Net to client
403 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
31 rows processed

SQL>
SQL> -- Turn the autotrace back off
SQL> set autotrace off
SQL>
SQL> -- Turn timing off again as we are just dropping the index
SQL> set timing off
SQL>
SQL> -- drop the index so we can create a compressed one using the exact same columns
SQL> drop index test_index_compressed ;

Index dropped.

SQL>
SQL> -----------------------------------------------------------------
SQL> -- set timing on to get timing on index build
SQL> set timing on
SQL>
SQL> -- create a new uncompressed index just like the compressed one from before
SQL> create index test_index_uncompressed
2 on TEST_COMPRESSION_UNCOMPRESSED (OWNER, TABLE_NAME) ;

Index created.

Elapsed: 00:00:00.51
SQL>
SQL> -- set timing off as we are just checking size
SQL> set timing off
SQL>
SQL> -- Show size of the new uncompressed index
SQL> select sum(bytes)/1024 from dba_extents where segment_name = 'TEST_INDEX_UNCOMPRESSED' ;

SUM(BYTES)/1024
---------------
4096

SQL>
SQL> -- set autotrace on so we see I/O stats
SQL> -- when we query using the index
SQL> set autotrace traceonly
SQL>
SQL> -- set timing on to check for time for query to complete
SQL> -- so we can compare compressed and uncompressed at a high level
SQL> set timing on
SQL>
SQL> -- Check our access times, plan and resources with the new uncompressed index
SQL> select owner, table_name, tablespace_name, num_rows
2 from TEST_COMPRESSION_UNCOMPRESSED
3 where owner = 'MRMESSIN'
4 and table_name = 'TEST_COMPRESSION_UNCOMPRESSED' ;

31 rows selected.

Elapsed: 00:00:00.06

Execution Plan
----------------------------------------------------------
Plan hash value: 2607798795

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name                          | Rows | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                               |   31 |  1984 |    36   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST_COMPRESSION_UNCOMPRESSED |   31 |  1984 |    36   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | TEST_INDEX_UNCOMPRESSED       |   31 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("OWNER"='MRMESSIN' AND "TABLE_NAME"='TEST_COMPRESSION_UNCOMPRESSED')

Note
-----
- dynamic sampling used for this statement


Statistics
----------------------------------------------------------
61 recursive calls
0 db block gets
124 consistent gets
2 physical reads
0 redo size
1038 bytes sent via SQL*Net to client
403 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
31 rows processed

SQL>
SQL> -- Turn the autotrace back off
SQL> set autotrace off
SQL>
SQL> -- Can we rebuild the uncompressed index to a compressed index using alter index rebuild online?
SQL> alter index test_index_uncompressed rebuild online compress ;

Index altered.

Elapsed: 00:00:00.89
SQL>
SQL> -- set timing off only checking size of index
SQL> set timing off
SQL>
SQL> -- Show size of the index now that it is rebuilt compressed
SQL> select sum(bytes)/1024 from dba_extents where segment_name = 'TEST_INDEX_UNCOMPRESSED' ;

SUM(BYTES)/1024
---------------
2048

SQL> -- Drop the newly rebuilt compressed index
SQL> drop index test_index_uncompressed ;

Index dropped.

SQL>
SQL> -- turn the spool to log file off
SQL> spool off