Thursday, August 11, 2011

Optimal Undo Tablespace Size and Undo Retention

The undo tablespace and its size are related to the retention period and the transactional activity of the database.  These must be regularly reviewed and checked to ensure that 1) you are not trying to keep a retention period that the undo tablespace cannot handle, and 2) the tablespace is sized properly so that you do not encounter the dreaded ORA-01555 snapshot too old error.

Optimal Undo Size and Retention
I have seen many environments where the undo tablespace is far too small for the undo retention setting and for the target undo retention the environment requires.  This happens because the undo tablespace was sized and undo retention set once, but never rechecked or updated as the environment grew or activity changed.  So if the expectation is that a flashback query can go back 2 hours but the tablespace is too small, the reality is that it cannot go back 2 hours.

First we need to check the optimal undo size based on our desired retention, as in most environments this will change regularly with increases or decreases in transactional activity.

The script below will prompt for the desired undo retention, then examine your database and report the optimal undo retention setting that the current undo tablespace size and transactional activity can support, as well as the size the undo tablespace needs to be to support the desired undo retention you have entered.

---------------------------------------------------------------------------------
-- Script: see_optimal_undo_retention.sql
-- Author: Michael Messina, Management Consultant TUSC a Rolta Company
--
-- Description:  Will calculate based on undo utilization and undo size
--               the optimal value for undo_retention.
---------------------------------------------------------------------------------
set serveroutput on size 1000000
set feedback off

ACCEPT desired_undo_retention PROMPT "Enter Desired Undo Retention in Seconds ->"

DECLARE
   v_desired_undo_retention          NUMBER := &desired_undo_retention ;
   v_block_size                      NUMBER ;
   v_undo_size                       NUMBER ;
   v_undo_blocks_per_sec             NUMBER ;
   v_optimal_undo_retention          NUMBER ;
   v_current_undo_retention          NUMBER ;
   v_undo_size_desired_ret           NUMBER ;
  
BEGIN
   -- get the current undo retention setting
   select TO_NUMBER(value)
   INTO v_current_undo_retention
   FROM v$parameter
   WHERE name = 'undo_retention' ;
  
   -- Calculate Actual Undo Size
   SELECT SUM(a.bytes)
   INTO v_undo_size
   FROM v$datafile a,
        v$tablespace b,
        dba_tablespaces c
   WHERE c.contents = 'UNDO'
     AND c.status = 'ONLINE'
     AND b.name = c.tablespace_name
     AND a.ts# = b.ts#;

   -- Calculate the Undo Blocks per Second
   SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
   INTO v_undo_blocks_per_sec
   FROM v$undostat ;

   -- Get the database block size
   SELECT TO_NUMBER(value)
   INTO v_block_size
   FROM v$parameter
   WHERE name = 'db_block_size';

   v_optimal_undo_retention := v_undo_size/(v_block_size * v_undo_blocks_per_sec) ;
   v_undo_size_desired_ret := ((v_block_size * v_undo_blocks_per_sec) * (v_desired_undo_retention)) / 1024 / 1024 ;
  
   DBMS_OUTPUT.PUT_LINE ('              Current undo Size --> ' || TO_CHAR(v_undo_size/1024/1024) || 'MB') ;
   DBMS_OUTPUT.PUT_LINE (' Current undo_retention Setting --> ' || TO_CHAR(v_current_undo_retention) || ' Seconds') ;
   DBMS_OUTPUT.PUT_LINE (' Optimal undo_retention Setting --> ' || TO_CHAR(ROUND(v_optimal_undo_retention,2)) || ' Seconds') ;
   DBMS_OUTPUT.PUT_LINE (' Desired undo_retention Setting --> ' || TO_CHAR(ROUND(v_desired_undo_retention,2)) || ' Seconds') ;
   DBMS_OUTPUT.PUT_LINE ('Undo Size for desired retention --> ' || TO_CHAR(ROUND(v_undo_size_desired_ret,2)) || 'MB') ;
END ;
/


Example Execution of script:
SQL> @see_optimal_undo_retention.sql
Enter Desired Undo Retention in Seconds ->7200
old   2:    v_desired_undo_retention          NUMBER := &desired_undo_retention ;
new   2:    v_desired_undo_retention          NUMBER := 7200 ;
Current undo Size --> 1980MB
Current undo_retention Setting --> 900 Seconds
Optimal undo_retention Setting --> 100504.96 Seconds
Desired undo_retention Setting --> 7200 Seconds
Undo Size for desired retention --> 141.84MB


In this example, on a very small database with no activity, I want a retention of 2 hours.  The output shows the current undo tablespace size, the current undo retention setting, the optimal undo retention setting (what I could set undo retention to given the current activity and size of the undo tablespace), the desired undo retention that I entered, and the undo tablespace size required to meet my desired undo retention setting.
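
If the output shows the undo tablespace is too small for the retention you want, the follow-up is to raise undo_retention and/or grow the undo tablespace.  A minimal sketch, assuming an undo tablespace named UNDOTBS1 with a hypothetical datafile path (substitute your own names and sizes):

-- Set the desired retention in seconds (SCOPE=BOTH assumes an spfile)
ALTER SYSTEM SET undo_retention = 7200 SCOPE=BOTH;

-- Grow the undo datafile toward the size the script reported,
-- or allow it to extend on demand up to a ceiling
ALTER DATABASE DATAFILE '/u01/oradata/PRD/undotbs01.dbf' RESIZE 2G;
ALTER DATABASE DATAFILE '/u01/oradata/PRD/undotbs01.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE 4G;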



Tuesday, August 9, 2011

SQL Performance Analyzer, Step 2 Create Analysis Task, Make the Changes and Review Impact

I started this with the goal of being able to measure the impact of partitioning a table or set of tables.  The first step is to capture all the SQL I can from the system that executes against the table(s) intended to be partitioned, so that we can verify the change makes a positive impact on that SQL and does not make a negative impact on any existing SQL.

In this case I am choosing to use SQL Performance Analyzer to measure the impact on the SQL I gathered into my SQL Tuning Set in Step 1 (Create a SQL Tuning Set).

** Keep in mind I am using a test environment where I can capture the SQL, execute, and measure my changes all in the same database.  In some cases you may need to capture the SQL Tuning Set in production and then transfer it to a test environment to execute the analysis, make the changes, and then make the comparison.  See the Transfer SQL Tuning Sets between Databases post from October 2010.

Create an Analysis Task
In this case I will be using the dbms_sqlpa package and its create_analysis_task function, which will allow me to analyze the SQL in my SQL Tuning Set, which is the SQL I captured that executes against the table(s) I am planning to partition.  Note that create_analysis_task is a function that returns the task name, so the call below assigns the result to a bind variable.


variable spa_task varchar2(64)
exec :spa_task := dbms_sqlpa.create_analysis_task(sqlset_name => 'STS_SPA_1', -
                  task_name => 'STS_SPA1_task', description => 'test partitioning')

Execute the Analysis Task
Once the Analysis Task has been created we will need to execute the task to create the baseline that will be used to compare against after we make the changes.  We will be using the dbms_sqlpa package and the execute_analysis_task procedure.


exec dbms_sqlpa.execute_analysis_task(task_name => 'STS_SPA1_task', -
     execution_type => 'TEST EXECUTE', -
     execution_name => 'before_partition_change')


Once the execution of the analysis task is complete we can make our changes, in this case partitioning the table(s); a sketch of such a change follows below.  Once the change has been made, we execute the analysis task again to gather the post-change information to compare against the baseline.
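
As a hypothetical illustration of the change itself (the table, column, and partition names here are made up for the sketch, not taken from the captured workload), a range-partitioned rebuild of an ORDERS table could look like the following; in a live system DBMS_REDEFINITION would be the more likely route so the table stays online during the change.

-- Build a partitioned copy of the table, to be swapped in for the original
CREATE TABLE orders_part
PARTITION BY RANGE (order_date)
( PARTITION p2011q1 VALUES LESS THAN (TO_DATE('2011-04-01','YYYY-MM-DD')),
  PARTITION p2011q2 VALUES LESS THAN (TO_DATE('2011-07-01','YYYY-MM-DD')),
  PARTITION pmax    VALUES LESS THAN (MAXVALUE) )
AS SELECT * FROM orders;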


exec dbms_sqlpa.execute_analysis_task(task_name => 'STS_SPA1_task', -
     execution_type => 'TEST EXECUTE', -
     execution_name => 'after_partition_change')

Compare Analysis Task
Once the initial analysis execution is done, the change is made, and the post-change execution is done, we are ready to compare the before to the after.  Again we use the dbms_sqlpa package, calling execute_analysis_task once more, but this time with the execution type set to compare performance.


exec dbms_sqlpa.execute_analysis_task( -
       task_name        => 'STS_SPA1_task', -
       execution_type   => 'COMPARE PERFORMANCE', -
       execution_name   => 'analysis_results', -
       execution_params => dbms_advisor.arglist('execution_name1', 'before_partition_change', -
                                                'execution_name2', 'after_partition_change', -
                                                'comparison_metric', 'buffer_gets'))


Run Compare Report
Once the compare analysis is complete we can run a report that will show us the results.  The following short script spools three reports to a file called SPA_COMPARE_REPORT.out: the full analysis report, a summary report, and a findings report for a specific statement in the analysis.


---------------------------------------------------------------------------
-- Script: spa_compare.sql
-- Author: Michael R. Messina, Managing Consultant 
-- Rolta-TUSC, Infrastructure Services
--
-- Outputs the reports from a SQL Performance Analyzer
-- tuning set comparison. 
---------------------------------------------------------------------------
set long 1000000 longchunksize 1000000
set linesize 200 pagesize 0 trimspool on
spool SPA_COMPARE_REPORT.out
 
-- Get the full report for the comparison.
SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('STS_SPA1_task') from dual;
 
-- Show the summary for the tuning set comparison.
SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('STS_SPA1_task', 'TEXT', 'TYPICAL', 'SUMMARY')
FROM DUAL;
 
-- Show the findings for a specific statement (object id 5).
SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('STS_SPA1_task', 'TEXT', 'TYPICAL', 'FINDINGS', 5) from dual;
 
spool off
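
Before spooling the reports it can be worth confirming that all of the task executions completed.  A quick sketch against the advisor dictionary views (assuming access to the DBA views):

-- Status of each execution of the SPA task
SELECT execution_name, execution_type, status
FROM   dba_advisor_executions
WHERE  task_name = 'STS_SPA1_task'
ORDER  BY execution_start;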

SQL Performance Analyzer, Step 1 Create a SQL Tuning Set

I was dealing with a performance issue for a particular application and wanted to partition the table, then measure the impact of the partitioning based on the SQL that uses that table.  This got me thinking about how I could make use of SQL Performance Analyzer for this, so that I could have a before and after analysis.

The first step is to gather the SQL that executes against the table from the live production system.  We can do this by creating a SQL tuning set and putting that SQL into it.  We base the SQL tuning set on a capture from the cursor cache over a period of time, so the SQL that executes against the table is collected as it runs.


Create a SQL Tuning Set

We can use the DBMS_SQLTUNE package to create the SQL tuning set with the SQL that executes against the table.  For example, if we want to capture the SQL executed against the table over a 24 hour period, sampling at 5 minute intervals, it looks like the following.  We create a SQL Tuning Set called STS_SPA_1, then set up the capture to come from the cursor cache over the 24 hour period for just the SQL containing ORDER, covering the ORDERS and ORDER_ITEMS tables.

---------------------------------------------------------------------------
-- Script: spa_create_sts.sql
-- Author: Michael R. Messina, Managing Consultant 
-- Rolta-TUSC, Infrastructure Services
--
-- Set up a SQL Tuning Set to capture the SQL we are interested in.
-- This will serve SQL Performance Analyzer well or allow us to create profiles.  
-- From there we can generate before and after pictures
-- and then evaluate the impact of changes.
---------------------------------------------------------------------------
--
set feedback on
---------------------------------------------------------------------------
-- Set up the SQL Set and what sql we want to make
-- part of the sql tuning set
---------------------------------------------------------------------------
BEGIN
    -- Create the sql set
    DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'STS_SPA_1');
 
    -- Limit the SQL in the set to just the ORDERS and ORDER_ITEMS tables.
    -- CAPTURE_CURSOR_CACHE_SQLSET collects SQL statements from the cursor
    -- cache over a period of time.  This builds a more realistic set of SQL
    -- load from the system by capturing the live SQL for 24 hours at 5 minute
    -- intervals.
    DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET(
         sqlset_name => 'STS_SPA_1'
        ,basic_filter=> q'#UPPER(sql_text) LIKE '%ORDER%' #'
        ,time_limit  => 86400
        ,repeat_interval => 300
    );
 
 
END;
/
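
Once the 24 hour capture window completes, a quick way to confirm what was collected into the tuning set is to query it back.  A sketch (pick whichever SQLSET_ROW columns you care about):

-- Review the statements captured into the tuning set
SELECT sql_id, executions, buffer_gets, elapsed_time
FROM   TABLE(DBMS_SQLTUNE.SELECT_SQLSET(sqlset_name => 'STS_SPA_1'))
ORDER  BY buffer_gets DESC;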
 
** I would show the steps for Grid Control except the blog would never save with the Grid 
Images. 

The next step will be to use the SQL Tuning Set to compare a before and after of our changes, which I will be doing with SQL Performance Analyzer.  To do this we create an analysis task; see SQL Performance Analyzer, Step 2 Create Analysis Task.

Tuesday, April 5, 2011

Configure SQL Net using Infiniband Interface


Infiniband
Infiniband is a high speed network interconnect typically utilized in system/storage area networks (SANs), which connect machines and storage in close proximity and require very low latency and very high bandwidth.  Oracle has supported the use of infiniband for SQL*Net traffic for quite some time, typically as TCP over Infiniband.

Sockets Direct Protocol Support

The Sockets Direct Protocol (SDP) is a standard protocol for Infiniband.  SDP moves most of the messaging burden to the network interface card, reducing the amount of host CPU involved and freeing that CPU bandwidth for other processing.  This is a big advantage for mid-tier connections that require high performance communication between the database and the middle tier application servers.  While Oracle supports SQL*Net client traffic via SDP over an infiniband network, I recommend sticking with TCP over infiniband, as I have never gotten the SDP protocol to work as of yet.

Database Server Configuration

A database server with an infiniband network interface can communicate across that interface, giving database connections the increased bandwidth and lower latency of infiniband.  The SQL*Net interface for the infiniband can be configured using the TCP protocol, and when using TCP it is recommended to configure the SDU sizes appropriately on both the client and the server.  On the database server side the Oracle TNS listener must be configured to listen on the infiniband network address so that connections initiated over that interface are recognized.  To configure the server to accept database connections over TCP it is recommended to configure two listeners: one for TCP communication over the normal network interface and a second for TCP communication over the infiniband network.  If the infiniband connection is a management network used for activities such as backup, storage communication, management, Data Guard, Streams, or GoldenGate connectivity, a second dedicated listener for this traffic is a must.  When the second listener is configured a second port is used, so the connection requests are completely isolated.

Additional Listener TCP

LISTENER_MGMT=
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=tcp)(HOST=infini-server)(PORT=1522)) ) )

Database Client Configuration

The SQL*Net client configuration will require a TNS entry in the local tnsnames.ora as well as entries in the local sqlnet.ora.  When using SDP with infiniband rather than TCP, there is not the more involved listener and TNS setup that TCP requires for larger bandwidth data transfers.  Typically the hostname given to a database server's infiniband network interface is not in DNS, so you must add the database server hostname for that interface to the client and server hosts files (this applies to both Windows and Unix/Linux).

hosts

# Connections (infiniband)
192.168.1.2      db02-priv.localdomain db02-priv
192.168.1.1      db01-priv.localdomain db01-priv
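
For the sqlnet.ora entries mentioned above, a minimal sketch covering the SDU recommendation follows; the value shown is an assumption for larger transfers and should be tuned to your workload and Oracle version.

sqlnet.ora (client and server)

DEFAULT_SDU_SIZE=32767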

tnsnames.ora (non-rac example)

prd =
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=tcp)(HOST=db01-priv)(PORT=1522))
    )
    (CONNECT_DATA=
      (SERVICE_NAME=prd.localdomain)
    )
  )

tnsnames.ora (rac example)

prd_rac =
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=tcp)(HOST=db01-priv)(PORT=1522))
      (ADDRESS=(PROTOCOL=tcp)(HOST=db02-priv)(PORT=1522))
      (LOAD_BALANCE=ON)
      (FAILOVER=ON)
    )
    (CONNECT_DATA=
      (SERVICE_NAME=prd.localdomain)
      (FAILOVER_MODE=(TYPE=select)(METHOD=basic)(RETRIES=20)(DELAY=15))
    )
  )

Check Database Connection Configuration

After the configuration is complete, a good way to verify connectivity is to test a connection across the infiniband from a client machine attached to the infiniband network.  Using SQL*Plus or another tool that utilizes the Oracle client interface, connect to the database through the listener listening on the infiniband network.
C:\> sqlplus messinam@prd

SQL*Plus: Release 11.2.0.1.0 Production on Wed Sep 1 18:00:10 2010

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management,
OLAP,
Data Mining and Real Application Testing options

SQL>

Running an AWR Compare Report Across Databases

Extract the AWR Data from Production Database

There are times when I want to run an AWR compare report between my production activity and my test database load test activity.  We can do this by extracting the AWR data from the production database, loading it into the test database, and then running a compare report between the two sets of snapshot data.

Oracle provides a utility to extract the AWR data from a database.  The script awrextr.sql in $ORACLE_HOME/rdbms/admin is used for this purpose for an interactive extract.

$ORACLE_HOME/rdbms/admin/awrextr.sql

** Note: This will extract the AWR data for the snapshot periods that are selected.  This process uses the Data Pump utility to write the AWR data to a dump file, so you must have a database directory object defined to write the dump file to.
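
If a suitable directory object does not already exist, a sketch of creating one (the directory name and path here are hypothetical):

-- Directory object the AWR extract can write its dump file to
CREATE DIRECTORY awr_dump_dir AS '/u01/export/awr';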


Move the AWR Datapump Dump File.

Move the Data Pump dump file created by the awrextr.sql script to the test server so it can be loaded.


Load the AWR Data into the Test Database

Oracle makes the import of AWR data as simple as the extraction of AWR data.

Oracle provides the script awrload.sql in the $ORACLE_HOME/rdbms/admin directory.

** Note: Before attempting to load the AWR data from the Data Pump dump file, ensure that an Oracle directory object exists for the location of the dump file that needs to be loaded.  Also, when loading AWR data into another database, make sure the SYSAUX tablespace, the staging tablespace selected (this example uses SYSAUX), and the TEMP tablespace have enough room to complete the operation.  You may want to have another session open to the database to monitor free space.
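
A simple sketch of a free space check to run from that second session (the staging tablespace here is assumed to be SYSAUX, per the note above):

-- Free space remaining in the SYSAUX (staging) tablespace
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS free_mb
FROM   dba_free_space
WHERE  tablespace_name = 'SYSAUX'
GROUP BY tablespace_name;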

$ORACLE_HOME/rdbms/admin/awrload.sql


Run the Compare Report Between the 2 databases AWR Data

Now that we have AWR data from multiple databases, we can compare across snapshot periods from one database, run normal AWR reports over snapshot periods, or even compare snapshot periods across the different databases.  Oracle provides an easy way to execute the compare via the script awrddrpi.sql in the $ORACLE_HOME/rdbms/admin directory.

$ORACLE_HOME/rdbms/admin/awrddrpi.sql


Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type: text

Type Specified: text



Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

   DB Id     Inst Num DB Name      Instance     Host
------------ -------- ------------ ------------ ------------
  3742289852        1 AWR2         awr2         ORADBPROD01
* 744414708         1 ORCL11G      orcl11g      MRMESSIN

Database Id and Instance Number for the First Pair of Snapshots
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for dbid: 3742289852
Using 3742289852 for Database Id for the first pair of snapshots
Enter value for inst_num: 1
Using 1 for Instance Number for the first pair of snapshots


Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed.  Pressing <return> without
specifying a number lists all completed snapshots.


Enter value for num_days: 3

Listing the last 3 days of Completed Snapshots

                                                       Snap
Instance     DB Name        Snap Id Snap Started       Level
------------ ------------ --------- ------------------ -----
awr2         AWR2                 1 20 Feb 2008 17:42      1
                                  2 20 Feb 2008 19:00      1
                                  3 20 Feb 2008 20:00      1
                                  4 20 Feb 2008 21:00      1
                                  5 20 Feb 2008 22:00      1
                                  6 20 Feb 2008 23:00      1
                                  7 21 Feb 2008 00:00      1
                                  8 21 Feb 2008 01:00      1
                                  9 21 Feb 2008 02:00      1
                                 10 21 Feb 2008 03:00      1
………


Specify the First Pair of Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 1
First Begin Snapshot Id specified: 1

Enter value for end_snap: 9
First End Snapshot Id specified: 9


Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

   DB Id     Inst Num DB Name      Instance     Host
------------ -------- ------------ ------------ ------------
  3742289852        1 AWR2         awr2         ORADBPROD01
* 744414708         1 ORCL11G      orcl11g      MRMESSIN


Database Id and Instance Number for the Second Pair of Snapshots
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Enter value for dbid2: 744414708
Using 744414708 for Database Id for the second pair of snapshots
Enter value for inst_num2: 1
Using 1 for Instance Number for the second pair of snapshots


Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed.  Pressing <return> without
specifying a number lists all completed snapshots.


Enter value for num_days2: 3

Listing the last 3 days of Completed Snapshots

                                                       Snap
Instance     DB Name        Snap Id Snap Started       Level
------------ ------------ --------- ------------------ -----
orcl11g      ORCL11G            355 26 Feb 2008 07:34      1
                                356 26 Feb 2008 09:00      1
                                357 26 Feb 2008 10:00      1
                                358 26 Feb 2008 11:00      1
……………
                                405 02 Mar 2008 00:00      1
                                406 02 Mar 2008 01:00      1
                                407 02 Mar 2008 02:00      1
                                408 02 Mar 2008 03:00      1
                                409 02 Mar 2008 04:00      1
                                410 02 Mar 2008 05:00      1
                                411 02 Mar 2008 06:00      1
                                412 02 Mar 2008 07:00      1
                                413 02 Mar 2008 08:00      1
………


Specify the Second Pair of Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap2: 405
Second Begin Snapshot Id specified: 405

Enter value for end_snap2: 413
Second End Snapshot Id specified: 413


Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is awrdiff_1_1_1_405.txt.  To use this name,
press <return> to continue, otherwise enter an alternative.

Enter value for report_name: awrdiff_awr2_orcl11g.txt
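
If you prefer to skip the interactive prompts, the same diff report can be generated directly from the workload repository package.  A sketch using the database IDs and snapshot ranges chosen in the session above:

set long 1000000 pagesize 0
SELECT output
FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_DIFF_REPORT_TEXT(
             3742289852, 1, 1, 9,        -- first pair: dbid, instance, begin snap, end snap
             744414708, 1, 405, 413));   -- second pair: dbid, instance, begin snap, end snap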

Collaborate 2011

Are you getting ready for Collaborate?

I am speaking again this year, hope to see you there.

Oracle 11gR2 Installing CTX Knowledge Base

Summary
The Oracle Text option uses certain context knowledge bases that are not part of the typical installation of the Oracle 11gR2 database software.  These knowledge bases may be required to create certain context indexes; without them you will hit an error such as the DRG-11446 shown below.

Example:
DRG-11446: supplied knowledge base file /u01/app/oracle/product/11.2.0/dbhome_1/ctx/data/enlx/droldUS.dat not installed


1. Download the Oracle Examples installation media (formerly the Companion CD)

http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-linx8664soft-100572.html

2. Unzip the Oracle Examples Installation

$ unzip linux.x64_11gR2_examples.zip

3. Go to the location of the CTX filegroups in the installation media

$ cd examples/stage/Components/oracle.ctx.companion/11.2.0.1.0/1/DataFiles

4. Unzip the CTX knowledge base files

$ unzip filegroup1.jar
Archive: filegroup1.jar
inflating: ctx/data/enlx/drofdUS.dat
inflating: ctx/data/enlx/drofiUS.dat
inflating: ctx/data/enlx/droldUS.dat
inflating: ctx/data/enlx/droliUS.dat
inflating: ctx/data/enlx/drolkUS.dat
inflating: ctx/data/enlx/drolsUS.dat

5. Go to the location of the knowledge base files we just extracted and copy them to the Oracle database home

$ cd ctx/data/enlx/
$ cp * $ORACLE_HOME/ctx/data/enlx
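
Once the files are in place under $ORACLE_HOME/ctx/data/enlx, the operation that raised DRG-11446 can be retried.  As a hypothetical check that exercises the knowledge base, a theme-enabled context index could look like this (the table, column, and preference names are made up for illustration):

-- Lexer preference with theme indexing enabled, which requires the knowledge base
BEGIN
  ctx_ddl.create_preference('my_theme_lexer', 'BASIC_LEXER');
  ctx_ddl.set_attribute('my_theme_lexer', 'INDEX_THEMES', 'YES');
END;
/

CREATE INDEX doc_text_idx ON documents (doc_text)
  INDEXTYPE IS CTXSYS.CONTEXT
  PARAMETERS ('LEXER my_theme_lexer');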