Monday, October 18, 2010

Oracle Grid Control Exadata Storage Cell Plug-in Installation/Upgrade

The Oracle Enterprise Manager Grid Control Exadata Storage Cell Plug-in allows a management agent to monitor and assist with basic management of the Exadata Storage Cells. The Exadata Storage Cell Plug-in is compatible with Oracle Enterprise Manager Grid Control version 10.2.0.4 and above.
The installation and upgrade processes are basically the same; the difference is that on an upgrade the pre-installation setup is not needed.

Pre-Installation Setup
Prior to the initial installation of the Exadata Storage Cell Plug-in you will need to identify a server whose agent will be the monitoring agent for the Storage Cells. The server that will have the Grid Control Agent that monitors the Storage Cells must have connectivity to the Exadata Storage Cells via SSH. For this to happen that server must have a network interface on the management network or the InfiniBand network for the Exadata.

1. Set up the server with network connectivity to the Exadata network that contains the storage cells.

2. Set up the hosts file on the server that will house the agent so that it can resolve each of the storage cells.

3. Set up SSH connectivity to each of the storage cells
     a. generate the SSH keys on the server that will house the agent
          i. ssh-keygen -t dsa -f id_dsa

     b. copy the ssh key to each of the storage cell servers as the celladmin user
          i. scp id_dsa.pub celladmin@dm01cel01:~
          ii. repeat for each storage cell

     c. log into the storage cell as the root user
          i. ssh root@dm01cel01

     d. Add the contents of the id_dsa.pub file to the authorized_keys file for the cellmonitor user on each of the storage cells.
          i. check if the authorized_keys file exists for the cellmonitor user
               a. ls -l /home/cellmonitor/.ssh/*
          ii. if the authorized_keys file exists for the cellmonitor user
               a. cat /home/celladmin/id_dsa.pub >> /home/cellmonitor/.ssh/authorized_keys
          iii. if the authorized_keys file does not exist for the cellmonitor user
               a. cp /home/celladmin/id_dsa.pub /home/cellmonitor/.ssh/authorized_keys
          iv. repeat for each of the storage cell servers

4. Test SSH connectivity from the server that will house the Grid Control Agent to each of the storage cells (a consolidated sketch of steps 2 through 4 follows this list).
     a. ssh -l cellmonitor dm01cel01 cellcli -e 'list cell detail'
        ** If you are prompted for a password the key setup is not working correctly; check and repeat the setup steps
        ** If you are asked to confirm the host key, answer Yes to continue with the connection
     b. repeat the test for each storage cell server.

5. Install the Oracle Enterprise Manager Grid Control Agent on the server that will serve as the monitoring agent for the Exadata Storage Cell Plug-in.

** It is OK for the monitoring agent for the Exadata Storage Cells to be the agent on the Grid Control Management Server.
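
The pre-installation setup above can be summarized in the following shell sketch, run from the server that will host the monitoring agent. The storage cell hostname dm01cel01 and the IP addresses are examples only; substitute your own cell names and repeat the per-cell commands for each storage cell.

# example /etc/hosts entries on the agent server (addresses are placeholders)
# 192.168.10.3   dm01cel01
# 192.168.10.4   dm01cel02

# generate the SSH key pair on the agent server
ssh-keygen -t dsa -f id_dsa

# copy the public key to the storage cell as the celladmin user
scp id_dsa.pub celladmin@dm01cel01:~

# on the storage cell as root, append the key for the cellmonitor user
# (use cp instead of cat >> if authorized_keys does not yet exist)
ssh root@dm01cel01
cat /home/celladmin/id_dsa.pub >> /home/cellmonitor/.ssh/authorized_keys
exit

# back on the agent server, test the connection
ssh -l cellmonitor dm01cel01 cellcli -e 'list cell detail'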

Plug-in Installation
For the plug-in installation, the plug-in is imported into OEM Grid Control; this is the same for a new installation or for a new version of an existing installation. This also assumes that you have an Oracle Grid Control Management Server/Service set up and accessible.

1. Download the Oracle Enterprise Manager Grid Control Exadata Storage Server Plug-in.
     a. http://www.oracle.com/technetwork/database/exadata/index.html

2. Import the Plug-in into the Oracle Enterprise Manager Grid Control.
     a. Select Setup from the Top Right portion of the Browser Window
     b. Select Management Plug-ins from the list menu on the right hand side of the window.
     c. Select Import Option
     d. Select Browse to locate the Exadata Storage Cell Plug-in
     e. Locate and Select the Exadata Storage Cell Plug-in File Downloaded and Select Open
     f. Select List Archive to Expand out the Plug-in
     g. Select the oracle_cell Plug-in ensuring the check mark is present next to the Plug-in then select OK
     h. Verify the Plug-in Version is now in the list of Plug-ins.
     i. Management Plug-in Installation Complete.

Plug-in Deployment
The plug-in deployment is done the same way regardless of whether an existing plug-in version is already deployed; once the plug-in is imported, you deploy it the same way. This process assumes that the Grid Control Agent that will serve as the monitoring agent for the Exadata Storage Cells is installed and registered with your Oracle Management Service.
1. Log into Oracle Enterprise Manager Grid Control Management Service.

2. Set Preferred Credentials for the host of the monitoring agent the plug-in will be deployed to, as well as for the monitoring agent itself.
     a. Select Preferences from top right of Browser Window displaying management console.
     b. select preferred credentials
     c. select Set Credentials Icon for Host
     d. locate host and enter the credentials for the oracle operating system user credentials
     e. select test credentials to ensure credentials entered are working.
     f. select apply.
     g. select Preferred Credentials in the upper left part of the screen
     h. select Set Credentials Icon for Agent
     i. Locate the agent to set credentials for and enter the oracle operating system user credentials
     j. select apply
     k. Select Home Tab to return to the Home Console Screen

3. Select Setup from the Top Right portion of the Browser Window

4. Select Management Plug-ins from the list menu on the right hand side of the window.

5. Select the Deploy icon for the Exadata Plug-in. When upgrading the Exadata Plug-in, select the new version to deploy.

6. Add Agent to the List of Agents to Deploy Plug-in to.

7. Select the Agent that will serve as the monitoring agent for the Exadata Storage Cells and then press Select.

8. To add more agents, repeat the Add Agent steps. When finished, select Next to continue with deploying the plug-in.

9. Select Finish to deploy the plug-in.

10. Deployment Completed

Friday, October 1, 2010

Exadata Hybrid Columnar Compression

Summary
Hybrid Columnar Compression was introduced with Exadata version 2 and is one of the most highlighted advantages of Exadata, as it improves performance and significantly reduces the storage footprint of data on physical storage. By reducing the footprint of the data on physical storage it also reduces the footprint of that data in the database cache, offering an overall improvement in system memory usage in addition to physical storage savings. Using the columnar compression options on Exadata does not prevent you from also using Oracle 11gR2 Advanced Compression, giving you three different levels of compression based on the data within your database.

There are two types of columnar compression with Exadata: compress for query and compress for archive. The compress for query option offers a high level of compression, though not as high as compress for archive, and offers better query performance than the compress for archive option. The OLTP compression in 11gR2 works well for data with more online, transactional access.
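
As a syntax reference, the three levels correspond to the following table-level clauses; the table and column names below are only illustrative.

-- 11gR2 Advanced Compression for actively changing, OLTP-style data
CREATE TABLE claim_current (claim_id NUMBER, claim_date DATE)
   COMPRESS FOR OLTP;

-- Exadata Hybrid Columnar Compression, query oriented (LOW or HIGH, HIGH is the default)
CREATE TABLE claim_recent_hist (claim_id NUMBER, claim_date DATE)
   COMPRESS FOR QUERY HIGH;

-- Exadata Hybrid Columnar Compression, archival (LOW or HIGH)
CREATE TABLE claim_archive (claim_id NUMBER, claim_date DATE)
   COMPRESS FOR ARCHIVE HIGH;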

The advantage of having multiple types of compression is that you can match the level of compression to the type of data. This gives you the ability to truly start to consider Information Lifecycle Management for your data with regard to how it is stored, and to better manage your storage resources. For example, say you have a very large table partitioned by month where the first 24 months of data are subject to regular insert, update, delete and query activity, the data from 24 to 60 months old is not updated but query activity is heavy, and the data over 60 months old is rarely accessed but does need to be available. Your best option for the most recent 24 months of data would most likely be OLTP compression, the data between 24 and 60 months old would be best suited to compress for query columnar compression, and the data over 60 months old would be best suited to compress for archive columnar compression.

Compress for Query Columnar Compression
Compress for query columnar compression offers a good balance between frequent query access and compression level. The compress for query option can be high (the default) or low; this indicates the level of compression and, consequently, the impact on CPU resources to handle the compression. The query option adds roughly a 5-10% runtime hit on direct loads over an uncompressed table load, which in most cases can be considered minimal compared to the disk savings achieved. The compression significantly reduces the size of the table and at the same time improves the performance of queries without a very high impact on other system resources. Implementations of Hybrid Columnar Compression with compress for query high have consistently achieved a 60-70% reduction. The table below shows some real world results when using the compress for query high option.

Compress for Query High
Uncompressed(MB)  Compressed(MB)  Reduction(MB)  % Reduction
903               344             559            61.90
1088              408             680            62.50
960               361             599            62.40
1088              416             672            61.76
1152              400             752            65.28
1091              400             691            63.33
1216              456             760            62.50
1112              408             704            63.31

Compress for Archive Columnar Compression
The compress for archive columnar compression offers the highest level of compression available, but with the higher level of compression the impact on system CPU resources increases, making it a less desirable option where consistent query access is needed. The archive compression option averages a 70-80% size reduction over uncompressed tables. The archive option also takes a significant hit on direct load runtimes; on average, 50% longer runtimes have been experienced compared to direct loads into query-compressed tables. This hit on load time can also make it a less desirable option for objects with frequent load activity where load runtimes are critical. As the name indicates, it works very well for compressing data that is archived and needs to remain available, but is not accessed frequently and has no critical load time requirement.

Compress for Archive
Uncompressed(MB)  Compressed(MB)  Reduction(MB)  % Reduction
903               264             639            70.76
1088              304             784            72.06
960               272             688            71.67
1088              312             776            71.32
1152              336             816            70.83
1091              328             763            69.94
1216              352             864            71.05
1112              304             808            72.66

Effectively Utilizing Hybrid Columnar Compression
Query and archive Hybrid Columnar Compression are most effective for historical tables/data where the data no longer changes. Because these historical tables are typically quite large, they are usually partitioned, as partitioning is the best practice for large tables. The most effective use of Hybrid Columnar Compression is to use both types within your partitioned historical table, where the more frequently queried data partitions use query columnar compression and the less frequently accessed partitions use archive columnar compression. This creates an effective Information Lifecycle Management approach that helps you manage your partitions based on their age and access frequency.

Notice in the example below that the more recently accessed data is partitioned by month, allowing faster access to data through more fine-grained partition elimination, while space is saved through query columnar compression on those partitions. As the data gets older, partition maintenance is performed to combine the partitions by year, and the new partition for the entire year uses archive columnar compression to save a little more space while still keeping the data accessible. A sketch of that partition maintenance follows the example.

Example
CREATE TABLE ARCHIVE."CLAIM"
   (   "CLAIM_ID"            VARCHAR2(36)   NOT NULL ENABLE,
       "CLAIM_TYPE_CODE"     VARCHAR2(2)    NOT NULL ENABLE,
       "PAYER_PROCESS_STAGE" VARCHAR2(10),
       "SYS_CREATE_DATE"     DATE,
       "SYS_UPDATE_DATE"     DATE)
TABLESPACE ARCHIVE_2004
PARTITION BY RANGE (SYS_CREATE_DATE) interval (NUMTOYMINTERVAL(1,'month'))               
   (PARTITION CLAIM_2004 VALUES LESS THAN (TO_DATE ('1-JAN-2005','DD-MON-YYYY'))
                   NOLOGGING TABLESPACE ARCHIVE_2004 compress for archive high,
    PARTITION CLAIM_2005 VALUES LESS THAN (TO_DATE ('1-JAN-2006','DD-MON-YYYY'))
                   NOLOGGING TABLESPACE ARCHIVE_2005 compress for archive high,
    PARTITION CLAIM_2006 VALUES LESS THAN (TO_DATE ('1-JAN-2007','DD-MON-YYYY'))
                   NOLOGGING TABLESPACE ARCHIVE_2006 compress for archive high,
    PARTITION CLAIM_200701 VALUES LESS THAN (TO_DATE ('1-FEB-2007','DD-MON-YYYY'))
                   NOLOGGING TABLESPACE OTHER_PART_QUERY compress for query,
    PARTITION CLAIM_200702 VALUES LESS THAN (TO_DATE ('1-MAR-2007','DD-MON-YYYY'))
                   NOLOGGING TABLESPACE OTHER_PART_QUERY compress for query,
    PARTITION CLAIM_200703 VALUES LESS THAN (TO_DATE ('1-APR-2007','DD-MON-YYYY'))
                   NOLOGGING TABLESPACE OTHER_PART_QUERY compress for query
   )
compress for archive high;
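
The partition maintenance described above can be sketched as follows; the partition, tablespace and schema names are illustrative, and in 11gR2 partitions are merged two at a time.

-- move an older monthly partition to archive compression as it ages
ALTER TABLE archive.claim MOVE PARTITION claim_200701
   TABLESPACE archive_2007 COMPRESS FOR ARCHIVE HIGH NOLOGGING;

-- adjacent monthly partitions can also be combined into a larger partition
ALTER TABLE archive.claim MERGE PARTITIONS claim_200701, claim_200702
   INTO PARTITION claim_200701_02 TABLESPACE archive_2007 COMPRESS FOR ARCHIVE HIGH;

-- verify the compression setting on each partition
SELECT partition_name, compression, compress_for
  FROM dba_tab_partitions
 WHERE table_owner = 'ARCHIVE' AND table_name = 'CLAIM';

Keep in mind that moving or merging partitions leaves dependent indexes unusable unless the UPDATE INDEXES clause is added or the indexes are rebuilt afterwards.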

Friday, September 17, 2010

Hope to see you all at Oracle OpenWorld
Will be attending the Oracle Exadata SIG
Presenting Sunday and Monday
Spending some time in the Rolta-TUSC Booth Monday
Come by and see me.

Tuesday, March 23, 2010

Oracle Compression Advisor

Every once in a while I come across a valuable utility. I recently came across an Oracle utility, available starting with 9i Release 2, that can help estimate the space savings from compression.

The process creates tables in the default tablespace of the user executing the procedure. It is recommended that you make sure the default tablespace is something other than SYSTEM or SYSAUX.
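
A quick way to check, and if necessary change, the executing user's default tablespace before running the advisor; the user and tablespace names here are only examples.

-- check the current default tablespace for the user
SELECT username, default_tablespace FROM dba_users WHERE username = 'SCOTT';

-- point it at a regular tablespace if it is currently SYSTEM or SYSAUX
ALTER USER scott DEFAULT TABLESPACE users;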

The process is created via a package procedure available via a download from Oracle Technet at

http://www.oracle.com/technology/products/database/compression/compression-advisor.html

Example of the package installation
[oracle@oel54node1 compression_advisor]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Tue Mar 23 22:33:57 2010

Copyright (c) 1982, 2009, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> @dbmscomp.sql

Package created.


Synonym created.


Grant succeeded.

No errors.
SQL> @prvtcomp.plb

Package body created.


Synonym created.


Grant succeeded.

No errors.


Here is an example of its execution

SQL> set serveroutput on
SQL> exec dbms_comp_advisor.getratio('SYS','AUD$','OLTP',20) ;
Sampling table: SYS.AUD$
Sampling percentage: 20%
Compression Type: OLTP
Compression ratio: 1.53

PL/SQL procedure successfully completed.

Saturday, February 20, 2010

Change OEM Grid Control SYSMAN Password

1. Set the ORACLE_HOME and associated environment variables such as PATH, etc.
$ export ORACLE_HOME=/u01/app/oracle/product/oms10g
$ export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/opmn/bin:$PATH

2. Stop the OMS instances that are associated with the repository
$ cd $ORACLE_HOME/opmn/bin
$ opmnctl stopall

3. Log into the OEM Grid Control Repository Database as a DBA user and change the SYSMAN password
$ sqlplus system@oem
SQL> alter user sysman identified by newpassword;
SQL> exit

4. Update the emoms.properties file on each management server host
$ export ORACLE_HOME=/u01/app/oracle/product/oms10g
$ export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/opmn/bin:$PATH
$ cd $ORACLE_HOME/sysman/config
$ cp emoms.properties emoms.properties.backup
$ vi emoms.properties
Change the Following:
oracle.sysman.eml.mntr.emdRepPwd=new_password
oracle.sysman.eml.mntr.emdRepPwdEncrypted=FALSE

** NOTE: When you restart the OMS, the FALSE setting will change to TRUE and the password will become encrypted in the file

5. On the Management Repository host stop the Grid Control Agent
$ export ORACLE_HOME=/u01/app/oracle/product/agent10g
$ cd $ORACLE_HOME/bin
$ ./emctl stop agent

6. Update the targets.xml file in the AGENT_HOME/sysman/emd directory and save changes
$ cd $ORACLE_HOME/sysman/emd
$ vi targets.xml

Update the Following
..
<Property NAME="password" VALUE="newpassword" ENCRYPTED="FALSE"/>
..

** NOTE: The FALSE value will change to TRUE and the new password will become encrypted when you restart the Agent.

7. Re-Start the Agent
$ cd $ORACLE_HOME/bin
$ ./emctl start agent

8. Set the ORACLE_HOME, etc. for the OMS
$ export ORACLE_HOME=/u01/app/oracle/product/oms10g
$ export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/opmn/bin:$PATH

9. Re-start each of the OMS's associated with the Grid Control Repository database.
$ cd $ORACLE_HOME/opmn/bin
$ ./opmnctl startall

10. Verify the change by logging into the Grid Control console with the new SYSMAN password using a web browser

http://oel54node1.localdomain:7777/em/console/logon/logon
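
Optionally, before going to the browser, confirm that the OMS processes are back up and that the new SYSMAN password works; the oem connect identifier below is the example used in step 3.

$ cd $ORACLE_HOME/opmn/bin
$ ./opmnctl status

$ sqlplus sysman@oem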

Real Application Testing Database Replay

Apply all Recommended Patches for Database Replay

1. For 11.1.0.6 a patch is recommended for proper database replay functionality. The 11.1.0.6 patch can be downloaded for your platform from Metalink. For Unix the patch number is 6865809 and for 32-bit Windows the patch number is 7044721.
** For other database platforms see Appendix A: Real Application Testing for Earlier Releases

2. Shutdown all databases and listeners running within the ORACLE_HOME being patched.

3. Ensure that OPatch utility version 11.1.0.6.3 is installed prior to applying the patch; check the currently installed version:

C:\Oracle\product\11.1.0\db_1\OPatch>.\opatch version
Invoking OPatch 11.1.0.6.2

OPatch Version: 11.1.0.6.2

OPatch succeeded.

4. Change directory to the location where the 7044721 patch was extracted.

5. Apply the patch using the opatch utility
opatch apply

** Ensure that all patch instructions are followed completely including post apply steps. See specific patch notes.



6. Bring all databases and listeners back up.


7. Verify database connectivity and listener status
Setup Source Database for Database Replay Capture
Database releases prior to 11g can be used to capture a workload for replay on an 11g and above database; however, some setup is required. For database release 10.2.0.4 there are no required patches for the capture functionality, but for releases prior to 10.2.0.4 a patch is required.
** See Appendix A: Real Application Testing for Earlier Releases, for Versions prior to 10.2.0.4

Run the wrrenbl.sql script; this is only required for 10.2.0.4 and is optional for other releases prior to 11g.
The script enables the PRE_11G_ENABLE_CAPTURE database initialization parameter.

a. cd $ORACLE_HOME/rdbms/admin
b. sqlplus / as sysdba
c. SQL> @wrrenbl.sql
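
After running the script, the parameter setting can be confirmed with a quick check while still connected as SYSDBA:
d. SQL> show parameter pre_11g_enable_capture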



Prepare to Capture Workload

On the source database where we will be capturing the workload for replay we need to prepare the database for capture.

1. Create a directory location where the workload capture file can be written.
a. C:\Oracle>mkdir dbcapture

2. Create a directory within the source database where workload is to be captured.
a. SQL> create directory dbcapture as 'c:\oracle\dbcapture' ;

3. Determine the workload to be captured. In this example the workload is artificially generated using a script process.
** see Appendix B: Script that Generates an Artificial Workload



Capture Workload


1. Start the capture process
a. SQL> exec dbms_workload_capture.start_capture('NRML_ACTIVITY', 'DBCAPTURE') ;

2. Run the workload
a. This is where the script for the artificial workload is executed

3. When the workload is complete or you have captured the desired workload stop the capture process.
a. SQL> exec dbms_workload_capture.finish_capture() ;

4. Execute a report on the Capture.
** See Appendix C: Generate Workload Capture Report (HTML)
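
As a lighter-weight alternative to the full report script in Appendix C, a text report can be pulled directly in SQL*Plus; the DBCAPTURE directory is the one created earlier.

SQL> set long 1000000 pagesize 0
SQL> select dbms_workload_capture.report(
            dbms_workload_capture.get_capture_info('DBCAPTURE'), 'TEXT')
     from dual ;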



Prepare the Workload for Replay

** The database that processes the captured workload must be an 11.1.0.6 or above database, as the packages to process the captured workload are not available in prior releases.

1. Create a directory location where the captured workload can be placed for processing and replay, on both the processing database and the replay database if they are different. Best practice is to process the workload on the same database that will execute the replay.
a. C:\Oracle> mkdir Replay

2. Copy the captured workload files to another directory if on the same host as the database where the workload was captured, or to the destination system location where they will be processed.
a. copy C:\Oracle\dbcapture\* C:\Oracle\Replay

3. Create directory in the oracle database that will process the captured workload.
a. SQL> create directory REPLAY as 'c:\Oracle\Replay' ;


Process the Workload for Replay
** This is a resource-intensive process, so it should be done on a non-production system. If you captured the workload on your production system, move the capture files to a test or development system to prepare them for replay.

1. Process the captured workload
a. SQL> exec dbms_workload_replay.process_capture ('REPLAY') ;

The process will create the following files in the REPLAY directory during the processing of the captured workload.

wcr_login.pp
WCR_SCN_ORDER.EXTB
WCR_SEQ_DATA.EXTB
WCR_CONN_DATA.EXTB
wcr_process.wmd

Prepare to Replay the Workload
1. Go to the database where the replay is to be executed, if different from the database that processed the workload. If the replay database is the same as the database that processed the workload, skip to step 3; otherwise perform step 2.

2. If not already done, go to the Prepare the Workload for Replay section and execute those steps on the database that will replay the workload.

3. Initialize the replay
a. SQL> exec dbms_workload_replay.initialize_replay ('TEST_REPLAY','REPLAY') ;

4. Prepare for Workload Replay
a. SQL> exec dbms_workload_replay.prepare_replay(synchronization=>TRUE) ;

5. Calibrate the Workload to determine the number of workload replay clients.
a. At the command line change to the replay directory
i. cd C:\Oracle\Replay
ii. c:\Oracle\Replay> wrc mode=calibrate

Workload Replay Client: Release 11.1.0.6.0 - Production on Thu Jul 24 10:38:41 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Report for Workload in: .
-----------------------

Recommendation:
Consider using at least 1 clients divided among 1 CPU(s).

Workload Characteristics:
- max concurrency: 9 sessions
- total number of sessions: 46

Assumptions:
- 1 client process per 50 concurrent sessions
- 4 client process per CPU
- think time scale = 100
- connect time scale = 100
- synchronization = TRUE


Execute the Workload Replay
Now that the workload has been captured and processed, it is ready to be replayed.

1. Note the number of workload replay clients, indicated in the previous section, that will be needed to execute the processed workload.

2. Open a window for each workload replay client that will be needed, as each will be a separate execution.
** Only 1 client is needed, as indicated in the previous step, for the example workload

3. From the command line, enter the replay command with the wrc utility for the number of replay clients needed; there should be one command in each window.
a. C:\Oracle\proccapture>wrc system/xxxxx@orcl11g replaydir=c:\Oracle\Replay

4. Start each of the replay clients needed.

C:\Oracle\proccapture>wrc system/xxxxx@orcl11g replaydir=c:\Oracle\Replay

Workload Replay Client: Release 11.1.0.6.0 - Production on Thu Jul 24 15:09:15 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Wait for the replay to start (15:09:15)
Replay started (15:09:40)
Replay finished (16:01:41)


5. AWR information will be available for the time period in which the workload replay was executed. It is generally recommended that the replay cover at least one complete snapshot period to provide the most useful data, or that a snapshot be taken just prior to the start and immediately after the workload execution (this is optional).

SQL> execute DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();

6. In another window, log into the database where the workload replay is to be executed and start the replay process. The replay clients will show an indication that the replay has started.

$ sqlplus /@orcl11g

SQL*Plus: Release 11.1.0.6.0 - Production on Thu Jul 24 13:55:39 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> exec dbms_workload_replay.start_replay ;

PL/SQL procedure successfully completed.


7. Monitor the workload replay until it is complete.
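
While the replay runs, its progress can be checked from the replay database with a simple query against the replay history view:

SQL> select id, name, status, start_time, end_time
     from dba_workload_replays
     order by id ;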


8. Once the workload replay is complete, optionally take another AWR snapshot.

SQL> execute DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();


9. Execute workload replay report extraction and evaluate the results from the report.
** See Appendix D: Generate Workload Replay Report (HTML)




Appendix A: Real Application Testing for Earlier Releases

See Metalink Note: 560977.1


Appendix B: Script that Generates an Artificial Workload

-- create_workload.sql
-- put in sql script to be executed to generate workload.
create table orders tablespace users as select * from oe.orders ;

create table order_items tablespace users as select * from oe.order_items ;


-- insert more rows into orders and order items to bulk up amount of data for
-- a more true high volume test.
declare
cursor my_cur is
select * from oe.orders ;
lcnt number ;
v_order_id oe.orders.order_id%type ;
v_order_date oe.orders.order_date%type ;
v_order_mode oe.orders.order_mode%type ;
v_customer_id oe.orders.customer_id%type ;
v_order_status oe.orders.order_status%type ;
v_order_total oe.orders.order_total%type ;
v_sales_rep_id oe.orders.sales_rep_id%type ;
v_promotion_id oe.orders.promotion_id%type ;
cursor my_cur_2 is
select line_item_id, product_id, unit_price, quantity
from oe.order_items where order_id = v_order_id ;
v_line_item_id oe.order_items.line_item_id%type ;
v_product_id oe.order_items.product_id%type ;
v_unit_price oe.order_items.unit_price%type ;
v_quantity oe.order_items.quantity%type ;

begin
select max(order_id) + 1 into lcnt from orders ;

loop
if lcnt = 10000 then
exit ;
end if ;

open my_cur ;
loop
fetch my_cur into v_order_id, v_order_date, v_order_mode, v_customer_id,
v_order_status, v_order_total, v_sales_rep_id, v_promotion_id ;
exit when my_cur%notfound ;

insert into orders values (lcnt, v_order_date, v_order_mode, v_customer_id,
v_order_status, v_order_total, v_sales_rep_id, v_promotion_id) ;
commit ;

open my_cur_2 ;
loop
fetch my_cur_2 into v_line_item_id, v_product_id, v_unit_price, v_quantity ;
exit when my_cur_2%notfound ;

insert into order_items values (lcnt, v_line_item_id, v_product_id, v_unit_price, v_quantity) ;
end loop ;
close my_cur_2 ;

lcnt := lcnt + 1 ;
end loop ;

close my_cur ;
end loop ;
end ;
/


Appendix C: Generate Workload Capture Report (HTML)


-- Capture Report
DECLARE
cap_id NUMBER;
cap_rpt CLOB;
buffer VARCHAR2(32767);
buffer_size CONSTANT BINARY_INTEGER := 32767;
amount BINARY_INTEGER;
offset NUMBER(38);
file_handle UTL_FILE.FILE_TYPE;
directory_name CONSTANT VARCHAR2(80) := 'DBCAPTURE';
v_filename CONSTANT VARCHAR2(80) := 'Capture.html';

BEGIN
cap_id := DBMS_WORKLOAD_CAPTURE.GET_CAPTURE_INFO(dir=>directory_name);
cap_rpt := DBMS_WORKLOAD_CAPTURE.REPORT(capture_id => cap_id,
format => DBMS_WORKLOAD_CAPTURE.TYPE_HTML);

-- Write Report to file
DBMS_OUTPUT.ENABLE(100000);

-- --------------------------------
-- OPEN NEW XML FILE IN WRITE MODE
-- --------------------------------
file_handle := UTL_FILE.FOPEN(location => directory_name,
filename => v_filename,
open_mode => 'w',
max_linesize => buffer_size);

amount := buffer_size;
offset := 1;

WHILE amount >= buffer_size
LOOP
DBMS_LOB.READ(lob_loc => cap_rpt,
amount => amount,
offset => offset,
buffer => buffer);

offset := offset + amount;

UTL_FILE.PUT(file => file_handle,
buffer => buffer);

UTL_FILE.FFLUSH(file => file_handle);
END LOOP;

UTL_FILE.FCLOSE(file => file_handle);
END;
/



Appendix D: Generate Workload Replay Report (HTML)

-- Replay Report
DECLARE
cap_id NUMBER;
rep_id NUMBER;
rep_rpt CLOB;
buffer VARCHAR2(32767);
buffer_size CONSTANT BINARY_INTEGER := 32767;
amount BINARY_INTEGER;
offset NUMBER(38);
file_handle UTL_FILE.FILE_TYPE;
directory_name CONSTANT VARCHAR2(80) := 'PROCCAPTURE';
v_filename CONSTANT VARCHAR2(80) := 'Replay.html';

BEGIN
cap_id := DBMS_WORKLOAD_REPLAY.GET_REPLAY_INFO(dir => 'PROCCAPTURE');

select max(id)
into rep_id
from dba_workload_replays
where capture_id = cap_id ;

rep_rpt := DBMS_WORKLOAD_REPLAY.REPORT(replay_id => rep_id,
format => DBMS_WORKLOAD_REPLAY.TYPE_HTML);


-- Write Report to file
DBMS_OUTPUT.ENABLE(100000);

-- --------------------------------
-- OPEN NEW XML FILE IN WRITE MODE
-- --------------------------------
file_handle := UTL_FILE.FOPEN(location => directory_name,
filename => v_filename,
open_mode => 'w',
max_linesize => buffer_size);

amount := buffer_size;
offset := 1;

WHILE amount >= buffer_size
LOOP
DBMS_LOB.READ(lob_loc => rep_rpt,
amount => amount,
offset => offset,
buffer => buffer);

offset := offset + amount;

UTL_FILE.PUT(file => file_handle,
buffer => buffer);

UTL_FILE.FFLUSH(file => file_handle);
END LOOP;

UTL_FILE.FCLOSE(file => file_handle);
END;
/

Transfer SQL Tuning Sets between Databases

1. Create a staging table to put the SQL tuning sets into. Then put the SQL Tuning Sets into the staging table.
See Appendix A: Create SQL Tuning Set table and put SQL Tuning Sets into table


2. Ensure that you have a directory in Oracle to use to export the table using datapump
SQL> create directory DATA_PUMP_DIR as 'C:\oracle\product\10.2.0\admin\orcl\dpdump\' ;


3. Export the SQL Tuning Set staging table to a file, because the analysis of the SQL Tuning Set will be done in an 11g database.

expdp /@orcl directory=DATA_PUMP_DIR dumpfile=sqlset_stage_tab_exp.dmp logfile=sqlnet_stage_tab_exp.log tables=mrmessin.sqlset_stage_tab

Export: Release 10.2.0.4.0 - Production on Monday, 04 August, 2008 19:40:41

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "MRMESSIN"."SYS_EXPORT_TABLE_01": /********@orcl directory=DATA_PUMP_DIR dumpfi
xp.dmp logfile=sqlnet_stage_tab_exp.log tables=MRMESSIN.SQLSET_STAGE_TAB
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 640 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
. . exported "MRMESSIN"."SQLSET_STAGE_TAB" 49.13 KB 44 rows
. . exported "MRMESSIN"."SQLSET_STAGE_TAB_CPLANS" 73.24 KB 175 rows
. . exported "MRMESSIN"."SQLSET_STAGE_TAB_CBINDS" 9.492 KB 0 rows
Master table "MRMESSIN"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MRMESSIN.SYS_EXPORT_TABLE_01 is:
C:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\DPDUMP\SQLSET_STAGE_TAB_EXP.DMP
Job "MRMESSIN"."SYS_EXPORT_TABLE_01" successfully completed at 19:41:16


4. Create a directory for the 11g database into which we are going to import the SQL Tuning Set staging table.
SQL> create directory DATA_PUMP_DIR as 'c:\oracle\admin\orcl11g\dpdump\' ;


5. Copy the file from the 10g database DATA_PUMP_DIR location to the DATA_PUMP_DIR location for the 11g database.
copy C:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\DPDUMP\SQLSET_STAGE_TAB_EXP.DMP c:\oracle\admin\orcl11g\dpdump\



6. Import the SQL tuning set staging table into the 11g database.

impdp /@orcl11g full=y dumpfile=sqlset_stage_tab_exp.dmp directory=DATA_PUMP_DIR logfile=sqlset_stage_tab_imp.log

Import: Release 11.1.0.6.0 - Production on Monday, 04 August, 2008 20:04:38

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "MRMESSIN"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "MRMESSIN"."SYS_IMPORT_FULL_01": /********@orcl11g full=y dumpfile=sqlset_stage_ta
y=DATA_PUMP_DIR logfile=sqlset_stage_tab_imp.log
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "MRMESSIN"."SQLSET_STAGE_TAB" 49.13 KB 44 rows
. . imported "MRMESSIN"."SQLSET_STAGE_TAB_CPLANS" 73.24 KB 175 rows
. . imported "MRMESSIN"."SQLSET_STAGE_TAB_CBINDS" 9.492 KB 0 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Job "MRMESSIN"."SYS_IMPORT_FULL_01" successfully completed at 20:06:22


7. Move the SQL Tuning Sets in the imported Staging table back into database SQL Tuning Sets
See APPENDIX B: Put SQL Tuning Sets in Imported Staging Table Back into Database SQL Tuning Sets


8. Run the analysis on the SQL Tuning Sets (a minimal SQL Performance Analyzer sketch follows).
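
A minimal sketch of that analysis using SQL Performance Analyzer on the imported STS_SPA_1 tuning set; the task and execution names are illustrative.

DECLARE
   spa_task  VARCHAR2(64);
   exec_name VARCHAR2(64);
BEGIN
   -- create an analysis task from the imported SQL Tuning Set
   spa_task := DBMS_SQLPA.create_analysis_task(sqlset_name  => 'STS_SPA_1',
                                               sqlset_owner => 'MRMESSIN',
                                               task_name    => 'SPA_STS_TASK');

   -- test execute the statements in the tuning set on the 11g database
   exec_name := DBMS_SQLPA.execute_analysis_task(task_name      => 'SPA_STS_TASK',
                                                 execution_type => 'TEST EXECUTE',
                                                 execution_name => 'FIRST_RUN');
END;
/

-- produce a summary report of the analysis (set long 1000000 in SQL*Plus to see the full CLOB)
select DBMS_SQLPA.report_analysis_task('SPA_STS_TASK', 'TEXT', 'TYPICAL', 'SUMMARY')
from dual ;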


Appendix A: Create SQL Tuning Set table and Put SQL Tuning Sets into Table

---------------------------------------------------------------------------
-- Script: spa_sts_to_table.sql
-- Author: Michael R. Messina, Senior Consultant TUSC, A Rolta Company
--
-- create a staging table to put the SQL tuning sets into and then
-- put the SQL tuning sets into the stage table to be moved to another
-- database.
---------------------------------------------------------------------------
--
set serveroutput on size 1000000

-- Write output to a file for later review if needed.
spool spa_sts_to_table.out

BEGIN
DBMS_OUTPUT.PUT_LINE ('Creating the staging table to put the sql tuning set into.') ;
-- create the staging table to put the sql tuning set into
DBMS_SQLTUNE.create_stgtab_sqlset(table_name => 'SQLSET_STAGE_TAB',
schema_name => 'MRMESSIN',
tablespace_name => 'USERS');

DBMS_OUTPUT.PUT_LINE ('Putting the first SQL tuning sets into the staging table.') ;
-- put the first SQL tuning set into the staging table
DBMS_SQLTUNE.pack_stgtab_sqlset(sqlset_name => 'STS_SPA_1',
sqlset_owner => 'MRMESSIN',
staging_table_name => 'SQLSET_STAGE_TAB',
staging_schema_owner => 'MRMESSIN');

END ;
/

---------------------------------------------------------------------------
-- Query and view our sql tuning sets
---------------------------------------------------------------------------
select owner, name, created, statement_count from dba_sqlset ;

spool off



APPENDIX B: Put SQL Tuning Sets in Imported Staging Table Back into Database SQL Tuning Sets

---------------------------------------------------------------------------
-- Script: spa_stage_tab_to_sts.sql
-- Author: Michael R. Messina, Senior Consultant TUSC, A Rolta Company
--
-- take the imported staging table for the SQL Tuning Sets and unpack them
-- back into STS for analysis
---------------------------------------------------------------------------
--
-- Write Output to a file for later review if needed.
spool spa_stage_tab_to_sts.out


BEGIN
DBMS_SQLTUNE.unpack_stgtab_sqlset(sqlset_name => '%',
sqlset_owner => 'MRMESSIN',
replace => TRUE,
staging_table_name => 'SQLSET_STAGE_TAB',
staging_schema_owner => 'MRMESSIN');
END;
/

---------------------------------------------------------------------------
-- Query and view our sql tuning sets
---------------------------------------------------------------------------
select owner, name, created, statement_count from dba_sqlset ;

spool off

Friday, February 19, 2010

Just came back from RMOUG and I am excited about all the technical discussions I was able to have with everyone. I saw some good presentations and met some great people. I got through all my presentations and was able to give away a lot of free stuff in them as well. I hope everyone enjoyed the free stuff; I enjoyed giving it out. Come see me at Collaborate and I will give out some more free stuff; you never know what I might give out next.