Saturday, February 20, 2010

Change OEM Grid Control SYSMAN Password

1. Set the ORACLE_HOME and associated environment variables such as PATH, etc.
$ export ORACLE_HOME=/u01/app/oracle/product/oms10g
$ export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/opmn/bin:$PATH

2. Stop the OMS instances that are associated with the repository
$ cd $ORACLE_HOME/opmn/bin
$ opmnctl stopall

3. Log into the OEM Grid Control Repository Database as a DBA user and change the SYSMAN password
$ sqlplus system@oem
SQL> alter user sysman identified by newpassword;
SQL> exit

4. Update the emoms.properties file on each management server host
$ export ORACLE_HOME=/u01/app/oracle/product/oms10g
$ export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/opmn/bin:$PATH
$ cd $ORACLE_HOME/sysman/config
$ cp emoms.properties emoms.properties.backup
$ vi emoms.properties
Change the following properties:
oracle.sysman.eml.mntr.emdRepPwd=new_password
oracle.sysman.eml.mntr.emdRepPwdEncrypted=FALSE

** NOTE: When you restart the OMS, the FALSE setting will change to TRUE and the password will become encrypted in the file

5. On the Management Repository host, stop the Grid Control Agent
$ export ORACLE_HOME=/u01/app/oracle/product/agent10g
$ cd $ORACLE_HOME/bin
$ ./emctl stop agent

6. Update the targets.xml file in the AGENT_HOME/sysman/emd directory and save changes
$ cd $ORACLE_HOME/sysman/emd
$ vi targets.xml

Update the following:
..
<Property NAME="password" VALUE="newpassword" ENCRYPTED="FALSE"/>
..

** NOTE: The FALSE value will change to TRUE and the new password will become encrypted when you restart the Agent.

7. Restart the Agent
$ cd $ORACLE_HOME/bin
$ ./emctl start agent

8. Set the ORACLE_HOME and associated environment variables for the OMS
$ export ORACLE_HOME=/u01/app/oracle/product/oms10g
$ export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/opmn/bin:$PATH

9. Restart each of the OMS instances associated with the Grid Control Repository database.
$ cd $ORACLE_HOME/opmn/bin
$ ./opmnctl startall

10. Verify the change by logging in to the Grid Control console with the new SYSMAN password using a web browser

http://oel54node1.localdomain:7777/em/console/logon/logon

Real Application Testing Database Replay

Apply all Recommended Patches for Database Replay

1. For 11.1.0.6, a patch is recommended for proper Database Replay functionality. The patch for 11.1.0.6 can be downloaded for your platform from Metalink. For Unix the patch number is 6865809; for Windows 32-bit it is 7044721.
** For other database platforms see Appendix A: Real Application Testing for Earlier Releases

2. Shut down all databases and listeners running from the ORACLE_HOME being patched.

3. Ensure that OPatch utility version 11.1.0.6.2 or later is installed prior to applying the patch:

C:\Oracle\product\11.1.0\db_1\OPatch>.\opatch version
Invoking OPatch 11.1.0.6.2

OPatch Version: 11.1.0.6.2

OPatch succeeded.

4. Change to the directory where patch 7044721 was extracted

5. Apply the patch using the opatch utility
opatch apply

** Ensure that all patch instructions are followed completely, including post-apply steps. See the specific patch notes.



6. Bring all databases and listeners back up.


7. Verify database connectivity and listener status
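
A quick sanity check after patching (a sketch; adjust the connect string for your environment). The listener can be checked at the OS level with lsnrctl status; from SQL*Plus as a DBA user, the following confirms the instance is back up and open:

```sql
-- Confirm the instance restarted cleanly; STATUS should be OPEN
select instance_name, status from v$instance;
```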

Setup Source Database for Database Replay Capture
Database releases prior to 11g can be used to capture a workload for replay on an 11g or later database; however, some setup is required. For database release 10.2.0.4 there are no required patches for the capture functionality, but for releases prior to 10.2.0.4 a patch is required.
** See Appendix A: Real Application Testing for Earlier Releases, for Versions prior to 10.2.0.4

Run the wrrenbl.sql script (required only for 10.2.0.4; optional for other releases prior to 11g):
The script enables the PRE_11G_ENABLE_CAPTURE database initialization parameter.

a. cd $ORACLE_HOME/rdbms/admin
b. sqlplus / as sysdba
c. SQL> @wrrenbl.sql
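
After running wrrenbl.sql, the parameter change can be confirmed with a query like the following (a sketch; it reads v$parameter, which requires the appropriate privilege):

```sql
-- PRE_11G_ENABLE_CAPTURE should now show TRUE
select name, value from v$parameter
where name = 'pre_11g_enable_capture';
```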



Prepare to Capture Workload

On the source database where we will be capturing the workload for replay we need to prepare the database for capture.

1. Create a directory location where the workload capture file can be written.
a. C:\Oracle>mkdir dbcapture

2. Create a directory object in the source database pointing to the location where the workload is to be captured.
a. SQL> create directory dbcapture as 'c:\oracle\dbcapture' ;

3. Determine the workload to be captured. In this example the workload is artificially generated using a script process.
** see Appendix B: Script that Generates an Artificial Workload



Capture Workload


1. Start the capture process
a. SQL> exec dbms_workload_capture.start_capture('NRML_ACTIVITY', 'DBCAPTURE') ;

2. Run the workload
a. This is where the script for the artificial workload is executed

3. When the workload is complete or you have captured the desired workload stop the capture process.
a. SQL> exec dbms_workload_capture.finish_capture() ;

4. Execute a report on the Capture.
** See Appendix C: Generate Workload Capture Report (HTML)



Prepare the Workload for Replay

** The database that processes the captured workload must be 11.1.0.6 or above, as the packages to process the captured workload are not available in prior releases.

1. Create a directory location where the captured workload will be placed for processing and replay, on both the processing database and the replay database if they are different. Best practice is to process the workload on the same database that will execute the replay.
a. C:\Oracle> mkdir Replay

2. Copy the captured workload files to another directory if on the same host as the database where the workload was captured, or to the destination system location where they will be processed.
a. copy C:\Oracle\dbcapture\* C:\Oracle\Replay

3. Create a directory object in the Oracle database that will process the captured workload.
a. SQL> create directory REPLAY as 'c:\Oracle\Replay' ;


Process the Workload for Replay

** This is a resource-intensive process, so it should be done on a non-production system. If you captured the workload on your production system, move the capture files to a test or development system to prepare them for replay.

1. Process the captured workload
a. SQL> exec dbms_workload_replay.process_capture ('REPLAY') ;

The process will create the following files in the REPLAY directory during the processing of the captured workload.

wcr_login.pp
WCR_SCN_ORDER.EXTB
WCR_SEQ_DATA.EXTB
WCR_CONN_DATA.EXTB
wcr_process.wmd

Prepare to Replay the Workload

1. Go to the database where the replay is to be executed, if different from the database that processed the workload. If the replay database is the same as the database that processed the workload, skip to step 3; otherwise continue with step 2.

2. If not already done, go to the Prepare the Workload for Replay section and execute the steps on the database that will replay the workload.

3. Initialize the replay
a. SQL> exec dbms_workload_replay.initialize_replay ('TEST_REPLAY','REPLAY') ;

4. Prepare for Workload Replay
a. SQL> exec dbms_workload_replay.prepare_replay(synchronization=>TRUE) ;

5. Calibrate the Workload to determine the number of workload replay clients.
a. At the command line change to the replay directory
i. cd C:\Oracle\Replay
ii. c:\Oracle\Replay> wrc mode=calibrate

Workload Replay Client: Release 11.1.0.6.0 - Production on Thu Jul 24 10:38:41 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Report for Workload in: .
-----------------------

Recommendation:
Consider using at least 1 clients divided among 1 CPU(s).

Workload Characteristics:
- max concurrency: 9 sessions
- total number of sessions: 46

Assumptions:
- 1 client process per 50 concurrent sessions
- 4 client process per CPU
- think time scale = 100
- connect time scale = 100
- synchronization = TRUE


Execute the Workload Replay

Now that the workload has been captured and processed, it is ready to be replayed.

1. Note the number of workload replay clients indicated in the previous section that will be needed to execute the processed workload.

2. Open a window for each workload replay client that will be needed, as these will each be separate executions.
** Only one client is needed for the example workload, as indicated in the previous step

3. From the command line, enter the replay command with the wrc utility for each replay client needed; there should be one command in each window.
a. C:\Oracle\proccapture>wrc system/xxxxx@orcl11g replaydir=c:\Oracle\Replay

4. Start each of the replay clients needed.

C:\Oracle\proccapture>wrc system/xxxxx@orcl11g replaydir=c:\Oracle\Replay

Workload Replay Client: Release 11.1.0.6.0 - Production on Thu Jul 24 15:09:15 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Wait for the replay to start (15:09:15)
Replay started (15:09:40)
Replay finished (16:01:41)


5. Optionally take an AWR snapshot. AWR information will be available for the period during which the workload replay executed; it is generally recommended that the replay cover at least one complete snapshot period, or that snapshots be taken immediately before the start and immediately after the workload execution, to provide the most useful data.

SQL> execute DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();

6. In another window, log in to the database where the workload replay is to be executed and start the replay process. You will see an indication that the replay has started.

$ sqlplus /@orcl11g

SQL*Plus: Release 11.1.0.6.0 - Production on Thu Jul 24 13:55:39 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> exec dbms_workload_replay.start_replay ;

PL/SQL procedure successfully completed.


7. Monitor the workload replay until it completes.
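
One way to watch the replay from another session is to poll the replay history view (a sketch against the 11g data dictionary):

```sql
-- STATUS moves from IN PROGRESS to COMPLETED when the replay finishes
select id, status, start_time, end_time
from dba_workload_replays
order by id;
```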


8. Once the workload replay is complete, optionally take another AWR snapshot.

SQL> execute DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();


9. Generate the workload replay report and evaluate the results.
** See Appendix D: Generate Workload Replay Report (HTML)




Appendix A: Real Application Testing for Earlier Releases

See Metalink Note: 560977.1


Appendix B: Script that Generates an Artificial Workload

-- create_workload.sql
-- put in sql script to be executed to generate workload.
create table orders tablespace users as select * from oe.orders ;

create table order_items tablespace users as select * from oe.order_items ;


-- insert more rows into orders and order items to bulk up amount of data for
-- a more true high volume test.
declare
cursor my_cur is
select * from oe.orders ;
lcnt number ;
v_order_id oe.orders.order_id%type ;
v_order_date oe.orders.order_date%type ;
v_order_mode oe.orders.order_mode%type ;
v_customer_id oe.orders.customer_id%type ;
v_order_status oe.orders.order_status%type ;
v_order_total oe.orders.order_total%type ;
v_sales_rep_id oe.orders.sales_rep_id%type ;
v_promotion_id oe.orders.promotion_id%type ;
cursor my_cur_2 is
select line_item_id, product_id, unit_price, quantity
from oe.order_items where order_id = v_order_id ;
v_line_item_id oe.order_items.line_item_id%type ;
v_product_id oe.order_items.product_id%type ;
v_unit_price oe.order_items.unit_price%type ;
v_quantity oe.order_items.quantity%type ;

begin
select max(order_id) + 1 into lcnt from orders ;

loop
if lcnt = 10000 then
exit ;
end if ;

open my_cur ;
loop
fetch my_cur into v_order_id, v_order_date, v_order_mode, v_customer_id,
v_order_status, v_order_total, v_sales_rep_id, v_promotion_id ;
exit when my_cur%notfound ;

insert into orders values (lcnt, v_order_date, v_order_mode, v_customer_id,
v_order_status, v_order_total, v_sales_rep_id, v_promotion_id) ;
commit ;

open my_cur_2 ;
loop
fetch my_cur_2 into v_line_item_id, v_product_id, v_unit_price, v_quantity ;
exit when my_cur_2%notfound ;

insert into order_items values (lcnt, v_line_item_id, v_product_id, v_unit_price, v_quantity) ;
end loop ;
close my_cur_2 ;

lcnt := lcnt + 1 ;
end loop ;

close my_cur ;
end loop ;
end ;
/


Appendix C: Generate Workload Capture Report (HTML)


-- Capture Report
DECLARE
cap_id NUMBER;
cap_rpt CLOB;
buffer VARCHAR2(32767);
buffer_size CONSTANT BINARY_INTEGER := 32767;
amount BINARY_INTEGER;
offset NUMBER(38);
file_handle UTL_FILE.FILE_TYPE;
directory_name CONSTANT VARCHAR2(80) := 'DBCAPTURE';
v_filename CONSTANT VARCHAR2(80) := 'Capture.html';

BEGIN
cap_id := DBMS_WORKLOAD_CAPTURE.GET_CAPTURE_INFO(dir=>directory_name);
cap_rpt := DBMS_WORKLOAD_CAPTURE.REPORT(capture_id => cap_id,
format => DBMS_WORKLOAD_CAPTURE.TYPE_HTML);

-- Write Report to file
DBMS_OUTPUT.ENABLE(100000);

-- --------------------------------
-- OPEN NEW XML FILE IN WRITE MODE
-- --------------------------------
file_handle := UTL_FILE.FOPEN(location => directory_name,
filename => v_filename,
open_mode => 'w',
max_linesize => buffer_size);

amount := buffer_size;
offset := 1;

WHILE amount >= buffer_size
LOOP
DBMS_LOB.READ(lob_loc => cap_rpt,
amount => amount,
offset => offset,
buffer => buffer);

offset := offset + amount;

UTL_FILE.PUT(file => file_handle,
buffer => buffer);

UTL_FILE.FFLUSH(file => file_handle);
END LOOP;

UTL_FILE.FCLOSE(file => file_handle);
END;
/



Appendix D: Generate Workload Replay Report (HTML)

-- Replay Report
DECLARE
cap_id NUMBER;
rep_id NUMBER;
rep_rpt CLOB;
buffer VARCHAR2(32767);
buffer_size CONSTANT BINARY_INTEGER := 32767;
amount BINARY_INTEGER;
offset NUMBER(38);
file_handle UTL_FILE.FILE_TYPE;
directory_name CONSTANT VARCHAR2(80) := 'PROCCAPTURE';
v_filename CONSTANT VARCHAR2(80) := 'Replay.html';

BEGIN
cap_id := DBMS_WORKLOAD_REPLAY.GET_REPLAY_INFO(dir => 'PROCCAPTURE');

select max(id)
into rep_id
from dba_workload_replays
where capture_id = cap_id ;

rep_rpt := DBMS_WORKLOAD_REPLAY.REPORT(replay_id => rep_id,
format => DBMS_WORKLOAD_REPLAY.TYPE_HTML);


-- Write Report to file
DBMS_OUTPUT.ENABLE(100000);

-- --------------------------------
-- OPEN NEW XML FILE IN WRITE MODE
-- --------------------------------
file_handle := UTL_FILE.FOPEN(location => directory_name,
filename => v_filename,
open_mode => 'w',
max_linesize => buffer_size);

amount := buffer_size;
offset := 1;

WHILE amount >= buffer_size
LOOP
DBMS_LOB.READ(lob_loc => rep_rpt,
amount => amount,
offset => offset,
buffer => buffer);

offset := offset + amount;

UTL_FILE.PUT(file => file_handle,
buffer => buffer);

UTL_FILE.FFLUSH(file => file_handle);
END LOOP;

UTL_FILE.FCLOSE(file => file_handle);
END;
/

Transfer SQL Tuning Sets between Databases

1. Create a staging table to hold the SQL Tuning Sets, then put the SQL Tuning Sets into the staging table.
See Appendix A: Create SQL Tuning Set table and put SQL Tuning Sets into table


2. Ensure that you have a directory object in Oracle to use to export the table using Data Pump
SQL> create directory DATA_PUMP_DIR as 'C:\oracle\product\10.2.0\admin\orcl\dpdump\' ;


3. Export the SQL Tuning Set staging table to a file; analyzing the SQL Tuning Set will need to be done in an 11g database.

expdp /@orcl directory=DATA_PUMP_DIR dumpfile=sqlset_stage_tab_exp.dmp logfile=sqlnet_stage_tab_exp.log tables=mrmessin.sqlset_stage_tab

Export: Release 10.2.0.4.0 - Production on Monday, 04 August, 2008 19:40:41

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "MRMESSIN"."SYS_EXPORT_TABLE_01": /********@orcl directory=DATA_PUMP_DIR dumpfi
xp.dmp logfile=sqlnet_stage_tab_exp.log tables=MRMESSIN.SQLSET_STAGE_TAB
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 640 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
. . exported "MRMESSIN"."SQLSET_STAGE_TAB" 49.13 KB 44 rows
. . exported "MRMESSIN"."SQLSET_STAGE_TAB_CPLANS" 73.24 KB 175 rows
. . exported "MRMESSIN"."SQLSET_STAGE_TAB_CBINDS" 9.492 KB 0 rows
Master table "MRMESSIN"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MRMESSIN.SYS_EXPORT_TABLE_01 is:
C:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\DPDUMP\SQLSET_STAGE_TAB_EXP.DMP
Job "MRMESSIN"."SYS_EXPORT_TABLE_01" successfully completed at 19:41:16


4. Create a directory object in the 11g database into which we are going to import the SQL Tuning Set staging table.
create directory DATA_PUMP_DIR as 'c:\oracle\admin\orcl11g\dpdump\' ;


5. Copy the file from the 10g database's DATA_PUMP_DIR location to the DATA_PUMP_DIR location for the 11g database.
copy C:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\DPDUMP\SQLSET_STAGE_TAB_EXP.DMP c:\oracle\admin\orcl11g\dpdump\



6. Import the SQL tuning set staging table into the 11g database.

impdp /@orcl11g full=y dumpfile=sqlset_stage_tab_exp.dmp directory=DATA_PUMP_DIR logfile=sqlset_stage_tab_imp.log

Import: Release 11.1.0.6.0 - Production on Monday, 04 August, 2008 20:04:38

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "MRMESSIN"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "MRMESSIN"."SYS_IMPORT_FULL_01": /********@orcl11g full=y dumpfile=sqlset_stage_ta
y=DATA_PUMP_DIR logfile=sqlset_stage_tab_imp.log
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "MRMESSIN"."SQLSET_STAGE_TAB" 49.13 KB 44 rows
. . imported "MRMESSIN"."SQLSET_STAGE_TAB_CPLANS" 73.24 KB 175 rows
. . imported "MRMESSIN"."SQLSET_STAGE_TAB_CBINDS" 9.492 KB 0 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Job "MRMESSIN"."SYS_IMPORT_FULL_01" successfully completed at 20:06:22


7. Move the SQL Tuning Sets in the imported Staging table back into database SQL Tuning Sets
See APPENDIX B: Put SQL Tuning Sets in Imported Staging Table Back into Database SQL Tuning Sets


8. Run Analysis on the SQL Tuning Sets
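
One way to run the analysis is with SQL Performance Analyzer (the DBMS_SQLPA package, available in 11g). The sketch below assumes the STS_SPA_1 tuning set unpacked in the previous step; the task name SPA_TASK_1 is hypothetical:

```sql
declare
  tname varchar2(64);
begin
  -- Create an analysis task over the imported SQL Tuning Set
  tname := dbms_sqlpa.create_analysis_task(
             sqlset_name  => 'STS_SPA_1',
             sqlset_owner => 'MRMESSIN',
             task_name    => 'SPA_TASK_1');

  -- Test-execute the statements to collect execution statistics
  dbms_sqlpa.execute_analysis_task(
    task_name      => 'SPA_TASK_1',
    execution_type => 'TEST EXECUTE',
    execution_name => 'first_run');
end;
/

-- View the analysis report
select dbms_sqlpa.report_analysis_task('SPA_TASK_1', 'text') from dual;
```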


Appendix A: Create SQL Tuning Set table and Put SQL Tuning Sets into Table

---------------------------------------------------------------------------
-- Script: spa_sts_to_table.sql
-- Author: Michael R. Messina, Senior Consultant TUSC, A Rolta Company
--
-- create a staging table to put the SQL tuning sets into and then
-- put the SQL tuning sets into the stage table to be moved to another
-- database.
---------------------------------------------------------------------------
--
set serveroutput on size 1000000

BEGIN
DBMS_OUTPUT.PUT_LINE ('Creating the staging table to put the sql tuning set into.') ;
-- create the staging table to put the sql tuning set into
DBMS_SQLTUNE.create_stgtab_sqlset(table_name => 'SQLSET_STAGE_TAB',
schema_name => 'MRMESSIN',
tablespace_name => 'USERS');

DBMS_OUTPUT.PUT_LINE ('Putting the first SQL tuning set into the staging table.') ;
-- put the first SQL tuning set into the staging table
DBMS_SQLTUNE.pack_stgtab_sqlset(sqlset_name => 'STS_SPA_1',
sqlset_owner => 'MRMESSIN',
staging_table_name => 'SQLSET_STAGE_TAB',
staging_schema_owner => 'MRMESSIN');

END ;
/

---------------------------------------------------------------------------
-- Query and view our sql tuning sets
---------------------------------------------------------------------------
select owner, name, created, statement_count from dba_sqlset ;

spool off



APPENDIX B: Put SQL Tuning Sets in Imported Staging Table Back into Database SQL Tuning Sets

---------------------------------------------------------------------------
-- Script: spa_stage_tab_to_sts.sql
-- Author: Michael R. Messina, Senior Consultant TUSC, A Rolta Company
--
-- take the imported staging table for the SQL Tuning Sets and unpack them
-- back into STS for analysis
---------------------------------------------------------------------------
--
-- Write Output to a file for later review if needed.
spool spa_stage_tab_to_sts.out


BEGIN
DBMS_SQLTUNE.unpack_stgtab_sqlset(sqlset_name => '%',
sqlset_owner => 'MRMESSIN',
replace => TRUE,
staging_table_name => 'SQLSET_STAGE_TAB',
staging_schema_owner => 'MRMESSIN');
END;
/

---------------------------------------------------------------------------
-- Query and view our sql tuning sets
---------------------------------------------------------------------------
select owner, name, created, statement_count from dba_sqlset ;

spool off

Friday, February 19, 2010

Just came back from RMOUG and I'm excited about all the technical discussions I was able to have with everyone. I saw some good presentations and met some great people. I got through all my presentations and was able to give away a lot of free stuff in them as well. Hope everyone enjoyed the free stuff; I enjoyed giving it out. Come see me at Collaborate, where I will give out some more free stuff. You never know what I might give out next.