Tuesday, November 27, 2007

Oracle 11g RMAN Data Recovery Advisor

The Data Recovery Advisor helps simplify recoveries. It comes in two flavors: the RMAN command line and Oracle Enterprise Manager Database Control. This post covers the command line.

The command line makes it easy to script a recovery and schedule it via cron or another scheduling utility. It is accessed through the rman command line.

Let's use a missing datafile for our example:

SQL*Plus: Release 11.1.0.6.0 - Production on Tue Nov 27 14:45:28 2007

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 535662592 bytes
Fixed Size 1334380 bytes
Variable Size 209716116 bytes
Database Buffers 318767104 bytes
Redo Buffers 5844992 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\ORCL11G\USERS01.DBF'

rman target=/

Recovery Manager: Release 11.1.0.6.0 - Production on Tue Nov 27 15:17:51 2007

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: ORCL11G (DBID=744414708, not open)



RMAN can list any detected failures:

RMAN> list failure ;

using target database control file instead of recovery catalog
List of Database Failures
=========================

Failure ID Priority Status Time Detected Summary
---------- -------- --------- ------------- -------
222 HIGH OPEN 27-NOV-07 One or more non-system datafiles are
missing

If there are no failures, Oracle returns:
using target database control file instead of recovery catalog
no failures found that match specification

The message indicates that a non-system tablespace datafile is missing. A missing datafile is a serious failure, so the priority is HIGH.

Because it is a non-system tablespace datafile, the database can stay up with the tablespace the datafile belongs to offline.


We can also list the detail for a specific failure:

RMAN> list failure 222 detail ;

List of Database Failures
=========================

Failure ID Priority Status Time Detected Summary
---------- -------- --------- ------------- -------
222 HIGH OPEN 27-NOV-07 One or more non-system datafiles are
missing
Impact: See impact for individual child failures
List of child failures for parent failure ID 222
Failure ID Priority Status Time Detected Summary
---------- -------- --------- ------------- -------
225 HIGH OPEN 27-NOV-07 Datafile 4: 'C:\ORACLE\ORADATA\ORC
L11G\USERS01.DBF' is missing
Impact: Some objects in tablespace USERS might be unavailable



RMAN will advise on automated repair options.

RMAN> advise failure ;

List of Database Failures
=========================

Failure ID Priority Status Time Detected Summary
---------- -------- --------- ------------- -------
222 HIGH OPEN 27-NOV-07 One or more non-system datafiles are
missing
Impact: See impact for individual child failures
List of child failures for parent failure ID 222
Failure ID Priority Status Time Detected Summary
---------- -------- --------- ------------- -------
225 HIGH OPEN 27-NOV-07 Datafile 4: 'C:\ORACLE\ORADATA\ORC
L11G\USERS01.DBF' is missing
Impact: Some objects in tablespace USERS might be unavailable

analyzing automatic repair options; this may take some time
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=152 device type=DISK
analyzing automatic repair options complete

Mandatory Manual Actions
========================
no manual actions available

Optional Manual Actions
=======================
1. If file C:\ORACLE\ORADATA\ORCL11G\USERS01.DBF was unintentionally renamed or
moved, restore it

Automated Repair Options
========================
Option Repair Description
------ ------------------
1 Restore and recover datafile 4
Strategy: The repair includes complete media recovery with no data loss
Repair script: c:\oracle\diag\rdbms\orcl11g\orcl11g\hm\reco_1022222764.hm



Now we can have RMAN preview the repair before executing it:

RMAN> repair failure preview ;

Strategy: The repair includes complete media recovery with no data loss
Repair script: c:\oracle\diag\rdbms\orcl11g\orcl11g\hm\reco_1022222764.hm

contents of repair script:
# restore and recover datafile
restore datafile 4;
recover datafile 4;




Finally, RMAN can repair the failure automatically:

RMAN> repair failure ;

Strategy: The repair includes complete media recovery with no data loss
Repair script: c:\oracle\diag\rdbms\orcl11g\orcl11g\hm\reco_1022222764.hm

contents of repair script:
# restore and recover datafile
restore datafile 4;
recover datafile 4;

Do you really want to execute the above repair (enter YES or NO)? YES
executing repair script

Starting restore at 27-NOV-07
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to C:\ORACLE\ORADATA\ORCL11G\USERS0
1.DBF
channel ORA_DISK_1: reading from backup piece C:\ORACLE\FLASH_RECOVERY_AREA\ORCL
11G\BACKUPSET\2007_11_27\O1_MF_NNNDF_TAG20071127T144712_3NRX264W_.BKP
channel ORA_DISK_1: piece handle=C:\ORACLE\FLASH_RECOVERY_AREA\ORCL11G\BACKUPSET
\2007_11_27\O1_MF_NNNDF_TAG20071127T144712_3NRX264W_.BKP tag=TAG20071127T144712
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
Finished restore at 27-NOV-07

Starting recover at 27-NOV-07
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 27-NOV-07
repair failure complete

Do you want to open the database (enter YES or NO)? YES
database opened
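Because everything above is plain RMAN syntax, the whole advise-and-repair flow can be scripted for cron. A minimal sketch, assuming OS authentication to the target database and rman in the PATH; the /tmp file names are my own choices:

```shell
#!/bin/sh
# Sketch: run the Data Recovery Advisor flow non-interactively so it can be
# scheduled via cron. Assumes OS authentication and rman in the PATH; the
# /tmp file names are arbitrary.

CMDFILE=/tmp/dra_repair.rcv

# Build the RMAN command file. NOPROMPT suppresses the YES/NO confirmation
# so the repair can run unattended.
cat > "$CMDFILE" <<'EOF'
list failure;
advise failure;
repair failure noprompt;
EOF

# Only invoke RMAN when it is actually installed on this host.
if command -v rman >/dev/null 2>&1; then
    rman target=/ cmdfile="$CMDFILE" log=/tmp/dra_repair.log
fi
```

Review the log afterwards; an unattended NOPROMPT repair is only appropriate once you trust the advisor's repair options for your environment.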

Oracle 11g RMAN Proactive Health Check

I. Database

RMAN> validate database ;

Starting validate at 27-NOV-07
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=138 device type=DISK
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
input datafile file number=00001 name=C:\ORACLE\ORADATA\ORCL11G\SYSTEM01.DBF
input datafile file number=00002 name=C:\ORACLE\ORADATA\ORCL11G\SYSAUX01.DBF
input datafile file number=00004 name=C:\ORACLE\ORADATA\ORCL11G\USERS01.DBF
input datafile file number=00006 name=C:\ORACLE\ORADATA\ORCL11G\UNDOTBS02.DBF
input datafile file number=00005 name=C:\ORACLE\ORADATA\ORCL11G\EXAMPLE01.DBF
channel ORA_DISK_1: validation complete, elapsed time: 00:03:25
List of Datafiles
=================

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
1 OK 0 11278 512000 13548003
File Name: C:\ORACLE\ORADATA\ORCL11G\SYSTEM01.DBF
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 487268
Index 0 11166
Other 0 2288

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
2 OK 0 26072 75512 13548017
File Name: C:\ORACLE\ORADATA\ORCL11G\SYSAUX01.DBF
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 12195
Index 0 10225
Other 0 27020

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
4 OK 0 51199 64000 13457224
File Name: C:\ORACLE\ORADATA\ORCL11G\USERS01.DBF
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 12357
Index 0 33
Other 0 411

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
5 OK 0 1719 12800 13345172
File Name: C:\ORACLE\ORADATA\ORCL11G\EXAMPLE01.DBF
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 4406
Index 0 1264
Other 0 5411

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
6 OK 0 23908 64000 13548017
File Name: C:\ORACLE\ORADATA\ORCL11G\UNDOTBS02.DBF
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 0
Index 0 0
Other 0 40092

channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
including current control file for validation
including current SPFILE in backup set
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
List of Control File and SPFILE
===============================

File Type Status Blocks Failing Blocks Examined
------------ ------ -------------- ---------------
SPFILE OK 0 2
Control File OK 0 624
Finished validate at 27-NOV-07



II. Tablespace

RMAN> validate tablespace users ;

Starting validate at 27-NOV-07
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=128 device type=DISK
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
input datafile file number=00004 name=C:\ORACLE\ORADATA\ORCL11G\USERS01.DBF
channel ORA_DISK_1: validation complete, elapsed time: 00:00:15
List of Datafiles
=================

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
4 OK 0 51199 64000 13457224
File Name: C:\ORACLE\ORADATA\ORCL11G\USERS01.DBF
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 12357
Index 0 33
Other 0 411

Finished validate at 27-NOV-07



III. Datafile

RMAN> validate datafile 4 block 1 ;

Starting validate at 27-NOV-07
using channel ORA_DISK_1
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
input datafile file number=00004 name=C:\ORACLE\ORADATA\ORCL11G\USERS01.DBF
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
List of Datafiles
=================

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
4 OK 0 0 1 0
File Name: C:\ORACLE\ORADATA\ORCL11G\USERS01.DBF
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 0
Index 0 0
Other 0 1

Finished validate at 27-NOV-07
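Since validate writes its per-file summary in a fixed layout, the output can be scanned from a script as part of a proactive health check. A sketch; the non-OK status value and the log file name are my own assumptions:

```shell
#!/bin/sh
# Sketch: scan a captured "validate database" log for datafiles whose status
# is not OK or that have blocks marked corrupt, so a cron job can alert.
# LOGFILE is an assumption; capture it with rman ... log=validate_db.log.

LOGFILE="${1:-validate_db.log}"

check_validate_log() {
    # Datafile summary lines look like:
    #   4 OK 0 51199 64000 13457224
    # i.e. file number, status, marked-corrupt count, ... Flag any line
    # where the status is not OK or the corrupt-block count is non-zero.
    awk '/^[0-9]+ +(OK|FAILED)/ {
        if ($2 != "OK" || $3 != 0)
            printf "file %s status %s corrupt %s\n", $1, $2, $3
    }' "$1"
}

if [ -f "$LOGFILE" ]; then
    check_validate_log "$LOGFILE"
fi
```

An empty result means every datafile in the log validated clean.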

Wednesday, November 14, 2007

Oracle Enterprise Manager Agent Upload Troubleshooting

Ever find that your Oracle Enterprise Manager Grid Control agents are not uploading? There are many reasons an agent may fail to upload.

Examine the agent


1. First I always check the status of the agent
emctl status agent

2. Examine the status output; you may see that the agent is not uploading:
Last successful upload : (none)
Last attempted upload : (none)
Total Megabytes of XML files uploaded so far : 0.00
Number of XML files pending upload : 828
Size of XML files pending upload(MB) : 56.61
Available disk space on upload filesystem : 8.50%
Last successful heartbeat to OMS : 2007-10-21 10:55:39

3. Next I always attempt a manual upload to verify the upload problem
emctl upload agent

4. Check the following log file for errors
$AGENT_HOME/sysman/log/emagent.trc

In most cases the cause is either a bad .xml file or an inability to contact the management service. If the agent cannot contact the management service, you will need to fix the OMS or the connectivity to it. If the cause is a bad XML file, you can remove that file from the upload location. I usually save the file to another location first so that I can send it to Oracle Support, where hopefully they can tell me why the file would not upload. Once you have removed the file from the upload location, attempt a manual upload again:

emctl upload agent

Check the log file again for errors. You can repeat this for subsequent bad XML files, but if all else fails you can clear the agent completely. Keep in mind that you will lose the data in the pending XML files, so that data will never reach the Grid Control Repository.
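For monitoring, the pending-upload figure in the status output can be extracted and checked from cron. A sketch; the threshold and the alert action are my own assumptions:

```shell
#!/bin/sh
# Sketch: pull the "Number of XML files pending upload" figure out of
# "emctl status agent" output and warn when it crosses a threshold.
# THRESHOLD and the warning action are assumptions; adapt to your alerting.

THRESHOLD=100

pending_uploads() {
    # Expects "emctl status agent" output on stdin; prints the pending count.
    awk -F: '/Number of XML files pending upload/ { gsub(/ /, "", $2); print $2 }'
}

if command -v emctl >/dev/null 2>&1; then
    count=$(emctl status agent | pending_uploads)
    if [ "${count:-0}" -gt "$THRESHOLD" ]; then
        echo "agent has $count XML files pending upload" >&2
    fi
fi
```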


Clear out the agent


1. Stop the agent on the target node
emctl stop agent

2. Delete any pending upload files from the agent home
rm -r $ORACLE_HOME/sysman/emd/state/*

rm -r $ORACLE_HOME/sysman/emd/collection/*
rm -r $ORACLE_HOME/sysman/emd/upload/*
rm $ORACLE_HOME/sysman/emd/lastupld.xml
rm $ORACLE_HOME/sysman/emd/agntstmp.txt
rm $ORACLE_HOME/sysman/emd/blackouts.xml


3. Clear the agent state
emctl clearstate


4. Start the agent again
emctl start agent

5. Force an upload to the Oracle Management Server/Service (OMS)
emctl upload agent
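The clear-out steps above can be sketched as a single script. It discards all pending upload data, so it prompts first; the emctl commands follow the steps as written, and the confirmation wording is my own:

```shell
#!/bin/sh
# Sketch of the agent clean-up steps above. This discards all pending upload
# data, so it prompts before doing anything. Assumes ORACLE_HOME points at
# the agent home and emctl is in the PATH.

clear_agent_state() {
    # Remove the pending state/collection/upload files from an agent home.
    home="$1"
    rm -rf "$home/sysman/emd/state"/* \
           "$home/sysman/emd/collection"/* \
           "$home/sysman/emd/upload"/*
    rm -f  "$home/sysman/emd/lastupld.xml" \
           "$home/sysman/emd/agntstmp.txt" \
           "$home/sysman/emd/blackouts.xml"
}

if command -v emctl >/dev/null 2>&1 && [ -n "$ORACLE_HOME" ]; then
    printf 'Clear all pending uploads from %s? (yes/no) ' "$ORACLE_HOME"
    read answer
    if [ "$answer" = "yes" ]; then
        emctl stop agent
        clear_agent_state "$ORACLE_HOME"
        emctl clearstate
        emctl start agent
        emctl upload agent
    fi
fi
```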

Monday, November 12, 2007

Oracle 11g Advanced Compression

Oracle 11g introduces a new feature that offers the promise of tables that take less space, and therefore a smaller database. A smaller database takes less disk space, which lowers the cost of disk storage for your databases.

With database sizes continually growing at an alarming rate, the ability to increase the amount of data stored per gigabyte is exciting.

Oracle first introduced compression in 9i, where you could compress tables upon creation via operations like CREATE TABLE AS SELECT or direct-load operations. This is well suited for initial loads of data, but keeping the data compressed required maintenance. With pressure to increase database availability, this compression was not well suited to OLTP systems, since most data is not direct loaded. Oracle's introduction of Advanced Compression changes that. Keep in mind, though, that Advanced Compression is a new option with 11g and therefore carries additional license requirements.

Oracle boasts 2-3x less disk space with minimal DML overhead.

Is that really true? Can we expect a table to be half the size it is today uncompressed? If so, will it remain half the size over time?

Also, how much does the compression affect read and write operations? Oracle claims that read operations are not impacted and that write operations see minimal impact.

I ran a small test to see if I could quickly get at least some answers to those questions.

In my test I created a compressed table with CREATE TABLE AS SELECT, much like I would create and load a compressed table in 9i, and created an uncompressed table with the same initial data load. Then I loaded more data with an INSERT ... SELECT in a loop, running the insert upwards of 30 times and measuring the table size and load duration each time.

I have included the script I used and the output from an execution with the load loop running 30 times. Keep in mind that this is a small test to get familiar with the feature and form some initial impressions.

Script used in testing:

-------------------------------------------------
-- script: test_11g_compression.sql
-- author: Michael Messina
--
-- description: Script that will create tables
-- with and without compression
-- load tables with data and
-- show sizes and timings
-------------------------------------------------
-- spool to a log file for us
spool test_11g_compression.log

-- Let's drop the tables for our test
drop table test_compression_compressed ;
drop table test_compression_uncompressed ;

-- let's see how long it takes to create the tables
-- and load the data for compressed and
-- uncompressed versions.
set timing on

-- create a table compressed and load with data
create table test_compression_compressed
COMPRESS FOR ALL OPERATIONS
as
select * from dba_tables ;

-- create a table and load with data uncompressed
create table test_compression_uncompressed
as
select * from dba_tables ;

set timing off

-- Let's see how large our compressed table is
select sum(bytes)/1024 from dba_segments where owner = 'SYSTEM' and segment_name = 'TEST_COMPRESSION_COMPRESSED' ;

-- Let's see how large our uncompressed table is
select sum(bytes)/1024 from dba_segments where owner = 'SYSTEM' and segment_name = 'TEST_COMPRESSION_UNCOMPRESSED' ;

-----------------------------------------------------------------------------------------------------------------------
set timing on

-- Now let's insert a ton more records into the compressed table and show timing
declare
l number := 0 ;

begin

loop
if l > 30 then
exit ;
end if ;

l := l + 1 ;

insert into test_compression_compressed select * from dba_tables ;
end loop ;
end ;
/


-- Now let's insert a ton more records into the uncompressed table and show timing
declare
l number := 0 ;

begin

loop
if l > 30 then
exit ;
end if ;

l := l + 1 ;

insert into test_compression_uncompressed select * from dba_tables ;
end loop ;
end ;
/

set timing off

-- Let's see how large our compressed table is
select sum(bytes)/1024 from dba_segments where owner = 'SYSTEM' and segment_name = 'TEST_COMPRESSION_COMPRESSED' ;

-- Let's see how large our uncompressed table is
select sum(bytes)/1024 from dba_segments where owner = 'SYSTEM' and segment_name = 'TEST_COMPRESSION_UNCOMPRESSED' ;

-- Turn the spool off
spool off


Results from script execution:

SQL> @test_11g_compression.sql

Table dropped.


Table dropped.


Table created.

Elapsed: 00:00:00.48

Table created.

Elapsed: 00:00:00.39

SUM(BYTES)/1024
---------------
192


SUM(BYTES)/1024
---------------
704


PL/SQL procedure successfully completed.

Elapsed: 00:00:09.17

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.79

SUM(BYTES)/1024
---------------
9216


SUM(BYTES)/1024
---------------
20480

SQL>



Query performance results:

Uncompressed table:

SQL> alter system flush buffer_cache ;

System altered.

Elapsed: 00:00:00.40
SQL> select count(*) from test_compression_uncompressed ;

COUNT(*)
----------
80767

Elapsed: 00:00:00.60
SQL> alter system flush buffer_cache ;

System altered.

Elapsed: 00:00:00.07
SQL> select count(*) from test_compression_uncompressed ;

COUNT(*)
----------
80767

Elapsed: 00:00:00.57
SQL> alter system flush buffer_cache ;

System altered.

Elapsed: 00:00:00.07
SQL> select count(*) from test_compression_uncompressed ;

COUNT(*)
----------
80767

Elapsed: 00:00:00.61
SQL>


Compressed table:

SQL> alter system flush buffer_cache ;

System altered.

Elapsed: 00:00:00.06
SQL> select count(*) from test_compression_compressed ;

COUNT(*)
----------
80766

Elapsed: 00:00:00.40
SQL> alter system flush buffer_cache ;

System altered.

Elapsed: 00:00:00.09
SQL> select count(*) from test_compression_compressed ;

COUNT(*)
----------
80766

Elapsed: 00:00:00.39
SQL>


Trace Results from full table scan reads:

SQL> set autotrace traceonly
SQL> set timing on
SQL>
SQL> alter system flush buffer_cache ;

System altered.

Elapsed: 00:00:00.03
SQL> select * from test_compression_compressed ;

80766 rows selected.

Elapsed: 00:00:01.93

Execution Plan
----------------------------------------------------------
Plan hash value: 1423006034

-------------------------------------------------------------------------------------------------
| Id  | Operation         | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                             | 57433 |   26M |   311   (1)| 00:00:04 |
|   1 |  TABLE ACCESS FULL| TEST_COMPRESSION_COMPRESSED | 57433 |   26M |   311   (1)| 00:00:04 |
-------------------------------------------------------------------------------------------------

Note
-----
- dynamic sampling used for this statement


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
1156 consistent gets
1114 physical reads
0 redo size
3523277 bytes sent via SQL*Net to client
821 bytes received via SQL*Net from client
42 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
80766 rows processed

SQL>
SQL> alter system flush buffer_cache ;

System altered.

Elapsed: 00:00:00.04
SQL> select * from test_compression_uncompressed ;

80767 rows selected.

Elapsed: 00:00:01.84

Execution Plan
----------------------------------------------------------
Plan hash value: 3640389637

---------------------------------------------------------------------------------------------------
| Id  | Operation         | Name                          | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                               | 92920 |   43M |   690   (1)| 00:00:09 |
|   1 |  TABLE ACCESS FULL| TEST_COMPRESSION_UNCOMPRESSED | 92920 |   43M |   690   (1)| 00:00:09 |
---------------------------------------------------------------------------------------------------

Note
-----
- dynamic sampling used for this statement


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
2559 consistent gets
2513 physical reads
0 redo size
3557685 bytes sent via SQL*Net to client
821 bytes received via SQL*Net from client
42 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
80767 rows processed

SQL>
SQL> set timing off
SQL>


Summary:

As you can see from my very small test, the compression savings are a little more than 50%, and they appeared consistent with the loads at 10 times, 20 times and 30 times. This shows that Oracle maintains the compression consistently as data is inserted.
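To put a number on that, the final segment sizes reported above work out as follows (a quick helper; the 9216 KB and 20480 KB figures are from my run):

```shell
#!/bin/sh
# Compute the space savings from the segment sizes reported by the test:
# compressed 9216 KB versus uncompressed 20480 KB.

savings_pct() {
    # args: compressed_kb uncompressed_kb; prints whole-percent savings
    awk -v c="$1" -v u="$2" 'BEGIN { printf "%d\n", (1 - c/u) * 100 }'
}

savings_pct 9216 20480
```

That is a 55% savings, consistent with the "a little more than 50%" observation.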

If you look at the load times, however, you can clearly see the impact of the compression. At 10 times, 20 times and 30 times, the compressed load consistently took about 2x as long as the load into the uncompressed table.

At very low volumes this impact may not be noticeable, but for a large load the impact on runtime may be significant. That is especially true if you have short windows in which to get your data loaded.

Overall the compression write overhead is light enough that small transactional activity could handle it, however larger loads might pose a problem.

The disk space savings can be huge for large tables; a savings of 50% is big when you are talking about very large tables and very large databases.

A quick look at query performance with full table scans shows little impact to overall performance. However, the compressed table visits about half the total number of blocks compared to the non-compressed table. With half the block visits I would expect a slight performance advantage for the compressed table, but the results do not back that up.

Another interesting difference is in the explain plan for the full table scan. The cost of the query is lower with the compressed table than with the uncompressed table. The cost difference may be entirely related to the difference in the number of I/O operations. This would also lead you to believe that you might see a slight performance advantage for the scan on the compressed table, yet the results do not back that assumption up.

My small test only showed me that I need to do more extensive testing, but the initial results do show that there are advantages to this new feature, and further testing will be needed to apply it well.

Check for Blocking Locks

-------------------------------------------------------------
-- script: locks_blocking.sql
-- author: Mike Messina
-- description: Shows the blocking locks in a database
-- to troubleshoot sessions that appear to
-- be hung or possibly showing poor performance
-------------------------------------------------------------
SET PAGESIZE 500
SET FEEDBACK ON
SET LINESIZE 200
SET TRIMSPOOL ON
SET ECHO OFF
set feedback off

-- Get Database Instance Name
column database_name noprint new_value i_name

SELECT UPPER(name) || '_locks_blocking.log' database_name
FROM v$database ;

set feedback on

-- Spool to database instance file
SPOOL &i_name
select /*+ ORDERED */
l.sid,
s.serial#,
p.spid os_pid,
SUBSTR(s.username,1,15) username,
DECODE(l.request, 0, 'NO', 'YES') waiter,
SUBSTR(decode(l.type,
'MR', 'Media Recovery',
'RT', 'Redo Thread',
'UN', 'User Name',
'TX', 'Transaction',
'TM', 'DML',
'UL', 'PL/SQL User Lock',
'DX', 'Distributed Xaction',
'CF', 'Control File',
'IS', 'Instance State',
'FS', 'File Set',
'IR', 'Instance Recovery',
'ST', 'Disk Space Transaction',
'TS', 'Temp Segment',
'IV', 'Library Cache Invalidation',
'LS', 'Log Start or Switch',
'RW', 'Row Wait',
'SQ', 'Sequence Number',
'TE', 'Extend Table',
'TT', 'Temp Table',
l.type),1,30) lock_type,
SUBSTR(decode(l.lmode,
0, 'None',
1, 'Null',
2, 'Row-S (SS)',
3, 'Row-X (SX)',
4, 'Share',
5, 'S/Row-X (SSX)',
6, 'Exclusive',
to_char(l.lmode)),1,15) mode_held,
SUBSTR(decode(l.block,
0, 'Not Blocking', /* Not blocking any other processes */
1, 'Blocking', /* This lock blocks other processes */
2, 'Global', /* This lock is global, so we can't tell */
to_char(l.block)),1,15) blocking,
SUBSTR(do.owner || '.' || do.object_name,1,50) object
from v$lock l,
v$session s,
v$process p,
v$locked_object lo,
dba_objects do
where l.sid = s.sid
and s.username is not null
and s.sid = lo.session_id (+)
and s.paddr = p.addr (+)
and lo.object_id = do.object_id (+)
and l.block <> 0 ;

SPOOL off

See SQL Being Executed for SID

----------------------------------------------------
-- Script: see_sql_for_sid.sql
-- Author: Michael Messina
-- Parameters: SID
-- Description: See SQL Text executing for SID
--------------------------------------------------
set long 4000

-- 10g and above
select sql_fulltext
from v$sql
WHERE (address, hash_value) IN
(SELECT sql_address, sql_hash_value
FROM v$session WHERE sid = &1);

Example:

SQL> @see_sql_for_sid.sql 139
old 6: WHERE sid = &1)
new 6: WHERE sid = 139)

SQL_FULLTEXT
--------------------------------------------------------------------------------
select sql_fulltext
from v$sql
WHERE (address, hash_value) IN
(SELECT sql_address, sql_hash_value
FROM v$session
WHERE sid = 139)

1 row selected.

Thursday, November 8, 2007

OEM Grid Control Rediscover targets on host

Ever found the need to rediscover targets for a host with Oracle Enterprise Manager Grid Control?

We will need to work with the agent on the host where the targets need rediscovering.

Here are some steps that may help:

1. Check that the ENV is set for the Management Agent
- ORACLE_HOME set
- $ORACLE_HOME/bin in $PATH

The agentca utility expects the oraInst.loc to be located in the agent's ORACLE_HOME. Please check that oraInst.loc is located in the agent ORACLE_HOME location.

** Special note for the Solaris platform: the Solaris operating system puts oraInst.loc in /var/opt/oracle, which means the agentca command will not find the oraInst.loc file. You need to create a soft link $ORACLE_HOME/oraInst.loc pointing to /var/opt/oracle/oraInst.loc for agentca to work properly.

2. Copy the file $ORACLE_HOME/sysman/emd/targets.xml to targets.xml.orig

3. Run agentca -d

4. Check that targets.xml has been re-created in the $ORACLE_HOME/sysman/emd directory, then check that the Management Agent uploads successfully to the Management Service:

emctl upload
emctl status agent

5. For any agentca issue, consult the following log files:
- $ORACLE_HOME/cfgtoollogs/oui/configActions.log
- $ORACLE_HOME/cfgtoollogs/oui/configActions.err
- $ORACLE_HOME/cfgtoollogs/cfgfw/CfmLogger__.log
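The steps above can be sketched as a script. The backup step is the part worth automating carefully; the rest simply runs the commands as written (treat this as an outline, since agentca behavior varies by version):

```shell
#!/bin/sh
# Sketch of the rediscovery steps above: back up targets.xml, then rerun
# target discovery with agentca -d. Assumes ORACLE_HOME is the agent home
# and the agent's bin directory is in the PATH.

backup_targets() {
    # Keep the original targets.xml so the discovery can be rolled back.
    t="$1/sysman/emd/targets.xml"
    if [ -f "$t" ]; then
        cp "$t" "$t.orig"
    fi
}

if [ -n "$ORACLE_HOME" ] && command -v agentca >/dev/null 2>&1; then
    backup_targets "$ORACLE_HOME"
    agentca -d
    emctl upload
    emctl status agent
fi
```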

Oracle Audit Failed Logon Attempts

Ever need to audit failed login attempts to your database? Ever have accounts becoming locked due to failed login attempts and not know why?

Here is a solution.

You must set audit_trail=DB in the init.ora/spfile, then enable auditing with:

audit session whenever not successful ;

Below is a complete walk-through of the setup and a script that will query the audit trail for the failed login attempts. Be sure to clean up your audit trail so it does not grow out of control.

The Example Walk Through for Setup:

SQL> connect sys as sysdba
Enter password: ******
Connected.
SQL> alter system set audit_trail=DB scope=spfile ;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup ;
ORACLE instance started.

Total System Global Area 612368384 bytes
Fixed Size 1250428 bytes
Variable Size 234883972 bytes
Database Buffers 369098752 bytes
Redo Buffers 7135232 bytes
Database mounted.
Database opened.
SQL> audit session whenever not successful ;

Audit succeeded.

SQL> connect dummy/dummy
ERROR:
ORA-01017: invalid username/password; logon denied
Warning: You are no longer connected to ORACLE.
SQL> connect sys as sysdba
Enter password: ******
Connected.
SQL> select os_username,
username,
userhost,
to_char(timestamp,'mm/dd/yyyy hh24:mi:ss') timestamp,
returncode
from dba_audit_session
where action_name = 'LOGON'
and returncode > 0
order by timestamp ;

OS_USERNAME
--------------------------------------------------------------------------------
USERNAME
------------------------------
USERHOST
--------------------------------------------------------------------------------
TIMESTAMP           RETURNCODE
------------------- ----------
MRMESSIN\Mike Messina
DUMMY
WORKGROUP\MRMESSIN
11/08/2007 09:07:54       1017
SQL>

Here is a script that will show you the failed login attempts made to an Oracle database once your setup is in place.

-----------------------------------------------
-- see_failed_login_attempts.sql
--
-- Michael Messina
--
-- Queries the Oracle audit trail and
-- writes a log file of the failed
-- login attempts for the database.
--
-- Requires:
-- audit_trail=DB in init.ora/spfile
-- audit session whenever not successful ;
-----------------------------------------------
set pagesize 200
set linesize 150
column os_username format a15
column username format a15
column userhost format a40
column timestamp format a20
column returncode format 9999999999

spool failed_login_attempts.log

select os_username,
username,
userhost,
to_char(timestamp,'mm/dd/yyyy hh24:mi:ss') timestamp,
-- action_name,
returncode
from dba_audit_session
where action_name = 'LOGON'
and returncode > 0
order by timestamp ;

spool off
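On the clean-up point above: one hedged way to keep the audit trail from growing out of control is a scheduled purge. A sketch that generates the SQL; the 90-day retention is my own assumption, and deleting from SYS.AUD$ should only be done as SYS with appropriate change control (11g also offers DBMS_AUDIT_MGMT for this):

```shell
#!/bin/sh
# Sketch: generate clean-up SQL that purges audit-trail rows older than a
# retention window. RETENTION_DAYS is an assumption; NTIMESTAMP# is the
# audit timestamp column in 10g and later.

RETENTION_DAYS=90
SQLFILE=/tmp/purge_audit.sql

cat > "$SQLFILE" <<EOF
delete from sys.aud\$
where ntimestamp# < systimestamp - numtodsinterval($RETENTION_DAYS, 'DAY') ;
commit ;
EOF

if command -v sqlplus >/dev/null 2>&1; then
    sqlplus -s "/ as sysdba" @"$SQLFILE"
fi
```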

Oracle Security Resolving the OEM Grid Control execute to PUBLIC Security Violations

Oracle Enterprise Manager highlights many execute to public permissions on packages like UTL_FILE, UTL_HTTP, UTL_TCP, UTL_SMTP, DBMS_LOB, DBMS_JOB and DBMS_RANDOM.

The following is a script that will help take care of that.

Be sure to check for invalid objects after script execution. This script covers Oracle internal accounts, but your database may have other accounts that need permissions as well.

----------------------------------------
-- fix_database_security.sql
--
-- Michael Messina
--
----------------------------------------
-- Script will resolve the most common
-- security policy violations OEM looks
-- for in a Oracle database
----------------------------------------
grant execute on DBMS_LOB to CONTENT ;
grant execute on DBMS_LOB to CWSYS ;
grant execute on DBMS_LOB to CTX_SUBSTR ;
grant execute on dbms_lob to rtc ;
grant execute on utl_smtp to es_mail ;
grant execute on utl_file to es_mail ;
grant execute on dbms_lob to es_mail ;
-- Oracle AS Metadata Repository Needs
grant execute on dbms_lob to ORASSO ;
grant execute on dbms_job to ORASSO ;
grant execute on dbms_random to orasso ;
grant execute on utl_file to orasso ;
grant execute on utl_http to orasso ;
grant execute on utl_smtp to orasso ;
grant execute on utl_tcp to bam ;
grant execute on utl_smtp to bam ;
grant execute on dbms_lob to bam ;
grant execute on utl_file to ods ;
grant execute on dbms_job to ods ;
grant execute on dbms_lob to OWF_MGR ;
grant execute on dbms_job to OWF_MGR ;
grant execute on dbms_random to OWF_MGR ;
grant execute on utl_http to OWF_MGR ;
grant execute on utl_file to OWF_MGR ;
grant execute on dbms_lob to exfsys ;
grant execute on dbms_lob to PORTAL ;
grant execute on dbms_job to PORTAL ;
grant execute on dbms_random to PORTAL ;
grant execute on utl_http to PORTAL ;
grant execute on utl_file to PORTAL ;
grant execute on utl_SMTP to PORTAL ;
grant execute on dbms_lob to WIRELESS ;
grant execute on dbms_job to WIRELESS ;
grant execute on dbms_random to WIRELESS ;
grant execute on dbms_job to B2B ;
-- Enterprise Manager Repository Owner Needs
grant execute on utl_file to sysman ;
grant execute on dbms_random to sysman ;
grant execute on utl_http to sysman ;
grant execute on utl_smtp to sysman ;
grant execute on utl_tcp to sysman ;
grant execute on dbms_lob to sysman ;
grant execute on dbms_job to sysman ;
grant execute on utl_file to ordplugins ;
grant execute on dbms_random to ordplugins ;
grant execute on utl_http to ordplugins ;
grant execute on utl_smtp to ordplugins ;
grant execute on utl_tcp to ordplugins ;
grant execute on dbms_lob to ordplugins ;
grant execute on dbms_job to ordplugins ;
grant execute on utl_file to ordsys ;
grant execute on dbms_random to ordsys ;
grant execute on utl_http to ordsys ;
grant execute on utl_smtp to ordsys ;
grant execute on utl_tcp to ordsys ;
grant execute on dbms_lob to ordsys ;
grant execute on dbms_job to ordsys ;
grant execute on utl_file to mdsys ;
grant execute on dbms_random to mdsys ;
grant execute on utl_http to mdsys ;
grant execute on utl_smtp to mdsys ;
grant execute on utl_tcp to mdsys ;
grant execute on dbms_lob to mdsys ;
grant execute on dbms_job to mdsys ;
grant execute on utl_file to dmsys ;
grant execute on dbms_random to dmsys ;
grant execute on utl_http to dmsys ;
grant execute on utl_smtp to dmsys ;
grant execute on utl_tcp to dmsys ;
grant execute on dbms_lob to dmsys ;
grant execute on dbms_job to dmsys ;
grant execute on utl_file to xdb ;
grant execute on dbms_random to xdb ;
grant execute on utl_http to xdb ;
grant execute on utl_smtp to xdb ;
grant execute on utl_tcp to xdb ;
grant execute on dbms_lob to xdb ;
grant execute on dbms_job to xdb ;
grant execute on utl_file to system ;
grant execute on dbms_random to system ;
grant execute on utl_http to system ;
grant execute on utl_smtp to system ;
grant execute on utl_tcp to system ;
grant execute on dbms_lob to system ;
grant execute on dbms_job to system ;
grant execute on utl_file to outln ;
grant execute on dbms_random to outln ;
grant execute on utl_http to outln ;
grant execute on utl_smtp to outln ;
grant execute on utl_tcp to outln ;
grant execute on dbms_lob to outln ;
grant execute on dbms_job to outln ;
grant execute on utl_file to dbsnmp ;
grant execute on dbms_random to dbsnmp ;
grant execute on utl_http to dbsnmp ;
grant execute on utl_smtp to dbsnmp ;
grant execute on utl_tcp to dbsnmp ;
grant execute on dbms_lob to dbsnmp ;
grant execute on dbms_job to dbsnmp ;
grant execute on utl_file to ordsys ;
grant execute on utl_http to mdsys ;
grant execute on utl_file to olapsys ;
grant execute on dbms_random to olapsys ;
grant execute on utl_http to olapsys ;
grant execute on utl_smtp to olapsys ;
grant execute on utl_tcp to olapsys ;
grant execute on dbms_lob to olapsys ;
grant execute on dbms_job to olapsys ;
grant execute on dbms_lob to ctxsys ;
grant execute on dbms_job to ctxsys ;
grant execute on dbms_job to exfsys ;
grant execute on dbms_job to wksys ;
grant execute on dbms_lob to wksys ;
revoke execute on utl_file from public ;
revoke execute on dbms_random from public ;
revoke execute on utl_http from public ;
revoke execute on utl_smtp from public ;
revoke execute on utl_tcp from public ;
revoke execute on dbms_lob from public ;
revoke execute on dbms_job from public ;
@$ORACLE_HOME/rdbms/admin/utlrp
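After running the script and recompiling, it is worth verifying that the PUBLIC grants are actually gone. A sketch that generates a verification query; run it as a DBA, and an empty result means the revokes took effect:

```shell
#!/bin/sh
# Sketch: generate a query listing any remaining PUBLIC execute grants on
# the packages the fix script revokes.

SQLFILE=/tmp/check_public_grants.sql

cat > "$SQLFILE" <<'EOF'
select table_name, privilege
from dba_tab_privs
where grantee = 'PUBLIC'
and privilege = 'EXECUTE'
and table_name in ('UTL_FILE','UTL_HTTP','UTL_TCP','UTL_SMTP',
                   'DBMS_LOB','DBMS_JOB','DBMS_RANDOM') ;
EOF

if command -v sqlplus >/dev/null 2>&1; then
    sqlplus -s "/ as sysdba" @"$SQLFILE"
fi
```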