Understand Performance of Oracle Database using AWR Report


http://learntestingtools.com/understand-performance-of-oracle-database-using-awr-report/

There are plenty of articles, blogs and write-ups that explain what an Automatic Workload Repository (AWR) report is and what information it provides. I have read a few and found them very useful. AWR is an ocean of information and can be quite overwhelming at times. If you are trying to find the meaning of every topic and sub-topic, the Oracle documentation would be your one-stop solution (well... not really!). But if you are looking at an AWR report in order to analyse Oracle performance and its impact on overall application performance, then you need a planned approach to exploring these statistics.

Before diving into the analysis, I want to reiterate a few known facts.

  • The data in an AWR report is the delta, or change, in the accumulated metrics between the two snapshots.
  • An AWR report contains three primary sections – (1) Oracle configuration details, (2) performance statistics and (3) advisories.
  • Any snapshot interval over 60 mins may not be ideal for performance analysis (it can be changed though), as a long interval can hide the actual issue.
  • The default snapshot retention period is 7 days (it can be changed though).

In general, the types of problems observed in Oracle database are –

  • The database is slow (period)
  • It works fine in most cases but comes to a halt for specific (one or a few) business scenarios.
  • One or a few SQL statements take forever.

So your plan should be different when tackling different problems.
In this article, I have tried to explain a generic approach to start analysing Oracle database performance by looking at the statistics available in an AWR report. So the sequence of topics will not be the same as what you see in the AWR report.

AWR - 1

Snapshot Details

To begin the analysis, I would start by looking at the snapshot details. You will not find any performance statistics here, but you need to make sure that you are looking at the right data. A few checks from this section include –

  • Start and end snapshot details. If you are comparing with another AWR report, make sure you are comparing the right snapshots.
  • The elapsed time between the snapshots.
  • Server details, e.g. memory, number of CPUs, Oracle Database version etc.

AWR - 2

Elapsed Time & Database Time

Elapsed time is simply the time interval between the 'begin snap id' and the 'end snap id'. DB time is the CPU time spent processing plus the wait time. If the DB time is much greater than the elapsed time, drill down further in the report to understand the wait events.

In this example, Elapsed Time = 60 mins.
Therefore, total available CPU time = 60 mins * <number of CPUs> = 60 mins * 16 CPUs = 960 mins.
This means the database has consumed 292.24 mins out of 960 mins, i.e. (292.24/960) * 100 = 30.44% CPU (averaged over the entire duration).

Is this bad? Not really; if you have executed a performance test and stressed the system, this is quite expected.
However, if you have executed a load test that was not supposed to stress the system (probably < 20-30% of the actual load), then yes, it is possibly a problem. [Please remember, extrapolation is not a good practice and can be completely misleading. Read more about it here.]
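If you prefer to pull these figures straight from the workload repository rather than from the report, the time model history can be queried directly. The following is only a minimal sketch; the snapshot ids 100 and 101 are hypothetical placeholders for your own begin and end snap ids.

-- DB time and DB CPU (in minutes) accumulated between two snapshots
-- (time model values are stored in microseconds)
select stat_name,
       round((max(value) - min(value)) / 1e6 / 60, 2) as minutes
  from dba_hist_sys_time_model
 where stat_name in ('DB time', 'DB CPU')
   and snap_id in (100, 101)
 group by stat_name;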

Sessions

Then it is about sessions. What is a session? Well, a session represents a connection between an application and the database, and it holds persistent state such as transactions and cached information.

In this report, we can see the number of sessions at the beginning was 198 and it gradually came down to 158. This means the database was already busy even before this AWR report's start time and it gradually calmed down. So you will tend to see more activity at the beginning of the snapshot (~15:00 + 15 mins) rather than at the end (16:00 - 15 mins). It is probably a good idea to find out the nature of the load at the beginning and at the end of the snapshot.

And the database has had (292.24/60.04) = ~5 active sessions on average for the entire period. Again, this is not bad; however, keep an eye on the number of active sessions.
If you executed a load test and it finished within this 60 mins, then I would expect the number of active sessions to go down to a minimum and not remain at 158. To confirm, check the number of active sessions in the next snapshot.
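To cross-check the session picture outside the report, a quick look at v$session helps; this is just a sketch of the kind of query I would run while the test is going on.

-- Current foreground sessions broken down by status
select status, count(*)
  from v$session
 where type = 'USER'
 group by status;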

Load Profile

Then it is the Load Profile. There are two primary reasons for looking at the load profile at the beginning.

  1. If you are comparing one AWR report with another, the load profile will help you confirm that you executed the exact same (or a very similar) test on your application [and that no unexpected activity, for example a database backup job, impacted database performance during that period]. I am not saying the numbers will be exactly the same, but they should be within an expected range. If that is not the case, then you must find out what has changed, either in the test you are running or in the system (application or database).
  2. To analyse the performance statistics themselves.

AWR - 3

In my earlier post about analysing PostgreSQL server performance, I created a video; the concept presented in the video is the same across all databases. In every application, a SQL statement is sent to the database; the database receives the SQL, parses it into its own internal representation, executes it to extract the required data and then sends the data back to the application. AWR essentially provides a lot of information about all these stages but categorises them differently.

In the load profile section, you will find the statistics covering everything that happens between the database receiving a query and sending the data back to the application.

DB Time

You can see the DB Time in the snapshot details is 292.24 mins = 17,534.4 sec.

The database worked for 17,534.4 sec within the 60-min (3,600-sec) snapshot interval, without considering any background processing work.

Therefore, it has done 17,534.4 / 3,600 = 4.9 seconds of database work per second, i.e. roughly 5 average active sessions.

In this example, that is definitely not bad.

Parsing

Let's talk about the parser then. What is parsing? It is Oracle's way of validating the incoming request and preparing to serve it. The general rule of thumb is: hard parsing is bad and soft parsing is good. Although it is difficult to put a fixed number against each of these types, anything more than 10% hard parsing is definitely bad. So what is the difference between the two types of parsing?

You provide a SQL statement and ask Oracle to execute the query and give the records back. Oracle first tries to find out whether it has already executed a similar query; if it has, it does not have to load the query into the shared pool again (and therefore saves a lot of work). That is a soft parse. But if it finds that the query is completely new, it has to first load it into the shared pool. That is a hard parse.

In the AWR report, soft parses are not listed explicitly but can be derived by subtracting the hard parses from the total parses. A soft parse reuses the work of a previous hard parse, hence it consumes far fewer resources.

In this example, a total of 96.4 parses per second took place and roughly 1 of them was a hard parse; so about 99% were soft parses.

This is definitely good.

But please be careful: as I mentioned, the opposite is not always true, i.e. 99% soft parsing supports good performance, but good performance does not necessarily mean 99% soft parsing.
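If you want to see the raw counters behind these per-second figures, the cumulative parse statistics are exposed in v$sysstat; a minimal sketch:

-- Cumulative parse counters since instance startup
select name, value
  from v$sysstat
 where name in ('parse count (total)', 'parse count (hard)');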

REDO log

The redo log is Oracle's record of every change made to the database; it is used to replay (redo) those changes during recovery if necessary. UPDATE and INSERT statements always generate a lot of redo, while a SELECT statement sometimes generates some too (on a much smaller scale).

So what is the optimum size? Well, that is a difficult one. But a high redo generation rate indicates that a lot of changes are taking place in the database.

In this example, Oracle is generating ~1,230 KB of redo per second.

This means, if you are expecting a similar load on the system 24 hours a day, you are probably talking about 1,230 KB * 24 * 3,600 ≈ 100 GB of redo per day.

This seems high. [Please don't forget to check your database size and calculate the relative growth.]

Also, only 6 transactions per second took place during the period, and each transaction generated about 205 KB of redo (205,077 bytes per transaction, i.e. ~1,230,457 bytes per second). This is definitely high.

Check the database size before running any more tests and compare the increase. Discuss with the developers to find out the types of queries executed during this test; if these queries are predominantly SELECTs and not many UPDATEs or INSERTs, then this could be a problem.
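To track redo generation outside the report, the cumulative 'redo size' statistic can be sampled; take two samples and use the difference to get a rate. A minimal sketch:

-- Cumulative redo generated (in bytes) since instance startup
select name, value
  from v$sysstat
 where name = 'redo size';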

User Calls

Oracle increments this counter whenever the application makes a call to the database. User session logins, parses, executes and fetches are all accounted for. This means that if you increase the load on the system, you should expect to see more user calls.

Reads (Logical & Physical)

These two are interesting statistics. So what is a read? When the user (i.e. the application) has requested data (user calls) and Oracle has validated and transformed the request (soft or hard parsing), it then starts to extract the data.
As a first step, Oracle checks whether the data is already available in the buffer cache, i.e. the SGA. If it is, the logical read counter is incremented by 1.

To explain a bit further, if Oracle gets the data from a block that is consistent with a given point in time, then a counter named "consistent gets" is incremented by 1.

But if the data is found in current mode, that is, the most up-to-date copy of the data in that block as it is right now, then a different counter named "db block gets" is incremented.

Therefore, logical reads = total number of "consistent gets" + total number of "db block gets". These two values can be seen in the 'Instance Activity Stats' section.

But if Oracle does not find the data in the buffer cache, it reads the data from the physical block on disk and increments the physical read count instead.

Clearly, a buffer get is less expensive than a physical read, because for a physical read the database has to work harder (and longer) to get the data: essentially the time it would have taken had the data been in the buffer cache, plus the time actually taken to fetch it from the physical block.

In this example, a total of 556,323,564 consistent gets and 16,162,346 db block gets took place;
Therefore, logical reads per second = (556,323,564 + 16,162,346) / ~3,600 sec ≈ 158,912.5 (the per-second figure shown in the load profile).
Is this good? Maybe not!

Please remember, fewer physical reads relative to logical reads indicates good cache efficiency, but a very high number of logical reads is not good either.

In this case, the distribution is:
(1) Physical reads = (333.1/159,245.6) * 100 = 0.2% and
(2) Logical reads = 99.8%.

This shows a well-sized SGA, and that is definitely good; however, ~159K logical reads per second is not good. You need to explore the report to find the SQL statement(s) causing these high logical reads.
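The same read counters can be sampled directly from v$sysstat between two points in time; a minimal sketch:

-- Cumulative read counters since instance startup
select name, value
  from v$sysstat
 where name in ('consistent gets', 'db block gets', 'physical reads');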

Transactions (and Rollbacks) & Executes

What is a transaction in Oracle? For the purposes of this report, the transactions figure is simply the total number of commits and rollbacks.
In this example, the database has rolled back 360 operations in 60 mins (0.1 per sec * 3,600 sec) and committed 21,240 operations (total transactions minus rollbacks, i.e. 6 * 3,600 - 360).
Remember, rollbacks are expensive operations.
So, are any of these numbers good or bad?
Well, to answer that, first try to find out more about those rollbacks. Speak to the DBAs and developers to find the root cause. Sometimes even a small increase in rollbacks can impact database performance significantly.

Now, about the number of commits: I would not necessarily call the number good or bad; rather, I will use it to direct further analysis and investigation.

Please remember, the important aspect is not just the number of transactions; you also need to calculate how many statements Oracle has executed, on average, per transaction.

This can be calculated as total executions per second / transactions per second.

In this example, Oracle has executed 311.5/6 ≈ 52 SQL statements per transaction. Although there is no absolute boundary between low and high, I would say this is quite high. You need to explore the section detailing the most-executed SQL statements.
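Again, the underlying counters are available in v$sysstat if you want to recompute this ratio yourself; a minimal sketch:

-- Cumulative transaction and execution counters since instance startup
select name, value
  from v$sysstat
 where name in ('user commits', 'user rollbacks', 'execute count');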

I am still working on the other sections described in the diagram at the beginning and will try to publish them as soon as possible.

Thank you


Audit Trail Management


http://www.oracle.com/technetwork/issue-archive/2010/10-nov/o60security-17606…

TECHNOLOGY: Security
As Published In

November/December 2010

Managing Audit Trails
By Arup Nanda

Relocate the audit trail to a different tablespace and set up an
automatic purge process to keep its size under control.

One of the most significant aspects of database security involves
setting up auditing to record user activities. The very knowledge that
a user’s actions are being recorded can act as a significant deterrent
to prevent wrongdoers from committing malicious acts.

When auditing is enabled, the audit output is recorded in an audit
trail, which is usually stored in the database in a table under the
SYS schema called AUD$. It can also reside as files in the file
system, and the files can optionally be stored in XML format. For
more-precise control, the Fine Grained Auditing feature of Oracle
Database 11g provides granular control of what to audit, based on a
more detailed set of policies. Fine Grained Auditing audits are
usually stored in another table, FGA_LOG$, under the SYS schema.

These various audit trails can quickly grow out of control when
database activity increases. As audit trails grow, two main challenges
must be addressed:

1. Trails need to be kept to a manageable size (and old records purged)
if they are to be used effectively in forensic analysis.
2. Because database-resident trails are typically stored in the SYSTEM
tablespace, they can potentially fill it up—bringing the database to a
halt.

Fortunately, the new auditing features in Oracle Database 11g Release
2 can help address these challenges. These capabilities, implemented
in a package called DBMS_AUDIT_MGMT, enable you to move audit trails
from the SYSTEM tablespace to one of your choice.

The new auditing features also let you set up one-time and automated
purge processes for each of your audit trail types. Historically, to
purge an audit trail, you were generally forced to stop auditing
(which may have required bouncing the database), truncate, and then
restart auditing (and bouncing the database again).

In this article, you will learn how to use the new features in Oracle
Database 11g Release 2 to manage your audit trails.

Relocating the Audit Trail Tables
Let’s first examine how to relocate an audit trail from the default
SYSTEM tablespace to a new one. In case you don’t already have a
suitable target tablespace, the code below shows how to create one:

create tablespace audit_trail_ts
datafile '+DATA'
size 500M
segment space management auto
/
For moving an audit trail to the new tablespace, Oracle Database 11g
Release 2 provides a procedure in DBMS_AUDIT_MGMT called
SET_AUDIT_TRAIL_LOCATION. Listing 1 shows how to move a “standard”
audit trail, which is the Oracle Database audit recorded in the AUD$
table.

Code Listing 1: Relocating a standard audit trail

begin
dbms_audit_mgmt.set_audit_trail_location(
audit_trail_type => dbms_audit_mgmt.audit_trail_aud_std,
audit_trail_location_value => 'AUDIT_TRAIL_TS');
end;
/
This move operation can be performed even when the database is up and
an audit trail is being written. The target tablespace (AUDIT_TRAIL_TS
in this case) must be available and online. If the tablespace is not
available, auditing will stop, also stopping the database in the
process. You should therefore be very careful about where you create
the tablespace. The location should be permanent (and not on a
temporary file system such as /tmp), and the underlying hardware
should be resilient against failures (using RAID-1, for example).

The procedure can also be used for Fine Grained Auditing audit trails.
To move a Fine Grained Auditing audit trail, simply replace the value
of the audit_trail_type parameter in Listing 1 with
dbms_audit_mgmt.audit_trail_fga_std. If you want to move both the
standard and Fine Grained Auditing audit trails to the new tablespace,
use the dbms_audit_mgmt.audit_trail_db_std value as the audit_trail_type
parameter.
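For example, the Fine Grained Auditing variant of Listing 1 would look like this (a sketch based on the parameter substitution described above):

begin
dbms_audit_mgmt.set_audit_trail_location(
audit_trail_type => dbms_audit_mgmt.audit_trail_fga_std,
audit_trail_location_value => 'AUDIT_TRAIL_TS');
end;
/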

Purging Old Data
Next, let’s examine how to purge audit trails. The audit management
package includes a procedure that automatically performs the purge for
you. But before you can actually use it, you must call a one-time
initialization procedure—INIT_CLEANUP—to set up the audit management
infrastructure. Listing 2 shows how to perform the initialization.

Code Listing 2: Initializing cleanup of audit entries

begin
dbms_audit_mgmt.init_cleanup(
audit_trail_type => dbms_audit_mgmt.audit_trail_db_std,
default_cleanup_interval => 24 );
end;
The INIT_CLEANUP procedure takes two parameters, neither of which
takes a default value:

audit_trail_type—designates the type of audit trail being initialized.
For instance, audit_trail_aud_std indicates the standard database
audit trail (the AUD$ table). Table 1 lists the possible values for
this parameter and the audit trail types they represent.
default_cleanup_interval—designates the default interval in hours
between executions of automatic purge jobs (to be discussed later in
this article).

Parameter             Description
-------------------   ----------------------------------------------
audit_trail_aud_std   The standard AUD$ audit trail in the database
audit_trail_fga_std   The FGA_LOG$ table, for Fine Grained Auditing
audit_trail_db_std    Both standard and FGA audit trails
audit_trail_os        The OS audit trail
audit_trail_xml       The XML audit trail
audit_trail_files     Both OS and XML audit trails
audit_trail_all       All of the above

Table 1: Types of audit trails for audit_trail_type

In addition to setting the default cleanup frequency, the INIT_CLEANUP
procedure moves the audit trail out of the SYSTEM tablespace. If the
FGA_LOG$ and AUD$ tables are in the SYSTEM tablespace, the procedure
will move them to the SYSAUX tablespace. Needless to say, you should
ensure that the SYSAUX tablespace has sufficient space to hold both of
these tables. The process of moving data from one tablespace to the
other can have an impact on performance, so you should avoid calling
the procedure during peak hours.

If you have already relocated these two tables to another tablespace
(as described in the previous section), they will stay in the new
location and the procedure will execute much more quickly.

After calling the initialization procedure, you can perform the actual
audit trail cleanup, but you likely wouldn’t just remove an audit
trail blindly. In most cases, you would archive the trail first before
performing a permanent purge. When doing so, you can call another
procedure—SET_LAST_ARCHIVE_TIMESTAMP—to let the purge process know the
time stamp up to which an audit trail has been archived. This
procedure accepts three parameters:

audit_trail_type—the type of audit trail you are about to purge.
last_archive_time—the last time the audit trail was archived for this type.
rac_instance_number—with an Oracle Real Application Clusters (Oracle
RAC) database, OS audit trail files exist on more than one server.
It’s possible to archive these files at different times, so this
parameter tells the purge process the archive time of each node (or
instance number) of the cluster. This parameter is applicable to
Oracle RAC databases only; it has no significance for single-instance
databases. Furthermore, this parameter is irrelevant for database
audit trails, because they are common to all Oracle RAC instances.

After you set the archive time stamp, you can check its value from a
data dictionary view, DBA_AUDIT_MGMT_LAST_ARCH_TS. Listing 3 shows how
to set the cutoff time stamp to September 30, 2009 at 10 a.m. and
subsequently check its value from the view.

Code Listing 3: Setting the last archived time

begin
dbms_audit_mgmt.set_last_archive_timestamp(
audit_trail_type => dbms_audit_mgmt.audit_trail_aud_std,
last_archive_time =>
to_timestamp('2009-09-30 10:00:00','YYYY-MM-DD HH24:MI:SS'),
rac_instance_number => null
);
end;
/

SQL> select * from DBA_AUDIT_MGMT_LAST_ARCH_TS;

AUDIT_TRAIL          RAC_INSTANCE LAST_ARCHIVE_TS
-------------------- ------------ -----------------------------------
STANDARD AUDIT TRAIL            0 30-SEP-09 10.00.00.000000 AM +00:00

Now you can execute the purge. To do so, run the code shown in
Listing 4. The CLEAN_AUDIT_TRAIL procedure in the listing accepts two
parameters. The first one is audit_trail_type. The second
parameter—use_last_arch_timestamp—specifies whether the purge should
be performed, depending on the last archive time stamp. If the
parameter is set to TRUE (the default), the purge will delete the
records generated before the time stamp (September 30, 2009 at 10 a.m.
in this case). If it is set to FALSE, all audit trail records will be
deleted.

Code Listing 4: Purging a standard database audit trail

begin
dbms_audit_mgmt.clean_audit_trail(
audit_trail_type => dbms_audit_mgmt.audit_trail_aud_std,
use_last_arch_timestamp => TRUE
);
end;
/
This same procedure is also used to purge file-based audit trails
such as OS file audit trails and XML trails. To purge those trails,
just specify the appropriate value for the audit_trail_type parameter
(as shown in Table 1). However, note that for file-based audit trails,
only the files in the current audit directory (as specified by the
audit_file_dest initialization parameter) will be deleted. If you have
audit trail files in a different directory from the one specified in
audit_file_dest, those files will not be deleted.

Note that in Microsoft Windows, audit trails are entries in Windows
Event Viewer and not actual OS files. So purging OS-based audit trails
on that platform will not delete the trails.

Setting Up Automatic Purge
The foregoing process is good for a one-time purge of audit trails. To
ensure that audit trails do not overwhelm their tablespace, you may
want to institute an automatic purge mechanism. The DBMS_AUDIT_MGMT
package has another procedure—CREATE_PURGE_JOB—to do just that. This
procedure takes four parameters:

audit_trail_type—the type of the audit trail
audit_trail_purge_interval—the duration, in hours, between executions
of the purge process
audit_trail_purge_name—the name you assign to this job
use_last_arch_timestamp—an indication of whether the job should delete
audit trail records marked as archived. The default is TRUE. If the
parameter is set to FALSE, the procedure will delete the entire trail.

Listing 5 shows how to create a purge job that deletes standard audit
trail records every 24 hours. As with one-time purges, you can create
different jobs for each type of trail—such as standard, Fine Grained
Auditing, OS files, and XML—simply by specifying different values for
audit_trail_type when calling CREATE_PURGE_JOB. You can even set
different purge intervals for each audit trail type to suit your
archival needs. For instance, you can use a simple database-link-based
script to pull database audit trail records to a different database
while using a third-party tool to pull the OS audit trails. The
execution time of each approach may be different, causing the database
records to be pulled every day while the OS files are being pulled
every hour. As a result, you might schedule purge jobs with an
interval of 24 hours for database-based trails and with an interval of
one hour for OS-file-based trails.

Code Listing 5: Creating a purge job for a standard audit trail

begin
dbms_audit_mgmt.create_purge_job (
audit_trail_type => dbms_audit_mgmt.audit_trail_aud_std,
audit_trail_purge_interval => 24,
audit_trail_purge_name => 'std_audit_trail_purge_job',
use_last_arch_timestamp => TRUE
);
end;
/
You can view information about automatic purge jobs by accessing the
DBA_AUDIT_MGMT_CLEANUP_JOBS data dictionary view. It shows all the
important attributes of the job, such as the name, the type of audit
trail being cleaned, and the frequency.
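For example, a quick way to review the configured jobs (a minimal sketch; the exact columns vary by version):

select * from dba_audit_mgmt_cleanup_jobs;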

Setting Audit Trail Properties

When setting up a purge job, you should always remember one very
important fact. It performs a DELETE operation—not TRUNCATE—on
database-based trails, so the purge operation generates redo and undo
records, which may be quite significant, depending on the number of
trail records deleted. A large deletion can potentially fill up the
undo tablespace. To reduce the redo size of a transaction, the purge
job deletes in batches of 1,000 and performs commits between them. If
the database is very large, it may be able to handle much more redo
easily. You can change the delete batch size by using the
SET_AUDIT_TRAIL_PROPERTY procedure. Listing 6 shows how to set the
delete batch size to 100,000.

Code Listing 6: Setting the deletion batch size

begin
dbms_audit_mgmt.set_audit_trail_property(
audit_trail_type => dbms_audit_mgmt.audit_trail_aud_std,
audit_trail_property => dbms_audit_mgmt.db_delete_batch_size,
audit_trail_property_value => 100000);
end;
/

In addition to the db_delete_batch_size property referenced in Listing
6, you can use SET_AUDIT_TRAIL_PROPERTY to set several other important
properties. They include the following:

file_delete_batch_size specifies how many OS audit trail files will be
deleted by the purge job in one batch.
cleanup_interval specifies the default interval, in hours, between
executions of a purge job.
os_file_max_age specifies how many days an OS file or an XML file can
be left open before a new file is created.
os_file_max_size specifies the maximum size of an audit trail file (in
kilobytes).

To find the current value of a property, you can check the data
dictionary view DBA_AUDIT_MGMT_CONFIG_PARAMS.
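For example (a minimal sketch):

select * from dba_audit_mgmt_config_params;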

Conclusion
Audit trails establish accountability. In Oracle Database 11g, there
are several types of audit trails—standard, fine-grained,
OS-file-based, and XML. In this article, you learned how to relocate a
database-based audit trail from its default tablespace—SYSTEM—to
another one designated only for audit trails. You also learned how to
purge audit trails of various types to keep them within a manageable
limit, and you finished by establishing an automatic purge process.

——————————————————————————–
Arup Nanda (arup@proligence.com) has been an Oracle DBA for more than
14 years, handling all aspects of database administration, from
performance tuning to security and disaster recovery. He was Oracle
Magazine’s DBA of the Year in 2003.

alter any table


/* Formatted on 05.05.2011 10:58:34 (QP5 v5.163.1008.3004) */
-- Parses a DDL statement string, checks that the first token is one of the
-- allowed actions (ALTER, DROP, DELETE), extracts the schema from the
-- table name and raises an error if the schema is not allowed (e.g. SYS).
DECLARE
   the_list          VARCHAR2 (4000) := ' alter table user.tab drop part x';
   in_str            VARCHAR2 (4000);

   TYPE EventType IS TABLE OF VARCHAR2 (4000);

   v_EventType       EventType := EventType ();

   TYPE nestedSql IS TABLE OF VARCHAR2 (4000);

   v_nested_sql      nestedSql := nestedSql ();

   is_member         BOOLEAN := FALSE;
   v_cn              NUMBER := 1;
   no_valid_action   EXCEPTION;
   no_valid_schema   EXCEPTION;

   t_str             VARCHAR2 (100);

   v_user            VARCHAR2 (30);
   v_table           VARCHAR2 (30);
BEGIN
   v_EventType := EventType ('ALTER', 'DROP', 'DELETE');

   -- collapse repeated spaces and trim the input statement
   SELECT RTRIM (LTRIM (REGEXP_REPLACE (the_list, '( ){2,}', ' ')))
     INTO in_str
     FROM DUAL;

   -- count the number of space-separated tokens
   SELECT COUNT (SUBSTR (main_string,
                         position_from + 1,
                         position_to - position_from - 1))
     INTO v_cn
     FROM (SELECT main_string,
                  DECODE (ROWNUM - 1,
                          0, 0,
                          INSTR (main_string, ',', 1, ROWNUM - 1))
                     position_from,
                  INSTR (main_string, ',', 1, ROWNUM) position_to
             FROM (SELECT REPLACE (UPPER (in_str), CHR (32), ',') main_string
                     FROM DUAL)
           CONNECT BY LEVEL <= LENGTH (main_string))
    WHERE position_to > 0;

   -- walk through the tokens one by one
   FOR i
      IN (SELECT SUBSTR (main_string,
                         position_from + 1,
                         position_to - position_from - 1)
                    s,
                 ROWNUM c
            FROM (SELECT main_string,
                         DECODE (ROWNUM - 1,
                                 0, 0,
                                 INSTR (main_string, ',', 1, ROWNUM - 1))
                            position_from,
                         INSTR (main_string, ',', 1, ROWNUM) position_to
                    FROM (SELECT REPLACE (UPPER (in_str), CHR (32), ',')
                                    main_string
                            FROM DUAL)
                  CONNECT BY LEVEL <= LENGTH (main_string))
           WHERE position_to > 0)
   LOOP
      DBMS_OUTPUT.PUT_LINE (i.s);

      IF i.c = 1
      THEN
         -- the first token must be one of the allowed actions
         is_member := i.s MEMBER OF v_EventType;

         IF is_member
         THEN
            v_nested_sql.EXTEND (v_cn);
            v_nested_sql (i.c) := i.s;
            -- DBMS_OUTPUT.PUT_LINE (i.c || '. element is ' || v_nested_sql (i.c));
         ELSE
            RAISE no_valid_action;
         END IF;
      ELSE
         v_nested_sql (i.c) := i.s;
         -- DBMS_OUTPUT.PUT_LINE (i.c || '. element is ' || v_nested_sql (i.c));
      END IF;
   END LOOP;

   DBMS_OUTPUT.PUT_LINE (CHR (9));
   DBMS_OUTPUT.NEW_LINE;

   -- the 3rd token is the (schema-qualified) table name
   IF v_nested_sql (1) = v_EventType (1)
   THEN
      t_str := v_nested_sql (3);
      DBMS_OUTPUT.PUT_LINE (' table is ' || t_str);
      v_user := SUBSTR (t_str, 1, INSTR (t_str, '.', 1) - 1);
      DBMS_OUTPUT.PUT_LINE (' user is ' || v_user);

      -- the privilege on the schema can be checked here

      -- example
      IF v_user = 'SYS'
      THEN
         RAISE no_valid_schema;
      END IF;
   END IF;
EXCEPTION
   WHEN no_valid_action
   THEN
      RAISE_APPLICATION_ERROR (-20001, 'no valid action');
   WHEN no_valid_schema
   THEN
      RAISE_APPLICATION_ERROR (-20002, 'no valid schema');
   WHEN OTHERS
   THEN
      DBMS_OUTPUT.PUT_LINE (SQLERRM);
END;
/

Oracle: Backupset keep until


 
-- get backup handle and backup pieces

select d.name,
       b.bs_key,
       s.handle,
       decode(b.incremental_level,
              null, decode(b.backup_type, 'D', 'Ctrl', 'I', 'Inc', 'L', 'Arch', 'n/k'),
              'Inc' || b.incremental_level) backup_type,
       decode(b.status, 'A', 'Avail.', 'O', 'Obsolete', b.status) status,
       b.pieces,
       b.start_time,
       decode(b.controlfile_included, 'NONE', 'No', 'BACKUP', 'Yes', 'STANDBY', 'Standby', 'n/k') controlfile_included,
       b.keep,
       b.keep_options,
       b.keep_until
  from rc_database d,
       rc_backup_set b,
       rc_backup_piece s
 where d.db_key = b.db_key
   and d.db_key = s.db_key
   and b.set_count = s.set_count
   and b.set_stamp = s.set_stamp
   and upper(d.name) like upper('ADMP01')
   and b.start_time > to_date('12/01/2011 09:00', 'DD/MM/YYYY HH24:MI')
 order by start_time desc;

 
-- change keep option

select 'CHANGE BACKUPSET ' || b.bs_key || ' keep until time ''sysdate+10'' logs;'
  from rc_database d,
       rc_backup_set b,
       rc_backup_piece s
 where d.db_key = b.db_key
   and d.db_key = s.db_key
   and b.set_count = s.set_count
   and b.set_stamp = s.set_stamp
   and upper(d.name) like upper('ADMP01')
   and b.start_time > to_date('12/01/2011 09:00', 'DD/MM/YYYY HH24:MI')
   and b.backup_type <> 'L'
 order by b.start_time desc, b.bs_key;
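The generated statements can then be run in an RMAN session connected to the target database and the recovery catalog; for example (backup set key 123 is a hypothetical value taken from the query output):

RMAN> CHANGE BACKUPSET 123 KEEP UNTIL TIME 'SYSDATE+10' LOGS;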

How to use Oracle Logminer to analysis Logfile


http://arjudba.blogspot.com/2008/08/how-to-use-oracle-logminer-to-analysis.html

We know that any change to the database is recorded in the online redo log files.
If your database is in archivelog mode, the online redo log files are archived, and the archived logs also contain the database changes.
With these log files we can analyse any change made to the database.

Put simply, an undo entry provides the values of data stored before a change,
and the redo entry provides the values of data stored after a change.
Because undo segments are themselves changed by the database, they also generate redo entries.
So both kinds of information end up in the online redo logs and, later, in the archived logs.

So from the online redo logs and from the archived redo logs we can get the database's redo and undo information.
But online and archived logs have an unpublished format and are not human-readable.
With the DBMS_LOGMNR package we can analyse redo log files and get
back undo and redo information in a human-readable format.

Another use of LogMiner is to investigate the database's past activity.
With Flashback Query we can get prior values of a record in a table at
some point in the past, but this is limited by the UNDO_RETENTION parameter
(which is often as short as 30 minutes for an OLTP database).
So, in order to analyse past activity on the database, LogMiner is a good choice.

In this post I will show you, step by step, how to use LogMiner.

1) Ensure that supplemental logging is enabled, at least at the minimal level.
To work with LogMiner, supplemental logging must be enabled
on the source database at a minimum level. By default,
Oracle Database does not provide any supplemental logging,
which means that by default LogMiner is not usable. To use it, you need to turn supplemental logging on.

You can check whether supplemental logging is on or off with the following command,
SQL> select SUPPLEMENTAL_LOG_DATA_MIN from v$database;
SUPPLEME
——–
NO

To turn it on at the minimal level,
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Database altered.

2) Install the DBMS_LOGMNR package.
The next step is to install the DBMS_LOGMNR package.
If you have already installed it, skip this step.
You can install this package by running the $ORACLE_HOME/rdbms/admin/dbmslm.sql script.
If you created your database with DBCA, this script was run automatically,
so you can ignore this step.
However, if you created the database manually with the CREATE DATABASE … command,
then you must run the script before using LogMiner. That is,
SQL>@$ORACLE_HOME/rdbms/admin/dbmslm.sql

3) Grant the EXECUTE_CATALOG_ROLE role.

Grant EXECUTE_CATALOG_ROLE to the user who will perform the mining task. Here the user is ARJU.

SQL>GRANT EXECUTE_CATALOG_ROLE TO ARJU;

4) Create the synonym. ARJU creates a public synonym:

CREATE PUBLIC SYNONYM DBMS_LOGMNR FOR SYS.DBMS_LOGMNR;

All four steps above are needed only once.

5) Specify the scope of the mining.
Now decide which files you will analyse.
You may be interested in archived redo log files or online redo log files, depending on your scenario.
In this post, suppose you have a recent problem in your database,
so you are interested in the online redo log files.
You can see the current online redo logs with,
SQL> SELECT distinct member LOGFILENAME FROM V$LOGFILE;

LOGFILENAME
——————————————————————————–
/oradata2/data1/dbase/redo01.log
/oradata2/data1/dbase/redo03.log
/oradata2/data1/dbase/redo3.log

Sometimes, you want to mine the redo log file that was most recently archived.

SQL> SELECT NAME FROM V$ARCHIVED_LOG
WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);
NAME
——————————————————————————–
/oradata2/flash_recovery_area/DBASE/archivelog/2008_08_03/o1_mf_1_839_49bodr0k_.
arc
With DBMS_LOGMNR.ADD_LOGFILE, specify the files that you want to mine.
Here I have added the online redo log files redo01.log and redo03.log.
SQL>BEGIN DBMS_LOGMNR.ADD_LOGFILE
('/oradata2/data1/dbase/redo01.log');
DBMS_LOGMNR.ADD_LOGFILE
('/oradata2/data1/dbase/redo03.log');
END;
/
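To confirm which files have been added to the LogMiner session, you can query V$LOGMNR_LOGS (a minimal sketch):

SQL> select filename from v$logmnr_logs;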

6) Start the LogMiner session and specify a dictionary.
To start the LogMiner session:

BEGIN
DBMS_LOGMNR.START_LOGMNR
(options =>
dbms_logmnr.dict_from_online_catalog);
END;
/

Using the OPTIONS parameter, we tell Oracle Database to read the dictionary
information from the online catalog (to translate object identifiers into
names) when starting LogMiner.

7) Check the contents of the V$LOGMNR_CONTENTS view.
To find out what DML or DDL happened against table TEST3, and when, we can issue

SQL> select username, to_char(timestamp,'mm/dd/yy hh24:mi:ss') timestamp,
seg_type_name, seg_name, table_space, session# SID, serial# , operation
from v$logmnr_contents
where table_name = 'TEST3';

USERNAM TIMESTAMP SEG_TY SEG_NAME TABLE_SPAC SID SERIAL# OPERATION
——- —————– —— ———- ———- ———- ———- ——————————–
08/03/08 02:57:35 TABLE TEST3 153 3272 DDL
08/03/08 02:57:41 TABLE TEST3 SYSTEM 153 3272 INSERT
08/03/08 02:57:55 TABLE TEST3 SYSTEM 153 3272 UPDATE
08/03/08 03:01:35 TABLE TEST3 SYSTEM 153 3272 UPDATE

We can get the SQL_UNDO and SQL_REDO information with,

SQL> select sql_undo, sql_redo
from v$logmnr_contents
where table_name = 'TEST3' and OPERATION='UPDATE';

SQL_UNDO SQL_REDO
————————————————– ————————————————–
update "SYS"."TEST3" set "A" = '3' where "A" = '9' update "SYS"."TEST3" set "A" = '9' where "A" = '3'
and ROWID = 'AAAOKVAABAAAP8qAAA'; and ROWID = 'AAAOKVAABAAAP8qAAA';

update "SYS"."TEST3" set "A" = '9' where "A" = '10 update "SYS"."TEST3" set "A" = '10' where "A" = '9
' and ROWID = 'AAAOKVAABAAAP8qAAA'; ' and ROWID = 'AAAOKVAABAAAP8qAAA';

8) End the LogMiner session.

Use the DBMS_LOGMNR.END_LOGMNR procedure.
SQL> BEGIN
DBMS_LOGMNR.END_LOGMNR;
END;
/

PL/SQL procedure successfully completed.

How to install XDB for 11g


Applies to:
Oracle Server – Enterprise Edition – Version: 11.1.0.6 and later   [Release: 11.1 and later ]
Oracle Server – Personal Edition – Version: 11.1.0.6 and later    [Release: 11.1 and later]
Information in this document applies to any platform.
Checked for relevance on 12-Nov-2010
Purpose
This script will provide output to show which features of XML Database (XDB) are currently being utilized in the database.

 

There are times when this script may not run because XDB repository items are invalid or corrupt.
If you encounter exceptions running this script, you can try the script from note 790812.1 instead.
Even though it is a 10g note, it will still give Support a basic idea of which items are being
used, so that they can offer repair/recovery recommendations.

Software Requirements/Prerequisites
Use SQL*Plus.  This script is for 11g.

Configuring the Script
Run the script as the SYS user.

Running the Script
1.  Log in to the DB using SQL*Plus

2.  Connect as the SYS user

3.  Copy the script below into a text document and save it as xdbusagecheck.sql

4.  Run the xdbusagecheck.sql script

5.  Review the output and provide to support as necessary. 
     (see sample output below)
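A minimal sketch of the invocation (the file name matches what was saved in step 3):

sqlplus / as sysdba
SQL> @xdbusagecheck.sql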

Caution
This script is provided for educational purposes only and not supported by Oracle Support Services.
It has been tested internally, however, and works as documented.
We do not guarantee that it will work for you,
so be sure to test it in your environment before relying on it.
Proofread this script before using it! Due to the differences in the way text editors,
e-mail packages and operating systems handle text formatting (spaces, tabs and carriage returns),
this script may not be in an executable state when you first receive it.
Check over the script to ensure that errors of this type are corrected.

-- begin script
set pagesize 0
set serveroutput on
-- Begin XDB health and usage check
declare 
-- define cursors
-- check for version
cursor c_ver is select version from v$instance; 
-- check for invalids owned by XDB
cursor c_inval is select * from dba_objects where status='INVALID' and OWNER in ('SYS','XDB'); 
-- check status of other database features
cursor c_feat is select comp_name,status,version from dba_registry; 
-- check for xml type tables
cursor c_xml_tabs is select owner,storage_type,count(*) "TOTAL" from dba_xml_tables group by owner,storage_type; 
-- check for xml type columns
cursor c_xml_tab_cols is select owner,storage_type,count(*) "TOTAL" from dba_xml_tab_cols group by owner,storage_type; 
-- check for xml type views
cursor c_xml_vw is select owner,count(*) "TOTAL" from dba_xml_views group by owner; 
-- check for xml type indexes
cursor c_xml_idx is select index_owner,type,count(*) "TOTAL" from dba_xml_indexes group by index_owner,type; 
-- check for items built with XML APIs
cursor c_api is select owner,name,type from dba_dependencies
where referenced_name in (select object_name from dba_objects
where object_name like 'DBMS_XML%' or object_name like 'DBMS_XSL%') and TYPE !='SYNONYM' and owner !='SYS'; 
-- check for registered schemas
cursor c_xml_schemas is select owner,count(*) "TOTAL" from dba_xml_schemas group by owner; 
-- check for user-defined resources in the repository
cursor c_res is select distinct (a.username) "USER",count (r.xmldata) "TOTAL"
from dba_users a, xdb.xdb$resource r
where sys_op_rawtonum (extractvalue (value(r),'/Resource/OwnerID/text()')) =a.USER_ID group by a.username; 
-- check xdbconfig.xml values
cursor c_config is select value(x).GETROOTELEMENT() NODENAME, extractValue(value(x),'/*') NODEVALUE
from table(xmlsequence(extract(xdburitype('/xdbconfig.xml').getXML(),'//*[text()]'))) x; 
-- check for network ACLs
cursor c_net_acls is select * from dba_network_acls; 
-- define variables for fetching data from cursors
v_ver c_ver%ROWTYPE; 
v_inval c_inval%ROWTYPE; 
v_feat c_feat%ROWTYPE; 
v_xml_tabs c_xml_tabs%ROWTYPE; 
v_xml_tab_cols c_xml_tab_cols%ROWTYPE; 
v_xml_vw c_xml_vw%rowtype; 
v_xml_idx c_xml_idx%rowtype; 
v_api c_api%rowtype; 
v_c_net_acls c_net_acls%rowtype; 
v_xml_schemas c_xml_schemas%rowtype; 
v_res c_res%ROWTYPE; 
v_config c_config%rowtype; 
-- static variables
v_errcode NUMBER := 0; 
v_errmsg varchar2(50) := ' '; 
l_dad_names DBMS_EPG.varchar2_table; 
-- stylesheet for xdbconfig.xml reading
v_style clob :=''; 
begin
open c_ver;
fetch c_ver into v_ver; 
-- check minimum XDB requirements
if dbms_registry.version('XDB') in ('9.2.0.1.0','9.2.0.2.0') then 
DBMS_OUTPUT.PUT_LINE('!!!!!!!!!!!!! UNSUPPORTED VERSION !!!!!!!!!!!!!'); 
DBMS_OUTPUT.PUT_LINE('Minimum version is 9.2.0.3.0. Actual version is: '||dbms_registry.version('XDB')); 
end if; 
if v_ver.version like '11.%' then DBMS_OUTPUT.PUT_LINE(' Doing '||v_ver.version||' checks'); 
-- print XDB status
DBMS_OUTPUT.PUT_LINE('############# Status/Version #############'); 
DBMS_OUTPUT.PUT_LINE('XDB Status is: '||dbms_registry.status('XDB')||' at version '||dbms_registry.version('XDB')); 
end if; 
if v_ver.version != dbms_registry.version('XDB') then 
DBMS_OUTPUT.PUT_LINE('Database is at version '||v_ver.version||' XDB is at version '||dbms_registry.version('XDB')); 
end if; 
-- check status: if invalid, gather the invalid objects list; then check for usage
if dbms_registry.status('XDB') != 'VALID' then 
DBMS_OUTPUT.PUT_LINE('############# Invalid Objects #############'); 
open c_inval; 
loop 
fetch c_inval into v_inval; 
DBMS_OUTPUT.PUT_LINE('Type: '||v_inval.object_type||' '||v_inval.owner||'.'||v_inval.object_name); 
exit when c_inval%NOTFOUND; 
end loop; 
close c_inval; 
end if; 
-- check status of other database features
DBMS_OUTPUT.PUT_LINE('############# OTHER DATABASE FEATURES #############'); 
open c_feat; 
loop 
fetch c_feat into v_feat; 
exit when c_feat%NOTFOUND; 
if c_feat%rowcount >0 then 
DBMS_OUTPUT.PUT_LINE(v_feat.comp_name||' is '||v_feat.status||' at version '||v_feat.version); 
else DBMS_OUTPUT.PUT_LINE('No Data Found'); 
end if; 
end loop; 
close c_feat; 
-- check XDBCONFIG.XML parameters
DBMS_OUTPUT.PUT_LINE('############# XDBCONFIG INFORMATION #############'); 
open c_config; 
loop 
fetch c_config into v_config; 
exit when c_config%NOTFOUND; 
if c_config%rowcount >0 then 
DBMS_OUTPUT.PUT_LINE(v_config.NODENAME||'= = = '||v_config.NODEVALUE); 
else DBMS_OUTPUT.PUT_LINE('No Data Found'); 
end if; 
end loop; 
close c_config; 
-- check for XMLType tables and columns and whether they are schema-based, CLOB or binary
DBMS_OUTPUT.PUT_LINE('############# XMLTYPE Tables #############'); 
open c_xml_tabs; 
loop 
fetch c_xml_tabs into v_xml_tabs; 
exit when c_xml_tabs%NOTFOUND; 
DBMS_OUTPUT.PUT_LINE(v_xml_tabs.owner||' has '||v_xml_tabs.TOTAL||' XMLTYPE TABLES stored as '||v_xml_tabs.storage_type); 
end loop; 
close c_xml_tabs; 
DBMS_OUTPUT.PUT_LINE('############# XMLTYPE Columns #############'); 
open c_xml_tab_cols; 
loop 
fetch c_xml_tab_cols into v_xml_tab_cols; 
exit when c_xml_tab_cols%NOTFOUND; 
if c_xml_tab_cols%rowcount > 0 then 
DBMS_OUTPUT.PUT_LINE(v_xml_tab_cols.owner||' has '||v_xml_tab_cols.TOTAL||' XMLTYPE Columns stored as ' ||v_xml_tab_cols.storage_type); 
else DBMS_OUTPUT.PUT_LINE('No Data Found'); 
end if; 
end loop; 
close c_xml_tab_cols; 
DBMS_OUTPUT.PUT_LINE('############# XMLTYPE Views #############'); 
open c_xml_vw; 
loop 
fetch c_xml_vw into v_xml_vw; 
exit when c_xml_vw%NOTFOUND; 
if c_xml_vw%rowcount > 0 then 
DBMS_OUTPUT.PUT_LINE(v_xml_vw.owner||' has '||v_xml_vw.TOTAL||' XMLTYPE Views'); 
else DBMS_OUTPUT.PUT_LINE('No Data Found'); 
end if; 
end loop; 
close c_xml_vw; 
DBMS_OUTPUT.PUT_LINE('############# XMLTYPE INDEXES #############'); 
open c_xml_idx; 
loop 
fetch c_xml_idx into v_xml_idx; 
exit when c_xml_idx%NOTFOUND; 
if c_xml_idx%rowcount > 0 then 
DBMS_OUTPUT.PUT_LINE(v_xml_idx.index_owner||' has '||v_xml_idx.TOTAL||' XMLTYPE Indexes of type '||v_xml_idx.type); 
else DBMS_OUTPUT.PUT_LINE('No Data Found'); 
end if; 
end loop; 
close c_xml_idx; 
DBMS_OUTPUT.PUT_LINE('############# Items built with XML API''s #############'); 
open c_api; 
loop 
fetch c_api into v_api; 
exit when c_api%NOTFOUND;
if c_api%rowcount > 0 then 
DBMS_OUTPUT.PUT_LINE(v_api.type||' '||v_api.owner||'.'||v_api.name); 
else DBMS_OUTPUT.PUT_LINE('No Data Found'); 
end if; 
end loop; 
close c_api; 
DBMS_OUTPUT.PUT_LINE('############# XML SCHEMAS #############'); 
open c_xml_schemas; 
loop 
fetch c_xml_schemas into v_xml_schemas; 
exit when c_xml_schemas%NOTFOUND; 
if c_xml_schemas%rowcount >0 then 
DBMS_OUTPUT.PUT_LINE(v_xml_schemas.owner||' has '||v_xml_schemas.TOTAL||' registered.'); 
else DBMS_OUTPUT.PUT_LINE('No Data Found'); 
end if; 
end loop; 
close c_xml_schemas; 
-- check for repository resources
DBMS_OUTPUT.PUT_LINE('############# Repository Resources #############'); 
open c_res; 
loop 
fetch c_res into v_res; 
exit when c_res%NOTFOUND; 
if c_res%rowcount >0 then 
DBMS_OUTPUT.PUT_LINE(v_res.USER||' has '||v_res.TOTAL||' resources.'); 
else DBMS_OUTPUT.PUT_LINE('No Data Found'); 
end if; 
end loop; 
close c_res; 
-- check network ACLs
DBMS_OUTPUT.PUT_LINE('############# Network ACLs Configured #############'); 
open c_net_acls; 
loop 
fetch c_net_acls into v_c_net_acls; 
if c_net_acls%rowcount >0 then 
DBMS_OUTPUT.PUT_LINE(v_c_net_acls.host||' has network acls configured for ports '||v_c_net_acls.lower_port||' through '|| v_c_net_acls.upper_port); 
else DBMS_OUTPUT.PUT_LINE('No Data Found'); 
end if; 
exit when c_net_acls%NOTFOUND; 
end loop; 
close c_net_acls; 
-- check DAD configuration to see if DBMS_EPG is being used
DBMS_OUTPUT.put_line('############# DBMS_EPG DAD USAGE #############'); 
DBMS_EPG.GET_DAD_LIST (l_dad_names); 
FOR i IN 1 .. l_dad_names.count LOOP 
DBMS_OUTPUT.put_line(l_dad_names(i)); 
END LOOP; 
close c_ver; 
EXCEPTION 
WHEN no_data_found THEN 
DBMS_OUTPUT.PUT_LINE('No Data Found'); 
WHEN others THEN 
v_errcode := sqlcode; 
v_errmsg := SUBSTR(sqlerrm, 1, 50); 
DBMS_OUTPUT.PUT_LINE('ERROR: '||v_errcode||': ' || v_errmsg); 
end; 
/
-- End XDB health and usage check