Today we hit an issue that stalled our database: it generated an unexpected number of archive logs, which consumed the free space on the server. We asked the team whether any long-running or abnormal process was active, and they confirmed nothing unusual had happened. We found the root cause by tracing the archives with Oracle LogMiner: a large volume of inserts into a custom audit table.
First, identify the names and locations of your archived logs:
select name,FIRST_TIME
from v$archived_log
where trunc(FIRST_TIME) = trunc(to_date('22-02-2013','dd-mm-yyyy'));
NAME                                  FIRST_TIME
------------------------------------- ------------------------------
/OraData/Archive/arch_1_226111.arc 08/03/2007 19:06:19
/OraData/Archive/arch_1_226112.arc 08/03/2007 19:06:20
/OraData/Archive/arch_1_226113.arc 08/03/2007 19:06:22
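If you also want to quantify how fast the archives were being generated, a query along these lines (a sketch against v$archived_log, using its COMPLETION_TIME, BLOCKS and BLOCK_SIZE columns) breaks the volume down per hour:

select trunc(COMPLETION_TIME,'HH24') hour,
       count(*) logs,
       round(sum(BLOCKS*BLOCK_SIZE)/1024/1024) mb
from v$archived_log
where trunc(COMPLETION_TIME) = trunc(to_date('22-02-2013','dd-mm-yyyy'))
group by trunc(COMPLETION_TIME,'HH24')
order by 1;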
Then let LogMiner use these logs:
SQL> exec DBMS_LOGMNR.ADD_LOGFILE('/OraData/Archive/arch_1_226111.arc');
PL/SQL procedure successfully completed.
SQL> exec DBMS_LOGMNR.ADD_LOGFILE('/OraData/Archive/arch_1_226112.arc');
PL/SQL procedure successfully completed.
SQL> exec DBMS_LOGMNR.ADD_LOGFILE('/OraData/Archive/arch_1_226113.arc');
PL/SQL procedure successfully completed.
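Adding the files one by one works, but when many archives need to be mined, a small PL/SQL block (a sketch; it assumes the logs from the example query above are still visible in v$archived_log) can register them in a loop:

begin
  for r in (select name
            from v$archived_log
            where trunc(FIRST_TIME) = trunc(to_date('22-02-2013','dd-mm-yyyy')))
  loop
    -- ADDFILE appends each archive to the current LogMiner file list
    DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => r.name,
                            OPTIONS     => DBMS_LOGMNR.ADDFILE);
  end loop;
end;
/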
With the query below we can check whether LogMiner can see the files:
SELECT * FROM V$LOGMNR_LOGS;
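A more targeted check (a sketch using the FILENAME, LOW_TIME and HIGH_TIME columns of V$LOGMNR_LOGS) shows which files were registered and the time range each one covers:

select FILENAME, LOW_TIME, HIGH_TIME
from V$LOGMNR_LOGS
order by LOW_TIME;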
Then start the mining session:
SQL> exec DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
PL/SQL procedure successfully completed.
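If you only care about a specific window, START_LOGMNR also accepts STARTTIME and ENDTIME parameters; a sketch restricting the session to a two-hour window would look like this (the timestamps are placeholders):

begin
  DBMS_LOGMNR.START_LOGMNR(
    STARTTIME => to_date('22-02-2013 09:00:00','dd-mm-yyyy hh24:mi:ss'),
    ENDTIME   => to_date('22-02-2013 11:00:00','dd-mm-yyyy hh24:mi:ss'),
    OPTIONS   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
end;
/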
We can examine the mined SQL statements with either of the following queries:
select seg_owner, seg_name, seg_type_name, operation,
       min(TIMESTAMP) mintime, max(TIMESTAMP) maxtime, count(*) vcount
from v$logmnr_contents
group by seg_owner, seg_name, seg_type_name, operation
order by count(*) desc;
SELECT timestamp, scn, start_scn, operation, seg_owner, seg_name, sql_redo
FROM V$LOGMNR_CONTENTS
ORDER BY timestamp, scn;
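The first query gives an aggregate picture per segment and operation, which is how the inserts into the audit table stood out. Once the suspect table is known, a filtered query (a sketch, with the owner and table name as placeholders) pulls the actual redo statements:

select timestamp, username, sql_redo
from v$logmnr_contents
where seg_owner = 'APP_OWNER'          -- placeholder schema
  and seg_name  = 'CUSTOM_AUDIT_TABLE' -- placeholder table name
  and operation = 'INSERT'
order by timestamp;

When finished, release the LogMiner session:

SQL> exec DBMS_LOGMNR.END_LOGMNR;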