Tuesday, 17 May 2011

Oracle performance tuning using tkprof


tkprof is one of the most useful utilities available to DBAs for diagnosing performance issues.  It essentially formats a trace file into a more readable format for performance analysis.  The DBA can then identify and resolve performance issues such as poor SQL, indexing, and wait events. 
tkprof has historically been difficult to use for many reasons.  First, the entire process of enabling tracing, finding trace files, and executing the utility against them is a burdensome task.  Once the DBA finally has the trace file output, the typical response is “Now what do I do?”  Second, even though tkprof formats the data, it lacks any additional insight needed to remedy the problems revealed.  In fact, problems are not even highlighted, thereby putting more work on the DBA to analyze the output, assess the problems, and determine what to do. 
The DBA will use tkprof and session tracing when the database or a particular session is having performance problems.  tkprof will generally be used infrequently, when researching a very particular performance issue.  A user may complain that the response time for a session is abysmal compared to the prior week.  Session tracing and tkprof can be used to see exactly what is happening on the database, enabling the DBA to take corrective action.
The utility can also be used to view SQL that is being executed for an application.  In some situations, this will be the only mechanism a DBA will have to view SQL.  These situations include the execution of encrypted PL/SQL code on the database or submission of SQL statements from third party applications.
So what should DBAs be looking for?  Here’s a small checklist of items to watch for in tkprof formatted files: 
·     Compare the number of parses to number of executions.  A well-tuned system will have one parse per n executions of a statement and will eliminate the re-parsing of the same statement.  
·     Search for SQL statements that do not use bind variables (:variable).  These statements should be modified to use bind variables; an example follows this list.
·     Identify those statements that perform full table scans, multiple disk reads, and high CPU consumption.  These performance benchmarks are defined by the DBA and need to be tailored to each database.  What may be considered a high number of disk reads for an OLTP application may be trivial for a data warehouse implementation.
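To illustrate the bind variable point above, here is a minimal sketch using the EMPLOYEE table that appears later in this chapter (the bind variable name is illustrative):

-- Literals: each distinct value is a new statement that must be hard parsed
select * from employee where emp_id = 87933;
select * from employee where emp_id = 87934;

-- Bind variable: one shared cursor serves every execution
variable v_emp_id number
exec :v_emp_id := 87933
select * from employee where emp_id = :v_emp_id;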
The tkprof process will be explained in six easy steps.
Step 1: Check the Environment
Before tracing can be enabled, the environment must first be configured by performing the following steps:

·     Enable Timed Statistics – This parameter enables the collection of certain vital statistics such as CPU execution time, wait events, and elapsed times.  The resulting trace output is more meaningful with these statistics.  The command to enable timed statistics is:
ALTER SYSTEM SET TIMED_STATISTICS = TRUE;   
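Timed statistics can also be enabled for just one session, avoiding the system-wide setting:

ALTER SESSION SET TIMED_STATISTICS = TRUE;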

·     Check the User Dump Destination Directory – The trace files generated by Oracle can be numerous and large.  These files are placed by Oracle in user_dump_dest directory as specified in the init.ora.  The user dump destination can also be specified for a single session using the alter session command.  Make sure that enough space exists on the device to support the number of trace files that you expect to generate.

SQL> select value
         from v$parameter
         where name = 'user_dump_dest';

VALUE
---------------------------------
C:\oracle9i\admin\ORCL92\udump

Once the directory name is obtained, the corresponding space command (OS dependent) will report the amount of available space.  Delete unwanted trace files before starting a new trace to free up the disk space. 
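For example, on Unix the df command reports the free space, and on Windows a dir against the udump directory does the same (the Unix path shown is illustrative):

$ df -h /oracle9i/admin/ORCL92/udump
C:\> dir C:\oracle9i\admin\ORCL92\udump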
Step 2: Turn Tracing On
The next step in the process is to enable tracing.  By default, tracing is disabled due to the burden (5-10%) it places on the database.  Tracing can be defined at the session level:

ALTER SESSION SET SQL_TRACE = TRUE;

DBMS_SESSION.SET_SQL_TRACE(TRUE);

A DBA may enable tracing for another user’s session by:

DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid,serial#,true);

where the sid (Session ID) and serial# can be obtained from the v$session view.  This package is owned by the SYS user, and therefore the executor must be SYS or be granted the EXECUTE privilege by the SYS user. 
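The sid and serial# for a given user can be found with a simple query such as:

select sid, serial#, username, status
from v$session
where username = 'SCOTT';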

Once tracing is enabled, Oracle generates and stores the statistics in the trace file.    The trace file name is version specific.   Table 5.1 below contains the version naming conventions for foreground processes.

Version   Naming Convention      Example
-------   --------------------   ----------------------
7.3.4     snnn_pid.trc           s000_4714.trc
8.0.5     ora_pid.trc            ora_2345.trc
8.1.7     ora_pid_instance.trc   ora_13737_asgard81.trc
9.x       instance_ora_pid.trc   asgard91_ora_15313.trc

Table 5.1 - Oracle Trace File Naming Conventions
Supplied with this information, the DBA can construct a query that will return the trace file for a given session or for all sessions attached to the database.  The query below (Users_Trace_Files.sql) will show the trace file name for each process.

< users_trace_files.sql 

column username format a10
column trace_file format a70
select b.username, c.value || '\' || lower(d.value) || '_ora_' ||
       to_char(a.spid, 'fm00000') || '.trc' "TRACE_FILE"
from v$process a, v$session b, v$parameter c, v$parameter d
where a.addr   = b.paddr
and c.name     = 'user_dump_dest'
and d.name     = 'db_name'
and b.username is not null;

USERNAME   TRACE_FILE
---------- --------------------------------------------------------
SYS        C:\oracle9i\admin\ORCL92\udump\ORCL92_ora_03164.trc
SCOTT      C:\oracle9i\admin\ORCL92\udump\ORCL92_ora_02264.trc
DAVE       C:\oracle9i\admin\ORCL92\udump\ORCL92_ora_03578.trc

Notice that the trace files are generated per session, not per named user.  If the SYS user had two connections to the database, the commands from each session would be written to separate trace files.

The query can be modified to return the file name for the currently connected session. The script below will return the file name for the current session.

select c.value || '\' || lower(d.value) || '_ora_' ||
       to_char(a.spid, 'fm00000') || '.trc' "TRACE FILE"
from v$process a, v$session b, v$parameter c, v$parameter d
where a.addr = b.paddr
and b.audsid = userenv('sessionid')
and c.name   = 'user_dump_dest'
and d.name   = 'db_name';

TRACE FILE
---------------------------------------------------------------
C:\oracle9i\admin\ORCL92\udump\ORCL92_ora_03164.trc

Both queries above generate the trace file names (with Oracle9i on Windows XP) that would exist if the session were to be traced.  However, there is no indication in any V$ view that a session is currently being traced.  The only way to really know if tracing is being performed is to inspect the file names and dates in user_dump_dest directory.  For this reason, a DBA should not trace a session indefinitely, as it will continue to consume both performance resources and file system resources.  If the user_dump_dest directory fills, the Oracle database will come to a screeching halt.

When the DBA determines that enough data has been gathered, the next step is to disable tracing.
Step 3: Turn Tracing Off
The same options that we use to enable tracing are used to disable it.  These include:

ALTER SESSION SET SQL_TRACE = FALSE;

DBMS_SESSION.SET_SQL_TRACE(FALSE);

To disable tracing for another user’s session use:

DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid,serial#,false);

This process is a perfect candidate for automation.  The code below (start_trace.sql) creates a stored procedure that automates all the steps discussed so far.  It can also serve as a wrapper for the standard methods of enabling tracing.  start_trace.sql accepts the sid and serial# for the session that needs tracing.  It also requires a time interval, in seconds, for the trace to run so that it doesn’t run perpetually and bog down the session.  When the time has elapsed, it will disable tracing for the session and display the relevant trace information: user, time, and trace file name. 



< start_trace.sql

create or replace procedure start_trace
 (
   v_sid     in number,
   v_serial# in number,
   seconds   in number)

---------------------------------------------
-- 2003 - Oracle Utilities
-- D. Moore
--
-- This procedure serves as a wrapper to 
-- session tracing.  It accepts 
-- a sid and serial#, along with the amount of
-- time in seconds that the trace should last.
-- The trace will be stopped when that time
-- period expires.  After tracing is turned
-- off, the name of the trace file will be
-- displayed.
---------------------------------------------

IS
   v_user           varchar2 (32); 
   stop_trace_cmd   varchar2 (200);
   duration         number;
   v_spid           number;
   dump_dest        varchar2 (200);
   db_name          varchar2 (32);
   v_version        varchar2 (32);
   v_compatible     varchar2 (32);
   file_name        varchar2 (32);
   no_session_found exception;

BEGIN
    begin
      select a.username, b.spid into v_user,v_spid 
         from v$session a, v$process b 
         where a.sid     = v_sid  and
               a.serial# = v_serial# and
               a.paddr   = b.addr;
      exception
      when NO_DATA_FOUND then
         raise no_session_found;
     
    end; 

    dbms_system.set_sql_trace_in_session(v_sid,v_serial#,true);
    dbms_output.put_line('Tracing Started for User: '
                 || v_user);
    dbms_output.put_line('Tracing Start Time: '
                 || TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS'));   
         

    ---------------------------------------------------
    -- Sleep for the amount of seconds specified as 
    -- seconds input parameter.  When complete, stop
    -- the tracing and display the resulting trace file
    -- name
    ---------------------------------------------------

    if seconds is null then
       duration := 60;
    else
       duration := seconds;
    end if;

    dbms_lock.sleep(duration);

    -- the time allotted has now expired. Disable
    -- tracing and output the trace file information
     
    dbms_system.set_sql_trace_in_session(v_sid,v_serial#,false);
    dbms_output.put_line ('Tracing Stop Time: '
                 || TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS'));  
   
    -- get all of the data needed to format the trace file name

    select value into dump_dest
       from v$parameter
       where name = 'user_dump_dest';

    select value into db_name
       from v$parameter
       where name = 'db_name'; 

    -- we need the version of the database in order to determine
    -- the naming scheme for the trace file

    dbms_utility.db_version(v_version, v_compatible);
      
    if substr(v_version,1,1) = '9' then
       file_name := db_name || '_ora_' || v_spid || '.trc';
    elsif substr(v_version,1,3) = '8.1' then
       file_name := 'ora_' || v_spid || '_' || db_name || '.trc';
    elsif substr(v_version,1,3) = '8.0' then
       file_name := 'ora_' || v_spid || '.trc';
    end if;

    dbms_output.put_line('Trace Directory: ' || dump_dest);
    dbms_output.put_line('Trace Filename: ' || file_name);

    exception
      when no_session_found then
         dbms_output.put_line('No session found for sid and serial# specified');

END start_trace;

The output from start_trace.sql is displayed below.  The time interval specified was 30 seconds, and the elapsed time of the trace can be seen in the timestamps below.

SQL> exec start_trace(17, 6157, 30);

Tracing Started for User: SCOTT
Tracing Start Time: 12-26-2002 14:55:12
Tracing Stop Time: 12-26-2002 14:55:42
Trace Directory: C:\oracle9i\admin\ORCL92\udump
Trace Filename: ORCL92_ora_5472.trc

The next step is to run tkprof against the trace file.  
Step 4: Locate Trace File and Execute tkprof
Locating the file is easy because the script above gives us the file name.  tkprof will format the raw trace file, although the file is somewhat readable without tkprof. 

Raw Trace File

PARSING IN CURSOR #1 len=44 dep=0 uid=59 oct=3 lid=59 tim=535446373886 hv=159129656 ad='12cbbe70'
 select * from employee where emp_id = 87933
END OF STMT
PARSE #1:c=0,e=37469,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=4,tim=535446373874
EXEC #1:c=0,e=71,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535446375834
FETCH #1:c=31250,e=42564,p=10,cr=416,cu=0,mis=0,r=1,dep=0,og=4,tim=535446418910
FETCH #1:c=0,e=3852,p=0,cr=57,cu=0,mis=0,r=0,dep=0,og=4,tim=535446424026
STAT #1 id=1 cnt=1 pid=0 pos=1 obj=30497 op='TABLE ACCESS FULL EMPLOYEE '
=====================
PARSING IN CURSOR #1 len=44 dep=0 uid=59 oct=3 lid=59 tim=535448474894 hv=159129656 ad='12cbbe70'
 select * from employee where emp_id = 87933
END OF STMT
PARSE #1:c=0,e=146,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535448474882
EXEC #1:c=0,e=76,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535448476767
FETCH #1:c=31250,e=30553,p=12,cr=416,cu=0,mis=0,r=1,dep=0,og=4,tim=535448507870
FETCH #1:c=15625,e=3832,p=0,cr=57,cu=0,mis=0,r=0,dep=0,og=4,tim=535448512927
STAT #1 id=1 cnt=1 pid=0 pos=1 obj=30497 op='TABLE ACCESS FULL EMPLOYEE '
=====================
PARSING IN CURSOR #1 len=44 dep=0 uid=59 oct=3 lid=59 tim=535449209407 hv=159129656 ad='12cbbe70'
 select * from employee where emp_id = 87933
END OF STMT
PARSE #1:c=0,e=111,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535449209395
EXEC #1:c=0,e=74,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535449211302
FETCH #1:c=31250,e=32623,p=8,cr=416,cu=0,mis=0,r=1,dep=0,og=4,tim=535449244513
FETCH #1:c=15625,e=3918,p=0,cr=57,cu=0,mis=0,r=0,dep=0,og=4,tim=535449249648
STAT #1 id=1 cnt=1 pid=0 pos=1 obj=30497 op='TABLE ACCESS FULL EMPLOYEE '
=====================
PARSING IN CURSOR #1 len=44 dep=0 uid=59 oct=3 lid=59 tim=535449801444 hv=159129656 ad='12cbbe70'
 select * from employee where emp_id = 87933
END OF STMT
PARSE #1:c=0,e=102,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535449801433
EXEC #1:c=0,e=74,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535449803310
FETCH #1:c=31250,e=31503,p=7,cr=416,cu=0,mis=0,r=1,dep=0,og=4,tim=535449835358
FETCH #1:c=15625,e=4039,p=0,cr=57,cu=0,mis=0,r=0,dep=0,og=4,tim=535449840721
STAT #1 id=1 cnt=1 pid=0 pos=1 obj=30497 op='TABLE ACCESS FULL EMPLOYEE '
=====================
PARSING IN CURSOR #1 len=44 dep=0 uid=59 oct=3 lid=59 tim=535450369301 hv=159129656 ad='12cbbe70'
 select * from employee where emp_id = 87933
END OF STMT
PARSE #1:c=0,e=101,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535450369290
EXEC #1:c=0,e=76,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535450371203
FETCH #1:c=15625,e=28362,p=5,cr=416,cu=0,mis=0,r=1,dep=0,og=4,tim=535450400245
FETCH #1:c=15625,e=4333,p=0,cr=57,cu=0,mis=0,r=0,dep=0,og=4,tim=535450405578
STAT #1 id=1 cnt=1 pid=0 pos=1 obj=30497 op='TABLE ACCESS FULL EMPLOYEE '
=====================

With minimal effort, a programmer could create a trace file parser and formatter similar to tkprof that provides the trace data in a format even more suitable for analysis.

The tkprof command can now be executed from the operating system prompt.

C:\oracle9i\admin\ORCL92\udump>tkprof ORCL92_ora_3064.trc output.txt insert=tkprof.sql record=Allsql.sql

tkprof: Release 9.2.0.1.0 - Production on Thu Dec 26 13:22:29 2002

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Based on the command above, tkprof will process the file ORCL92_ora_3064.trc and format the results in the file output.txt.  Two other files were also created (tkprof.sql, allsql.sql) that will be discussed later.
Step 5: Analyze tkprof Output
This is the most difficult step in the process.  Each tkprof output file contains a header, body, and summary section.  The header simply displays the trace file name, definitions, and sort options selected.  The body contains the performance metrics for SQL statements.  The summary section contains an aggregate of performance statistics for all SQL statements in the file. 

tkprof Output

tkprof: Release 9.2.0.1.0 - Production on Tue Dec 24 15:32:43 2002

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Trace file: ORCL92_ora_3064.trc
Sort options: default

********************************************************************************

count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************


select *
from
 employee where emp_id = 87933


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       10      0.00       0.03          0          0          0           0
Execute     10      0.00       0.00          0          0          0           0
Fetch       20      0.34       0.35         72       4730          0          10
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       40      0.34       0.39         72       4730          0          10


Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 59

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS FULL EMPLOYEE

********************************************************************************


The output displays a table of performance metrics after each unique SQL statement.  Each row in the table corresponds to one of the three steps required in SQL processing.

1.   Parse – The translation of the SQL into an execution plan.  This step includes syntax checks, permissions, and all object dependencies.
2.   Execute – The actual execution of the statement.
3.   Fetch – The number of rows returned for a SELECT statement.
The table columns include the following:

·     Count – The number of times a statement was parsed, executed, or fetched.
·     CPU – The total CPU time in seconds for all parse, execute, or fetch calls.
·     Elapsed – Total elapsed time in seconds for all parse, execute, or fetch calls.
·     Disk – The number of physical disk reads from the datafiles for all parse, execute, or fetch calls.
·     Query – The number of buffers retrieved for all parse, execute, or fetch calls.
·     Current – The number of buffers retrieved in current mode (INSERT, UPDATE, or DELETE statements).
Observe from the tkprof output above that the SQL statement performed a TABLE ACCESS FULL, meaning a full-table scan.  Full-table scans can degrade performance, especially when accessing a small subset of the data in a table.  In this case, the query is selecting one row, yet all 100,000 rows in the table are scanned.  This is a perfect situation to add an index on the EMP_ID column of the EMPLOYEE table:

SQL> CREATE INDEX emp_idx1 ON employee (emp_id);

Index created.

Let’s examine the performance of this query again, this time with the index enabled. 

select *
from
 employee where emp_id = 87933


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.03       0.05          1          1          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.03          3          4          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.03       0.09          4          5          0           1


Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 59

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY INDEX ROWID EMPLOYEE
      1   INDEX RANGE SCAN EMP_IDX1 (object id 30498)

********************************************************************************

The CPU time improved by a factor of roughly 11 (0.03 vs. 0.34 seconds) compared to the benchmark before the index was added.
Step 6: Load tkprof Results into Tables
Loading tkprof data into the database is optional, but it can be worthwhile for those DBAs that want historical data or the ability to access  data via SQL queries to generate reports.  The command used earlier specified insert=tkprof.sql which generated the following SQL in tkprof.sql: 

CREATE TABLE  tkprof_table
(
 date_of_insert                       DATE
,cursor_num                           NUMBER
,depth                                NUMBER
,user_id                              NUMBER
,parse_cnt                            NUMBER
,parse_cpu                            NUMBER
,parse_elap                           NUMBER
,parse_disk                           NUMBER
,parse_query                          NUMBER
,parse_current                        NUMBER
,parse_miss                           NUMBER
,exe_count                            NUMBER
,exe_cpu                              NUMBER
,exe_elap                             NUMBER
,exe_disk                             NUMBER
,exe_query                            NUMBER
,exe_current                          NUMBER
,exe_miss                             NUMBER
,exe_rows                             NUMBER
,fetch_count                          NUMBER
,fetch_cpu                            NUMBER
,fetch_elap                           NUMBER
,fetch_disk                           NUMBER
,fetch_query                          NUMBER
,fetch_current                        NUMBER
,fetch_rows                           NUMBER
,ticks                                NUMBER
,sql_statement                        LONG
);
INSERT INTO tkprof_table values
(
  SYSDATE, 1, 0, 59, 0, 0, 0, 0, 0, 0, 0
, 1, 0, 192, 0, 0, 0, 1, 0
, 0, 0, 0, 0, 0, 0, 0, 4294966155
, 'alter session set sql_trace=true
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 2, 1, 0, 1, 0, 1232, 0, 0, 0, 1
, 1, 0, 745, 0, 0, 0, 0, 0
, 1, 0, 115, 0, 3, 0, 1, 17866289
, 'select obj#,type#,ctime,mtime,stime,status,dataobj#,flags,oid$, spare1, spare
2 from obj$ where owner#=:1 and name=:2 and namespace=:3 and(remoteowner=:4 or r
emoteowner is null and :4 is null)and(linkname=:5 or linkname is null and :5 is
null)and(subname=:6 or subname is null and :6 is null)
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 3, 1, 0, 1, 0, 1400, 0, 0, 0, 1
, 1, 0, 658, 0, 0, 0, 0, 0
, 1, 0, 131, 0, 3, 0, 1, 5463
, 'select ts#,file#,block#,nvl(bobj#,0),nvl(tab#,0),intcols,nvl(clucols,0),audit
$,flags,pctfree$,pctused$,initrans,maxtrans,rowcnt,blkcnt,empcnt,avgspc,chncnt,a
vgrln,analyzetime, samplesize,cols,property,nvl(degree,1),nvl(instances,1),avgsp
c_flb,flbcnt,kernelcols,nvl(trigflag, 0),nvl(spare1,0),nvl(spare2,0),spare4,spar
e6 from tab$ where obj#=:1
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 4, 1, 0, 2, 0, 1110, 0, 0, 0, 1
, 2, 15625, 757, 0, 0, 0, 0, 0
, 2, 0, 221, 0, 6, 0, 2, 8966
, 'select type#,blocks,extents,minexts,maxexts,extsize,extpct,user#,iniexts,NVL(
lists,65535),NVL(groups,65535),cachehint,hwmincr, NVL(spare1,0) from seg$ where
ts#=:1 and file#=:2 and block#=:3
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 4, 1, 0, 1, 0, 1802, 0, 0, 0, 1
, 1, 0, 1089, 0, 0, 0, 0, 0
, 2, 0, 489, 0, 5, 0, 1, 23441
, 'select i.obj#,i.ts#,i.file#,i.block#,i.intcols,i.type#,i.flags, i.property,i.
pctfree$,i.initrans,i.maxtrans,i.blevel,i.leafcnt,i.distkey, i.lblkkey,i.dblkkey
,i.clufac,i.cols,i.analyzetime,i.samplesize,i.dataobj#, nvl(i.degree,1),nvl(i.in
stances,1),i.rowcnt,mod(i.pctthres$,256),i.indmethod#,i.trunccnt,nvl(c.unicols,0
),nvl(c.deferrable#+c.valid#,0), nvl(i.spare1,i.intcols),i.spare4,spare2,spare6,
 decode(i.pctthres$,null,null, mod(trunc(i.pctthres$/256),256)) from ind$ i, (se
lect enabled, min(cols) unicols, min(to_number(bitand(defer,1))) deferrable#, mi
n(to_number(bitand(defer,4))) valid# from cdef$ where obj#=:1 and enabled > 1 gr
oup by enabled) c where i.obj#=c.enabled(+) and i.bo#=:1
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 5, 1, 0, 1, 0, 910, 0, 0, 0, 1
, 1, 0, 573, 0, 0, 0, 0, 0
, 2, 0, 147, 0, 3, 0, 1, 5409
, 'select pos#,intcol#,col#,spare1,bo#,spare2 from icol$ where obj#=:1
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 6, 1, 0, 1, 15625, 1426, 0, 0, 0, 1
, 1, 0, 775, 0, 0, 0, 0, 0
, 6, 0, 1744, 0, 3, 0, 5, 10773
, 'select name,intcol#,segcol#,type#,length,nvl(precision#,0),decode(type#,2,nvl
(scale,-127/*MAXSB1MINAL*/),178,scale,179,scale,180,scale,181,scale,182,scale,18
3,scale,231,scale,0),null$,fixedstorage,nvl(deflength,0),default$,rowid,col#,pro
perty, nvl(charsetid,0),nvl(charsetform,0),spare1,spare2,nvl(spare3,0) from col$
 where obj#=:1 order by intcol#
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 8, 1, 0, 1, 0, 831, 0, 0, 0, 1
, 1, 0, 597, 0, 0, 0, 0, 0
, 1, 0, 59, 0, 1, 0, 0, 5736
, 'select con#,obj#,rcon#,enabled,nvl(defer,0) from cdef$ where robj#=:1
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 9, 1, 0, 1, 0, 973, 0, 0, 0, 1
, 1, 0, 650, 0, 0, 0, 0, 0
, 1, 0, 43, 0, 2, 0, 0, 5050
, 'select con#,type#,condlength,intcols,robj#,rcon#,match#,refact,nvl(enabled,0)
,rowid,cols,nvl(defer,0),mtime,nvl(spare1,0) from cdef$ where obj#=:1
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 1, 0, 59, 1, 31250, 58068, 1, 1, 0, 1
, 1, 0, 85, 0, 0, 0, 0, 0
, 2, 0, 37301, 3, 4, 0, 1, 39511
, ' select * from employee where emp_id = 87933
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 2, 1, 0, 2, 0, 1122, 0, 0, 0, 1
, 2, 0, 672, 0, 0, 0, 0, 0
, 2, 0, 178, 0, 6, 0, 2, 12416444
, 'select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.subname,o.datao
bj#,o.flags from obj$ o where o.obj#=:1
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 1, 0, 59, 1, 0, 353, 0, 0, 0, 1
, 1, 0, 148, 0, 0, 0, 0, 0
, 0, 0, 0, 0, 0, 0, 0, 1160
, 'alter session set sql_trace=false
');

This file contains the DDL to create the table as well as the data to load.  If the table already exists, the CREATE TABLE statement simply fails and the inserts proceed against the existing table.
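Once loaded, the statistics can be analyzed with ordinary SQL.  For example, a sketch that surfaces the statements with the most disk reads during fetch (all columns come from the CREATE TABLE above):

select sql_statement, fetch_disk, fetch_cpu, exe_count
from tkprof_table
order by fetch_disk desc;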
tkprof provides many useful command line options that provide additional functionality for the DBA.

·     print – Lists only the first n SQL statements in the output file.  If nothing is specified, all statements will be listed.  Use this option when the list needs to be limited to the “Top n” statements.  This is useful when combined with a sorting option to enable the top n statements by CPU, or disk reads, or parses, etc.
·     aggregate – When “Yes”, tkprof will combine the statistics from multiple user executions of the same SQL statement.  When “No”, the statistics will be listed each time the statement is executed.
·     insert – Creates a file that will load the statistics into a table in the database for further processing.  Choose this option if you want to perform any advanced analysis of the tkprof output.
·     sys – Enables or disables the inclusion of SQL statements executed by the SYS user, including recursive SQL statements.  The default is to enable. 
·     table – Used in the Explain Plan command (if specified) for Oracle to load data temporarily into an Oracle table.  The user must specify the schema and table name for the plan table.  If the table exists, all rows will be deleted; otherwise tkprof will create the table and use it.
·     record - creates a SQL script with the specified filename that contains all non-recursive SQL statements from the trace file.  For DBAs wanting to log the SQL statements in a separate file, this is the option to use.  In the example earlier, the contents of the Allsql.sql file include:
alter session set sql_trace=true ;
select * from employee where emp_id = 87933 ;
alter session set sql_trace=false ;

·     explain – Executes an Explain Plan for each statement in the trace file and displays the output.  Explain Plan is less useful when used in conjunction with tkprof than it is when used alone.  Explain Plan provides the predicted optimizer execution path without actually executing the statement.  tkprof shows you the actual execution path and statistics after the statement is executed.  In addition, running Explain Plan against SQL statements that were captured and saved is always problematic given dependencies and changes in the database environment.
·     sort – Sorts the SQL statements in the trace file by the criteria deemed most important by the DBA.  This option allows the DBA to view the SQL statements that consume the most resources at the top of the file, rather than searching the entire file contents for the poor performers.  The following are the data elements available for sorting:
·     prscnt – The number of times the SQL was parsed.
·     prscpu – The CPU time spent parsing.
·     prsela – The elapsed time spent parsing the SQL.
·     prsdsk – The number of physical reads required for the parse.
·     prsmis – The number of library cache misses during the parse.
·     prscu - The number of current block reads required for the parse.
·     execnt – The number of times the SQL statement was executed. 
·     execpu – The CPU time spent executing the SQL. 
·     exeela – The elapsed time spent executing the SQL.
·     exedsk – The number of physical reads during execution.
·     exeqry – The number of consistent block reads during execution.
·     execu – The number of current block reads during execution.
·     exerow – The number of rows processed during execution.
·     exemis – The number of library cache misses during execution.
·     fchcnt – The number of fetches performed.
·     fchcpu – The CPU time spent fetching rows.
·     fchela – The elapsed time spent fetching rows.
·     fchdsk – The number of physical disk reads during the fetch.
·     fchqry – The number of consistent block reads during the fetch.
·     fchcu – The number of current block reads during the fetch.
·     fchrow – The number of rows fetched for the query.
Many sort options exist; however, some are more useful than others.  Execnt, execpu, exedsk, and prscnt are the most useful sort parameters when formatting trace output with tkprof, because they are most indicative of common SQL performance issues and ensure the worst offenders bubble to the top.  In particular, sorting by execpu puts the SQL statement that consumed the most CPU first.  The prscnt parameter is important because it shows the SQL statements that are parsed most often, usually a result of not using bind variables.
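For example, to report only the ten statements that consumed the most CPU during execution:

tkprof ORCL92_ora_3064.trc output.txt sort=execpu print=10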
SQL Execution Statistics in 9i
The SQL tuning process prior to 9.2 involved executing SQL commands, then OS commands, and then SQL commands again.  This is a very time-consuming and burdensome process.  In 9.2, Oracle decided to retain the SQL metrics for each statement in the SGA (library cache) while the statement remains cached.  The DBA could then diagnose SQL issues at a SQL prompt and leave tkprof alone.  This is a vast improvement over prior versions.

Oracle9.2 contains the following views that enable the DBA to identify SQL issues directly from a SQL prompt.  These views should be used to periodically check SQL statistics and full-table scans, alerting the DBA to problem areas requiring corrective action.

·     v$sql_plan  - This view shows the same information as shown by Explain Plan except it is the actual execution plan and not the predicted one – just like tkprof and even better than Explain Plan.
·     v$sql_plan_statistics  - This view contains the execution statistics for each operation (step) in the v$sql_plan.  Queries should access this view and look for poor SQL operations including TABLE ACCESS FULL – full-table scans.
·     v$sql_plan_statistics_all  - This view combines data from v$sql_plan, v$sql_plan_statistics and v$sql_workarea. 

Neither v$sql_plan_statistics nor v$sql_plan_statistics_all is populated by default.  The initialization parameter statistics_level must be set to ALL.
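A minimal sketch: enable statistics collection for the session, then look for full-table scans in the cached plans (the object_owner filter is an assumption, added simply to skip dictionary objects):

ALTER SESSION SET STATISTICS_LEVEL = ALL;

select p.object_owner, p.object_name, s.executions, s.disk_reads
from v$sql_plan p, v$sql s
where p.address    = s.address
and   p.hash_value = s.hash_value
and   p.operation  = 'TABLE ACCESS'
and   p.options    = 'FULL'
and   p.object_owner not in ('SYS','SYSTEM');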
Enable tracing only on those sessions that are having problems.  Be selective to minimize the performance burden on the sessions and to retain more free space in the user dump destination directory.

Rename trace files after tracing is disabled.  The new file name should be something more meaningful that will be recognizable at a later date.   The name employee_index.trc is much more meaningful than ORCL92_ora_3172.trc. 
Delete trace files that are no longer needed to reduce clutter and free disk space.

Explain Plan is not as useful when used in conjunction with tkprof since the trace file contains the actual execution path of the SQL statement.  Use Explain Plan when anticipated execution statistics are desired without actually executing the statement.

When tracing a session, remember that nothing in v$session indicates that a session is being traced.  Therefore, trace with caution and remember to disable tracing after an adequate amount of trace data has been generated.

tkprof does not control the contents of a trace file; it simply formats them.  Oracle provides multiple ways to actually generate the trace file.  tkprof is valuable for detailed trace file analysis.  For those DBAs that prefer a simpler tracing mechanism with instant feedback, the autotrace utility should be used.

Friday, 13 May 2011

Five DBA best practices

#1: Multiple Oracle Homes
My favorite best practice is the one about multiple Oracle Homes. Here is how it goes. When applying a patch or a patchset, I recommend against applying it to the existing Oracle Home. Instead, I suggest creating a new Oracle Home and applying the patches there.
I create the first Oracle Home at /app/oracle/db_1, for instance. When a patch comes out, I install the whole Oracle software in a different home -- /app/oracle/db_2 -- and then apply the patch there. During the process of installation and patch application, the database is up and running, as it runs off the home /db_1. When the outage window comes, all I have to do is to shut down Oracle; change Oracle Home to db_2 and bring the database up. If there is a problem, I can reset the Oracle Home back to the old one.
So, here is the conventional approach:
  1. Shut down the database
  2. Apply patch to the Oracle Home
  3. Start the database
  4. In case of problems:
  5. Shut down the database
  6. Roll back the patch
  7. Start the database
Steps 2 and 6 could take as much as three hours depending on the amount of patching.

The database is down during these times.
In the new approach:
  1. Install new Oracle Home
  2. Apply the patch to the new Home
  3. Shut down the database
  4. Change Oracle Home to the new location
  5. Start the database
  6. In case of problems:
  7. Shut down the database
  8. Change Oracle Home to the old one
  9. Start the database
The database is down only during steps 4 and 8, which takes a couple of minutes at the most, not hours.
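On Unix, steps 3 through 5 amount to little more than repointing the environment; a minimal sketch, assuming the paths from the example above (in practice /etc/oratab and the listener would be updated as well):

$ sqlplus "/ as sysdba"
SQL> shutdown immediate
SQL> exit
$ export ORACLE_HOME=/app/oracle/db_2
$ export PATH=$ORACLE_HOME/bin:$PATH
$ sqlplus "/ as sysdba"
SQL> startup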
So, here are the advantages:
  1. The downtime is significantly reduced, to one 60th of the original time.
  2. The risk is reduced significantly, as there is no rolling back a patch; you just go to the older version.
  3. You can perform a "diff" on these two homes to see what changed. You can see the differences across multiple homes as well.
  4. You can take several databases running on the same server to the new Oracle Home one by one.
  5. You can see the various Oracle Homes and what patch level they are on using the inventory.
The only negative part is the space consumption -- you need space for two Oracle Homes. But considering a typical Oracle Home takes about 4 GB or less, this aspect of the suggestion is trivial.
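For advantage 5 above, the inventory can be listed with OPatch from each home; for example:

$ /app/oracle/db_2/OPatch/opatch lsinventory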
#2: Set audit trail to DB
Set the audit trail to DB by placing the parameter AUDIT_TRAIL = DB in the initialization parameter file during database creation. Setting this parameter does not start the auditing, because an explicit AUDIT command must be given on the object. But the parameter must be set to a value other than FALSE (the default) for the command to take effect. Because the parameter is not dynamic, the database must be bounced to change the value of AUDIT_TRAIL. To save the trouble and avoid an outage, always set the value to DB, even if you never intend to audit anything. It does not break anything, and you will always be ready to audit when the time comes.
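With the parameter in place, enabling an actual audit later is a single online command; for example (SCOTT.EMP is purely illustrative):

SQL> AUDIT SELECT, INSERT, UPDATE, DELETE ON scott.emp BY ACCESS;

The audit records then appear in DBA_AUDIT_TRAIL.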
#3: Don't use .log
Don't use .log as the extension of redo logs. Someone may run a script to remove all the log files assuming they are redundant and you will end up losing the online redo logs as well, forcing a database recovery. Instead, name them with extension "redo" or "rdo."
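For example, a sketch of adding a log file group with the suggested extension (path and size are illustrative):

SQL> ALTER DATABASE ADD LOGFILE GROUP 4
     ('/u01/oradata/PROD/redo_04a.rdo') SIZE 100M;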
#4: Preview RMAN Restore
Preview RMAN Restore to identify all the various backup pieces that will be used in the recovery process without doing an actual recovery. This eliminates any surprises from missing pieces during an actual recovery process.
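From the RMAN prompt this is a single command; the SUMMARY variant condenses the listing:

RMAN> RESTORE DATABASE PREVIEW;
RMAN> RESTORE DATABASE PREVIEW SUMMARY;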
#5: Create a new Oracle user for clients running on the same server as the DB
The Oracle Database server software also contains the client piece, which allows clients to connect to the database on the same server. But as a best practice, do not use the same user or the same software; use a new one. For instance, if "oracle" is the user that installs the Oracle software, create a new user called, say, "oraapp" and install the client-only software using that user. The user "oraapp" should not be part of the dba or the oinstall group, so this user can't log on to the database as sysdba. Create a new group called "appgrp" and assign the user oraapp to this group. All the application users on the box should also be part of the appgrp group. This way they can use the sqlplus, sqlldr and other executables on the server, but will not be able to connect as sysdba.
The common practice is to run the client software as the same user that owns the database software; but starting with 10.2, Oracle has changed the security policy to take away the global execution permissions from the Oracle Home. So the only options are to let app users be part of the dba group or to change the permissions on the Oracle Home -- both of which make the database extremely vulnerable.


Thursday, 12 May 2011

Get a list of OS running processes

On Unix:
ps -ef

On windows:
from the command line:
wmic /OUTPUT:C:\ProcessList.txt PROCESS get Caption,Commandline,Processid

Apps DBA tasks - Daily


eBusiness 11i/R12 – Daily Checks (for Apps. DBA) – Admin Node
As an Apps DBA, one should run the following checks on a daily basis in order to make sure that everything is working fine. This post relates to the ADMIN NODE.
Listener Status
$ lsnrctl status PROD
File System Space Utilization
$ df -h
Operating System – eBusiness DB and Concurrent Processes
$ ps -ef |grep pmon|grep -v grep
$ ps -ef |grep FNDLIBR|grep -v grep
eBusiness DB Instance
SQL> SELECT instance_name, host_name, TO_CHAR(startup_time,'DD-MON-YYYY HH:MI:SS AM') startedat FROM v$instance;
Invalid Objects
SQL> select owner,object_name,object_type from dba_objects where status <> 'VALID';
Inactive Users
SQL> select username,command,status,server,osuser,machine,terminal,program,module,action from v$session where status = 'INACTIVE';
Active Sessions
SQL> select inst_id,count(*) from gv$session group by inst_id;
Active Users Detail
SQL> select username,command,status,osuser,process,machine,terminal,program,module,action from v$session where status = 'ACTIVE';
Data Files Status:
SQL> select name,status from v$datafile;
Log Files Status
SQL> select * from v$log;
Archiving Errors
SQL> select error from v$archive_dest;
Tablespace Free Space
SQL> select tablespace_name,round (sum(bytes)/1024/1024) Free_Space_MB from dba_free_space group by (TABLESPACE_NAME);
Invalid Indexes
SQL> select index_name,table_name from dba_indexes where status = 'INVALID' and owner = 'APPS';
eBusiness 11i/R12 – Applications Listener Status
$ sh $COMMON_TOP/admin/scripts/PROD_oracle2/adalnctl.sh status
Checking on the status of Report Server
$ sh $ORACLE_HOME/admin/scripts/PROD_oracle2/adrepctl.sh status

Oracle EBS/Apps: Pinning Objects to Improve Apps Performance

Keeping database packages in the Oracle database’s System Global Area (SGA) shared pool is a common practice. This is also called pinning. One common misconception is that pinning is not useful or needed in later Oracle database versions such as 10g and above, since those releases offer automatic shared memory management. This isn’t completely accurate for all cases, for reasons which I’ll discuss below.
An Introduction to the System Global Area (SGA)
The Oracle database’s System Global Area contains various pools of memory used to satisfy particular classes of memory allocation requests:
• Shared pool: used for caching shared cursors, stored procedures, dictionary caches, etc.
• Buffer cache: cache of disk data
• Large pool: large allocations
• Java pool: Java allocations and for caching java objects
• Log buffer: in-memory buffer for redo generation
• Streams Pool: new in 10g, for buffering inbound and outbound logical change records
• Fixed SGA: bootstrap section of the SGA
SGA memory is allocated in units of contiguous memory chunks called granules. The size of a granule depends on your operating system platform and the SGA’s total size. On most platforms, if the estimated SGA size is:
• Less than or equal to 1GB, the granule size is 4 MB
• Greater than 1GB, the granule size is 16 MB
A Primer on Space Allocations in the Shared Pool
One of the important components of the SGA is the shared pool. The shared pool was introduced as a feature of the Oracle Database in Version 7, primarily as a repository for shared SQL and PL/SQL. The shared pool has come a long way since its original release.
The shared pool must satisfy each request with contiguous memory. For example, if a request for 4 K of memory is made, the database cannot allocate separate 3 K and 1 K chunks. It must allocate a 4 K block of contiguous free memory to satisfy the request. If there is no free memory, it will scan the Least Recently Used list to free some memory. The heap manager will try to free as many objects as possible before giving up. If the shared pool has no space to satisfy the request for memory, an ORA-4031 error is thrown.
With the advent of automatic shared memory management, we need not configure the size of the shared pool via the shared_pool_size parameter. Automatic shared memory management requires one parameter to be set: sga_target. The Oracle database’s automatic memory allocation is superior to manual memory management. This prevents ORA-4031 errors in most cases.
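A quick way to see how much free memory the shared pool currently holds is a check against v$sgastat:

select pool, name, bytes
from v$sgastat
where pool = 'shared pool'
and name = 'free memory';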
When Large Objects Jump in the Shared Pool
Imagine a large package (or any object) has to be loaded into the shared pool. Large PL/SQL objects present particular challenges. The database has to search for free space for the object. If it cannot get enough contiguous space, it will free many small objects to satisfy the request. If several large objects need to be loaded, the database has to throw out many small objects in the shared pool.
Finding candidate objects and freeing memory is very costly. These tasks will impact CPU resources.
One approach to avoiding performance overhead and memory allocation errors is to keep large PL/SQL objects in the shared pool at startup time. This process is known as pinning. This loads the objects into the shared pool and ensures that the objects are never aged out of the shared pool. If the objects are never aged out, then that avoids problems with insufficient memory when trying to reload them.
What’s in Your Shared Pool Now?
Objects are ‘kept’ in the shared pool using the dbms_shared_pool package that is defined in the dbmspool.sql file.
For example:
execute dbms_shared_pool.keep('owner.object');
To view a list of all objects that are kept in the shared pool, one can query the v$db_object_cache:
select owner,name,type,sharable_mem from v$db_object_cache where kept = 'YES';
The SQL query above will list all of the objects that are ‘kept’ in the shared pool using dbms_shared_pool.keep.
Identifying Candidate Objects for Keeping in the Shared Pool
To identify candidates that should be kept in the shared pool, first run the following query:
select substr(owner,1,10)||'.'||substr(name,1,35) "ObjectName",
       type, sharable_mem, loads, executions, kept
from v$db_object_cache
where type in ('TRIGGER','PROCEDURE','PACKAGE BODY','PACKAGE')
and executions > 0
order by executions desc, loads desc, sharable_mem desc;
The query above lists the cached objects ordered by executions, loads, and sharable memory; the heavy hitters at the top are the keep candidates.
Next, query the x$ksmlru table, using:
select * from x$ksmlru;
The x$ksmlru table keeps track of the current shared pool objects and the corresponding number of objects flushed out of the shared pool to allocate space for the load. These objects are stored and flushed out based on the Least Recently Used (LRU) algorithm. Here’s what the table looks like:
describe x$ksmlru

Name        Null?    Type
----------  -------  ------------
ADDR                 RAW(4)
INDX                 NUMBER
KSMLRCOM             VARCHAR2(20)
KSMLRSIZ             NUMBER
KSMLRNUM             NUMBER
• KSMLRNUM shows the number of objects that were flushed to load the large object
• KSMLRSIZ shows the size of the object that was loaded (contiguous memory allocated)
Note: This is a fixed table: once you query the table, the database will automatically reset the table. Make sure that you spool the output to a file so you can capture it for analysis. Analyze the x$ksmlru output to determine if there are any large allocations that are flushing other objects. If this is the case, analyze the v$db_object_cache to identify the objects with high loads or executions. These should be kept in the shared pool.
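A sketch of such a check (the size threshold is an arbitrary assumption, and x$ tables can be queried only as SYS):

spool ksmlru.lst
select ksmlrcom, ksmlrsiz, ksmlrnum
from x$ksmlru
where ksmlrsiz > 5000;
spool off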
Keeping Objects in Oracle Applications Databases
All E-Business Suite DBAs should do some analysis to assess whether pinning can improve the performance of your Apps environment. Arriving at the objects to be pinned varies from setup to setup. Objects have to be pinned after each instance startup, and ideally immediately after the startup.
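Because keeps do not survive an instance bounce, one common approach is a database startup trigger; a minimal sketch, where the packages listed are placeholders for whatever your v$db_object_cache and x$ksmlru analysis identifies:

create or replace trigger pin_app_objects
after startup on database
begin
  -- placeholder list; substitute the objects identified by your analysis
  dbms_shared_pool.keep('SYS.STANDARD');
  dbms_shared_pool.keep('SYS.DBMS_STANDARD');
end;
/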
The $AD_TOP/sql/ADXGNPIN.sql script is provided to pin the packages in Oracle Applications. This script pins all Oracle Applications objects, which is generally not completely necessary or advisable. You should modify this script based on your analyses of v$db_object_cache and x$ksmlru to identify the right objects to be pinned for your environment.
For more information about this script (and the related ADXCKPIN.sql script, used for listing objects in the shared pool), see the appropriate manual for your Apps release.

Create a Read Only apps/ebs schema


Step 1:-
Create a new user, let's say APPSREAD.

Step 2:-
Surely, the schema created in Step 1 above will be given read-only grants to objects in apps. There will be cases where the grant command might fail. To monitor such failures, create a table as below:
conn apps/xx;
-- For APPSREAD. This table will capture the exceptions during Grants
PROMPT create table XX_GRANTS_FAIL_APPSREAD
create table XX_GRANTS_FAIL_APPSREAD (
        object_name VARCHAR2(100)
       ,sqlerrm varchar2(2000)
       ,creation_date DATE
        );
grant all on XX_GRANTS_FAIL_APPSREAD to apps with grant option;
grant select on XX_GRANTS_FAIL_APPSREAD to APPSREAD ;

Step 3
In this step we grant select on all the existing views and synonyms in apps schema to APPSREAD.
conn apps/xx;
PROMPT This can take up to 15-30 minutes
PROMPT Granting SELECT on All synonyms and views to APPSREAD
DECLARE
-- One-off script to execute grants to APPSREAD
  v_error VARCHAR2(2000);
BEGIN
  FOR p_rec IN (SELECT *
                FROM   all_objects
                FROM   all_objects
                WHERE  owner = 'APPS'
                AND    object_type IN ('SYNONYM', 'VIEW')
                AND    object_name NOT LIKE '%_S')
  LOOP
    BEGIN
      EXECUTE IMMEDIATE 'grant select on ' || p_rec.object_name ||
                        ' to APPSREAD';
    EXCEPTION
      WHEN OTHERS THEN
        v_error := substr(SQLERRM, 1, 2000);
        INSERT INTO XX_GRANTS_FAIL_APPSREAD
          (object_name
          ,SQLERRM
          ,creation_date
          )
        VALUES
          (p_rec.object_name
          ,v_error
          ,sysdate
          );
    END;
  END LOOP;
  COMMIT;
END;
/
Step 4
Write an after-logon trigger on the APPSREAD schema. The main purpose of this trigger is to alter the session so that CURRENT_SCHEMA is set to apps for the session (whilst retaining APPSREAD restrictions). In doing so, your logon will retain the permissions of the APPSREAD schema (read only), yet it will be able to reference the apps objects with exactly the same names as does a direct connection to the apps schema.
conn apps/xx;
PROMPT CREATE OR REPLACE TRIGGER xx_APPSREAD_logon_trg
CREATE OR REPLACE TRIGGER xx_APPSREAD_logon_trg
--16Jun2006 By Anil Passi
--Trigger to toggle schema to apps, but yet retaining APPSREAD restrictions
--Also sets the org_id
  AFTER logon ON APPSREAD.SCHEMA
DECLARE
BEGIN
  EXECUTE IMMEDIATE
          'declare begin ' ||
          'dbms_application_info.set_client_info ( 101 ); end;';
  EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = APPS';
END;
/
Step 5
Create a trigger on the apps schema to issue select-only grants for all new views and synonyms. Please note that I am excluding grants for sequences. SELECT grants for views and synonyms will be given to APPSREAD as and when such objects are created in APPS. Please note that all the APPS objects (views and synonyms) that existed in the APPS schema prior to the implementation of this design would have been granted read-only access to APPSREAD in Step 3.
conn apps/xx ;
PROMPT CREATE OR REPLACE TRIGGER xx_grant_APPSREAD
CREATE OR REPLACE TRIGGER xx_grant_APPSREAD
--16Jun2006 By Anil Passi

  AFTER CREATE ON APPS.SCHEMA
DECLARE
  l_str VARCHAR2(255);
  l_job NUMBER;
BEGIN
  IF (ora_dict_obj_type IN ('SYNONYM', 'VIEW'))
     AND (ora_dict_obj_name NOT LIKE '%_S')
  THEN
    l_str := 'execute immediate "grant select on ' || ora_dict_obj_name ||
             ' to APPSREAD";';
    dbms_job.submit(l_job, REPLACE(l_str, '"', ''''));
  END IF;
END;
/

Monday, 9 May 2011

How to check Oracle Apps/EBS version

Run the following SQL as the apps user:
select RELEASE_NAME from fnd_product_groups;
You should see output like
RELEASE_NAME
-----------------------
11.5.10.2
