Thursday, 25 April 2013

Oracle DBA interview Q&A in an Indian style

1- How can I get diagnostic information from a hung instance before I restart the database?
/u01/app/oracle/bin> sqlplus -prelim / as sysdba
SQL*Plus: Release - Production  

> oradebug setmypid
Statement processed.  

> oradebug hanganalyze 12
Hang Analysis in /u01/app/oracle/admin/mysid/udump/mysid_ora_1234.trc
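If the hang analysis alone is not enough, a systemstate dump can be taken from the same preliminary connection (the levels shown here are common choices, not the only valid ones):

SQL> oradebug setmypid
SQL> oradebug hanganalyze 3
SQL> oradebug dump systemstate 266
SQL> oradebug tracefile_name

The last command prints the name of the trace file holding the dump.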
2- How to enable tracing for the current or a specific session?
In 10g and later, using DBMS_MONITOR:
For the current session: exec dbms_monitor.session_trace_enable(NULL, NULL);
For a specific session: exec dbms_monitor.session_trace_enable(144, NULL); -- 144 is the SID; the second argument is the SERIAL#
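A fuller sketch, assuming a hypothetical SCOTT session with SID 144 and SERIAL# 56:

SQL> select sid, serial# from v$session where username = 'SCOTT';
SQL> exec dbms_monitor.session_trace_enable(144, 56, waits => TRUE, binds => FALSE);
-- ... let the session run its workload, then:
SQL> exec dbms_monitor.session_trace_disable(144, 56);

The trace file is written under the directory pointed to by user_dump_dest (or the ADR trace directory in 11g).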

3- How to copy schema into same database?
Using the impdp utility.
Let's say we are copying the SCOTT schema to SCOTT_NEW.

SQL> create directory TEST_DIR as '/xxxx/exp-dir/';
SQL> create database link SCOTT_DB_LINK connect to SYSTEM identified by xxxxxxx using 'DB-A';
Verify the database link
SQL> select * from dual@SCOTT_DB_LINK;
Use the following syntax to copy schema,
impdp system/xxxxxx schemas=SCOTT directory=TEST_DIR network_link=SCOTT_DB_LINK  remap_schema=SCOTT:SCOTT_NEW logfile=TEST_DIR:SCOTT_NEW.log
Using a database link through the network_link parameter eliminates the need to create a dump file.
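Once the import completes, a quick object count can confirm the copy (SCOTT_NEW is the remapped schema from the example above):

SQL> select count(*) from dba_objects where owner = 'SCOTT';
SQL> select count(*) from dba_objects where owner = 'SCOTT_NEW';

The two counts should match, minus any objects the impdp log reports as skipped.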
4- RAC to non-RAC cloning steps?

1- Run Preclone script on both Dbtier and AppsTier on the SOURCE system.
2- Create a backup of the SOURCE database to the FILESYSTEM (/backup):
run {
allocate channel d1 type disk;
backup format '/backup/df_t%t_s%s_p%p' database;
sql 'alter system archive log current';
backup format '/backup/al_t%t_s%s_p%p' archivelog all;
release channel d1;
}

3- Compress the database ORACLE_HOME on the source node, then move it to the target node, using tar and the GZIP utility as follows:
tar cvf - DB_Home | gzip > DB_Home.tar.gz

4- Copy the tar file of the Oracle HOME created in the previous step to the target node and extract it there.

5- Copy the backup pieces taken in step 2 to the TARGET node, into the same location where they were taken in step 2 (/backup).

6- Run the adcfgclone.pl script on the TARGET db node using the following command:
perl adcfgclone.pl dbTechStack
This will create the init parameter file, context file and environment files on the target.

7- Now on the TARGET node, as the oracle user, create the directories where the
datafiles, onlinelogs and tempfiles are stored on your ASM storage:
+DATA1/prod/datafile >> $ mkdir $DATA_TOP/prod/datafile
+DATA1/prod/onlinelog >> $ mkdir $DATA_TOP/prod/onlinelog
+DATA1/prod/tempfile >> $ mkdir $DATA_TOP/prod/tempfile
Suppose that the DATA_TOP on the target node is /u01/app/CLONE/oradata

8- Change initialization parameters on the init file on the target instance:
db_file_name_convert=('+DATA1', '/u01/app/CLONE/oradata')
log_file_name_convert=('+DATA1', '/u01/app/CLONE/oradata')

9- Now on the target node, start up the database in NOMOUNT mode.

10- Check that you can access the source database by copying the tnsnames entries
from the SOURCE node to the TARGET node's tnsnames.ora file.

11- On the target node, connect to RMAN using the following command, where
PROD is your source database and CLONE is the target database:
$ rman target sys/sys@PROD auxiliary /
RMAN> run {
allocate auxiliary channel ch1 type disk;
duplicate target database to CLONE;
release channel ch1;
}

12- After the RMAN duplicate completes successfully, shut down the database,
then change the following database initialization parameters in the init parameter file:
undo_management=MANUAL
_no_recovery_through_resetlogs=TRUE
Now open the database with the RESETLOGS option.

13- Create the APPS_UNDOTS1 undo tablespace using the following statement:
CREATE UNDO TABLESPACE APPS_UNDOTS1 DATAFILE '/u01/app/CLONE/oradata/prod/datafile/undo01.dbf' SIZE 4000M AUTOEXTEND ON;

14- Now edit the init parameter file and change the following initialization parameters:
undo_management=AUTO
undo_tablespace=APPS_UNDOTS1
and remove the added initialization parameter _no_recovery_through_resetlogs=TRUE.
Drop all unused undo tablespaces.

15- Disable archiving on the TARGET database, and start the database in
open mode.
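Step 15 can be done as follows; note that the database must be in MOUNT mode to change the log mode:

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database noarchivelog;
SQL> alter database open;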


Oracle database cross platform migration methods:
1. Export / Import, including the Data Pump (expdp/impdp) utilities. All versions support Export/Import, but Data Pump requires 10g or higher.
2. Transportable Tablespaces 10G or Later
3. RMAN Convert Database functions. 10G or Later
4. Streams Replication
5. Create Table As Select (CTAS)
6. Dataguard Heterogeneous Primary and Physical Standbys
7. Oracle GoldenGate (for assistance with Oracle GoldenGate, an SR needs to be opened with the correct team)
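As a sketch of method 3, RMAN CONVERT DATABASE can generate a transport script for the target platform. The database name, script path and platform string below are placeholders; the platform value must match a row in V$TRANSPORTABLE_PLATFORM:

RMAN> convert database new database 'CLONE'
  transport script '/tmp/transport_clone.sql'
  to platform 'Linux x86 64-bit';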



Sunday, 14 April 2013

Oracle Handy script: copy schemas

select
  ' GRANT CONNECT TO '|| username||';',
  ' GRANT RESOURCE TO '|| username||';',
  ' GRANT UNLIMITED TABLESPACE TO '|| username||';',
  ' GRANT ALTER SESSION TO '|| username||';',
  ' GRANT CREATE SESSION TO '|| username||';',
' imp system/sys file=/oradata/AEMEDB/AEMEDB_FULL_14042013.dmp  fromuser='|| username||' touser='|| username||';'
 from dba_users
 order by created desc;
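The imp commands generated above assume a full export dump already exists at that path. It could be produced beforehand with something like this (the log file name is an assumption):

exp system/sys file=/oradata/AEMEDB/AEMEDB_FULL_14042013.dmp full=y log=/oradata/AEMEDB/AEMEDB_FULL_14042013.log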

Friday, 12 April 2013

Oracle Advanced Queues and Streams: A Definition in Plain English

A mathematician, an accountant and an economist apply for the same job. The interviewer calls in the mathematician and asks "What do two plus two equal?" 

The mathematician replies "Four." 

The interviewer asks "Four, exactly?" The mathematician looks at the interviewer incredulously and says "Yes, four, exactly." 

Then the interviewer calls in the accountant and asks the same question "What do two plus two equal?" The accountant says "On average, four - give or take ten percent, but on average, four." 

Then the interviewer calls in the economist and poses the same question "What do two plus two equal?" 

The economist gets up, locks the door, closes the shade, sits down next to the interviewer and says "What do you want it to equal?" 

And now for something completely different: 

What is AQ? Advanced Queues, or AQ, is Oracle's messaging solution. AQ provides a persistent (or non-persistent for specialty apps) queue mechanism that can guarantee delivery of a message. It has interfaces to PL/SQL, OCI and Java. It's Oracle's answer to IBM's MQ Series. 

A message can be an XML document, a set of fields, an array of data and just about anything else you can think of. 

AQ works on a publish/subscribe model. That means that someone (a publisher) puts a message on the queue and someone else (a subscriber) takes the message off. A queue can have multiple subscribers. Technically it can have multiple publishers, but I haven't worked with that configuration and I'm not sure what its usefulness is. I think I would prefer multiple queues, one for each publisher.

What's AQ good for? What does it do? 

One example would be replication. In Oracle Advanced Replication, AQ is the mechanism that copies data from one instance to another. The master site (publisher) will receive an update, it puts the update and a before and after image of the data in a queue. The slave sites (Subscribers) pull the data off the queue and apply them to the local database. The before and after images are used by replication to find the correct record and see if there are any update conflicts. 

Besides Oracle replication, or your own home-grown replication, there are a lot of other uses for AQ. 

You can drop a message on a queue for local usage. Say if you have a transactional system and are getting backed up but don't want to turn away incoming transactions. You can implement a queue. The receiving procedure can drop the transactions on the queue and a local de-queue procedure in the background can pull them off when it has time. 
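The local-queue idea above can be sketched with DBMS_AQADM; the queue, queue-table and payload-type names here are hypothetical:

BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'txn_qtab',
    queue_payload_type => 'SYS.AQ$_JMS_TEXT_MESSAGE');
  DBMS_AQADM.CREATE_QUEUE(
    queue_name  => 'txn_q',
    queue_table => 'txn_qtab');
  DBMS_AQADM.START_QUEUE(queue_name => 'txn_q');
END;
/

The receiving procedure then enqueues with DBMS_AQ.ENQUEUE, and a background job dequeues with DBMS_AQ.DEQUEUE when the system has spare capacity.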

You can use AQ to interface with Java. AQ supports the Java Messaging Specification (JMS) API. Using Java, XML and AQ you can easily implement a SOA (service oriented architecture) web service. 

What is Streams? Here's a brief description of streams and what you can use it for. 

AQ and Replication both entail data movement. Streams is the current technology enabling that data movement. Streams is kind of like AQ, but with rules applied. 

Let's think about AQ. AQ is basically a table and some table maintenance code wrapped around streams. When you enqueue a record, you're using AQ. Streams takes over and moves it to the next database, enqueueing it locally. AQ then takes over again, dequeuing it for consumption. 

Streams has some nice features. I think the most important is the rule based transformations. A transformation allows you to modify the payload in flight. A receiving application doesn't need to be aware of the sending applications formats, it just receives what it needs. 

Think of the way mainframes send data down to a data warehouse. The mainframe doesn't send entire vsam files down to let the warehouse figure out what pieces it needs. The warehouse group defines the fields it needs, a mainframe programmer writes a Cobol (maybe) program and sends a new, specific file down. 

With streams, the receiving application can define what it needs and the sending application can define rules to match. The nice thing with streams is that there can be multiple consumers receiving the same payload but have different rules applied for them. The sender sends one payload and it's transformed many times in different ways for multiple consumers. Rules are also easily defined as opposed to writing a program, scheduling a batch processing window, writing a load routine, etc. With streams, identify the source, define the rules and write a consumer dequeue. 

Speaking of data warehouses, another use of streams is in change data capture (CDC). You identify a source object, say your transaction detail table in the oltp system. You can create a rule that says capture all transactions that are approved and billable. Define the billing warehouse as a consumer for that stream. That payload can be applied to a staging table for loading into a warehouse table. The apply to the staging table can be done without coding. With almost 0 lines of code you can move the data you want, i.e. billable items, from your oltp system directly to your warehouse. 

Streams is also non-stressful to the source database. Streams reads the redo-logs and gathers information from that as opposed to running queries or DML against the source database's tables. 

Streams and AQ are both pretty fascinating technologies. And there is a lot more to them than I speak about here. In the near future I want to show how to do some setup and build a little application using CDC. If you're using AQ in 10g, then you're already using streams behind the scenes. I think that's the hallmark of good technology; it makes life easier and you never have to see it. 


Tuesday, 9 April 2013

Unix: VI search and replace command examples

Let us say you would like to find a word called "foo" and replace with "bar".

First hit [Esc] key

Type : (colon) followed by %s/foo/bar/ and hit [Enter] key.

Above command will replace first occurrence of word foo with bar on all lines. The % is shorthand for all lines.

To replace all occurrences of word foo with bar on all lines, use the g option (which indicates all occurrences on a line):

:%s/foo/bar/g

Note that replacing the g with a number to change only the n'th occurrence is sed syntax; classic vi does not support it, so use a line range instead.

Use find and replace on line ranges (match by line numbers)

You can also make changes on a range of lines, i.e. replace the first occurrence of foo with bar on lines 5 through 20 only, enter:

:5,20s/foo/bar/

The following command will replace the first occurrence of foo with bar on each line, starting at the current line, for the next 100 lines:

:s/foo/bar/ 100

Match by words

Finally, you can match by words, i.e. replace the first occurrence of foo with bar starting at the next line containing the word "test":

:/test/s/foo/bar/

As usual you can specify ranges, for example from the next line containing "test" to the next line containing "done" (the word "done" is just an example):

:/test/,/done/s/foo/bar/

Please note that all search/replace commands start with the [Esc] : keystroke combination.
