Wednesday, October 29, 2008

Performing Asynchronous AutoLog Publishing in CDC Oracle

Step 1 Source Database DBA: Prepare to copy redo log files from the source database:
compatible = 10.1.0
java_pool_size = 50000000
log_archive_dest_1="location=/oracle/dbs mandatory reopen=5"
log_archive_dest_2 = "service=stagingdb arch optional noregister reopen=5
template = /usr/oracle/dbs/arch1_%s_%t_%r.dbf"
log_archive_dest_state_1 = enable
log_archive_dest_state_2 = enable
log_archive_format="arch1_%s_%t_%r.dbf"
remote_login_passwordfile=shared

Step 2 Staging Database DBA: Set the database initialization parameters:
compatible = 10.1.0
global_names = true
java_pool_size = 50000000
job_queue_processes = 2
parallel_max_servers = <current value> + 5
processes = <current value> + 7
remote_login_passwordfile = shared
sessions = <current value> + 2
streams_pool_size = <current value> + 21 MB
undo_retention = 3600

Step 3 Source Database DBA: Alter the source database:
Three statements follow; the second is required, and the first and third are optional but recommended. It is assumed that the database is already running in ARCHIVELOG mode.
1. ALTER DATABASE FORCE LOGGING;
2. ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
3. This example captures rows in the HR.JOB_HISTORY table only. The source database DBA would repeat this step for each source table for which change tables will be created.
ALTER TABLE HR.JOB_HISTORY ADD SUPPLEMENTAL LOG GROUP log_group_job_hist (EMPLOYEE_ID, START_DATE, END_DATE, JOB_ID, DEPARTMENT_ID) ALWAYS;
Alternatively, the DBA can log all columns, which incurs more overhead than logging selected columns:
ALTER TABLE HR.JOB_HISTORY ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Step 4 Staging Database DBA: Create and grant privileges to the publisher:
CREATE USER cdcpub IDENTIFIED BY cdcpub DEFAULT TABLESPACE ts_cdcpub
QUOTA UNLIMITED ON SYSTEM
QUOTA UNLIMITED ON SYSAUX;
GRANT CREATE SESSION TO cdcpub;
GRANT CREATE TABLE TO cdcpub;
GRANT CREATE TABLESPACE TO cdcpub;
GRANT UNLIMITED TABLESPACE TO cdcpub;
GRANT SELECT_CATALOG_ROLE TO cdcpub;
GRANT EXECUTE_CATALOG_ROLE TO cdcpub;
GRANT CONNECT, RESOURCE, DBA TO cdcpub;
GRANT CREATE SEQUENCE TO cdcpub;
EXECUTE DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'cdcpub');

Step 5 Source Database DBA: Build the LogMiner data dictionary:
The source database DBA builds a LogMiner data dictionary at the source database so that log transport services can transport this data dictionary to the staging database. When building the LogMiner data dictionary, the source database DBA should note the SCN value of the dictionary build. In Step 8, when the publisher creates a change source, he or she will need to supply this value as the first_scn parameter.
SET SERVEROUTPUT ON
VARIABLE f_scn NUMBER;
BEGIN
:f_scn := 0;
DBMS_CAPTURE_ADM.BUILD(:f_scn);
DBMS_OUTPUT.PUT_LINE('The first_scn value is ' || :f_scn);
END;
/
The first_scn value is 207722

Step 6 Source Database DBA: Prepare the source tables:
BEGIN
DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
TABLE_NAME => 'hr.job_history');
END;
/

Step 7 Source Database DBA: Get the global name of the source database:
SELECT GLOBAL_NAME FROM GLOBAL_NAME;

Step 8 Staging Database Publisher: Identify each change source database and create the change sources:
BEGIN
DBMS_CDC_PUBLISH.CREATE_AUTOLOG_CHANGE_SOURCE(
change_source_name => 'CHICAGO',
description => 'test source',
source_database => 'HQDB',
first_scn => 207722);
END;
/

Step 9 Staging Database Publisher: Create change sets:
BEGIN
DBMS_CDC_PUBLISH.CREATE_CHANGE_SET(
change_set_name => 'CHICAGO_DAILY',
description => 'change set for job history info',
change_source_name => 'CHICAGO',
stop_on_ddl => 'y');
END;
/

Step 10 Staging Database Publisher: Create the change tables:
BEGIN
DBMS_CDC_PUBLISH.CREATE_CHANGE_TABLE(
owner => 'cdcpub',
change_table_name => 'JOB_HIST_CT',
change_set_name => 'CHICAGO_DAILY',
source_schema => 'HR',
source_table => 'JOB_HISTORY',
column_type_list => 'EMPLOYEE_ID NUMBER(6),START_DATE DATE,END_DATE DATE,
JOB_ID VARCHAR2(10), DEPARTMENT_ID NUMBER(4)',
capture_values => 'both',
rs_id => 'y',
row_id => 'n',
user_id => 'n',
timestamp => 'n',
object_id => 'n',
source_colmap => 'n',
target_colmap => 'y',
options_string => 'TABLESPACE TS_CHICAGO_DAILY');
END;
/

Step 11 Staging Database Publisher: Enable the change set:
BEGIN
DBMS_CDC_PUBLISH.ALTER_CHANGE_SET(
change_set_name => 'CHICAGO_DAILY',
enable_capture => 'y');
END;
/

Step 12 Source Database DBA: Switch the redo log files at the source database:
ALTER SYSTEM ARCHIVE LOG CURRENT;

Step 13 Staging Database Publisher: Grant access to subscribers:
GRANT SELECT ON cdcpub.job_hist_ct TO subscriber1;

Tuesday, October 28, 2008

Performing Asynchronous HotLog Publishing in CDC Oracle

Step 1 Source Database DBA: Set the database initialization parameters:
compatible = 10.1.0
java_pool_size = 50000000
job_queue_processes = 2
parallel_max_servers = <current value> + 5
processes = <current value> + 7
sessions = <current value> + 2
streams_pool_size = <current value> + 21 MB
undo_retention = 3600

Step 2 Source Database DBA: Alter the source database:
Three statements follow; the second is required, and the first and third are optional but recommended. It is assumed that the database is already running in ARCHIVELOG mode.
1. ALTER DATABASE FORCE LOGGING;
2. ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
3. This example captures rows in the HR.JOB_HISTORY table only. The source database DBA would repeat this step for each source table for which change tables will be created.
ALTER TABLE HR.JOB_HISTORY
ADD SUPPLEMENTAL LOG GROUP log_group_jobhist
(EMPLOYEE_ID, START_DATE, END_DATE, JOB_ID, DEPARTMENT_ID) ALWAYS;
Alternatively, the DBA can log all columns, which incurs more overhead than logging selected columns:
ALTER TABLE HR.JOB_HISTORY ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;


Step 3 Source Database DBA: Create and grant privileges to the publisher:
CREATE USER cdcpub IDENTIFIED BY cdcpub DEFAULT TABLESPACE ts_cdcpub
QUOTA UNLIMITED ON SYSTEM
QUOTA UNLIMITED ON SYSAUX;
GRANT CREATE SESSION TO cdcpub;
GRANT CREATE TABLE TO cdcpub;
GRANT CREATE TABLESPACE TO cdcpub;
GRANT UNLIMITED TABLESPACE TO cdcpub;
GRANT SELECT_CATALOG_ROLE TO cdcpub;
GRANT EXECUTE_CATALOG_ROLE TO cdcpub;
GRANT CREATE SEQUENCE TO cdcpub;
GRANT CONNECT, RESOURCE, DBA TO cdcpub;
EXECUTE DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(GRANTEE => 'cdcpub');

Step 4 Source Database DBA: Prepare the source tables:
BEGIN
DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(TABLE_NAME => 'hr.job_history');
END;
/

Step 5 Staging Database Publisher: Create change sets:
BEGIN
DBMS_CDC_PUBLISH.CREATE_CHANGE_SET(
change_set_name => 'CHICAGO_DAILY',
description => 'Change set for job history info',
change_source_name => 'HOTLOG_SOURCE',
stop_on_ddl => 'y',
begin_date => sysdate,
end_date => sysdate+5);
END;
/

Step 6 Staging Database Publisher: Create the change tables that will contain the changes to the source tables:
BEGIN
DBMS_CDC_PUBLISH.CREATE_CHANGE_TABLE(
owner => 'cdcpub',
change_table_name => 'job_hist_ct',
change_set_name => 'CHICAGO_DAILY',
source_schema => 'HR',
source_table => 'JOB_HISTORY',
column_type_list => 'EMPLOYEE_ID NUMBER(6),START_DATE DATE,END_DATE DATE,
JOB_ID VARCHAR2(10), DEPARTMENT_ID NUMBER(4)',
capture_values => 'both',
rs_id => 'y',
row_id => 'n',
user_id => 'n',
timestamp => 'n',
object_id => 'n',
source_colmap => 'n',
target_colmap => 'y',
options_string => 'TABLESPACE TS_CDCPUB');
END;
/

Step 7 Staging Database Publisher: Enable the change set:
BEGIN
DBMS_CDC_PUBLISH.ALTER_CHANGE_SET(
change_set_name => 'CHICAGO_DAILY',
enable_capture => 'y');
END;
/

Step 8 Staging Database Publisher: Grant access to subscribers:
GRANT SELECT ON cdcpub.job_hist_ct TO subscriber1;

Performing Synchronous Publishing For Change Data Capture in Oracle

Step 1 Source Database DBA: Set the JAVA_POOL_SIZE parameter:
alter system set java_pool_size = 50000000;

Step 2 Source Database DBA: Create and grant privileges to the publisher:
CREATE USER cdcpub IDENTIFIED BY cdcpub DEFAULT TABLESPACE ts_cdcpub
QUOTA UNLIMITED ON SYSTEM
QUOTA UNLIMITED ON SYSAUX;
GRANT CREATE SESSION TO cdcpub;
GRANT CREATE TABLE TO cdcpub;
GRANT CREATE TABLESPACE TO cdcpub;
GRANT UNLIMITED TABLESPACE TO cdcpub;
GRANT SELECT_CATALOG_ROLE TO cdcpub;
GRANT EXECUTE_CATALOG_ROLE TO cdcpub;
GRANT CONNECT, RESOURCE TO cdcpub;

Step 3 Staging Database Publisher: Create a change set:
BEGIN
DBMS_CDC_PUBLISH.CREATE_CHANGE_SET(
change_set_name => 'CHICAGO_DAILY',
description => 'Change set for job history info',
change_source_name => 'SYNC_SOURCE');
END;
/

Step 4 Staging Database Publisher: Create a change table:
The following example creates a change table that captures changes that occur on a source table. The example uses the sample table HR.JOB_HISTORY as the source table. It assumes that the publisher has already created the TS_CHICAGO_DAILY tablespace.

BEGIN
DBMS_CDC_PUBLISH.CREATE_CHANGE_TABLE(
owner => 'cdcpub',
change_table_name => 'jobhist_ct',
change_set_name => 'CHICAGO_DAILY',
source_schema => 'HR',
source_table => 'JOB_HISTORY',
column_type_list => 'EMPLOYEE_ID NUMBER(6),START_DATE DATE,
END_DATE DATE,JOB_ID VARCHAR2(10),
DEPARTMENT_ID NUMBER(4)',
capture_values => 'both',
rs_id => 'y',
row_id => 'n',
user_id => 'n',
timestamp => 'n',
object_id => 'n',
source_colmap => 'y',
target_colmap => 'y',
options_string => 'TABLESPACE TS_CHICAGO_DAILY');
END;
/
This statement creates a change table named jobhist_ct within the change set CHICAGO_DAILY. The column_type_list parameter identifies the columns captured by the change table. The source_schema and source_table parameters identify the schema and source table that reside in the source database.

Step 5 Staging Database Publisher: Grant access to subscribers:
GRANT SELECT ON cdcpub.jobhist_ct TO subscriber1;

Change Sets and Change tables in Change Data Capture (CDC) in Oracle

Change Set:
A change set is a logical grouping of change data that is guaranteed to be transactionally consistent and that can be managed as a unit. A change set is a member of one (and only one) change source. A change source can contain one or more change sets. There are three modes of change sets, as follows:
  • Synchronous
  • Asynchronous HotLog
  • Asynchronous AutoLog

Publishers can purge unneeded change data from change tables at the change set level to keep the change tables in the change set from growing indefinitely, as sketched below.
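
A minimal purge sketch, assuming the CHICAGO_DAILY change set created in the steps above:

BEGIN
DBMS_CDC_PUBLISH.PURGE_CHANGE_SET(
change_set_name => 'CHICAGO_DAILY');
END;
/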

Change Tables:
A given change table contains the change data resulting from DML operations performed on a given source table. A change table consists of two things: the change data itself, which is stored in a database table; and the system metadata necessary to maintain the change table, which includes control columns.
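
For illustration, a direct query against the cdcpub.job_hist_ct change table created in the posts above shows the change data together with two of the control columns (OPERATION$ holds the DML operation code; CSCN$ holds the commit SCN):

SELECT operation$, cscn$, employee_id, job_id
FROM cdcpub.job_hist_ct;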

Monday, October 27, 2008

Asynchronous mode of Change Data Capture (CDC) in Oracle

This mode captures change data after the changes have been committed to the source database, by using the database redo log files. There are two methods of capturing change data asynchronously: HotLog and AutoLog.

HotLog:
Change data is captured from the online redo log file on the source database. There is a single, predefined HotLog change source, HOTLOG_SOURCE, that represents the current redo log files of the source database. Change tables for this mode of Change Data Capture must reside locally in the source database.

AutoLog:
Change data is captured from a set of redo log files managed by log transport services. Log transport services control the automated transfer of redo log files from the source database to the staging database. Change sets are populated automatically as new redo log files arrive. The degree of latency depends on the frequency of redo log switches on the source database.

Publish and Subscribe Model in Change Data Capture (CDC) in Oracle

Most Change Data Capture systems have one person who captures and publishes change data; this person is the publisher. There can be multiple applications or individuals that access the change data; these applications and individuals are the subscribers.

Publisher:
Typically, a publisher deals with two databases:
*Source database
*Staging database: The following Change Data Capture objects reside on the staging database:
Change table --> A change table is a relational table that contains change data for a single source table. To subscribers, a change table is known as a publication.
Change set --> A change set is a set of change data that is guaranteed to be transactionally consistent.
Change source --> The change source is a logical representation of the source database. It contains one or more change sets.

Subscribers:
The subscribers are consumers of the published change data. A subscriber performs the following tasks (see the sketch after this list):
1. Create subscriptions.
2. Notify Change Data Capture when ready to receive a set of change data.
3. Notify Change Data Capture when finished with a set of change data.
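
A minimal end-to-end subscriber sketch, assuming the CHICAGO_DAILY change set and HR.JOB_HISTORY publication from the posts above; the subscription and view names (JOBHIST_SUB, JOBHIST_VIEW) are made up for illustration:

BEGIN
DBMS_CDC_SUBSCRIBE.CREATE_SUBSCRIPTION(
change_set_name => 'CHICAGO_DAILY',
description => 'Job history changes',
subscription_name => 'JOBHIST_SUB');
DBMS_CDC_SUBSCRIBE.SUBSCRIBE(
subscription_name => 'JOBHIST_SUB',
source_schema => 'HR',
source_table => 'JOB_HISTORY',
column_list => 'EMPLOYEE_ID, JOB_ID, START_DATE, END_DATE',
subscriber_view => 'JOBHIST_VIEW');
DBMS_CDC_SUBSCRIBE.ACTIVATE_SUBSCRIPTION(
subscription_name => 'JOBHIST_SUB');
-- Task 2: ready to receive a set of change data
DBMS_CDC_SUBSCRIBE.EXTEND_WINDOW(
subscription_name => 'JOBHIST_SUB');
END;
/
SELECT * FROM JOBHIST_VIEW;
-- Task 3: finished with this set of change data
BEGIN
DBMS_CDC_SUBSCRIBE.PURGE_WINDOW(
subscription_name => 'JOBHIST_SUB');
END;
/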

Capturing Change Data with Change Data Capture in Oracle

Change Data Capture can capture and publish committed change data in either of the following modes:

Synchronous:
Change data is captured immediately, as each SQL statement that performs a data manipulation language (DML) operation (INSERT, UPDATE, or DELETE) is made, by using triggers on the source database.

Asynchronous:
Change data is captured after a SQL statement that performs a DML operation is committed, by taking advantage of the data sent to the redo log files. The asynchronous mode of Change Data Capture depends on the level of supplemental logging enabled at the source database. There are two methods of capturing change data asynchronously: HotLog and AutoLog.

Advantages of capturing change data with Change Data Capture:
1. Completeness: all effects of DML operations, including deletes, are captured.
2. Performance: asynchronous capture reads the redo logs, keeping the impact on the source low.
3. Interface: subscribers work with change data through a simple SQL interface.
4. Cost: it avoids the expense of third-party capture tools and hand-written trigger code.

Capturing Change Data Without Change Data Capture in Oracle

Prior to the introduction of Change Data Capture, there were a number of ways that users could capture change data, including table differencing and change-value selection.

Table differencing:
Table differencing involves transporting a copy of an entire table from the source (production) database to the staging database (where the change data is captured), where an older version of the table already exists. Using the SQL MINUS operator, you can obtain the inserted and new versions of updated rows with the following query:

SELECT * FROM new_version
MINUS SELECT * FROM old_version;
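
The complementary query returns the deleted rows and the old versions of updated rows:

SELECT * FROM old_version
MINUS SELECT * FROM new_version;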


However, there are several problems with this method:
1. It requires transporting a copy of the entire table, thereby increasing the cost of transportation.
2. The cost of the MINUS operation is high.
3. Any intermediate values that a row passed through between the old and new versions of the table cannot be captured using table differencing.
4. There is no way to determine which changes were made as part of the same transaction.

Change-value selection:
Change-value selection involves capturing the data on the source database by selecting the new and changed data from the source tables based on the value of a specific column.
However, there are also several problems with this method:
1.The overhead of capturing the change data must be borne on the source database.
2.This method is no better at capturing intermediate values than the table differencing method.
3.This method is also no better than the table differencing method at capturing which data changes were made together in the same transaction.
4.You have to design your source database in advance with this capture mechanism in mind – all tables from which you wish to capture change data must have a change-value column.

Thursday, October 23, 2008

Difference between NLS_CHARACTERSET and NLS_NCHAR_CHARACTERSET

They are not invariably the same - it depends on your needs. NLS_CHARACTERSET is the database character set, used for VARCHAR2 and CLOB columns; NLS_NCHAR_CHARACTERSET is the national character set, used for NVARCHAR2 and NCLOB columns.

Having two character sets allows you to store data from two differing character sets in your database: one character set in VARCHAR2/CLOB columns and a different one in NVARCHAR2/NCLOB columns.
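
Both settings can be checked from the data dictionary:

SELECT parameter, value
FROM nls_database_parameters
WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');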

Wednesday, October 22, 2008

How to Store only time in Oracle

SQL> create table aa(aaa date);

Table created.

SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'HH24:MI:SS';

Session altered.

SQL>
SQL> insert into aa values(sysdate);

1 row created.

SQL> select * from aa;

AAA
--------
23:51:36
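
Note that the DATE column still stores the full date; NLS_DATE_FORMAT only controls how it is displayed. To show just the time portion without altering the session, TO_CHAR can be used:

SQL> select to_char(aaa, 'HH24:MI:SS') from aa;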

Sunday, October 19, 2008

OPEN_CURSORS and SESSION_CACHED_CURSORS parameter in oracle

OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors.

It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

SESSION_CACHED_CURSORS specifies the number of session cursors to cache. Repeated parse calls of the same SQL statement cause the session cursor for that statement to be moved into the session cursor cache. Subsequent parse calls will find the cursor in the cache and do not need to reopen the cursor. Oracle uses a least recently used algorithm to remove entries in the session cursor cache to make room for new entries when needed.
There are two main initialization parameters that affect cursors, and many folks get them confused. One is OPEN_CURSORS, and the other is SESSION_CACHED_CURSORS.

SESSION_CACHED_CURSORS sets the number of cached closed cursors each session can have. You can set SESSION_CACHED_CURSORS higher than OPEN_CURSORS, lower than OPEN_CURSORS, or anywhere in between. This parameter has no effect on ORA-01000 errors (maximum open cursors exceeded) or on the number of cursors a session will have open. Conversely, OPEN_CURSORS has no effect on the number of cursors cached. There's no relationship between the two parameters.

If SESSION_CACHED_CURSORS is not set, it defaults to 0 and no cursors will be cached for your session. (Your cursors will still be cached in the shared pool, but your session will have to find them there.) If it is set, then when a parse request is issued, Oracle checks the library cache to see whether more than 3 parse requests have been issued for that statement. If so, Oracle moves the session cursor associated with that statement into the session cursor cache. Subsequent parse requests for that statement by the same session are then filled from the session cursor cache, thus avoiding even a soft parse. (Technically, a parse can't be completely avoided; a "softer" soft parse is done that's faster and requires less CPU.)
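
A quick way to see whether the session cursor cache is earning its keep is to compare the cache statistics with the total parse count (statistic names as found in V$SYSSTAT):

SELECT name, value
FROM v$sysstat
WHERE name IN ('session cursor cache hits',
'session cursor cache count',
'parse count (total)');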

"Top SQL" queries from V$SQLAREA

Top 10 by Buffer Gets:

set linesize 100
set pagesize 100
SELECT * FROM
(SELECT substr(sql_text,1,40) sql,
buffer_gets, executions, buffer_gets/decode(executions,0,1,executions) "Gets/Exec",
hash_value,address
FROM V$SQLAREA
WHERE buffer_gets > 10000
ORDER BY buffer_gets DESC)
WHERE rownum <= 10
;

Top 10 by Physical Reads:

set linesize 100
set pagesize 100
SELECT * FROM
(SELECT substr(sql_text,1,40) sql,
disk_reads, executions, disk_reads/decode(executions,0,1,executions) "Reads/Exec",
hash_value,address
FROM V$SQLAREA
WHERE disk_reads > 1000
ORDER BY disk_reads DESC)
WHERE rownum <= 10
;

Top 10 by Executions:

set linesize 100
set pagesize 100
SELECT * FROM
(SELECT substr(sql_text,1,40) sql,
executions, rows_processed, rows_processed/executions "Rows/Exec",
hash_value,address
FROM V$SQLAREA
WHERE executions > 100
ORDER BY executions DESC)
WHERE rownum <= 10
;

Top 10 by Parse Calls:

set linesize 100
set pagesize 100
SELECT * FROM
(SELECT substr(sql_text,1,40) sql,
parse_calls, executions, hash_value,address
FROM V$SQLAREA
WHERE parse_calls > 1000
ORDER BY parse_calls DESC)
WHERE rownum <= 10
;

Top 10 by Sharable Memory:

set linesize 100
set pagesize 100
SELECT * FROM
(SELECT substr(sql_text,1,40) sql,
sharable_mem, executions, hash_value,address
FROM V$SQLAREA
WHERE sharable_mem > 1048576
ORDER BY sharable_mem DESC)
WHERE rownum <= 10
;

Top 10 by Version Count:

set linesize 100
set pagesize 100
SELECT * FROM
(SELECT substr(sql_text,1,40) sql,
version_count, executions, hash_value,address
FROM V$SQLAREA
WHERE version_count > 20
ORDER BY version_count DESC)
WHERE rownum <= 10
;

Trace Level of SQLNET log

For both TRACE_LEVEL_CLIENT and TRACE_LEVEL_SERVER, the parameter can take a numeric value between 0 and 16, where 0 is disabled and 16 is the most detailed. Alternatively, these parameters can take a scalar value as follows:

OFF      0    No tracing
USER     4    Include user errors
ADMIN    6    Include administrative errors
SUPPORT  16   Include packet contents


Level 16 (SUPPORT) is the most detailed trace level. Take care when enabling this level of detail as it will consume disk space very rapidly. Consider using the TRACE_FILELEN_SERVER and TRACE_FILENO_SERVER parameters to reduce the impact on the server. If TRACE_UNIQUE_CLIENT is set to ON then a separate trace file will be created for each client. The pid is appended to the file name e.g. client_123.trc. Note that this appears to be the default behaviour in recent versions.

When both TRACE_FILELEN_CLIENT and TRACE_FILENO_CLIENT are set to non-zero values, the trace files are used cyclically. When one file is full, output continues in the next file; when all files are full output continues in the first file. A sequence number is included in the file name. For example if TRACE_FILE_CLIENT is client and TRACE_FILENO_CLIENT is 5 then the files will be:
 
client1_pid.trc
client2_pid.trc
client3_pid.trc
client4_pid.trc
client5_pid.trc
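
A sqlnet.ora fragment that would produce this kind of cyclic client tracing (a sketch; the trace directory is an assumption, and TRACE_FILELEN_CLIENT is specified in kilobytes):

TRACE_LEVEL_CLIENT = 16
TRACE_FILE_CLIENT = client
TRACE_DIRECTORY_CLIENT = /tmp/client_trace
TRACE_FILELEN_CLIENT = 10240
TRACE_FILENO_CLIENT = 5
TRACE_UNIQUE_CLIENT = ON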

Saturday, October 18, 2008

Fatal NI connect error 12502

When logging in to the database during peak hours, the following error is thrown intermittently: ORA-12502 TNS:listener received no CONNECT_DATA from client. A temporary workaround is to restart the listener.

Fatal NI connect error 12502, connecting to:

(DESCRIPTION=(LOAD_BALANCE=yes)(ADDRESS=(PROTOCOL=TCP)(HOST=10.1.25.12)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=a012band)(CID=(PROGRAM=sqlplus@narayan)(HOST=narayan)(USER=fnsonla))))

VERSION INFORMATION:
TNS for IBM/AIX RISC System/6000: Version 9.2.0.7.0 - Production
TCP/IP NT Protocol Adapter for IBM/AIX RISC System/6000: Version 9.2.0.7.0 - Production
Time: 12-DEC-2007 11:36:13
Tracing not turned on.
Tns error struct:

Client trace could also show ORA-12520: TNS:listener could not find available handler for requested type of server

An increase in the number of connections can cause this problem to manifest where it did not exist earlier.
Check whether the PROCESSES limit has been exhausted by logging in to the database as SYSDBA and running:
select * from v$resource_limit where resource_name='processes';

If the PROCESSES setting is not adequate for the connection load, the listener will not be able to spawn any more server processes once the database process limit is hit. Hence connections fail with ORA-12502 at the client side, and the errors also appear in the listener trace and listener.log files.
Solution
Increase the value for PROCESSES for all instances in the RAC cluster.
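
A sketch of the fix, assuming an SPFILE is in use (the value 300 is only an example; PROCESSES is a static parameter, so each instance must be restarted for the change to take effect):

ALTER SYSTEM SET processes = 300 SCOPE=SPFILE SID='*';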

Select Top SQLs in Oracle

SELECT SQL_TEXT,X.CPU_TIME FROM DBA_HIST_SQLTEXT DHST,
(SELECT DHSS.SQL_ID SQL_ID,SUM(DHSS.CPU_TIME_DELTA) CPU_TIME
FROM DBA_HIST_SQLSTAT DHSS
WHERE DHSS.SNAP_ID IN(SELECT SNAP_ID FROM DBA_HIST_SNAPSHOT
WHERE BEGIN_INTERVAL_TIME>=TO_DATE('09/01/2008','MM/DD/YYYY')
AND END_INTERVAL_TIME<=TO_DATE('09/09/2008','MM/DD/YYYY'))
GROUP BY DHSS.SQL_ID) X WHERE X.SQL_ID=DHST.SQL_ID
ORDER BY X.CPU_TIME DESC;

Monday, October 13, 2008

To Know the IP Address of a Client in Oracle

SYS_CONTEXT returns the value of a parameter associated with the context namespace. You can use this function in both SQL and PL/SQL statements. For namespace and parameter, you can specify either a string or an expression that resolves to a string designating a namespace or an attribute.

The datatype of the return value is VARCHAR2. The default maximum size of the return value is 256 bytes. You can override this default by specifying the optional length parameter, which must be a NUMBER or a value that can be implicitly converted to NUMBER. The valid range of values is 1 to 4000 bytes.

Examples
CONNECT OE/OE
SELECT SYS_CONTEXT ('USERENV', 'SESSION_USER') FROM DUAL;
SYS_CONTEXT ('USERENV', 'SESSION_USER')
------------------------------------------------------
OE

SELECT SYS_CONTEXT ('USERENV', 'IP_ADDRESS') FROM DUAL;

Sunday, October 12, 2008

DATABASE LINK in ORACLE

A database link is a schema object in one database that enables you to access objects on another database. The other database need not be an Oracle Database system. To create a private database link, you must have the CREATE DATABASE LINK system privilege. To create a public database link, you must have the CREATE PUBLIC DATABASE LINK system privilege.

Examples:
CREATE DATABASE LINK local CONNECT TO hr IDENTIFIED BY hr USING 'local';

CREATE PUBLIC DATABASE LINK remote USING 'remote';

CREATE DATABASE LINK remote.us.oracle.com CONNECT TO CURRENT_USER USING 'remote';

SELECT * FROM employees@local;

Here 'local' is the connect string, which specifies the service name of a remote database. If you specify only the database name, then Oracle Database implicitly appends the database domain to the connect string to create a complete service name. Therefore, if the database domain of the remote database is different from that of the current database, then you must specify the complete service name.

Tuesday, October 7, 2008

Count of Table's Rows per Datafile

set verify off
def tabowner = &&1
def tabname = &&2

ttitle -
center '&tabowner..&tabname Rows per Datafile' skip 3

col file# format 99,999 justify c heading 'Datafile ID'
col nrows format 999,999,999,999 justify c heading 'Row Count'

select dbms_rowid.rowid_to_absolute_fno(rowid,'&&tabowner','&&tabname') file#,
count(rowid) nrows
from &&tabowner..&&tabname
group by dbms_rowid.rowid_to_absolute_fno(rowid,'&&tabowner','&&tabname')
/

undef tabowner
undef tabname
set verify on

Check If A Database Is In Restricted Mode

SQL> select logins from v$instance;

LOGINS
----------
RESTRICTED
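
To toggle restricted mode on a running instance:

SQL> alter system enable restricted session;
SQL> alter system disable restricted session;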

SPFILE or PFILE: Which Is Used by the Current Instance

SQL> show parameter SPFILE

If the VALUE column comes back empty, then a pfile was used to start the instance.
If it shows a file name (with its path), then an SPFILE is used by the current running instance.
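
An equivalent check from SQL (a non-null value is the SPFILE path; NULL means a pfile was used):

SQL> select value from v$parameter where name = 'spfile';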

Monday, October 6, 2008

Find Sessions Generating Lots of Redo

To find sessions generating lots of redo, you can use either of the following methods. Both methods examine the amount of undo generated. When a transaction generates undo, it will automatically generate redo as well.

The methods are:
1) Query V$SESS_IO. This view contains the column BLOCK_CHANGES, which indicates how many blocks have been changed by the session. High values indicate a session generating lots of redo.

The query you can use is:
SQL> SELECT s.sid, s.serial#, s.username, s.program,
2 i.block_changes
3 FROM v$session s, v$sess_io i
4 WHERE s.sid = i.sid
5 ORDER BY 5 desc, 1, 2, 3, 4;

Run the query multiple times and examine the delta between each occurrence of BLOCK_CHANGES. Large deltas indicate high redo generation by the session.

2) Query V$TRANSACTION. This view contains information about the amount of undo blocks and undo records accessed by the transaction (as found in the USED_UBLK and USED_UREC columns).

The query you can use is:
SQL> SELECT s.sid, s.serial#, s.username, s.program,
2 t.used_ublk, t.used_urec
3 FROM v$session s, v$transaction t
4 WHERE s.taddr = t.addr
5 ORDER BY 5 desc, 6 desc, 1, 2, 3, 4;

Run the query multiple times and examine the delta between each occurrence of USED_UBLK and USED_UREC. Large deltas indicate high redo generation by the session.

Display User Lock Information

In general, multiuser databases use some form of data locking to solve the problems associated with data concurrency, consistency, and integrity. Locks are mechanisms that prevent destructive interaction between transactions accessing the same resource.

Create a blocking lock. In Session 1, set up a test table:
create table tstlock (foo varchar2(1), bar varchar2(1));
insert into tstlock values (1,'a');
insert into tstlock values (2, 'b');

Now grab a lock on the whole table, still in Session 1:
SQL> select * from tstlock for update ;

And in Session 2, try to update a row:
SQL> update tstlock set bar='a' where bar='a' ;


To display locks being held (or waited on) in the database, run:
set echo off
set pagesize 60
Column SID FORMAT 999 heading "Sess|ID "
COLUMN OBJECT_NAME FORMAT A17 heading "OBJ NAME or|TRANS_ID" Trunc
COLUMN OSUSER FORMAT A10 heading "Op Sys|User ID"
COLUMN USERNAME FORMAT A8
COLUMN TERMINAL FORMAT A8 trunc

select B.SID, C.USERNAME, C.OSUSER, C.TERMINAL, DECODE(B.ID2, 0, A.OBJECT_NAME,
'Trans-'||to_char(B.ID1)) OBJECT_NAME, B.TYPE,
DECODE(B.LMODE,0,'--Waiting--', 1,'Null', 2,'Row Share', 3,'Row Excl', 4,'Share', 5,'Sha Row Exc', 6,'Exclusive', 'Other') "Lock Mode",
DECODE(B.REQUEST,0,' ', 1,'Null', 2,'Row Share', 3,'Row Excl', 4,'Share', 5,'Sha Row Exc', 6,'Exclusive', 'Other') "Req Mode"
from DBA_OBJECTS A, V$LOCK B, V$SESSION C
where A.OBJECT_ID(+) = B.ID1 and B.SID = C.SID and C.USERNAME is not null
order by B.SID, B.ID2;

To Determine an Object's Dependencies

I am looking at the schema 'DM'; please edit for your schema.

set pages 500 lines 110
col objdep format a40 hea "Object Name (Type)"
col refr format a35 hea "Object Reference (Type)"

variable usn varchar2(20);
exec :usn := 'DM';

break on objdep

select owner||'.'||name||' ('||decode(type,'MATERIALIZED VIEW', 'MV', 'DIMENSION', 'DIM','EVALUATION CONTXT', 'EVALCTXT', 'PACKAGE BODY', 'PKGBDY','CUBE.DIMENSION','CUBE.DIM', type)||')' objdep,
referenced_name||' ('||decode(referenced_type,'EVALUATION CONTXT', 'EVALCTXT', 'NON-EXISTENT CONTXT','NO-EXIST',
'PACKAGE BODY','PKGBDY', 'CUBE.DIMENSION','CUBE.DIM', referenced_type)||')' refr
from dba_dependencies where owner=:usn order by objdep;

Remove Chained Rows from a Table

Finding and repairing chained rows is an important part of Oracle administration. When an Oracle row expands, it sometimes chains onto multiple data blocks. Excessive row chaining can cause a dramatic increase in disk I/O because several I/Os are required to fetch the row instead of a single I/O.

Of course, row chaining can be prevented by setting the PCTFREE storage parameter high enough to keep space on each data block for row expansion.

In cases where the data columns contain RAW and LONG RAW columns, row chaining may be unavoidable because the average row length may exceed the data block size. That is why the query below filters out tables with RAW data types.
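
For example, raising PCTFREE leaves more room in each block for rows to grow (the table name is illustrative; the change affects only blocks used from then on, so existing chained rows still need the repair below):

ALTER TABLE hr.job_history PCTFREE 30;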

set ECHO off

ACCEPT chaintabl PROMPT 'Enter the table with chained rows: '

drop table chaintemp;
drop table chained_rows;

start $ORACLE_HOME/rdbms/admin/utlchain

set ECHO OFF

REM ANALYZE table to locate chained rows
analyze table &chaintabl
list chained rows into chained_rows;

REM CREATE Temporary table with the chained rows
create table chaintemp as
select *
from &chaintabl
where rowid in (select head_rowid
from chained_rows);

REM DELETE the chained rows from the original table
delete from &chaintabl where rowid in (select head_rowid from chained_rows);

REM INSERT the formerly chained rows back into table
insert into &chaintabl
select *
from chaintemp;

REM DROP the temporary table
drop table chaintemp;

Number of commits executed by a session.

This SQL displays the number of commits executed by a session.
select value from v$sesstat
where sid in (select sid
from v$session where username = '&user')
and statistic# in (select statistic#
from v$statname where name like '%user%commit%')
/

Number of Open Cursors per user/SID

This sql reports the number of open cursors per user/SID.

COLUMN user_name heading Username
COLUMN num heading "Open Cursors"
SET lines 80 pages 59
SELECT user_name, sid,COUNT (*) num
FROM v$open_cursor
GROUP BY user_name,sid;

Get the Locked table with Session Information

The report generated by this script gives information on sessions that are holding locks, and supplies the string needed to kill a session using the ALTER SYSTEM KILL SESSION command.
set linesize 132 pagesize 66
break on Kill on username on terminal
column Kill heading 'Kill String' format a13
column res heading 'Resource Type' format 999
column id1 format 9999990
column id2 format 9999990
column lmode heading 'Lock Held' format a20
column request heading 'Lock Requested' format a20
column serial# format 99999
column username format a10 heading "Username"
column terminal heading Term format a6
column tab format a35 heading "Table Name"
column owner format a9
column Address format a18
select nvl(S.USERNAME,'Internal') username,
nvl(S.TERMINAL,'None') terminal,
L.SID||','||S.SERIAL# Kill,
U1.NAME||'.'||substr(T1.NAME,1,20) tab,
decode(L.LMODE,1,'No Lock',
2,'Row Share',
3,'Row Exclusive',
4,'Share',
5,'Share Row Exclusive',
6,'Exclusive',null) lmode,
decode(L.REQUEST,1,'No Lock',
2,'Row Share',
3,'Row Exclusive',
4,'Share',
5,'Share Row Exclusive',
6,'Exclusive',null) request
from V$LOCK L,
V$SESSION S,
SYS.USER$ U1,
SYS.OBJ$ T1
where L.SID = S.SID
and T1.OBJ# = decode(L.ID2,0,L.ID1,L.ID2)
and U1.USER# = T1.OWNER#
and S.TYPE != 'BACKGROUND'
order by 1,2,5;

Sunday, October 5, 2008

A Fast way to review what options are installed in Oracle

Start the Installer and on the Welcome page, you can click on the "Installed Products" button. You can drill down to individual Oracle Home information, etc. You can see this information from inside the database also:

set pages 999
set lines 80
col comp_name format a35 word_wrapped

select comp_name, version, status
from dba_registry;

Setting threshold value and critical value of a Metric in Oracle

Example script to set thresholds for Alerts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
How to set a threshold on the metric SHARED_POOL_FREE_PCT?
begin
DBMS_SERVER_ALERT.SET_THRESHOLD(
metrics_id => DBMS_SERVER_ALERT.&Metric,
warning_operator => DBMS_SERVER_ALERT.OPERATOR_GE,
warning_value => '&Warning',
critical_operator => DBMS_SERVER_ALERT.OPERATOR_GE,
critical_value => '&Critical',
observation_period => 10,
consecutive_occurrences => 5,
instance_name => NULL,
object_type => DBMS_SERVER_ALERT.OBJECT_TYPE_&type,
object_name => &object_name);
end;
/

Enter value for metric: SHARED_POOL_FREE_PCT
Enter value for warning: 15
Enter value for critical: 5
Enter value for type: SYSTEM
Enter value for object_name: NULL
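
Once set, the configured thresholds can be verified from the DBA_THRESHOLDS view:

select metrics_name, warning_value, critical_value, consecutive_occurrences
from dba_thresholds
where metrics_name like '%Shared Pool%';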

Supported Object Types
~~~~~~~~~~~~~~~~~~~~~~
OBJECT_TYPE_SYSTEM CONSTANT BINARY_INTEGER := 1;
OBJECT_TYPE_FILE CONSTANT BINARY_INTEGER := 2;
OBJECT_TYPE_SERVICE CONSTANT BINARY_INTEGER := 3;
OBJECT_TYPE_EVENT_CLASS CONSTANT BINARY_INTEGER := 4;
OBJECT_TYPE_TABLESPACE CONSTANT BINARY_INTEGER := 5;
OBJECT_TYPE_SESSION CONSTANT BINARY_INTEGER := 9;

METRIC_ID METRIC_NAME
---------- ------------------------------------------
0 Number of Sessions Waiting (Event)
1 Total Time Waited
2 Total Wait Counts

1000 Average Users Waiting Counts ( AVG_USERS_WAITING )
1001 Database Time Spent Waiting (%) ( DB_TIME_WAITING )
1002 Total Wait Counts
1003 Total Time Waited
2000 Buffer Cache Hit Ratio ( BUFFER_CACHE_HIT )
2001 Memory Sorts Ratio ( MEMORY_SORTS_PCT )
2002 Redo Allocation Hit Ratio ( REDO_ALLOCATION_HIT )
2003 User Transaction Per Sec ( USER_TRANSACTIONS_SEC )
2004 Physical Reads Per Sec ( PHYSICAL_READS_SEC )
2005 Physical Reads Per Txn ( PHYSICAL_READS_TXN )
2006 Physical Writes Per Sec ( PHYSICAL_WRITES_SEC )
2007 Physical Writes Per Txn ( PHYSICAL_WRITES_TXN )
2008 Physical Reads Direct Per Sec ( PHYSICAL_READS_DIR_SEC )
2009 Physical Reads Direct Per Txn ( PHYSICAL_READS_DIR_TXN )
2010 Physical Writes Direct Per Sec ( PHYSICAL_WRITES_DIR_SEC )
2011 Physical Writes Direct Per Txn ( PHYSICAL_WRITES_DIR_TXN )
2012 Physical Reads Direct Lobs Per Sec ( PHYSICAL_READS_LOB_SEC )
2013 Physical Reads Direct Lobs Per Txn ( PHYSICAL_READS_LOB_TXN )
2014 Physical Writes Direct Lobs Per Sec ( PHYSICAL_WRITES_LOB_SEC )
2015 Physical Writes Direct Lobs Per Txn ( PHYSICAL_WRITES_LOB_TXN )
2016 Redo Generated Per Sec ( REDO_GENERATED_SEC )
2017 Redo Generated Per Txn ( REDO_GENERATED_TXN )
2018 Logons Per Sec ( LOGONS_SEC )
2019 Logons Per Txn ( LOGONS_TXN )
2020 Open Cursors Per Sec ( OPEN_CURSORS_SEC )
2021 Open Cursors Per Txn ( OPEN_CURSORS_TXN )
2022 User Commits Per Sec ( USER_COMMITS_SEC )
2023 User Commits Percentage ( USER_COMMITS_TXN )
2024 User Rollbacks Per Sec ( USER_ROLLBACKS_SEC )
2025 User Rollbacks Percentage ( USER_ROLLBACKS_TXN )
2026 User Calls Per Sec ( USER_CALLS_SEC )
2027 User Calls Per Txn ( USER_CALLS_TXN )
2028 Recursive Calls Per Sec ( RECURSIVE_CALLS_SEC )
2029 Recursive Calls Per Txn ( RECURSIVE_CALLS_TXN )
2030 Logical Reads Per Sec ( SESS_LOGICAL_READS_SEC)
2031 Logical Reads Per Txn ( SESS_LOGICAL_READS_TXN )
2032 DBWR Checkpoints Per Sec ( DBWR_CKPT_SEC )
2033 Background Checkpoints Per Sec ( BACKGROUND_CKPT_SEC )
2034 Redo Writes Per Sec ( REDO_WRITES_SEC )
2035 Redo Writes Per Txn ( REDO_WRITES_TXN )
2036 Long Table Scans Per Sec ( LONG_TABLE_SCANS_SEC )
2037 Long Table Scans Per Txn ( LONG_TABLE_SCANS_TXN )
2038 Total Table Scans Per Sec ( TOTAL_TABLE_SCANS_SEC )
2039 Total Table Scans Per Txn ( TOTAL_TABLE_SCANS_TXN )
2040 Full Index Scans Per Sec ( FULL_INDEX_SCANS_SEC )
2041 Full Index Scans Per Txn ( FULL_INDEX_SCANS_TXN )
2042 Total Index Scans Per Sec ( TOTAL_INDEX_SCANS_SEC )
2043 Total Index Scans Per Txn ( TOTAL_INDEX_SCANS_TXN )
2044 Total Parse Count Per Sec ( TOTAL_PARSES_SEC )
2045 Total Parse Count Per Txn ( TOTAL_PARSES_TXN )
2046 Hard Parse Count Per Sec ( HARD_PARSES_SEC )
2047 Hard Parse Count Per Txn ( HARD_PARSES_TXN )
2048 Parse Failure Count Per Sec ( PARSE_FAILURES_SEC )
2049 Parse Failure Count Per Txn ( PARSE_FAILURES_TXN )
2050 Cursor Cache Hit Ratio ( CURSOR_CACHE_HIT )
2051 Disk Sort Per Sec ( DISK_SORT_SEC )
2052 Disk Sort Per Txn ( DISK_SORT_TXN )

2053 Rows Per Sort ( ROWS_PER_SORT )
2054 Execute Without Parse Ratio ( EXECUTE_WITHOUT_PARSE )
2055 Soft Parse Ratio ( SOFT_PARSE_PCT )
2056 User Calls Ratio ( USER_CALLS_PCT )
2057 Host CPU Utilization (%)
2058 Network Traffic Volume Per Sec ( NETWORK_BYTES_SEC )
2059 Enqueue Timeouts Per Sec ( ENQUEUE_TIMEOUTS_SEC )
2060 Enqueue Timeouts Per Txn ( ENQUEUE_TIMEOUTS_TXN )
2061 Enqueue Waits Per Sec ( ENQUEUE_WAITS_SEC )
2062 Enqueue Waits Per Txn ( ENQUEUE_WAITS_TXN )
2063 Enqueue Deadlocks Per Sec ( ENQUEUE_DEADLOCKS_SEC )
2064 Enqueue Deadlocks Per Txn ( ENQUEUE_DEADLOCKS_TXN )
2065 Enqueue Requests Per Sec ( ENQUEUE_REQUESTS_SEC )
2066 Enqueue Requests Per Txn ( ENQUEUE_REQUESTS_TXN )
2067 DB Block Gets Per Sec ( DB_BLKGETS_SEC )
2068 DB Block Gets Per Txn ( DB_BLKGETS_TXN )
2069 Consistent Read Gets Per Sec ( CONSISTENT_GETS_SEC )
2070 Consistent Read Gets Per Txn ( CONSISTENT_GETS_TXN )
2071 DB Block Changes Per Sec ( DB_BLKCHANGES_SEC )
2072 DB Block Changes Per Txn ( DB_BLKCHANGES_TXN )
2073 Consistent Read Changes Per Sec ( CONSISTENT_CHANGES_SEC )
2074 Consistent Read Changes Per Txn ( CONSISTENT_CHANGES_TXN )
2075 CPU Usage Per Sec ( SESSION_CPU_SEC )
2076 CPU Usage Per Txn ( SESSION_CPU_TXN )
2077 CR Blocks Created Per Sec ( CR_BLOCKS_CREATED_SEC )
2078 CR Blocks Created Per Txn ( CR_BLOCKS_CREATED_TXN )
2079 CR Undo Records Applied Per Sec ( CR_RECORDS_APPLIED_SEC )
2080 CR Undo Records Applied Per Txn ( CR_RECORDS_APPLIED_TXN )
2081 User Rollback UndoRec Applied Per Sec ( RB_RECORDS_APPLIED_SEC )
2082 User Rollback Undo Records Applied Per Txn ( RB_RECORDS_APPLIED_TXN )
2083 Leaf Node Splits Per Sec ( LEAF_NODE_SPLITS_SEC )
2084 Leaf Node Splits Per Txn ( LEAF_NODE_SPLITS_TXN )
2085 Branch Node Splits Per Sec ( BRANCH_NODE_SPLITS_SEC )
2086 Branch Node Splits Per Txn ( BRANCH_NODE_SPLITS_TXN )
2087 PX downgraded 25% or more Per Sec ( PX_DOWNGRADED_25_SEC )
2088 PX downgraded 50% or more Per Sec ( PX_DOWNGRADED_50_SEC )
2089 PX downgraded 75% or more Per Sec ( PX_DOWNGRADED_75_SEC )
2090 PX downgraded Parallel Operation Per Sec ( PX_DOWNGRADED_SEC )
2091 PX downgraded to serial Per Sec ( PX_DOWNGRADED_SER_SEC )
2092 Physical Read Total IO Requests Per Sec
2093 Physical Read Total Bytes Per Sec ( PX_DOWNGRADED_SEC )
2094 GC CR Block Received Per Second
2095 GC CR Block Received Per Txn ( PX_DOWNGRADED_SER_SEC )
2096 GC Current Block Received Per Second
2097 GC Current Block Received Per Txn
2098 Global Cache Average CR Get Time ( GC_AVG_CR_GET_TIME )
2099 Global Cache Average Current Get Time ( GC_AVG_CUR_GET_TIME )
2100 Physical Write Total IO Requests Per Sec
2101 Global Cache Blocks Corrupted ( GC_BLOCKS_CORRUPT )
2102 Global Cache Blocks Lost ( GC_BLOCKS_LOST )
2103 Current Logons Count ( LOGONS_CURRENT )
2104 Current Open Cursors Count ( OPEN_CURSORS_CURRENT )
2105 User Limit % ( USER_LIMIT_PCT )
2106 SQL Service Response Time ( SQL_SRV_RESPONSE_TIME )
2107 Database Wait Time Ratio ( DATABASE_WAIT_TIME )
2108 Database CPU Time Ratio ( DATABASE_CPU_TIME )
2109 Response Time Per Txn ( RESPONSE_TXN )
2110 Row Cache Hit Ratio ( ROW_CACHE_HIT )
2111 Row Cache Miss Ratio ( ROW_CACHE_MISS )
2112 Library Cache Hit Ratio ( LIBARY_CACHE_HIT )
2113 Library Cache Miss Ratio ( LIBARY_CACHE_MISS )
2114 Shared Pool Free % ( SHARED_POOL_FREE_PCT )
2115 PGA Cache Hit % ( PGA_CACHE_HIT )
2118 Process Limit % ( PROCESS_LIMIT_PCT )
2119 Session Limit % ( SESSION_LIMIT_PCT )
2120 Executions Per Txn
2121 Executions Per Sec ( EXECUTIONS_PER_SEC )
2122 Txns Per Logon
2123 Database Time Per Sec ( DB_TIME_PER_SEC )
2124 Physical Write Total Bytes Per Sec
2125 Physical Read IO Requests Per Sec
2126 Physical Read Bytes Per Sec
2127 Physical Write IO Requests Per Sec
2128 Physical Write Bytes Per Sec
2129 DB Block Changes Per User Call
2130 DB Block Gets Per User Call
2131 Executions Per User Call
2132 Logical Reads Per User Call
2133 Total Sorts Per User Call
2134 Total Table Scans Per User Call
2135 Current OS Load
2136 Streams Pool Usage Percent ( STREAMS_POOL_USED_PCT )
4000 Blocked User Session Count ( BLOCKED_USERS )
5000 User Transaction Count (Session)
5001 CPU Time (Session)
5002 Physical Reads (Session)
5003 PGA Memory (Session)
5004 Hard Parse Count (Session)
5005 Total Parse Count (Session)
5006 Physical Reads Ratio (Sess/Sys) %
5007 Logical Reads Ratio (Sess/Sys) %
5008 Soft Parse Count
5009 Logical Read Count
6000 Elapsed Time Per User Call ( ELAPSED_TIME_PER_CALL )
6001 CPU Time Per User Call ( CPU_TIME_PER_CALL )
6002 User Calls Per Second
6003 DB Time Per Second
6004 DB Time Per User Call
7000 Average File Read Time (Files-Long) ( AVG_FILE_READ_TIME )
7001 Average File Write Time (Files-Long) ( AVG_FILE_WRITE_TIME )
7002 Physical Reads (Files-Long)
7003 Physical Writes (Files-Long)
7004 Physical Block Reads (Files-Long)
7005 Physical Block Writes (Files-Long)
9000 Tablespace Space Usage ( TABLESPACE_PCT_FULL )
9001 Tablespace Bytes Space Usage ( TABLESPACE_BYT_FREE )
