Sunday, November 16, 2008

Numeric Datatypes in Oracle

BINARY_INTEGER Subtypes:
A base type is the datatype from which a subtype is derived. For your convenience, PL/SQL predefines the following BINARY_INTEGER subtypes:
NATURAL : lets you restrict an integer variable to non-negative values.
NATURALN : like NATURAL, but also prevents the assigning of nulls.
POSITIVE : lets you restrict an integer variable to positive values.
POSITIVEN : like POSITIVE, but also prevents the assigning of nulls.
SIGNTYPE : lets you restrict an integer variable to the values -1, 0, and 1.
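For instance, here is a sketch of how these subtypes behave at run time (variable names are illustrative):

```sql
DECLARE
  n  NATURAL   := 0;    -- zero and positive values allowed
  p  POSITIVE  := 1;    -- only values >= 1 allowed
  pn POSITIVEN := 1;    -- like POSITIVE, but NOT NULL as well
  s  SIGNTYPE  := -1;   -- only -1, 0, or 1
BEGIN
  n := -1;              -- violates the NATURAL constraint
EXCEPTION
  WHEN VALUE_ERROR THEN
    DBMS_OUTPUT.PUT_LINE('Out-of-range assignment rejected');
END;
/
```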

BINARY_FLOAT and BINARY_DOUBLE Datatypes:
To help deal with overflow, underflow, and other conditions that can occur with these numbers, you can use several special predefined constants: BINARY_FLOAT_NAN, BINARY_FLOAT_INFINITY, BINARY_FLOAT_MAX_NORMAL, BINARY_FLOAT_MIN_NORMAL, BINARY_FLOAT_MAX_SUBNORMAL, BINARY_FLOAT_MIN_SUBNORMAL, and corresponding names starting with BINARY_DOUBLE. The constants for NaN (not a number) and infinity are also defined by SQL; the others are PL/SQL-only.
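For example, overflow in BINARY_FLOAT arithmetic produces infinity rather than raising an exception, and these constants let you test for that (a minimal sketch):

```sql
DECLARE
  f BINARY_FLOAT := BINARY_FLOAT_MAX_NORMAL;
BEGIN
  f := f * 2;  -- overflows past the largest normal value
  IF f = BINARY_FLOAT_INFINITY THEN
    DBMS_OUTPUT.PUT_LINE('Overflowed to infinity');
  END IF;
END;
/
```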

NUMBER Datatype:
The syntax of a NUMBER datatype is:
NUMBER[(precision,scale)]
The maximum precision that can be specified for a NUMBER value is 38 decimal digits. If you do not specify precision, it defaults to 39 or 40, or the maximum supported by your system, whichever is less.
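For example, with an assumed NUMBER(6,2) declaration, values are rounded to the scale and rejected when they exceed the precision:

```sql
DECLARE
  price NUMBER(6,2);  -- at most 6 significant digits, 2 after the decimal point
BEGIN
  price := 1234.567;  -- rounded to scale: stored as 1234.57
  DBMS_OUTPUT.PUT_LINE(price);
  price := 123456.0;  -- needs 6 digits before the point, only 4 allowed
EXCEPTION
  WHEN VALUE_ERROR THEN
    DBMS_OUTPUT.PUT_LINE('Value exceeds declared precision');
END;
/
```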

PLS_INTEGER Datatype:
You use the PLS_INTEGER datatype to store signed integers. Its magnitude range is -2147483648 to 2147483647, represented in 32 bits. PLS_INTEGER values require less storage than NUMBER values and NUMBER subtypes. Also, PLS_INTEGER operations use hardware arithmetic, so they are faster than NUMBER operations, which use library arithmetic.
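A quick sketch of the 32-bit limit; incrementing past the maximum raises a numeric overflow error:

```sql
DECLARE
  i PLS_INTEGER := 2147483647;  -- maximum PLS_INTEGER value
BEGIN
  i := i + 1;                   -- exceeds the 32-bit range
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('Overflow: ' || SQLERRM);
END;
/
```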

Saturday, November 15, 2008

Exponentiation Operator in Oracle




SQL>
SQL> SET SERVEROUTPUT ON
SQL> BEGIN
2 DBMS_OUTPUT.PUT_LINE(4 ** 2);
3 END;
4 /
16

PL/SQL procedure successfully completed.

Object Types in PL/SQL

PL/SQL supports object-oriented programming through object types. An object type encapsulates a data structure along with the functions and procedures needed to manipulate the data. The variables that form the data structure are known as attributes. The functions and procedures that manipulate the attributes are known as methods.


CREATE TYPE bank_account AS OBJECT (
acct_number NUMBER(5),
balance NUMBER,
status VARCHAR2(10),
MEMBER PROCEDURE open (SELF IN OUT NOCOPY bank_account, amount IN NUMBER),
MEMBER PROCEDURE close (SELF IN OUT NOCOPY bank_account, num IN NUMBER,
amount OUT NUMBER),
MEMBER PROCEDURE deposit (SELF IN OUT NOCOPY bank_account, num IN NUMBER,
amount IN NUMBER),
MEMBER PROCEDURE withdraw (SELF IN OUT NOCOPY bank_account, num IN NUMBER,
amount IN NUMBER),
MEMBER FUNCTION curr_bal (num IN NUMBER) RETURN NUMBER );
/

Records in PL/SQL

Records are composite data structures whose fields can have different datatypes. You
can use records to hold related items and pass them to subprograms with a single parameter.

DECLARE
TYPE timerec IS RECORD (hours SMALLINT, minutes SMALLINT);
TYPE meetin_typ IS RECORD (
date_held DATE,
duration timerec, -- nested record
location VARCHAR2(20),
purpose VARCHAR2(50));
BEGIN
NULL;
END;
/

Using the EXIT-WHEN Statement in PL/SQL

Besides the FOR and WHILE loops, PL/SQL offers the basic LOOP, which you leave with an EXIT WHEN statement.

DECLARE
counter NUMBER := 0;
total NUMBER := 0;
BEGIN
LOOP
counter := counter + 1;
total := total + counter * counter;
-- exit loop when condition is true
EXIT WHEN total > 25000;
END LOOP;
END;
/

Using CASE Statements in PL/SQL programs

While CASE constructs don't offer any fundamentally new semantics, they do allow a more compact notation and some elimination of repetition with respect to what otherwise would be expressed with an IF construct.

case
when n = 1 then Action1;
when n = 2 then Action2;
when n = 3 then Action3;
when ( n > 3 and n < 8 ) then Action4through7;
else ActionOther;
end case;

…and…

text := case
when n = 1 then one
when n = 2 then two
when n = 3 then three
when ( n > 3 and n < 8 ) then four_through_seven
else other
end;

For example:
set serveroutput on;
drop function Get_Grade_Description;

create or replace function Get_Grade_Description(grade IN VARCHAR2) RETURN VARCHAR2 is
description VARCHAR2(15);
begin
-- use a CASE Expression to store the description for
-- the grade in a variable
description := CASE grade
WHEN 'A' THEN 'Excellent'
WHEN 'B' THEN 'Very Good'
WHEN 'C' THEN 'Good'
WHEN 'D' THEN 'Fair'
WHEN 'F' THEN 'Fail'
ELSE 'No such grade'
END;
-- return the description
return description;
end Get_Grade_Description;
/
Show Errors

select Get_Grade_Description('A') from dual
/

Using the IF-THEN-ELSE Statement for Conditional Control in PL/SQL

The IF-THEN-ELSE statement lets you execute a sequence of statements conditionally.

An IF statement looks like:
IF condition THEN sequence_of_statements ELSE sequence_of_statements END IF;
The ELSE part is optional. If you want a multiway branch, use:
IF condition1 THEN ...
ELSIF condition2 THEN ...
... ...
ELSIF conditionN THEN ...
ELSE ...
END IF;
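A concrete multiway branch, assuming a numeric variable score:

```sql
DECLARE
  score NUMBER := 77;
BEGIN
  IF score >= 90 THEN
    DBMS_OUTPUT.PUT_LINE('Excellent');
  ELSIF score >= 70 THEN
    DBMS_OUTPUT.PUT_LINE('Good');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Needs improvement');
  END IF;
END;
/
```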

Thursday, November 13, 2008

%TYPE and %ROWTYPE in PL/SQL

The %TYPE attribute provides the datatype of a variable or database column. This is
particularly useful when declaring variables that will hold database values.

v_last_name employees.last_name%TYPE;

In PL/SQL, records are used to group data. A record consists of a number of related
fields in which data values can be stored. The %ROWTYPE attribute provides a record
type that represents a row in a table.

dept_rec departments%ROWTYPE;
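Putting the two attributes together (this sketch assumes the standard HR sample schema):

```sql
DECLARE
  v_last_name employees.last_name%TYPE;  -- tracks the column's datatype
  dept_rec    departments%ROWTYPE;       -- one field per column of the table
BEGIN
  SELECT last_name INTO v_last_name
    FROM employees WHERE employee_id = 100;
  SELECT * INTO dept_rec
    FROM departments WHERE department_id = 10;
  DBMS_OUTPUT.PUT_LINE(v_last_name || ' - ' || dept_rec.department_name);
END;
/
```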

Using Bind Variables in PL/SQL

DECLARE
bonus NUMBER(8,2);
emp_id NUMBER(6) := 100;
BEGIN
EXECUTE IMMEDIATE 'DELETE FROM EMP WHERE employee_id = :emp_id' USING emp_id;
END;
/

Assigning Values to Variables by SELECTing INTO in PL/SQL

DECLARE
bonus NUMBER(8,2);
emp_id NUMBER(6) := 100;
BEGIN
SELECT salary * 0.10 INTO bonus FROM employees
WHERE employee_id = emp_id;
END;
/

Assigning Values to a Variable in PL/SQL

DECLARE
wages NUMBER;
hours_worked NUMBER := 40;
hourly_salary NUMBER := 22.50;
bonus NUMBER := 150;
country VARCHAR2(128);
counter NUMBER;
BEGIN
...........
END;
/

Wednesday, November 12, 2008

ORA-02085: database link %s connects to %s

This error happens when the database parameter global_names is set to TRUE.

When this is the case, the database link needs to have the same name as the global name of the remote database (the one you are trying to connect to).
eg:

SQL> show parameter global_names
NAME TYPE VALUE
------------------------------------ ----------- --------------------
global_names boolean TRUE

SQL> create database link test connect to myuser identified by mypw using 'ORA920';
Database link created.

SQL> select * from global_name@test;
select * from global_name@test
*
ERROR at line 1:
ORA-02085: database link TEST.EU.DBMOTIVE.COM connects to ORA920.EU.DBMOTIVE.COM

-----
Now, first determine the global name of the remote database.
Connect to the remote database and issue following select:

SQL> select global_name from global_name;

GLOBAL_NAME
---------------------------------------------------
ORA920.EU.DBMOTIVE.COM

-------

Back to the database where we're trying to create the database link to ORA920

SQL> drop database link test;

Database link dropped.

SQL> create database link ORA920 connect to myuser identified by mypw using 'ORA920';

Database link created.

SQL> select * from global_name@ora920;

GLOBAL_NAME
-------------------------------------------------------------------
ORA920.EU.DBMOTIVE.COM

When you need more than one database link to connect to the remote database, you'll have to use a qualifier:
SQL> create database link ORA920@MYUSER connect to myuser identified by mypw using 'ORA920';

Database link created.

SQL> select * from global_name@ora920@myuser;

GLOBAL_NAME
-------------------------------------------------------------------
ORA920.EU.DBMOTIVE.COM

Monday, November 10, 2008

How to delete the listener service

Use the Net Configuration Assistant. Start -> Programs -> Oracle Home -> Configuration and Migration Tools -> Net Configuration Assistant
Or from the commandline: c:> netca

Wednesday, November 5, 2008

ORA-16179: incremental changes to "string" not allowed with SPFILE

SQL> alter system set log_archive_dest_2='/oradata1/terminus_archive' scope=both;
alter system set log_archive_dest_2='/oradata1/terminus_archive' scope=both
*
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-16179: incremental changes to "log_archive_dest_2" not allowed with SPFILE

Cause: Incremental changes to a log_archive_dest_n parameter cannot be made when using an SPFILE.
Action: Specify either LOCATION or SERVICE plus all other attributes to be set in one ALTER SYSTEM/SESSION SET command.

SQL> alter system set log_archive_dest_2='location=/oradata1/terminus_archive' scope=both;

System altered.

Tuesday, November 4, 2008

Store Image into a BLOB column in Oracle

create table demo
( id int primary key,
theBlob blob
)
/

create or replace directory my_files as '/export/home/image';

declare
l_blob blob;
l_bfile bfile;
begin
insert into demo values ( 1, empty_blob() )
returning theBlob into l_blob;

l_bfile := bfilename( 'MY_FILES', 'aria.gif' );
dbms_lob.fileopen( l_bfile );

dbms_lob.loadfromfile( l_blob, l_bfile,
dbms_lob.getlength( l_bfile ) );

dbms_lob.fileclose( l_bfile );
end;
/

Monday, November 3, 2008

Writing a File with PL/SQL in Oracle

DECLARE
vInHandle utl_file.file_type;
vNewLine VARCHAR2(250);
BEGIN
vNewLine:='sadat';
vInHandle := utl_file.fopen('SADAT', 'test.sql', 'W');
utl_file.put_line(vInHandle, vNewLine);
utl_file.fclose(vInHandle);
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('Exception: ' || SQLERRM);
END;
/

Sunday, November 2, 2008

Finding All Modifications in the Last Archived Redo Log File By using LogMiner

SELECT NAME FROM V$ARCHIVED_LOG
WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);


EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/oradata2/archive/arch1_1328_1_664408221.dbf',OPTIONS => DBMS_LOGMNR.NEW);


EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);


SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,
SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username IN ('SADAT');

Steps in a Typical LogMiner Session

Enable Supplemental Logging:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Extract a LogMiner Dictionary:
To use LogMiner, you must supply it with a dictionary by doing one of the following:
  • Specify use of the online catalog by using the DICT_FROM_ONLINE_CATALOG option when you start LogMiner.
  • Extract database dictionary information to the redo log files.
  • Extract database dictionary information to a flat file.
Specify Redo Log Files for Analysis
1. Create a list of redo log files. Specify the NEW option of the DBMS_LOGMNR.ADD_LOGFILE PL/SQL procedure to signal that this is the beginning of a new list.
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/oracle/logs/log1.f', -
OPTIONS => DBMS_LOGMNR.NEW);

2. If desired, add more redo log files:
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/oracle/logs/log2.f', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
The OPTIONS parameter is optional here; its default value is DBMS_LOGMNR.ADDFILE.

3. If desired, remove redo log files:
EXECUTE DBMS_LOGMNR.REMOVE_LOGFILE( -
LOGFILENAME => '/oracle/logs/log2.f');

Start LogMiner:
You must start LogMiner explicitly. Use the OPTIONS parameter to specify either the DICT_FROM_REDO_LOGS or DICT_FROM_ONLINE_CATALOG option. If you specify DICT_FROM_REDO_LOGS, LogMiner expects to find a dictionary in the redo log files that you specified with the DBMS_LOGMNR.ADD_LOGFILE procedure. To determine which redo log files contain a dictionary, look at the V$ARCHIVED_LOG view. If you are specifying the name of a flat file LogMiner dictionary, you must supply a fully qualified filename for the dictionary file. So,
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
OR
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS);
OR
EXECUTE DBMS_LOGMNR.START_LOGMNR(DICTFILENAME =>'/oracle/database/dictionary.ora');

Query V$LOGMNR_CONTENTS:
For Example,
SELECT (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,
USERNAME, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE USERNAME != 'SYS'
AND (SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM'));

End the LogMiner Session:
EXECUTE DBMS_LOGMNR.END_LOGMNR;

Disabling Database-Level Supplemental Logging

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

The statements would have the following effects:
  • After the first statement, primary key supplemental logging is enabled.
  • After the second statement, primary key and unique key supplemental logging are enabled.
  • After the third statement, only unique key supplemental logging is enabled.
  • After the fourth statement, supplemental logging is not completely disabled. The following error is returned: ORA-32589: unable to drop minimal supplemental logging.

To disable all database supplemental logging, you must first disable any identification key logging that has been enabled, then disable minimal supplemental logging.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

Filtering and Formatting Data Returned to V$LOGMNR_CONTENTS

Showing Only Committed Transactions:
When you use the COMMITTED_DATA_ONLY option to DBMS_LOGMNR.START_LOGMNR, only rows belonging to committed transactions are shown in the V$LOGMNR_CONTENTS view. This enables you to filter out rolled back transactions, transactions that are in progress, and internal operations.
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.COMMITTED_DATA_ONLY);

Skipping Redo Corruptions:
When you use the SKIP_CORRUPTION option to DBMS_LOGMNR.START_LOGMNR, any corruptions in the redo log files are skipped during select operations from the V$LOGMNR_CONTENTS view. For every corrupt redo record encountered, a row is returned that contains the value CORRUPTED_BLOCKS in the OPERATION column, 1343 in the STATUS column, and the number of blocks skipped in the INFO column.
-- Add redo log files of interest.
--

EXECUTE DBMS_LOGMNR.ADD_LOGFILE(-
logfilename => '/usr/oracle/data/db1arch_1_16_482701534.log', -
options => DBMS_LOGMNR.NEW);

-- Start LogMiner
--
EXECUTE DBMS_LOGMNR.START_LOGMNR();

-- Select from the V$LOGMNR_CONTENTS view. This example shows that there are
-- corruptions in the redo log files.
--

SELECT rbasqn, rbablk, rbabyte, operation, status, info
FROM V$LOGMNR_CONTENTS;

ERROR at line 3:
ORA-00368: checksum error in redo log block
ORA-00353: log corruption near block 6 change 73528 time 11/06/2002 11:30:23
ORA-00334: archived log: /usr/oracle/data/dbarch1_16_482701534.log

-- Restart LogMiner. This time, specify the SKIP_CORRUPTION option.
--

EXECUTE DBMS_LOGMNR.START_LOGMNR(-
options => DBMS_LOGMNR.SKIP_CORRUPTION);

-- Select from the V$LOGMNR_CONTENTS view again. The output indicates that
-- corrupted blocks were skipped: CORRUPTED_BLOCKS is in the OPERATION
-- column, 1343 is in the STATUS column, and the number of corrupt blocks
-- skipped is in the INFO column.
--
SELECT rbasqn, rbablk, rbabyte, operation, status, info
FROM V$LOGMNR_CONTENTS;


Filtering Data by Time:
To avoid the need to specify the date format in the call to the PL/SQL DBMS_LOGMNR.START_LOGMNR procedure, you can use the SQL ALTER SESSION SET NLS_DATE_FORMAT statement first, as shown in the following example.
ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
DICTFILENAME => '/oracle/database/dictionary.ora', -
STARTTIME => '01-Jan-1998 08:30:00', -
ENDTIME => '01-Jan-1998 08:45:00', -
OPTIONS => DBMS_LOGMNR.CONTINUOUS_MINE);

Filtering Data by SCN:
To filter data by SCN (system change number), use the STARTSCN and ENDSCN parameters to the PL/SQL DBMS_LOGMNR.START_LOGMNR procedure, as shown in this example:
EXECUTE DBMS_LOGMNR.START_LOGMNR(-
STARTSCN => 621047, -
ENDSCN => 625695, -
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
DBMS_LOGMNR.CONTINUOUS_MINE);
Formatting Reconstructed SQL Statements for Reexecution:
By default, a ROWID clause is included in the reconstructed SQL_REDO and SQL_UNDO statements, and the statements are ended with a semicolon. You can override the default settings, as follows:
  • Specify the NO_ROWID_IN_STMT option when you start LogMiner.
  • Specify the NO_SQL_DELIMITER option when you start LogMiner.
Formatting the Appearance of Returned Data for Readability:
The PRINT_PRETTY_SQL option to the DBMS_LOGMNR.START_LOGMNR procedure formats the reconstructed SQL statements, which makes them easier to read.
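For example, the formatting options can be combined with the dictionary option in a single call (a sketch):

```sql
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
  DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
  DBMS_LOGMNR.NO_ROWID_IN_STMT + -
  DBMS_LOGMNR.NO_SQL_DELIMITER + -
  DBMS_LOGMNR.PRINT_PRETTY_SQL);
```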

LogMiner Dictionary Files and Redo Log Files in Oracle

It is important to understand how LogMiner works with the LogMiner dictionary file (or files) and redo log files. This will help you to get accurate results and to plan the use of your system resources.

LogMiner Dictionary Options
LogMiner requires a dictionary to translate object IDs into object names when it returns redo data to you. LogMiner gives you three options for supplying the dictionary:

Using the Online Catalog
Oracle recommends that you use this option when you will have access to the source database from which the redo log files were created and when no changes to the column definitions in the tables of interest are anticipated. This is the most efficient and easy-to-use option.

Extracting a LogMiner Dictionary to the Redo Log Files
Oracle recommends that you use this option when you do not expect to have access to the source database from which the redo log files were created, or if you anticipate that changes will be made to the column definitions in the tables of interest.

Extracting the LogMiner Dictionary to a Flat File
This option is maintained for backward compatibility with previous releases. This option does not guarantee transactional consistency. Oracle recommends that you use either the online catalog or extract the dictionary from redo log files instead.



Using the Online Catalog
To direct LogMiner to use the dictionary currently in use for the database, specify the online catalog as your dictionary source when you start LogMiner, as follows:
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

You can also use it to analyze archived redo log files. It may be the fastest way to start your analysis. Remember, however, that the online catalog can only reconstruct SQL statements that are executed on the latest version of a table. As soon as a table is altered, the online catalog no longer reflects the previous version of the table. This means that LogMiner will not be able to reconstruct any SQL statements that were executed on the previous version of the table. The online catalog option is not valid with the DDL_DICT_TRACKING option of DBMS_LOGMNR.START_LOGMNR.

Extracting a LogMiner Dictionary to the Redo Log Files
To extract a LogMiner dictionary to the redo log files, the database must be open and in ARCHIVELOG mode, and archiving must be enabled. While the dictionary is being extracted to the redo log stream, no DDL statements can be executed. To extract dictionary information to the redo log files, execute the PL/SQL DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS option. Do not specify a filename or location.
EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

Specify the names of the start and end redo log files, and possibly other logs in between them, with the ADD_LOGFILE procedure when you are preparing to begin a LogMiner session.
Again, because of the time required, it is good practice to do this during off-peak hours.

Redo Log File Options
You can direct LogMiner to automatically and dynamically create a list of redo log files to analyze, or you can explicitly specify a list of redo log files for LogMiner to analyze, as follows:
Automatically:
You can direct LogMiner to find and create a list of redo log files for analysis automatically. Use the CONTINUOUS_MINE option when you start LogMiner with the DBMS_LOGMNR.START_LOGMNR procedure, and specify a time or SCN range.
For example:
ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
STARTTIME => '01-Jan-2003 08:30:00', -
ENDTIME => '01-Jan-2003 08:45:00', -
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);

The ALTER SESSION SET NLS_DATE_FORMAT statement avoids the need to specify the date format in the PL/SQL call to the DBMS_LOGMNR.START_LOGMNR procedure. You can also direct LogMiner to automatically build a list of redo log files to analyze by specifying just one redo log file using DBMS_LOGMNR.ADD_LOGFILE, and then specifying the CONTINUOUS_MINE option when you start LogMiner. The previously described method is more typical, however.
Manually
Use the DBMS_LOGMNR.ADD_LOGFILE procedure to manually create a list of redo log files before you start LogMiner. To start a new list of redo log files, specify the NEW option of the DBMS_LOGMNR.ADD_LOGFILE PL/SQL procedure to signal that this is the beginning of a new list. For example, enter the following to specify /oracle/logs/log1.f:
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/logs/log1.f', OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/logs/log2.f',OPTIONS => DBMS_LOGMNR.ADDFILE);


Starting LogMiner:
When you start LogMiner, you can:
*Specify how LogMiner should filter data it returns (for example, by starting and ending time or SCN value)
*Specify options for formatting the data returned by LogMiner
*Specify the LogMiner dictionary to use

Logminer in Oracle 10g

Requirements:
The following are requirements for the source and mining database, the data dictionary, and the redo log files that LogMiner will mine:
Source and mining database:
1. Both the source database and the mining database must be running on the same hardware platform.
2. The mining database can be the same as, or completely separate from, the source database.
3. The mining database must run the same version or a later version of the Oracle Database software as the source database.
4. The mining database must use the same character set (or a superset of the character set) used by the source database.
LogMiner dictionary
1. The dictionary must be produced by the same source database that generates the redo log files that LogMiner will analyze.
All redo log files:
1. Must be produced by the same source database.
2. Must be associated with the same database RESETLOGS SCN.
3. Must be from a release 8.0 or later Oracle Database.

Others:
You must enable supplemental logging prior to generating log files that will be analyzed by LogMiner.
SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

BFILE in Oracle

Here is a table I will use for the rest of this entry:
CREATE TABLE large_objects (
char_data CLOB,
binary_data BLOB,
ext_binary_data BFILE
)

A BFILE is a binary file stored outside the database. You may want to manage your pictures or documents externally and be able to reference them internally. In that case you would use a BFILE. In a BFILE column in the database, you actually store the locator to the BFILE rather than the BFILE itself.

To use a BFILE you need to create a database directory first. This database directory points to an OS directory on your SERVER (not your local drive):

create directory lob_data as 'c:\temp'


insert into large_objects (ext_binary_data) values (bfilename('LOB_DATA', 'logan.jpg'));


select dbms_lob.getlength(ext_binary_data) len_bin_data from large_objects

LEN_BIN_DATA
------------
64988

1 row selected.

CLOB in Oracle

Here is a table I will use for the rest of this entry:
CREATE TABLE large_objects (
char_data CLOB,
binary_data BLOB,
ext_binary_data BFILE
)

And now I'll insert a CLOB. If your data really was large, that's where DBMS_LOB would come into play. For this entry, I will stick to basics. I'll cover programming with LOBs in a future entry.
insert into large_objects (char_data)
values ('This is a lob even though it''s small');

And now we will select the data back and perform some string functions on it:
select char_data,
substr(char_data, 1, 7) substr_char_data,
instr(char_data, '''') instr_char_data
from large_objects;

char_data substr_char_data INSTR_CHAR_DATA
------------------------------ ------------------------------ ---------------
This is a lob even though it's This is 29
small

1 row selected.
To load files from the file system, look into DBMS_LOB.

BLOB datatype in Oracle

LOB OBJECT:
Here is a table I will use for the rest of this entry:
CREATE TABLE large_objects (
char_data CLOB,
binary_data BLOB,
ext_binary_data BFILE
)

A BLOB is a LOB for binary data. You might store pictures or word processing file in a BLOB field. You can use DBMS_LOB to load a blob into a BLOB field. You can also use UTL_RAW to convert char data to RAW.
insert into large_objects (binary_data)
values (utl_raw.cast_to_raw('This is a lob even though it''s small'));

select binary_data,
utl_raw.cast_to_varchar2(binary_data) bin_data,
dbms_lob.getlength(binary_data) len_bin_data
from large_objects;


binary_data                    bin_data                       LEN_BIN_DATA
------------------------------ ------------------------------ ------------
                               This is a lob even though it's           36
                               small
1 row selected.
