Wednesday, September 24, 2008

Dynamic SELECT Statement on Oracle 10G

DECLARE
n NUMBER;
sql_stmt VARCHAR2(255);
TYPE DynaCur IS REF CURSOR;
c DynaCur;
BEGIN
-- Build the statement at run time; it must return a single NUMBER
-- so that it matches the FETCH ... INTO list below.
sql_stmt := 'SELECT b.nrow FROM (SELECT rownum nrow, a.* FROM dba_users a WHERE rownum = 1) b';
OPEN c FOR sql_stmt;
FETCH c INTO n;
dbms_output.put_line(n);
CLOSE c;
END;
/

Sunday, September 21, 2008

Estimate TEMP usage without running SQL in Oracle

I was running a SQL statement with a huge sort, but each time the job failed because it was not able to extend the TEMP tablespace. As a result, I needed to know how to determine the required TEMP usage without actually running the SQL statement.

My immediate answer was :
- Login to database with the same user.
- Get Execution plan.
- Watch TempSpc column in execution plan.

The client called me back and said "there is no TempSpc column in the execution plan." (By the way, the client runs the latest 10g version, 10.2.0.3.)

There are basically two reasons why TempSpc may not appear in the execution plan.

Reason 1 : The SQL statement does not need temporary tablespace; dedicated memory (PGA) is enough to run all operations.

Reason 2 : The plan table is old and does not have a column to record temp space usage.

To update the plan table, run the following:
- drop table plan_table
- @$ORACLE_HOME/rdbms/admin/utlxplan.sql

(If you are using AUTOTRACE to get the execution plan, you need to run @$ORACLE_HOME/sqlplus/admin/plustrce.sql as SYS and then grant PLUSTRACE to the user.)
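
Once the plan table is current, a minimal sketch for checking TempSpc looks like this (BIG_TAB and its columns are just hypothetical placeholders for a statement with a large sort):

EXPLAIN PLAN FOR
SELECT * FROM big_tab ORDER BY col1, col2;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

The TempSpc column shows up in the DBMS_XPLAN output for operations such as SORT ORDER BY or HASH JOIN that the optimizer expects to spill to temporary space.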

Saturday, September 20, 2008

EXECUTE IMMEDIATE option for Dynamic SQL and PL/SQL

1. To run a DDL or session control statement (such as SET ROLE) in PL/SQL.
begin
execute immediate 'set role all';
end;

2. To pass values to a dynamic statement (USING clause).
declare
l_depnam varchar2(20) := 'testing';
l_loc varchar2(10) := 'Dubai';
begin
execute immediate 'insert into dept values (:1, :2, :3)'
using 50, l_depnam, l_loc;
commit;
end;

3. To retrieve values from a dynamic statement (INTO clause).
declare
l_cnt varchar2(20);
begin
execute immediate 'select count(1) from emp'
into l_cnt;
dbms_output.put_line(l_cnt);
end;

4. To call a routine dynamically: The bind variables used for parameters of the routine have to be specified along with the parameter type. IN type is the default, others have to be specified explicitly.
declare
l_routin varchar2(100) := 'gen2161.get_rowcnt';
l_tblnam varchar2(20) := 'emp';
l_cnt number;
l_status varchar2(200);
begin
execute immediate 'begin ' || l_routin || '(:2, :3, :4); end;'
using in l_tblnam, out l_cnt, in out l_status;
if l_status != 'OK' then
dbms_output.put_line('error');
end if;
end;

5. To return value into a PL/SQL record type: The same option can be used for %rowtype variables also.
declare
type empdtlrec is record (empno number(4),
ename varchar2(20),
deptno number(2));
empdtl empdtlrec;
begin
execute immediate 'select empno, ename, deptno ' ||
'from emp where empno = 7934'
into empdtl;
end;

6. To pass and retrieve values: The INTO clause should precede the USING clause.
declare
l_dept pls_integer := 20;
l_nam varchar2(20);
l_loc varchar2(20);
begin
execute immediate 'select dname, loc from dept where deptno = :1'
into l_nam, l_loc
using l_dept ;
end;

7. Multi-row query option: EXECUTE IMMEDIATE ... INTO handles only single-row queries, so use an INSERT statement to populate a temporary table and carry out further processing on that table. Alternatively, you may use a REF cursor to bypass this limitation (see the sketch after the example).
declare
l_sal pls_integer := 2000;
begin
execute immediate 'insert into temp(empno, ename) ' ||
' select empno, ename from emp ' ||
' where sal > :1'
using l_sal;
commit;
end;
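
A minimal sketch of the REF cursor alternative mentioned in option 7 (reusing the emp table and the sal filter from the example above):

declare
type rc is ref cursor;
c rc;
l_sal pls_integer := 2000;
l_ename varchar2(20);
begin
open c for 'select ename from emp where sal > :1' using l_sal;
loop
fetch c into l_ename;
exit when c%notfound;
dbms_output.put_line(l_ename);
end loop;
close c;
end;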

Thursday, September 18, 2008

Long Running Operation in Oracle

There is a dynamic performance view, v$session_longops, that is populated for many long-running operations in Oracle. The primary criterion for an operation to appear in v$session_longops is that it runs for more than 6 seconds. However, this is not the only criterion, and not every operation that takes more than 6 seconds is shown in this view. For example, you can find hash joins in v$session_longops, but you will not find nested loop joins there, even if they run longer than 6 seconds and join very big data sets.

select opname, target, sofar, totalwork, units, elapsed_seconds, message
from v$session_longops
order by start_time desc;

Or, if we want to know about a specific session:

select opname, target, sofar, totalwork, units, elapsed_seconds, message
from v$session_longops
where sid = &sid and serial# = &serial
order by start_time desc;

Built in types of long operations:
Each new version of Oracle adds several new types of built-in long operations that are shown in v$session_longops. Some of them are:

* Table scan;
* Index Fast Full Scan;
* Hash join;
* Sort/Merge;
* Sort Output;
* Rollback;
* Gather Table's Index Statistics.

The other option is to query v$transaction and look at the used_ublk (used undo blocks) and used_urec (used undo records) values. They do not show precisely how much has been deleted, but at least you will see the trend: if they are increasing, the delete has not completed yet.
Similarly, repeated SELECTs against V$SESS_IO will show the I/O counts increasing.
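
A minimal sketch for watching a session's undo usage, joining v$transaction to v$session through the session address:

select s.sid, s.username, t.used_ublk, t.used_urec
from v$transaction t, v$session s
where t.ses_addr = s.saddr;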

DBMS OUTPUT buffer limit solution [SP2-0547]

In previous versions of Oracle, we could specify the size of the output buffer either with the SQL*Plus setting set serveroutput on or with the DBMS_OUTPUT.ENABLE procedure. The explicit size range for the buffer is the same in 10.2 (as seen in the error messages below), but there is a new "UNLIMITED" alternative to the upper bound. In the following examples, it is interesting to note Oracle's interpretation of the high number we attempt to set, which gives us an indication of the real limits it is working to.

SQL> set serveroutput on size 1000
SP2-0547: size option 1000 out of range (2000 through 1000000)

SQL> set serveroutput on size 1000000000000
SP2-0547: size option 3567587328 out of range (2000 through 1000000)

The unconstrained set serveroutput on setting is now equivalent to set serveroutput on size unlimited. Both of these sqlplus commands execute "DBMS_OUTPUT.ENABLE(NULL)" which can be seen in the trace file if running with SQL trace on.

set serveroutput on size unlimited
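
A quick way to verify the unlimited buffer is to print well past the old 1,000,000-byte limit (a sketch; the loop count is arbitrary):

set serveroutput on size unlimited
begin
for i in 1 .. 200000 loop
dbms_output.put_line('line ' || i);
end loop;
end;
/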

COLLECT function in Oracle

The following example shows how COLLECT works.
create table test(a varchar2(10),b number);
insert into test values('a',1);
insert into test values('a',2);
insert into test values('a',3);
insert into test values('b',1);
insert into test values('b',2);
insert into test values('b',3);

CREATE or replace type test_t AS TABLE OF number;
select a,CAST(COLLECT(b) as test_t ) from test group by a;

A          CAST(COLLECT(B)ASTEST_T)
---------- --------------------------------------------------------------------------------
a          TEST_T(1, 2, 3)
b          TEST_T(1, 2, 3)

Lastly, if the column is of VARCHAR2 type, then create the type as a table of VARCHAR2:
CREATE or replace type test_t AS TABLE OF varchar2(200);
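
For instance, after recreating the type as above, the VARCHAR2 column a can be collected grouped by b (a sketch):

select b, CAST(COLLECT(a) as test_t) from test group by b;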

How to use Abstract datatype in Oracle

Create Type
create or replace type full_address_type
as object(
street varchar2(80),
city varchar2(80),
state char(2),
zip varchar2(10));
Create Table
create table customer(
full_name varchar2(50),
full_address full_address_type
);
Insert Data
insert into customer
values('Don Burleson',
full_address_type('123 1st st','Minoy','ND','74635'));

Select Row
Select full_name, C.full_address.state
from customer C;
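
Attributes inside the object column can also be updated through the table alias; a minimal sketch (the new value is arbitrary):

Update Row
update customer C
set C.full_address.state = 'NC'
where C.full_name = 'Don Burleson';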

How to get the parent/child of a table in Oracle

This query lists the child tables of a given table.

SELECT p.table_name PARENT_TABLE, c.table_name CHILD_TABLE
FROM user_constraints p, user_constraints c
WHERE (p.constraint_type = 'P' OR p.constraint_type = 'U')
AND c.constraint_type = 'R'
AND p.constraint_name = c.r_constraint_name
AND p.table_name = UPPER('&tab');




And this one lists the parent tables of a given table.

SELECT c.table_name CHILD_TABLE, p.table_name PARENT_TABLE
FROM user_constraints p, user_constraints c
WHERE (p.constraint_type = 'P' OR p.constraint_type = 'U')
AND c.constraint_type = 'R'
AND p.constraint_name = c.r_constraint_name
AND c.table_name = UPPER('&tab');
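
If the parent and child tables live in different schemas, a similar query against ALL_CONSTRAINTS (adding the owner columns) can be used; a sketch:

SELECT p.owner PARENT_OWNER, p.table_name PARENT_TABLE,
c.owner CHILD_OWNER, c.table_name CHILD_TABLE
FROM all_constraints p, all_constraints c
WHERE (p.constraint_type = 'P' OR p.constraint_type = 'U')
AND c.constraint_type = 'R'
AND p.constraint_name = c.r_constraint_name
AND p.owner = c.r_owner
AND p.table_name = UPPER('&tab');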

Wednesday, September 17, 2008

Cursor with bind variable in PL/SQL

DECLARE
TYPE TabCurTyp IS REF CURSOR;
ind_cursor TabCurTyp;
TABLE_NAME VARCHAR2(30);
INDEX_NAME VARCHAR2(30);
sql_stmt VARCHAR2(200);
BEGIN
TABLE_NAME:=UPPER('&TABLE_NAME');
sql_stmt := 'SELECT index_name FROM USER_INDEXES WHERE INDEX_TYPE <> ''LOB'' AND TABLE_NAME = :j';
DBMS_OUTPUT.PUT_LINE('ANALYZE TABLE '||TABLE_NAME||' COMPUTE STATISTICS;');
OPEN ind_cursor FOR sql_stmt USING table_name;
LOOP
FETCH ind_cursor INTO index_name;
EXIT WHEN ind_cursor%NOTFOUND;
DBMS_OUTPUT.PUT_LINE('ALTER INDEX '||index_name||' REBUILD;');
end loop;
END;
/

Tuesday, September 16, 2008

Get the DDL of TABLE, INDEX, VIEW, FUNCTION etc from DBMS_METADATA

The Syntax is,
dbms_metadata.get_ddl(
object_type IN VARCHAR2,
name IN VARCHAR2,
schema IN VARCHAR2 DEFAULT NULL,
version IN VARCHAR2 DEFAULT 'COMPATIBLE',
model IN VARCHAR2 DEFAULT 'ORACLE',
transform IN VARCHAR2 DEFAULT 'DDL')
RETURN CLOB;

Example,
First of all, you must execute the following commands to get the full output in SQL*Plus:
set pagesize 0
set long 90000

If you want to get the DDL of a table in your own schema, connect as that user and run:
SELECT dbms_metadata.get_ddl('TABLE', 'TEST') FROM dual;

And you can also do it from a DBA user; if the name of the schema is 'TEST1' then:
SELECT dbms_metadata.get_ddl('TABLE', 'TEST','TEST1') FROM dual;

In the same way,
SELECT dbms_metadata.get_ddl('VIEW', 'TEST') FROM dual;
SELECT dbms_metadata.get_ddl('FUNCTION', 'TEST') FROM dual;
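
To pull the DDL for every table in the current schema in one go, the function can be driven from USER_TABLES (a sketch; each row returns one CLOB):

SELECT dbms_metadata.get_ddl('TABLE', table_name) FROM user_tables;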

Making CUSTOM Font for J2ME application

I just wanted to make the font size larger, but I could not manage it with the built-in fonts. Then I googled it, found a good solution, and eventually got it working.

1. Download the font4mobile tool and create a font with it:
http://www.gsmdev.com/projects/font4mobile/

2. Download the zip file named Font59.zip
http://www.59pixels.com/index2.html#

3. See the example and try it. One more thing I should mention: the font file should be put in the res directory. Then edit the line as follows,
.
.
.
bigFont = new Font59("bigFont.fnt");
.
.
.

Read a Text File in j2me application

You have to put a file named "help.txt" in the class directory; if the WTK is used, the file should be put in the res folder.

import javax.microedition.midlet.*;
import javax.microedition.lcdui.*;
import java.io.*;

public class ReadDisplayFile extends MIDlet implements CommandListener
{
private Display display; // Reference to Display object
private Form fmMain; // Main form
private Command cmHelp; // Command to show a help file
private Command cmExit; // Command to exit the MIDlet
private Alert alHelp; // Alert to display help file text

public ReadDisplayFile()
{
display = Display.getDisplay(this);
cmHelp = new Command("Help", Command.SCREEN, 1);
cmExit = new Command("Exit", Command.EXIT, 1);
fmMain = new Form("Read File");
fmMain.addCommand(cmExit);
fmMain.addCommand(cmHelp);
fmMain.setCommandListener(this);
}

public void startApp()
{
display.setCurrent(fmMain);
}

public void pauseApp()
{ }

public void destroyApp(boolean unconditional)
{ }

public void commandAction(Command c, Displayable s)
{
if (c == cmHelp)
{
String str;
// Access the resource and read its contents
if ((str = readHelpText()) != null)
{
// Create an Alert to display the help text
alHelp = new Alert("Help", str, null, null);
alHelp.setTimeout(Alert.FOREVER);
display.setCurrent(alHelp, fmMain);
}
}
else if (c == cmExit)
{
destroyApp(false);
notifyDestroyed();
}
}
private String readHelpText()
{
// Access the resource packaged in the JAR (null if not found)
InputStream is = getClass().getResourceAsStream("help.txt");
try
{
StringBuffer sb = new StringBuffer();
int chr;
// Read until the end of the stream
while ((chr = is.read()) != -1)
sb.append((char) chr);

return sb.toString();
}
catch (Exception e)
{
System.out.println("Unable to read help.txt");
}
finally
{
// Always release the stream
try { if (is != null) is.close(); } catch (IOException ioe) { }
}
return null;
}
}

Saturday, September 13, 2008

WARNING: inbound connection timed out (ORA-3136)

The "WARNING: inbound connection timed out (ORA-3136)" in the alert log indicates that the client was not able to complete it's authentication within the period of time specified by parameter SQLNET.INBOUND_CONNECT_TIMEOUT.

You may also see ORA-12170 (connect timeout) errors in the database server's sqlnet.log file. This entry would also contain the client address that failed to get authenticated. Some applications or JDBC thin driver applications may not leave these details.

From 10.2 onwards the default value of the parameter SQLNET.INBOUND_CONNECT_TIMEOUT is 60 seconds; hence if the client is not able to authenticate within 60 seconds, the warning appears in the alert log and the client connection is terminated.

Reasons for this error:
1. The server gets a connection request from a malicious client which is not supposed to connect to the database, in which case the error thrown is the correct behavior. You can get the client address for which the error was thrown from the sqlnet log file.
2. The server receives a valid client connection request but the client takes longer than the default 60 seconds to authenticate.
3. The DB server is heavily loaded and cannot finish the client logon within the timeout specified.

The following checks can be done:
1. Check whether a local connection on the database server is successful and quick.
2. If local connections are quick, then check for underlying network delay with the help of your network administrator.
3. Check whether your database performance has degraded in any way.
4. Check the alert log for any critical errors, e.g. ORA-600 or ORA-7445, and get them resolved first.
These critical errors might have triggered the slowness of the database server.

As a workaround, to avoid these warning messages you can set the parameters SQLNET.INBOUND_CONNECT_TIMEOUT
and INBOUND_CONNECT_TIMEOUT_<listener_name> to values greater than 60.
To set these parameters:
1. In the server-side sqlnet.ora file, add SQLNET.INBOUND_CONNECT_TIMEOUT.
For example:
SQLNET.INBOUND_CONNECT_TIMEOUT = 120

2. In the listener.ora file, add INBOUND_CONNECT_TIMEOUT_<listener_name>.
For example, if the listener name is LISTENER:
INBOUND_CONNECT_TIMEOUT_LISTENER = 110

Wednesday, September 10, 2008

How to enable AUTOTRACE in oracle

Any session can turn on AUTOTRACE with SET AUTOTRACE ON in SQL*Plus, but you will get the following errors if it has not been enabled:

SQL :> set autotrace on;
SP2-0613: Unable to verify PLAN_TABLE format or existence
SP2-0611: Error enabling EXPLAIN report
SP2-0618: Cannot find the Session Identifier. Check PLUSTRACE role is enabled
SP2-0611: Error enabling STATISTICS report

1. Run plustrce.sql as SYS if it has not been run before:
ON NT :- @$ORACLE_HOME\sqlplus\admin\plustrce.sql
ON UNIX :- @$ORACLE_HOME/sqlplus/admin/plustrce.sql

2. Grant PLUSTRACE to the user.
Also, PLAN_TABLE must exist in the user's schema if the user wants to run AUTOTRACE with an explain plan. To create PLAN_TABLE, run UTLXPLAN.sql as that user:

ON NT :- $ORACLE_HOME\rdbms\admin\utlxplan.sql
ON UNIX :-$ORACLE_HOME/rdbms/admin/utlxplan.sql

The user can then use the AUTOTRACE options as follows (a short usage sketch appears after the list):
SET AUTOTRACE OFF - No AUTOTRACE report is generated. This is the default.
SET AUTOTRACE ON EXPLAIN - The AUTOTRACE report shows only the optimizer execution path.
SET AUTOTRACE ON STATISTICS - The AUTOTRACE report shows only the SQL statement execution statistics.
SET AUTOTRACE ON - The AUTOTRACE report includes both the optimizer execution path and the SQL statement execution statistics.
SET AUTOTRACE TRACEONLY - Like SET AUTOTRACE ON, but suppresses the printing of the user's query output, if any.
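
A minimal usage sketch, assuming PLUSTRACE has already been granted (as SYS) to a user named SCOTT and PLAN_TABLE exists in that schema:

SQL :> connect scott/tiger
SQL :> set autotrace traceonly explain
SQL :> select count(*) from emp;
SQL :> set autotrace off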

When to rebuild an index in Oracle

Observe the height in the index_stats view after analyzing the particular index. If it is >= 4, it is a good candidate for a rebuild, as this indicates a skewed tree structure. There have always been exceptions to this rule, though, so observe the height in index_stats from time to time; if the height remains unchanged, the index may be assumed to be fine.

Observe del_lf_rows and lf_rows in the index_stats view after analyzing the particular index. If the percentage (del_lf_rows/lf_rows)*100 >= 20, it is a good candidate for a rebuild, as this indicates an unbalanced tree caused by heavy updates and deletes.

Example :-
SQL :> analyze index IND_PK validate structure;
SQL :> select name,height,del_lf_rows,lf_rows,(del_lf_rows/lf_rows) *100 from index_stats;
SQL :> alter index IND_PK rebuild;

Monitoring Index in Oracle

Monitoring an index determines whether the particular index is actually used, so that you can drop unnecessary indexes.

Enable Monitoring By :
ALTER INDEX <index_name> MONITORING USAGE;

Disable Monitoring By:
ALTER INDEX <index_name> NOMONITORING USAGE;

Database View to Monitor Index
V$OBJECT_USAGE

Example:
Start Monitoring
(1) ALTER INDEX SCOTT.EMPIDX2 MONITORING USAGE;
Check Status
(2) SQL> select * from V$OBJECT_USAGE;
INDEX_NAME      TABLE_NAME       MON USED START_MONITORING    END_MONITORING
--------------- ---------------- --- ---- ------------------- -------------------
EMPIDX2         EMP              YES NO   04/17/2003 15:20:03

Note : USED "NO" indicates the index has not been used.
(3) Run your application for a sufficient time to use the table and its associated indexes.
Check the status again.
(4) SQL> select * from V$OBJECT_USAGE;
INDEX_NAME      TABLE_NAME       MON USED START_MONITORING    END_MONITORING
--------------- ---------------- --- ---- ------------------- -------------------
EMPIDX2         EMP              YES YES  04/17/2003 15:20:03

Note: USED "YES" indicates your application did use the index.
Switch off monitoring:
(5) ALTER INDEX SCOTT.EMPIDX2 NOMONITORING USAGE;
Check the status
(6) SQL> select * from V$OBJECT_USAGE;
INDEX_NAME      TABLE_NAME       MON USED START_MONITORING    END_MONITORING
--------------- ---------------- --- ---- ------------------- -------------------
EMPIDX2         EMP              YES YES  04/17/2003 15:20:03 04/17/2003 15:30:03

Note : END_MONITORING is populated with the time at which you ran NOMONITORING USAGE.

Important Points:
*  The value is reset for a specific index every time you specify MONITORING USAGE.
*  LOB indexes and IOTs cannot be monitored.

Difference between Deferrable/Non Deferrable Constraint

By default a constraint is non-deferrable, i.e. Oracle notifies you immediately when there is a constraint violation. By setting the constraint as deferrable, Oracle checks for violations only at the commit point, at the end of the transaction.

Test case :-
SQL :> create table emp (empno number,ename varchar2(10), constraint emp_pk primary key (empno));
SQL :> insert into emp values(1,'sam');
SQL :> insert into emp values(1,'wad');
(Error returned by oracle)ORA-00001: unique constraint (SYS.EMP_PK) violated

So you are not able to insert a duplicate value because of the primary key. Now let us try to set the constraint as deferred.

SQL :> set constraint emp_pk deferred;
(Error returned by oracle)ORA-02447: cannot defer a constraint that is not deferrable

This error indicates that the primary key created above is not deferrable, so we have to recreate the primary key as deferrable:

SQL :> create table emp (empno number,ename varchar2(10), constraint emp_pk primary key (empno) deferrable);
SQL :> insert into emp values(1,'wad');
SQL :> insert into emp values(1,'sam');
(Error returned by oracle)ORA-00001: unique constraint (SYS.EMP_PK) violated

Since we made the primary key deferrable at creation time, we can now set the constraint as deferred.
SQL :> set constraint emp_pk deferred;

Constraint set.
SQL :> insert into emp values(1,'wad');
SQL :> insert into emp values(1,'sam');
SQL :> insert into emp values(1,'Oracle');
SQL :> insert into emp values(2,'DBA');
SQL :> commit;
(Error returned by oracle)ORA-02091: transaction rolled back ORA-00001: unique constraint (SYS.EMP_PK) violated

Notice that in the above case Oracle reports the constraint violation only at commit time, at the end of the transaction.
Note : You can also check the constraints yourself before Oracle complains:

SQL :> set constraint all immediate

Do this before the commit; otherwise your entire transaction will roll back in case of a violation.
Conclusion : DEFERRABLE is a very useful clause because it checks constraint integrity only at commit. It gives the developer the flexibility to write a program without worrying about constraint violations until the transaction ends.
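
If you want deferred checking without having to issue SET CONSTRAINT in every session, the constraint can also be created as INITIALLY DEFERRED (a sketch based on the table above):

SQL :> create table emp (empno number,ename varchar2(10), constraint emp_pk primary key (empno) deferrable initially deferred);

With this definition, violations inside a transaction are tolerated automatically and checked only at commit time.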

Monday, September 8, 2008

Exiting a PL/SQL block

We can exit from a function or procedure by using RETURN.
For example, in java we use
System.exit(0);

In pl/sql,
begin
............
return;
...........
end;
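
A minimal runnable sketch showing that execution stops at the RETURN:

set serveroutput on
begin
dbms_output.put_line('before return');
return;
dbms_output.put_line('this line is never reached');
end;
/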

EXCLUDE and STATISTICS parameter in Export/Import Data Pump

Export/Import Data Pump is used to transfer data from one database to another, or from one schema to another using REMAP_SCHEMA. But sometimes it consumes a lot of time gathering table statistics. We can cut this time with the EXCLUDE parameter and gather the statistics manually afterwards (see the sketch below the example).

Example:
impdp prod/abc ....... exclude=statistics
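
After such an import, the statistics can be gathered manually with DBMS_STATS, for example (a sketch; 'PROD' stands for whichever schema was imported):

SQL> exec dbms_stats.gather_schema_stats(ownname => 'PROD', cascade => TRUE);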

Export/Import DataPump Parameter QUERY

QUERY in Parameter file.
File: expdp_q.par
-----------------
DIRECTORY = my_dir
DUMPFILE = exp_query.dmp
LOGFILE = exp_query.log
SCHEMAS = hr, scott
INCLUDE = TABLE:"IN ('EMP', 'DEPARTMENTS')"
QUERY = scott.emp:"WHERE job = 'ANALYST' OR sal >= 3000"
QUERY = hr.departments:"WHERE department_id IN (SELECT DISTINCT department_id FROM hr.employees e, hr.jobs j WHERE e.job_id=j.job_id AND UPPER(j.job_title) = 'ANALYST' OR e.salary >= 3000)"

-- Run Export DataPump job:
%expdp system/manager parfile=expdp_q.par

QUERY on Command line
% expdp scott/tiger DIRECTORY=my_dir DUMPFILE=expdp_q.dmp LOGFILE=expdp_q.log TABLES=emp,dept QUERY=emp:\"WHERE ename LIKE \'A\%\'\"

Source: From Metalink.

Syntax of the INCLUDE and EXCLUDE Data Pump parameters

With Metadata filters you can specify a set of objects to be included or excluded from an Export or Import operation, such as tables, indexes, grants, procedures.

EXCLUDE = object_type[:name_clause] [, ...]
INCLUDE = object_type[:name_clause] [, ...]
Examples:
D:\> expdp ... SCHEMAS=scott EXCLUDE=SEQUENCE, TABLE:\"IN ('EMP', 'DEPT')\"
D:\> impdp ... SCHEMAS=scott INCLUDE=FUNCTION, PACKAGE, TABLE:\"= 'EMP'\",PROCEDURE:"LIKE 'MY_PROC_%'"

D:\> expdp ... SCHEMAS=scott EXCLUDE=TABLE:"> 'E'"

Note that if you forget to place the escape characters on the command line and have spaces in the EXCLUDE and/or INCLUDE parameters, the job may fail with an error such as ORA-39071 (Value for EXCLUDE is badly formed).

Double quotes and single quotes usage
The name clause is separated from the object type with a colon. The name clause must be enclosed in double quotation marks. The single-quotation marks are required to delimit the name strings. Using the INCLUDE or EXCLUDE parameter in a parameter file is the preferred method.
EXCLUDE=TABLE:"> 'E'"
It may give an error when run from the command line, but it will be fine in a parameter file; on the command line we have to use escape characters to avoid the error.
D:\> expdp ... SCHEMAS=scott EXCLUDE=TABLE:\"> \'E\'\"

Using the same filter name for an object type more than once
If multiple filters are specified for an object type, an implicit AND operation is applied to them. That is, the objects that are exported or imported during the job have passed all of the filters applied to their object types.

INCLUDE=TABLE:"LIKE '%E%'"
INCLUDE=TABLE:"LIKE '%P%'"


The EXCLUDE and INCLUDE parameters are mutually exclusive.
Excluding/Including an object will also exclude/include its dependent objects.
Dependent objects of an identified object are processed along with the identified object. For example, if a filter specifies that an index is to be included in an operation, then statistics from that index will also be included. Likewise, if a table is excluded by a filter, then indexes, constraints, grants, and triggers upon the table will also be excluded by the filter.

Exporting or Importing a large number of objects
If a large number of objects needs to be exported (or imported), it is possible that an internal buffer limit is exceeded (e.g. for the parameters INCLUDE or TABLES). If that happens it will cause the Data Pump job to abort with an error such as: ORA-06502 (PL/SQL: numeric or value error: character string buffer too small). This happened especially in Oracle10g Release 1 where the value for the internal buffer was set to 3000 bytes. With the fix for Bug 4053129 "EXPDP fails with ORA-39125 ORA-6502 on large list of table names" (not a public bug; fixed in 10.1.0.5.0 and 10.2.0.x), this value was increased to 4000 bytes.
When exporting a large number of objects, we recommend making use of a table inside the database that contains the names of the objects.
Example:
-- create a table that contains the names of the objects:
CONNECT scott/tiger
CREATE TABLE expdp_tab (owner VARCHAR2(30),
object_name VARCHAR2(128), object_type VARCHAR2(19));
INSERT INTO expdp_tab VALUES ('SCOTT','EMP','TABLE');
INSERT INTO expdp_tab VALUES ('SCOTT','DEPT','TABLE');
INSERT INTO expdp_tab VALUES ('SCOTT','BONUS','TABLE'); 
COMMIT;

-- run export DataPump job:
expdp system/manager DIRECTORY=my_dir \
DUMPFILE=expdp_s.dmp LOGFILE=expdp_s.log SCHEMAS=scott \
INCLUDE=TABLE:\"IN \(SELECT object_name FROM scott.expdp_tab WHERE \
owner=\'SCOTT\' AND object_type=\'TABLE\'\)\"


DIFFERENCES BETWEEN 'ESTIMATE' AND 'ESTIMATE_ONLY' IN EXPORT DATAPUMP

1. Parameter ESTIMATE
With the parameter ESTIMATE you can specify the method that Export will use to estimate how much disk space each table in the export job will consume (in bytes). The estimate is printed in the log file and displayed on your standard output device. The estimate is for table row data only; it does not include metadata.

The value for this parameter is either BLOCKS (default) or STATISTICS.
BLOCKS: The estimate is calculated by multiplying the number of database blocks used by the target objects times the appropriate block sizes.
STATISTICS: The estimate is calculated using statistics for each table. For this method to be as accurate as possible, all tables should have been analyzed recently.

Example:
SQL> CONNECT scott/tiger
Connected.
SQL> EXECUTE dbms_stats.gather_schema_stats('scott')
PL/SQL procedure successfully completed.
$ expdp scott/tiger DIRECTORY=my_dir DUMPFILE=exp_scott.dmp  LOGFILE=exp_scott.log ESTIMATE=statistics

Size estimates done via ESTIMATE=STATISTICS method could be inaccurate in the  following scenarios:
a) When statistics have not been collected in a while;
b) When statistics have been explicitly set via the DBMS_STATS package in order to force certain optimizer plans, instead of being directly collected from the data.

Similarly, ESTIMATE=BLOCKS method could be inaccurate when:
a) The table was created with a much bigger initial extent size than was needed for the actual table data;
b) Many rows have been deleted from the table, or a very small percentage of each block is used.

2. Parameter ESTIMATE_ONLY
With the parameter ESTIMATE_ONLY you can estimate the space in bytes that would be consumed without actually performing the export operation. Using this parameter will not generate a dump file; only a log file is written (if specified).

The value for this parameter is either Y (yes) or N (no = default).
Y: Export estimates the space that would be consumed, but quits without actually performing the export operation.
N: Export does not estimate the space, but performs an actual export of data.

Example:
% expdp scott/tiger NOLOGFILE=y ESTIMATE_ONLY=y

Saturday, September 6, 2008

Database start at Boot Time

We may want our database to start at machine boot time. To do this on Solaris we have to edit the following file:
/var/opt/oracle/oratab

We will get some lines like,

XXXXX:/oracle/app/oracle/product/10.2.0/db_1:N

Entries are of the form:
$ORACLE_SID:$ORACLE_HOME:<N|Y>

The first and second fields are the system identifier and the home directory of the database respectively. The third field tells the dbstart utility whether the database should ("Y") or should not ("N") be brought up at system boot time.
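
For example, to have the database shown above brought up by dbstart at boot time, its oratab line is changed to end in Y:

XXXXX:/oracle/app/oracle/product/10.2.0/db_1:Y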

Multiple entries with the same $ORACLE_SID are not allowed.

How to sign a MIDlet

I found a nice tutorial that walks you through the MIDlet signing process:

MIDlet jar signing (a tutorial) Revised

Listening for incoming SMS messages

import java.io.IOException;
import javax.microedition.io.Connector;
import javax.microedition.lcdui.Command;
import javax.microedition.lcdui.CommandListener;
import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Displayable;
import javax.microedition.lcdui.Form;
import javax.microedition.lcdui.StringItem;
import javax.microedition.midlet.MIDlet;
import javax.wireless.messaging.BinaryMessage;
import javax.wireless.messaging.Message;
import javax.wireless.messaging.MessageConnection;
import javax.wireless.messaging.MultipartMessage;
import javax.wireless.messaging.TextMessage;

public class SMSListenerMIDlet extends MIDlet
implements CommandListener, Runnable {
// The port which is listened for incoming messages
private final String PORT = "5000";
private Form mainForm;
private Command startCommand;
private Command stopCommand;
private Command exitCommand;
private MessageConnection connection;
private boolean listening;

public SMSListenerMIDlet() {
mainForm = new Form("SMS Listener");
startCommand = new Command("Start listening", Command.ITEM, 0);
mainForm.addCommand(startCommand);
stopCommand = new Command("Stop listening", Command.ITEM, 1);
mainForm.addCommand(stopCommand);
exitCommand = new Command("Exit", Command.EXIT, 1);
mainForm.addCommand(exitCommand);
mainForm.setCommandListener(this);
}

public void startApp() {
// The initial display is the main form
Display.getDisplay(this).setCurrent(mainForm);
}

public void pauseApp() {
// No implementation required
}

public void destroyApp(boolean unconditional) {
// Stop listening
stopListening();
}

public void commandAction(Command command, Displayable displayable) {
if (command == exitCommand) {
// Exit the MIDlet
destroyApp(true);
notifyDestroyed();
} else if (command == startCommand) {
startListening();
} else if (command == stopCommand) {
stopListening();
}
}

private void startListening() {
// If we are already listening, no need to start again
if (listening) {
return;
}
try {
// Open the connection to the specified port
connection = (MessageConnection)Connector.open("sms://:" + PORT);
} catch (IOException ex) {
return;
}
// Create a listener thread and start listening
Thread listenerThread = new Thread(this);
listening = true;
listenerThread.start();
mainForm.append("Listener started.\n");
}

private void stopListening() {
// If we are not listening, no need to do anything
if (!listening) {
return;
}
if (connection != null) {
try {
// Close the message connection
connection.close();
connection = null;
} catch (IOException ex) {
// TODO: Exception handling
}
}
listening = false;
mainForm.append("Listener stopped.\n");
}

public void run() {
while (listening) {
try {
// Receive all incoming messages to the specified port. The
// receive() method will block until there is a message
// available.
Message message = connection.receive();
if (message != null) {
mainForm.append("Message received.\n");
processMessage(message);
}
} catch (IOException ex) {
// Stop listening
stopListening();
}
}
}

private void processMessage(Message message) {
if (message instanceof TextMessage) {
processTextMessage((TextMessage)message);
} else if (message instanceof BinaryMessage) {
processBinaryMessage((BinaryMessage)message);
} else if (message instanceof MultipartMessage) {
processMultipartMessage((MultipartMessage)message);
}
}

private void processTextMessage(TextMessage message) {
String text = message.getPayloadText();
StringItem textItem = new StringItem("Text", text);
mainForm.append(textItem);
}
/**
* Processes a binary message.
*/
private void processBinaryMessage(BinaryMessage binaryMessage) {
// Not implemented
}
/**
* Processes a multipart message.
*/
private void processMultipartMessage(MultipartMessage multipartMessage) {
// Not implemented
}
}

Getting Cell ID in Java ME

When the MIDlet is run on either a Series 40 or an S60 device, a cell ID should be shown on the Form (one property will have a numeric value and the other will be 'null').


import javax.microedition.midlet.*;
import javax.microedition.lcdui.*;

public class CellIDMIDlet extends MIDlet implements CommandListener {
private Form form;
private Command exitCommand;
private String S40_cell_id; // Series 40 cell id property
private String S60_cell_id; // S60 cell id property
public void startApp() {
form = new Form("Getting Cell ID");
S40_cell_id = System.getProperty("Cell-ID");
S60_cell_id = System.getProperty("com.nokia.mid.cellid");
form.append("Series 40 devices: " + S40_cell_id + "\n");
form.append("S60 devices: " + S60_cell_id);
exitCommand = new Command("Exit", Command.EXIT, 1);
form.setCommandListener(this);
form.addCommand(exitCommand);
Display.getDisplay(this).setCurrent(form);
}
public void pauseApp() {
}
public void destroyApp(boolean unconditional) {
}
public void commandAction(Command c, Displayable d) {
if (c == exitCommand) this.notifyDestroyed();
}
}

Monday, September 1, 2008

Problem in running CSALTER script in Oracle

First of all, I ran csscan and then the CSALTER script, but I got the following error:
Checking or Converting phrase did not finish successfully
No database (national) character set will be altered
CSALTER finished unsuccessfully.

What I had to do was take an export dump of the tables holding exceptional and convertible data (so they could be imported back afterwards), then TRUNCATE those tables and run the CSALTER script.
SQL> @?/rdbms/admin/csalter.plb

Binary Superset and Normal Superset in Characterset of Oracle

We make the distinction between a binary (or TRUE) superset and a 'normal' superset. For example, let's take character sets A and B. Character set A is a binary superset of character set B if A describes exactly the same characters, and uses the same codepoints for those characters, as B does. On top of that it also has some additional characters at codepoints that are left free by character set B; otherwise the two would be 100% the same set.
If this is the case, csscan will have flagged the conversion as CHANGELESS in its output, and this method will work fine. It is also possible for a character set to contain the same (and more) characters as another character set but to store those characters at different codepoints. Although this is still a logical superset, it is not a binary superset; csscan will flag those characters as CONVERTIBLE and we need another method of changing the character set.

An example of this is (AL32)UTF8, which is a Unicode character set. It easily contains all the characters of, for example, WE8ISO8859P15. However, it defines a lot of those characters at different codepoints. Therefore UTF8 is not classed as a binary superset of WE8ISO8859P15.

Using export/import to change the character set in Oracle

When using export/import there are potentially three points at which conversion can occur:
a. If the NLS_LANG on export defines a different character set than the export database, the characters are converted and the export file is written in the character set defined by NLS_LANG.
b. If the NLS_LANG on import is different from the character set the export file was created in, the data is converted by the import executable when it reads the export file.
c. If the NLS_LANG on import is different from the character set of the target database, the data is converted when import inserts the data into the database.

Because we only need to convert once, it is usually best to set the character set part of NLS_LANG to the character set of the source database. That way conversions (a) and (b) do not take place, and the data is simply converted into the target character set when import inserts it into that database (conversion c).

Superset/Subset of Character Set in Oracle Database

Oracle defines a superset/subset pair as two character sets that define exactly the same characters at the same codepoints, where one (and only one) of them defines some additional characters (that one is the superset).
If a character set defines the same characters (and some more) as another character set but uses different codepoints for those characters, the two are NOT classed as a superset/subset pair.
