
Thursday, December 06, 2007

DEFAULT and NOT NULL become friends in 11g

If somebody asked me to add a new column to a production table, the first thing I would check is how many active transactions there are on that table or, in other words, how hot it is from a concurrency point of view. This is because, if there are any active transactions on that table, my attempt to change its structure will simply fail. Of course, an online redefinition could be the right solution, but it is far more complicated than a simple ALTER TABLE. However, with Oracle 11g things change. My decision to use an ALTER TABLE statement instead of a DBMS_REDEFINITION operation when I am about to add new columns should be driven by the answers to the following questions:
1. Is it appropriate for the new column to be NOT NULL and to have a DEFAULT clause?
OR
2. Is it appropriate for the new column to allow NULLs without having a DEFAULT clause?
OR
3. Is the new column a virtual one?
If the answer to any of these questions is YES, you can safely go on with the ALTER TABLE ADD COLUMN statement.

Below is the proof:

1. create a dummy table:
create table muci (col1 number, col2 number);


2. from another sqlplus session run the following anonymous block, which simulates a mixed workload of INSERTs, UPDATEs and DELETEs:
-- simulate a workload with a mix of INSERTs, UPDATEs and DELETEs
-- close the sqlplus window in order to terminate
declare
  l_option pls_integer;
begin
  loop
    l_option := round(dbms_random.value(1, 3));
    if l_option = 1 then
      update muci set col2 = dbms_random.value
       where rowid in (select rowid from muci where rownum <= 1);
    elsif l_option = 2 then
      insert into muci (col1, col2) values (dbms_random.value, dbms_random.value);
    else
      delete from muci where rowid in (select rowid from muci where rownum <= 1);
    end if;
    commit;
  end loop;
end;
/


3. now, run the following statements:
SQL> set timing on
SQL> alter table muci add (col3 number default 0 not null);

Table altered.

Elapsed: 00:00:01.32
SQL> alter table muci add (col4 number);

Table altered.

Elapsed: 00:00:03.36
SQL> alter table muci add (col5 number as (col1+col2) virtual);

Table altered.

Elapsed: 00:00:00.21
SQL> alter table muci add(col6 number default 0);
alter table muci add(col6 number default 0)
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired


Elapsed: 00:00:00.10


Very nice, except for the last case: trying to add a column which allows NULLs and also has a DEFAULT clause. Why didn't it work? The answer lies, for sure, in the way ALTER TABLE ADD COLUMN was implemented.

If the new column allows NULLs and has no DEFAULT clause then the value of the newly added column is NULL and, being the last column, it is simply not stored; therefore no "data reorganization" is required within the table blocks. The rows from the "MUCI" table continue to have the following structure:



However, Oracle knows that there is a third column but its NULL value is not stored within the row.

When a new column is added with a NOT NULL constraint and a DEFAULT clause, the default value for this new column is stored just once, as table metadata. In a way it resembles the table compression feature, where for repeated values of a column just one single value is stored and then referenced when the table is queried. In this case, the rows from the MUCI table will still have the same structure:



The nice part is that Oracle knows that there is a third column which, at the row structure level, looks like a "last column with NULLs". But how can this be true, given that the column was defined with a NOT NULL constraint? Well, if the row representation is that of a "last column with NULLs" but the column was defined as NOT NULL, then Oracle will simply take the default value stored in the table metadata and return that value.
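
A quick sanity check against the data dictionary (nothing 11g-specific here, just the declared defaults and nullability of the new columns):

select column_name, nullable, data_default
  from user_tab_columns
 where table_name = 'MUCI'
 order by column_id;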

We can easily test that the rows layout remains the same:

1. create a dummy table:

SQL> create table my_table as select * from all_objects;

Table created.


2. check the corresponding segment size:

SQL> select bytes/1024 from user_segments where segment_name='MY_TABLE';

BYTES/1024
----------
8192


3. add a new "last column with NULLs":

SQL> alter table my_table add (my_column1 char(2000));

Table altered.


4. check the segment size:

SQL> select bytes/1024 from user_segments where segment_name='MY_TABLE';

BYTES/1024
----------
8192


Please notice that the size is the same.

5. add a new NOT NULL column with a default value:

SQL> alter table my_table add (my_column2 char(2000) default 'xxx' not null);

Table altered.


6. check the segment size:

SQL> select bytes/1024 from user_segments where segment_name='MY_TABLE';

BYTES/1024
----------
8192


Please notice that the segment size is unchanged.

So far so good... but what happens when you try to add a column which allows NULLs and also has a DEFAULT clause? In this case Oracle cannot decide whether the value of the newly added column should be NULL or the default value. The previous two cases were black & white, but allowing a DEFAULT together with NULLs turns out to be a gray problem which cannot be handled in the same way: how could Oracle ever figure out whether a missing value stands for the DEFAULT or for a genuine NULL? There is no way to tell, therefore the old approach is used, which actually implies reorganization within the table blocks, that is, allocating extra storage within each block belonging to the target table in order to accommodate the new column with the specified default value.

Let's check:

SQL> alter table my_table add (my_column3 char(10) default 'xxx');

Table altered.

SQL> select bytes/1024 from user_segments where segment_name='MY_TABLE';

BYTES/1024
----------
9216


Yes, the segment size has increased. Lovely!

Wednesday, November 07, 2007

Playing with Flashback Archives

Today I've decided to play a little bit with a new Oracle 11g feature called "Flashback Archive". The "Flashback" term is not new at all. I heard about it for the very first time in Oracle 9i. At that time the "flashback" feature was in its very beginnings and allowed just querying a table "AS OF" a previous moment in time, subject, especially, to UNDO segment constraints. As far as I remember there was no way to guarantee that the UNDO would really deliver the requested information, as there was no GUARANTEE option for UNDO tablespaces. However, Oracle 10g continued to refine this feature and brought plenty of improvements: the guarantee option for UNDO, flashback database, flashback table, flashback transactions and others. Now, Oracle 11g adds a new component to the "flashback suite", which is this "flashback archive".
The "flashback archive" is nothing more than a repository which, of course, requires storage provided through one or more tablespaces and it has a well defined retention policy. This archive may be used to keep track of all changes performed on several configured tables. It is important to notice that you can setup more than one flashback archives which, interestingly, may share the same underlying tablespaces.

For example:
SQL> create tablespace test_arch_tbs datafile size 100M;

Tablespace created

SQL> create flashback archive test_archive tablespace test_arch_tbs retention 1 month;

Flashback archive created.

SQL> create flashback archive test_archive2 tablespace test_arch_tbs retention 1 month;

Flashback archive created.

Please notice that both "test_archive" and "test_archive2" share the same "test_arch_tbs" tablespace. In addition, you can configure a quota for every flashback archive you create; otherwise the quota is unlimited by default.
Another important thing to notice is that you can configure a default flashback archive. You can do this at the archive creation time by specifying the "DEFAULT" option or afterwards using an "ALTER FLASHBACK ARCHIVE... SET DEFAULT". For both operations you need to be logged in as SYSDBA. Of course, you cannot have more than one default flashback archive.
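
For example, a small sketch based on the archives above (test_archive3 is a new, made-up name):

-- a quota-limited archive; without the QUOTA clause it may grow unlimited
create flashback archive test_archive3 tablespace test_arch_tbs quota 10m retention 1 month;

-- promote an existing archive to be the default one (must be run as SYSDBA)
alter flashback archive test_archive set default;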
Now, let's create a table with the flashback archive option enabled:
SQL> create table customers (
2 cust_id integer,
3 cust_name varchar2(100),
4 country_id varchar2(2)
5 ) flashback archive test_archive;

Table created

If the archive name is not explicitly specified then the default archive (if previously set up) will be used. It's time now to insert some records into our customers table. The question which arises is which timestamp is used when flashback querying the table: the one corresponding to the INSERT itself or the one corresponding to the COMMIT statement. We'll figure it out shortly:
SQL> insert into customers values (1, 'synapstec', 'ro');

1 row inserted

SQL> select systimestamp from dual;

SYSTIMESTAMP
-------------------------------------------------
07-NOV-07 05.50.01.383321 PM +02:00

SQL> commit;

Commit complete

SQL> select systimestamp from dual;

SYSTIMESTAMP
-------------------------------------------------
07-NOV-07 05.50.15.130301 PM +02:00

SQL> select /* INSERT TIME */ count(1)
2 from customers
3 as of timestamp to_timestamp('07-NOV-07 05.50.01.383321 PM',
4 'dd-mon-yy hh.mi.ss.ff PM');

COUNT(1)
----------
0

SQL> select /* COMMIT TIME */ count(1)
2 from customers
3 as of timestamp to_timestamp('07-NOV-07 05.50.15.130301 PM',
4 'dd-mon-yy hh.mi.ss.ff PM');

COUNT(1)
----------
1

As you can see, the COMMIT time is taken into consideration when the table is flashback queried, which is the expected behavior. This means that, for example, out of several UPDATEs to the same column only the last committed one will be stored into the archive.
Likewise, you may encounter problems if you need to flashback query at very precise timestamps. Please have a look at the example below:
6:21:38 PM SQL> update customers set country_id = 'fr';

1 row updated

6:21:38 PM SQL> commit;

Commit complete

6:21:38 PM SQL> select systimestamp from dual;

SYSTIMESTAMP
-------------------------------------------------
07-NOV-07 06.16.16.506341 PM +02:00

6:21:38 PM SQL> update customers set country_id = 'en';

1 row updated

6:21:38 PM SQL> commit;

Commit complete

6:21:38 PM SQL> select systimestamp from dual;

SYSTIMESTAMP
-------------------------------------------------
07-NOV-07 06.16.16.669808 PM +02:00

6:23:09 PM SQL> select /* FIRST COMMIT */ country_id
2 from customers
3 as of timestamp to_timestamp('07-NOV-07 06.16.16.506341 PM',
4 'dd-mon-yy hh.mi.ss.ff PM');

COUNTRY_ID
----------
ro

6:23:10 PM SQL>
6:24:18 PM SQL> select /* SECOND COMMIT */ country_id
2 from customers
3 as of timestamp to_timestamp('07-NOV-07 06.16.16.669808 PM',
4 'dd-mon-yy hh.mi.ss.ff PM');

COUNTRY_ID
----------
ro

6:27:56 PM SQL> select /* AFTER 3 SECONDS */ country_id
2 from customers
3 as of timestamp to_timestamp('07-NOV-07 06.16.19.669808 PM',
4 'dd-mon-yy hh.mi.ss.ff PM');

COUNTRY_ID
----------
ro

6:29:27 PM SQL> select /* AFTER SOME SECONDS */ country_id
2 from customers
3 as of timestamp to_timestamp('07-NOV-07 06.16.21.669808 PM',
4 'dd-mon-yy hh.mi.ss.ff PM');

COUNTRY_ID
----------
en

As you can see, the rough time of each of the above commits doesn't return the correct flashback data. There is a delay between the COMMIT and the earliest time at which the flashback archive may be queried for it. This drawback can be alleviated by using SCNs, so we can replace the timestamps with their corresponding, more precise, SCNs:
8:58:50 PM SQL> update customers set country_id = 'fr';

1 row updated

8:58:50 PM SQL> commit;

Commit complete

8:58:50 PM SQL> select dbms_flashback.get_system_change_number from dual;

GET_SYSTEM_CHANGE_NUMBER
------------------------
611944

8:58:51 PM SQL> update customers set country_id = 'en';

1 row updated

8:58:51 PM SQL> commit;

Commit complete

8:58:51 PM SQL> select dbms_flashback.get_system_change_number from dual;

GET_SYSTEM_CHANGE_NUMBER
------------------------
611947

8:58:51 PM SQL>
9:00:13 PM SQL> select /* FIRST COMMIT */ country_id
2 from customers
3 as of scn 611944;

COUNTRY_ID
----------
fr

9:00:13 PM SQL> select /* SECOND COMMIT */ country_id
2 from customers
3 as of scn 611947;

COUNTRY_ID
----------
en

Nice!
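As a side note, you can map between the two representations with the built-in conversion functions, keeping in mind that the timestamp-to-SCN mapping is itself only precise to a few seconds (a quick sketch):

select timestamp_to_scn(sysdate) approx_scn from dual;
select scn_to_timestamp(dbms_flashback.get_system_change_number) approx_ts from dual;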
There are however some limitations as far as this feature is concerned. The ugliest ones, at least from my point of view, are:
1. the fact that you cannot modify the columns of a table which has the flashback archive option enabled, along with
2. the fact that you cannot perform partition operations on those tables.
Put together with a long retention period, these become a little bit scary, because it is quite likely that, in five years for example, the business logic will change, leading to a change of a column type, for instance. Such changes are difficult to predict, therefore extreme care should be taken with the design of those tables which require flashback archiving. Again, if you have a big transaction table which needs flashback archiving but is also suitable for partitioning, which feature should you choose? Hard choice, right?
If you ask me, I would at least prefer to have the possibility to temporarily store the flashback archive records for a particular table into a temporary stage, with something like "create table temp_stage as select * from {table} ... as of ...", then to disable the flashback archive, do the maintenance stuff, and then have the possibility to reload the archive from the temporary stage. Well, it's just a thought... of course I have no idea about the internals and what this might imply.
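The staging half of that idea can already be sketched with today's syntax; it is the reload step which is missing (customers_stage is a made-up name, and note that disabling archiving discards the recorded history):

-- stage the current image of the table before maintenance
create table customers_stage as
  select * from customers as of timestamp (systimestamp - interval '1' hour);

-- WARNING: dropping the association also discards the recorded history
alter table customers no flashback archive;

-- ... perform the otherwise forbidden DDL here ...

alter table customers flashback archive test_archive;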

Okay, that's it for the moment. The feature is nice and fits a lot of projects.

Tuesday, November 06, 2007

Preparing Parameters for 12g?

Within all Oracle versions there were some features about which I heard a lot of things like: "well, this feature is not quite ready now but it will be within the next Oracle version". This was the case for the "Oracle Resource Manager" in 9i, or for the "Oracle Managed Files" feature. Both of them were made available in 9i, but many DBAs, including me, felt that they were not really ready for production. Only starting with Oracle 10g could I figure out that the strength of OMF lay especially in its connection with ASM (Automatic Storage Management) and, only beginning with this version, the "Resource Manager" component had been improved enough to become appealing for a production environment.


Now, I have the same feeling regarding the named and mixed notation for USER DEFINED PL/SQL functions when they are invoked in a SQL context. For me, it was a common error in previous Oracle releases to write something like:


select dbms_random.string(opt => 'L', len => 10) from dual;

Of course, an error was raised because the SQL context doesn't know how to handle named parameters, but it is also true that when you are using an IDE like PL/SQL Developer or other powerful tools which provide code completion, it is almost impossible not to be tempted by such a feature, and it is sometimes hard to avoid the above mistake. However, the above code is perfectly valid in Oracle 11g. What really bothers me here is that I cannot use this approach for all functions. For example:


SQL> select regexp_count(srcstr => 'tralala', pattern => 'a') from dual;

select regexp_count(srcstr => 'tralala', pattern => 'a') from dual

ORA-00907: missing right parenthesis

Bleah, ugly! At least it works for PL/SQL packages like the DBMS ones, which is a great plus. But, anyway, for me it's a feature which is not "quite ready" yet, though I hope it will be in Oracle 12g... maybe.
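
For completeness, below is the kind of call this feature was actually introduced for: a user-defined PL/SQL function invoked with named notation from SQL (add_vat is a made-up example):

create or replace function add_vat(pi_amount number, pi_rate number)
  return number is
begin
  return pi_amount * (1 + pi_rate / 100);
end add_vat;
/

-- named notation in a SQL context: valid starting with 11g
select add_vat(pi_amount => 100, pi_rate => 19) gross_amount from dual;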

Monday, November 05, 2007

Describe REFCURSORs

In Oracle 11g, I really like the interoperability between DBMS_SQL cursors and REF CURSORs. Thank God, they are finally friends and can "talk" to each other.
I remember that a few months ago I needed a feature through which to describe a weak ref cursor, that is, to know how many columns are in the result set along with the corresponding type of every column. Of course, I knew that the DBMS_SQL package provides such a feature, but it didn't apply to ref cursors; therefore, bad luck!
Now, with the new Oracle 11g, it is possible to convert a ref cursor into a DBMS_SQL cursor and vice versa, so, virtually, this opens new gates, right?... including for my problem of describing ref cursors.
Let's take a very simple example. Suppose we have a basic procedure which receives a ref cursor as a parameter and has to know the structure of that cursor. Well, not a big deal in 11g. Just see below how it works:
SQL> create or replace procedure desc_refcursor(pi_cursor sys_refcursor) is
2 l_ref sys_refcursor;
3 l_cursor integer;
4 l_cols_count integer;
5 l_desc_tab dbms_sql.desc_tab;
6
7 begin
8 l_ref := pi_cursor;
9 l_cursor := dbms_sql.to_cursor_number(l_ref);
10 dbms_sql.describe_columns(l_cursor, l_cols_count, l_desc_tab);
11 dbms_output.put_line('number of columns = ' || l_cols_count);
12 dbms_output.new_line;
13 dbms_output.put_line('DESCRIBE CURSOR');
14 dbms_output.put_line('===========================');
15 for i in 1..l_cols_count
16 loop
17 dbms_output.put_line(l_desc_tab(i).col_name);
18 end loop;
19 dbms_sql.close_cursor(l_cursor);
20 exception
21 when others then
22 if dbms_sql.is_open(l_cursor) then
23 dbms_sql.close_cursor(l_cursor);
24 end if;
25 raise;
26 end desc_refcursor;
27 /

Procedure created

The whole trick consists in using the new DBMS_SQL.TO_CURSOR_NUMBER function (see line 9). Maybe you wonder why this function was not used directly against the pi_cursor parameter and why there is that additional assignment at line 8. That's because the TO_CURSOR_NUMBER function expects an IN OUT parameter and, because pi_cursor is read-only, the compiler will complain.

Now, let's test the above procedure:
SQL> set serveroutput on
SQL> declare
2 l_cursor sys_refcursor;
3 begin
4 open l_cursor for select * from user_tables;
5 desc_refcursor(l_cursor);
6 end;
7 /

number of columns = 50

DESCRIBE CURSOR
===========================
TABLE_NAME
TABLESPACE_NAME
CLUSTER_NAME
IOT_NAME
STATUS
PCT_FREE
PCT_USED
INI_TRANS
MAX_TRANS
INITIAL_EXTENT
NEXT_EXTENT
MIN_EXTENTS
MAX_EXTENTS
PCT_INCREASE
FREELISTS
FREELIST_GROUPS
LOGGING
BACKED_UP
NUM_ROWS
BLOCKS
EMPTY_BLOCKS
AVG_SPACE
CHAIN_CNT
AVG_ROW_LEN
AVG_SPACE_FREELIST_BLOCKS
NUM_FREELIST_BLOCKS
DEGREE
INSTANCES
CACHE
TABLE_LOCK
SAMPLE_SIZE
LAST_ANALYZED
PARTITIONED
IOT_TYPE
TEMPORARY
SECONDARY
NESTED
BUFFER_POOL
ROW_MOVEMENT
GLOBAL_STATS
USER_STATS
DURATION
SKIP_CORRUPT
MONITORING
CLUSTER_OWNER
DEPENDENCIES
COMPRESSION
COMPRESS_FOR
DROPPED
READ_ONLY

PL/SQL procedure successfully completed

As you can see, we were able to obtain the structure of the REF CURSOR with minimum effort. However, there are some pseudo-limitations when using this feature, and the most important one is that, as soon as you convert the REF CURSOR into its DBMS_SQL cursor representation, you cannot switch back to the REF CURSOR without an explicit conversion. This means that in a scenario in which you want to describe the REF CURSOR and afterwards continue with REF CURSOR-specific operations such as FETCH or CLOSE, those operations will simply not work: you'll have to do all further cursor handling through the DBMS_SQL package (see the sketch at the end of this post). However, you can convert back to a REF CURSOR using DBMS_SQL.TO_REFCURSOR. So, the DESC_REFCURSOR procedure can be modified like this:
SQL> create or replace procedure desc_refcursor(pio_cursor in out sys_refcursor) is
2 l_cursor integer;
3 l_cols_count integer;
4 l_desc_tab dbms_sql.desc_tab;
5 begin
6 l_cursor := dbms_sql.to_cursor_number(pio_cursor);
7 dbms_sql.describe_columns(l_cursor, l_cols_count, l_desc_tab);
8 dbms_output.put_line('number of columns = ' || l_cols_count);
9 dbms_output.new_line;
10 dbms_output.put_line('DESCRIBE CURSOR');
11 dbms_output.put_line('===========================');
12 for i in 1..l_cols_count
13 loop
14 dbms_output.put_line(l_desc_tab(i).col_name);
15 end loop;
16 pio_cursor := dbms_sql.to_refcursor(l_cursor);
17 end desc_refcursor;
18 /

Now the invoking anonymous block can continue using ref cursor operations:
SQL> set serveroutput on
SQL> declare
2 l_cursor sys_refcursor;
3 begin
4 open l_cursor for select * from cat;
5 desc_refcursor(l_cursor);
6 close l_cursor; -- REFCURSOR operation
7 end;
8 /

number of columns = 2

DESCRIBE CURSOR
===========================
TABLE_NAME
TABLE_TYPE

PL/SQL procedure successfully completed
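
And if you decide to stay with DBMS_SQL after describing (the alternative mentioned earlier), the continuation would look roughly like this (a minimal sketch):

declare
  l_ref        sys_refcursor;
  l_cursor     integer;
  l_cols_count integer;
  l_desc_tab   dbms_sql.desc_tab;
  l_value      varchar2(4000);
begin
  open l_ref for select table_name from cat;
  l_cursor := dbms_sql.to_cursor_number(l_ref);
  dbms_sql.describe_columns(l_cursor, l_cols_count, l_desc_tab);
  -- from here on the cursor is handled through DBMS_SQL only
  dbms_sql.define_column(l_cursor, 1, l_value, 4000);
  while dbms_sql.fetch_rows(l_cursor) > 0 loop
    dbms_sql.column_value(l_cursor, 1, l_value);
    dbms_output.put_line(l_value);
  end loop;
  dbms_sql.close_cursor(l_cursor);
end;
/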

Yes, that's it! Nice feature!

Sunday, November 04, 2007

DBMS_XA and Four Eyes Principle

I am simply amazed by the new DBMS_XA package available in Oracle 11g. It offers the possibility to define global transactions and to manage them from within different sessions using a 2PC (Two Phase Commit) approach. One of the great benefits I see is for those systems which require the implementation of a "four eyes principle", that is, somebody puts a transaction into the system but somebody else checks it and finally approves it. Without this approval the transaction is not visible which, from the business point of view, means it is not committed.

Let's walk through a very simple and basic example.
First of all we'll need an Oracle user:
grant connect, resource, force any transaction to fep identified by xxx;

Pay attention to the special right "force any transaction". It is needed in order to be able to manage transactions created by other users.
Now, let's create a simple table which is supposed to store invoices.

SQL> connect fep/xxx@testdb
Connected to Oracle Database 11g Enterprise Edition Release 11.1.0.6.0
Connected as fep

SQL>
SQL> create table invoices (
2 inv_number integer,
3 inv_text varchar2(100),
4 inv_date date,
5 inv_value number(20, 6)
6 );

Table created

Now, assume that somebody books a new invoice into the system. This means that a new record has to be added to the "INVOICES" table but, being subject to the "four eyes principle", it must not be committed until somebody else checks it. With the DBMS_XA package this can be done like this:

SQL> DECLARE
2 l_rc PLS_INTEGER;
3 l_oer PLS_INTEGER;
4 e_xae EXCEPTION;
5 BEGIN
6 l_rc := DBMS_XA.XA_START(DBMS_XA_XID(123), DBMS_XA.TMNOFLAGS);
7
8 IF l_rc != DBMS_XA.XA_OK THEN
9 RAISE e_xae;
10 END IF;
11
12 insert into invoices values (1, 'buy a TV', sysdate, 1000);
13
14 l_rc := DBMS_XA.XA_END(DBMS_XA_XID(123), DBMS_XA.TMSUSPEND);
15
16 IF l_rc != DBMS_XA.XA_OK THEN
17 RAISE e_xae;
18 END IF;
19
20 EXCEPTION
21 WHEN OTHERS THEN
22 l_rc := DBMS_XA.XA_END(DBMS_XA_XID(123), DBMS_XA.TMSUCCESS);
23 l_rc := DBMS_XA.XA_ROLLBACK(DBMS_XA_XID(123));
24
25 IF l_rc != DBMS_XA.XA_OK THEN
26 l_oer := DBMS_XA.XA_GETLASTOER();
27 raise_application_error(-20001,
28 'ORA-' || l_oer ||
29 ' error in rolling back a failed transaction');
30 END IF;
31
32 raise_application_error(-20002,
33 'error in transaction processing, transaction rolled back');
34 END;
35 /

PL/SQL procedure successfully completed.

SQL> disconnect
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning and Real Application Testing options

Let's have a look at the above piece of code. At the 6th line a new global transaction is started. The 123 is the global transaction identifier; it should be provided by the application and it is required to be unique. If the transaction has been successfully started then the return value must be DBMS_XA.XA_OK, otherwise an error has occurred. Then the insert into our "INVOICES" table is made and the global transaction is simply suspended. Pay attention that no COMMIT is issued. Of course, it is important to note how the status of each DBMS_XA operation is checked and how the exception handler is written. Finally, the session is ended by effectively disconnecting.

Now, suppose that the second person must take over the previous transaction, check it and approve it. This can be done as shown below:

SQL> connect fep/xxx@testdb
Connected.
SQL> select * from invoices;

no rows selected

SQL> commit;

Commit complete.

SQL> set serveroutput on
SQL>
SQL> DECLARE
2 l_rc PLS_INTEGER;
3 l_oer PLS_INTEGER;
4 e_xae EXCEPTION;
5 l_inv_no invoices.inv_number%type;
6 l_inv_dat invoices.inv_date%type;
7 l_inv_txt invoices.inv_text%type;
8 l_inv_val invoices.inv_value%type;
9 BEGIN
10 -- resume global transaction
11 l_rc := DBMS_XA.XA_START(DBMS_XA_XID(123), DBMS_XA.TMRESUME);
12 IF l_rc != DBMS_XA.XA_OK THEN
13 RAISE e_xae;
14 END IF;
15
16 select *
17 into l_inv_no, l_inv_txt, l_inv_dat, l_inv_val
18 from invoices i
19 where i.inv_number = 1;
20
21 dbms_output.put_line('invoice number = ' || l_inv_no);
22 dbms_output.put_line('invoice text = ' || l_inv_txt);
23 dbms_output.put_line('invoice date = ' || l_inv_dat);
24 dbms_output.put_line('invoice value = ' || l_inv_val);
25
26 -- end global transaction
27 l_rc := DBMS_XA.XA_END(DBMS_XA_XID(123), DBMS_XA.TMSUCCESS);
28 IF l_rc != DBMS_XA.XA_OK THEN
29 RAISE e_xae;
30 END IF;
31
32 -- commit global transaction
33 l_rc := DBMS_XA.XA_COMMIT(DBMS_XA_XID(123), TRUE);
34 IF l_rc != DBMS_XA.XA_OK THEN
35 RAISE e_xae;
36 END IF;
37
38 EXCEPTION
39 WHEN OTHERS THEN
40 l_rc := DBMS_XA.XA_END(DBMS_XA_XID(123), DBMS_XA.TMSUCCESS);
41 l_rc := DBMS_XA.XA_ROLLBACK(DBMS_XA_XID(123));
42
43 IF l_rc != DBMS_XA.XA_OK THEN
44 l_oer := DBMS_XA.XA_GETLASTOER();
45 raise_application_error(-20001,
46 'ORA-' || l_oer ||
47 ' error in rolling back a failed transaction');
48 END IF;
49
50 raise_application_error(-20002,
51 'error in transaction processing, transaction rolled back');
52 END;
53 /
invoice number = 1
invoice text = buy a TV
invoice date = 04-NOV-07
invoice value = 1000

PL/SQL procedure successfully completed.

SQL> disconnect
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning and Real Application Testing options

Pay attention to the COMMIT statement just above the anonymous block. It is needed in order for the anonymous block to start within a fresh transaction, otherwise an error may occur.

So, using the DBMS_XA package we were able to resume an uncommitted transaction created in another session, to see the data and to commit the changes. Of course, we can connect once again to see if the data is really committed.

SQL> connect fep/xxx@testdb
Connected.
SQL> set linesize 120
SQL> column inv_text format a10
SQL> select * from invoices;

INV_NUMBER INV_TEXT INV_DATE INV_VALUE
---------- ---------- --------- ----------
1 buy a TV 04-NOV-07 1000

Lovely! The data is there!

However, there are some additional areas which I still have to investigate in order to validate this model for a MAYBE "four eyes principle" system:
1) where can I see all suspended XA transactions? It seems that DBA_PENDING_TRANSACTIONS, V$GLOBAL_TRANSACTION, DBA_2PC_PENDING and DBA_2PC_NEIGHBORS don't show any suspended transaction.
2) are the suspended global transactions persistent across database restarts? Apparently, as far as I could test, this is really not the case.
3) what about the global transaction timeout for a session? If a global transaction should never expire then we have a problem, as I don't see any special value for the timeout parameter of the DBMS_XA.XA_SETTIMEOUT function (e.g. zero meaning forever). So the maximum timeout we can set is the upper bound of the PLS_INTEGER type, which is 2147483647; that means approximately 24855 days (see the sketch below). However, without persistence across database restarts it is useless anyway.
4) no idea how this component behaves in a RAC configuration, for example starting a transaction on one instance and resuming it on another one.
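
For reference, a sketch of point 3, assuming XA_SETTIMEOUT takes the timeout in seconds and returns a status code like the other DBMS_XA functions:

declare
  l_rc pls_integer;
begin
  -- 2147483647 seconds is roughly 24855 days, the largest value we can pass
  l_rc := dbms_xa.xa_settimeout(2147483647);
  dbms_output.put_line('xa_settimeout returned: ' || l_rc);
end;
/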

Monday, August 06, 2007

PGA Diagnostics

Do not forget a very important Oracle memory buffer: the PGA (Program Global Area)! Most of us are tempted to look first of all at the SGA, especially at the library cache, whenever users complain that their SQLs run poorly. Of course, the library cache is a proper place to look for problems, as are the disk activity, the CPU load and many, many others. However, do NOT forget the PGA, because an inappropriate configuration can lead to a lot of performance issues.


So, reading through the "Database Performance Tuning Guide" I found the section which talks about tuning the PGA. I read it quite often but it's still difficult to remember all the fixed views and queries related to this topic. Therefore I have decided to put all the nice queries into a script which should give me a rough image of the PGA configuration.


Below is the script which I would like to share with all whom might be interested:



-- PGA Report
-- Displays various statistics regarding the PGA usage

set linesize 140
set pagesize 9999
set heading off
set feedback off

-- general statistics
select 'SECTION 1: GENERAL STATISTICS FOR THE PGA' from dual;
select '=================================================================='
|| '=========================' from dual;

set heading on
select name,
decode(unit, 'bytes', round(value / 1024 / 1024, 2), value) value,
decode(unit, 'bytes', 'MB', '') unit
from v$pgastat;

set heading off
select 'Check the following:' from dual;
select ' - "aggregate PGA auto target" should not be too small in comparison '
|| 'with the "aggregate PGA target parameter"' from dual;
select ' - "global memory bound" should not be lower than 1M' from dual;
select ' - "over allocation count" should be near 0 or should not increasing in time' from dual;
select ' - the bigger "cache hit percentage", the better' from dual;
select '' from dual;

set heading on
SELECT name profile, cnt, decode(total, 0, 0, round(cnt*100/total)) percentage
FROM (SELECT name, value cnt, (sum(value) over ()) total
FROM V$SYSSTAT
WHERE name like 'workarea exec%');
set heading off

-- PGA used by all DB processes
select 'SECTION 2: PGA USED BY CURRENT DB PROCESSES (IN MB), ORDER BY PGA_ALLOC_MEM' from dual;
select '=================================================================='
|| '=========================' from dual;
set heading on

break on report;
compute sum label '--> TOTAL' of pga_used_mem on report;
compute sum label '--> TOTAL' of pga_alloc_mem on report;
compute sum label '--> TOTAL' of pga_freeable_mem on report;
compute sum label '--> TOTAL' of pga_max_mem on report;

SELECT PROGRAM,
round(PGA_USED_MEM / 1024 / 1024, 2) pga_used_mem,
round(PGA_ALLOC_MEM / 1024 / 1024, 2) pga_alloc_mem,
round(PGA_FREEABLE_MEM / 1024 / 1024, 2) pga_freeable_mem,
round(PGA_MAX_MEM / 1024 / 1024, 2) pga_max_mem
FROM V$PROCESS
order by pga_alloc_mem desc;

set heading off
select 'The columns have the following meaning:' from dual;
select ' - PGA_USED_MEM = PGA memory currently used by the process' from dual;
select ' - PGA_ALLOC_MEM = PGA memory currently allocated by the process (including free '
|| 'PGA memory not yet released to the operating system by the server process)' from dual;
select ' - PGA_FREEABLE_MEM = Allocated PGA memory which can be freed' from dual;
select ' - PGA_MAX_MEM = Maximum PGA memory ever allocated by the process' from dual;
select '' from dual;

set feedback off;
select 'SECTION 3: USED PGA MEMORY BY CATEGORIES (VALUES IN MB, ORDERED DESC BY ALLOCATED_MB).' from dual;
select '=================================================================='
|| '=========================' from dual;
set heading on
select category,
round(sum(allocated) / 1024 / 1024, 2) allocated_mb,
round(sum(used) / 1024 / 1024, 2) used_mb,
round(sum(max_allocated) / 1024 / 1024, 2) max_allocated_mb
from v$process_memory
group by category
order by 2 desc;

set heading off
select '' from dual;

-- workareas histogram
select 'SECTION 4: SQL WORKAREAS HISTOGRAM' from dual;
select '=================================================================='
|| '=========================' from dual;
set heading on
SELECT LOW_OPTIMAL_SIZE/1024 low_kb,
(HIGH_OPTIMAL_SIZE+1)/1024 high_kb,
OPTIMAL_EXECUTIONS, ONEPASS_EXECUTIONS, MULTIPASSES_EXECUTIONS
FROM V$SQL_WORKAREA_HISTOGRAM
WHERE TOTAL_EXECUTIONS != 0;

set heading off
select '' from dual;

-- active workareas
select 'SECTION 5: CURRENTLY ACTIVE WORKAREAS' from dual;
select '=================================================================='
|| '=========================' from dual;
set heading on
set feedback on
SELECT to_number(decode(SID, 65535, NULL, SID)) sid,
operation_type OPERATION,
trunc(EXPECTED_SIZE/1024) ESIZE,
trunc(ACTUAL_MEM_USED/1024) MEM,
trunc(MAX_MEM_USED/1024) "MAX MEM",
NUMBER_PASSES PASS,
trunc(TEMPSEG_SIZE/1024/1024) TSIZE_MB
FROM V$SQL_WORKAREA_ACTIVE
ORDER BY 1,2;
set heading off
set feedback off
select 'The meaning of the above columns is:' from dual;
select ' - SID = the active session identifier' from dual;
select ' - OPERATION = the type of the operation' from dual;
select ' - ESIZE = the expected size for the sql workarea' from dual;
select ' - MEM = Amount of PGA memory (in KB) currently allocated on behalf of this work area.' from dual;
select ' - MAX MEM = Maximum memory amount (in KB) used by this work area' from dual;
select ' - PASS = Number of passes corresponding to this work area (0 if running in OPTIMAL mode)' from dual;
select ' - TSIZE_MB = Size (in megabytes) of the temporary segment used on behalf of this work area. '
|| 'This column is NULL if this work area has not (yet) spilled to disk.' from dual;
select '' from dual;

-- top 10 SQLs with the greediest work areas
select 'SECTION 6: TOP 10 WORK AREAS REQUIRING MOST CACHE MEMORY' from dual;
select '=================================================================='
|| '=========================' from dual;
set heading on
SELECT *
FROM (SELECT distinct s.sql_id,
substr(s.SQL_TEXT, 1, 50) sql_text,
operation_type,
estimated_optimal_size,
max_tempseg_size
FROM V$SQL_WORKAREA a, V$SQLSTATS s
WHERE a.SQL_ID = s.SQL_ID
ORDER BY estimated_optimal_size DESC)
WHERE ROWNUM <= 10;

set heading off
select 'SECTION 7: SQLs WITH WORK AREAS THAT HAVE BEEN EXECUTED IN ONE OR EVEN MULTIPLE PASSES' from dual;
select '=================================================================='
|| '=========================' from dual;
set heading on
set feedback on
SELECT sql_text, sum(ONEPASS_EXECUTIONS) onepass_cnt,
sum(MULTIPASSES_EXECUTIONS) mpass_cnt
FROM V$SQL s, V$SQL_WORKAREA wa
WHERE s.address = wa.address
GROUP BY sql_text
HAVING sum(ONEPASS_EXECUTIONS+MULTIPASSES_EXECUTIONS)>0;
set feedback off
set heading off

select 'SECTION 8: PGA TARGET ADVICE' from dual;
select '=================================================================='
|| '=========================' from dual;
show parameter pga_aggregate_target;
set heading on
SELECT round(PGA_TARGET_FOR_ESTIMATE/1024/1024) target_mb,
ESTD_PGA_CACHE_HIT_PERCENTAGE cache_hit_perc,
ESTD_OVERALLOC_COUNT
FROM V$PGA_TARGET_ADVICE;


set heading off
select '' from dual;
pause press any key to exit...
exit
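
A typical follow-up: if SECTION 8 keeps showing a non-zero ESTD_OVERALLOC_COUNT at the current target, raise it to a value for which the advice reports none (the 2G below is just an illustration):

-- example only: raise the target if the advice view shows over-allocations
alter system set pga_aggregate_target = 2G scope=both;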

Have fun!

Thursday, August 02, 2007

Automatic Shared Memory Management

Well, it's been a while since the last post... However, I'm back and I am going to dig a little bit into the "Automatic Shared Memory Management" 10g feature.


As you already (probably) know, every Oracle instance has a big shared buffer called the SGA (System Global Area). This big memory buffer is further divided into several slices, each of them being used for a special kind of memory allocation. Up to Oracle 9i, the only possibility to adjust their sizes was to manually set a couple of initialization parameters like DB_CACHE_SIZE, SHARED_POOL_SIZE, JAVA_POOL_SIZE etc. However, in 10g some buffers from the SGA were endowed with a nice capability to resize themselves according to the current workload and other parameters.

The figure below depicts a regular section through the SGA.



As you can see, the maximum size up to which the whole SGA may extend is given by the SGA_MAX_SIZE parameter. However, the SGA_TARGET parameter is the one which actually sets the amount of memory to be used by all SGA buffers.
The correlation between the values of the SGA_MAX_SIZE and SGA_TARGET parameters is important mostly when you query V$SGA_DYNAMIC_FREE_MEMORY.
Let's have a look at the following output:



SQL> show parameter sga_target;

NAME TYPE VALUE
------------------------------------ -------------------------------- ------------------------------
sga_target big integer 1504M

SQL> select current_size/1024/1024 size_M from v$sga_dynamic_free_memory;

SIZE_M
----------
544

SQL> show parameter sga_max_size

NAME TYPE VALUE
------------------------------------ -------------------------------- ------------------------------
sga_max_size big integer 2G

At first sight you may think that from the SGA_TARGET of 1504M about 544M are free which, of course, is not true, because the Oracle server will always try to expand the SGA buffers so that the whole SGA_TARGET amount is allocated. The free memory reported by the V$SGA_DYNAMIC_FREE_MEMORY view is the difference between SGA_MAX_SIZE and SGA_TARGET, which should be read as: "I still have 544M free from the whole SGA_MAX_SIZE which I can use for further expansions of SGA_TARGET".
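
The same arithmetic can be pulled straight from V$PARAMETER, where both values are reported in bytes (a quick sketch; 2G minus 1504M is indeed 544M):

-- headroom for growing SGA_TARGET = SGA_MAX_SIZE - SGA_TARGET
select (select to_number(value) from v$parameter where name = 'sga_max_size')
     - (select to_number(value) from v$parameter where name = 'sga_target') headroom_bytes
  from dual;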


A nice view which I also like to query when I want to get a brief summary of the SGA composition is V$SGAINFO.



SQL> select * from v$sgainfo;

NAME BYTES RES
----------------------------------- ---------- ---
Fixed SGA Size 2074152 No
Redo Buffers 14700544 No
Buffer Cache Size 671088640 Yes
Shared Pool Size 838860800 Yes
Large Pool Size 16777216 Yes
Java Pool Size 16777216 Yes
Streams Pool Size 16777216 Yes
Granule Size 16777216 No
Maximum SGA Size 2147483648 No
Startup overhead in Shared Pool 100663296 No
Free SGA Memory Available 570425344

11 rows selected.

Please notice that I can also look at the size of the so-called "granules", which are the allocation units for SGA memory. The size of one granule varies according to the overall size of the SGA, the operating system and the platform. You can also find the size of the granule by querying V$SGA_DYNAMIC_COMPONENTS.



SQL> select granule_size from v$sga_dynamic_components where rownum <= 1;

GRANULE_SIZE
------------
16777216

Another important thing to remember is that if you set values for the parameters which control the sizes of the dynamic buffers in the SGA, then those values will be taken as minimum thresholds, which means that the Oracle server will adjust the sizes of the buffers when necessary but will never decrease them below the manually set values.
Of course, there is also the possibility to disable the "Automatic Shared Memory Management" feature by simply setting the SGA_TARGET value to zero. Likewise, this feature will also be disabled if the STATISTICS_LEVEL parameter is set to BASIC, but this is not recommended at all as it disables other Oracle features too (ADDM, for example).
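
Both behaviours in one sketch (the 400M floor is just an illustrative value):

-- give the shared pool a 400M floor: ASMM may grow it but will not shrink it below this
alter system set shared_pool_size = 400M scope=both;

-- disable Automatic Shared Memory Management altogether
alter system set sga_target = 0 scope=both;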


Well, that's it... not very new information but just a short review.

Thursday, June 21, 2007

/etc/oratab or /etc/rc.d/rc3.d/S99gcstartup?


A few days ago I updated the Oracle Grid Control and its database, located on a RedHat AS4 server. Everything worked fine till the server was rebooted and I had the "nice" surprise to see that the repository database was restarted, but using the old ORACLE_HOME. At this point it is important to review the steps I had followed as part of the upgrade task:


1. stop agent

10ag
emctl stop agent

2. stop opmn stack

10gr
$ORACLE_HOME/opmn/bin/opmnctl stopall

3. stop listener

10db
lsnrctl stop

4. stop database

sqlplus / as sysdba
shutdown immediate

5. backup agent, database, oms homes

cp -r /opt/oracle/product/10.2.0/agent /opt/oracle/backup/
cp -r /opt/oracle/product/10.2.0/db10g /opt/oracle/backup/
cp -r /opt/oracle/product/10.2.0/oms10g /opt/oracle/backup/

6. apply patch for 4329444

oracle@aut-vie-racman:kits$ unzip p4329444_10104_LINUX.zip
Archive: p4329444_10104_LINUX.zip
creating: 4329444/
creating: 4329444/files/
creating: 4329444/files/lib/
creating: 4329444/files/lib/libserver10.a/
inflating: 4329444/files/lib/libserver10.a/qerix.o
creating: 4329444/etc/
creating: 4329444/etc/config/
inflating: 4329444/etc/config/inventory
inflating: 4329444/etc/config/actions
creating: 4329444/etc/xml/
inflating: 4329444/etc/xml/GenericActions.xml
inflating: 4329444/etc/xml/ShiphomeDirectoryStructure.xml
inflating: 4329444/README.txt
oracle@aut-vie-racman:kits$ cd 4329444/
oracle@aut-vie-racman:4329444$ /opt/oracle/product/10.2.0/db10g/OPatch/opatch apply
Invoking OPatch 10.2.0.1.0
Oracle interim Patch Installer version 10.2.0.1.0
Copyright (c) 2005, Oracle Corporation. All rights reserved..
Oracle Home : /opt/oracle/product/10.2.0/db10g
Central Inventory : /opt/oracle/oraInventory
from : /opt/oracle/product/10.2.0/db10g/oraInst.loc
OPatch version : 10.2.0.1.0
OUI version : 10.2.0.1.0
OUI location : /opt/oracle/product/10.2.0/db10g/oui
Log file location : /opt/oracle/product/10.2.0/db10g/cfgtoollogs/opatch/opatch-2007_Jun_19_16-16-19-CEST_Tue.log
ApplySession applying interim patch '4329444' to OH
'/opt/oracle/product/10.2.0/db10g'
Invoking fuser to check for active processes.
Invoking fuser on "/opt/oracle/product/10.2.0/db10g/bin/oracle"
OPatch detected non-cluster Oracle Home from the inventory and will patch the local
system only.
Please shutdown Oracle instances running out of this ORACLE_HOME on the local
system.
(Oracle Home = '/opt/oracle/product/10.2.0/db10g')
Is the local system ready for patching?
Do you want to proceed? [y|n]
y
User Responded with: Y
Backing up files and inventory (not for auto-rollback) for the Oracle Home
Backing up files affected by the patch '4329444' for restore. This might take a
while...
Backing up files affected by the patch '4329444' for rollback. This might take a
while...
Patching component oracle.rdbms, 10.1.0.4.0...
Updating archive file "/opt/oracle/product/10.2.0/db10g/lib/libserver10.a" with
"lib/libserver10.a/qerix.o"
Running make for target ioracle
ApplySession adding interim patch '4329444' to inventory
The local system has been patched and can be restarted.
OPatch succeeded.

7. Start db listener

oracle@aut-vie-racman:4329444$ 10db
oracle@aut-vie-racman:4329444$ lsnrctl start
LSNRCTL for Linux: Version 10.1.0.4.0 - Production on 19-JUN-2007 16:18:42
Copyright (c) 1991, 2004, Oracle. All rights reserved.
Starting /opt/oracle/product/10.2.0/db10g/bin/tnslsnr: please wait...
TNSLSNR for Linux: Version 10.1.0.4.0 - Production
System parameter file is /opt/oracle/product/10.2.0/db10g/network/admin/listener.ora
Log messages written to /opt/oracle/product/10.2.0/db10g/network/log/listener.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=aut-vie-racman.bankgutmann.co.at)(PORT=1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 10.1.0.4.0 - Production
Start Date 19-JUN-2007 16:18:43
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File
/opt/oracle/product/10.2.0/db10g/network/admin/listener.ora
Listener Log File /opt/oracle/product/10.2.0/db10g/network/log/listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=aut-vie-racman.bankgutmann.co.at)(PORT=1521)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

8. start emrep database

oracle@aut-vie-racman:4329444$ sqlplus / as sysdba
SQL*Plus: Release 10.1.0.4.0 - Production on Tue Jun 19 16:20:27 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area 536870912 bytes
Fixed Size 780056 bytes
Variable Size 275781864 bytes
Database Buffers 260046848 bytes
Redo Buffers 262144 bytes
Database mounted.
Database opened.

9. Patch OMS.
10. Patch agent
11. Start agent

10ag
emctl start agent

12. Start oms

10gr
$ORACLE_HOME/opmn/bin/opmnctl startall


The Grid Control was patched and it worked just fine. The only problem was that the database remained at version 10.1.0.4.0. So, I decided to upgrade the repository database too, following the steps below:


1. stop agent
2. stop oms
3. stop database
4. stop listener
5. install 10.2.0.1 Oracle software into a new home
6. upgrade to 10.2.0.3 the Oracle software
7. upgrade the database using “dbua”
8. start oms
9. start agent


Everything was normal until today, when we had to shut down the server for maintenance reasons. Unfortunately, on reboot the repository database was again started under the previous 10.1.0.4 Oracle home and, of course, because of this the OEM couldn't be used. I had a look into the /etc/oratab file and the entries were fine: the database was supposed to be started from the new upgraded ORACLE_HOME!!! Furthermore, changing the /etc/oratab file seemed to have no effect. In the end, I found the reason. Under the /etc/rc.d/rc3.d/ directory there is a script called S99gcstartup. This script calls the startup scripts for the repository database, the OMS and the agent. It's enough to copy the startup script for the database under the new ORACLE_HOME directory and to change the ORACLE_HOME variable declared in that script. After these changes the repository database was/is started up with the correct ORACLE_HOME on every reboot.

Friday, June 08, 2007

Losing One Voting Disk

Voting disks are used in a RAC configuration for maintaining node membership. They are critical pieces of a cluster configuration. Starting with Oracle 10gR2, it is possible to mirror the OCR and the voting disks. Using the default mirroring template, the minimum number of voting disks necessary for normal functioning is two.

Scenario Setup

In this scenario the crash of one voting disk is simulated using the following steps:

  1. identify the voting disks:

crsctl query css votedisk

0. 0 /dev/raw/raw1

1. 0 /dev/raw/raw2

2. 0 /dev/raw/raw3

  2. corrupt one of the voting disks (as root):

    dd if=/dev/zero of=/dev/raw/raw3 bs=1M

Recoverability Steps

  1. check the “$CRS_HOME/log/[hostname]/alert[hostname].log” file. The following message should be written there, which allows us to determine which voting disk became corrupted:

    [cssd(9120)]CRS-1604:CSSD voting file is offline: /opt/oracle/product/10.2.0/crs_1/Voting1. Details in /opt/oracle/product/10.2.0/crs_1/log/aut-arz-ractest1/cssd/ocssd.log.

  2. According to the above listing, Voting1 is the corrupted disk. Shut down the CRS stack:

    srvctl stop database -d fitstest -o immediate

    srvctl stop asm -n aut-vie-ractest1

    srvctl stop asm -n aut-arz-ractest1

    srvctl stop nodeapps -n aut-vie-ractest1

    srvctl stop nodeapps -n aut-arz-ractest1

    crs_stat -t

    On every node as root:

    crsctl stop crs

  3. Pick a good voting disk from the remaining ones and copy it over the corrupted one:

    dd if=/dev/raw/raw2 of=/dev/raw/raw3 bs=1M

  4. Start CRS (on every node as root):

      crsctl start crs

  5. Check log file “$CRS_HOME/log/[hostname]/alert[hostname].log”. It should look like shown below:

    [cssd(14463)]CRS-1601:CSSD Reconfiguration complete. Active nodes are aut-vie-ractest1 aut-arz-ractest1 .

    2007-05-31 15:19:53.954

    [crsd(14268)]CRS-1012:The OCR service started on node aut-vie-ractest1.

    2007-05-31 15:19:53.987

    [evmd(14228)]CRS-1401:EVMD started on node aut-vie-ractest1.

    2007-05-31 15:19:55.861 [crsd(14268)]CRS-1201:CRSD started on node aut-vie-ractest1.

  6. After a couple of minutes check the status of the whole CRS stack:

    [oracle@aut-vie-ractest1 ~]$ crs_stat -t

    Name Type Target State Host

    ------------------------------------------------------------

    ora....SM2.asm application ONLINE ONLINE aut-...est1

    ora....T1.lsnr application ONLINE ONLINE aut-...est1

    ora....st1.gsd application ONLINE ONLINE aut-...est1

    ora....st1.ons application ONLINE ONLINE aut-...est1

    ora....st1.vip application ONLINE ONLINE aut-...est1

    ora....SM1.asm application ONLINE ONLINE aut-...est1

    ora....T1.lsnr application ONLINE ONLINE aut-...est1

    ora....st1.gsd application ONLINE ONLINE aut-...est1

    ora....st1.ons application ONLINE ONLINE aut-...est1

    ora....st1.vip application ONLINE ONLINE aut-...est1

    ora....test.db application ONLINE ONLINE aut-...est1

    ora....t1.inst application ONLINE ONLINE aut-...est1

    ora....t2.inst application ONLINE ONLINE aut-...est1


Note: It's also possible to recover a lost voting disk from an old voting disk backup and to perform the "dd" command without shutting down the CRS stack.

Monday, February 05, 2007

select * from XML

A few days ago, one of our programmers asked me how he could call a database procedure passing an array of records as a parameter. Oracle provides this feature on the server side by using, for example, collections of objects, but building such collections on the client side and passing them as parameters is not a trivial task at all (I don't even know if this is possible with complex types, especially taking into consideration that we use IBATIS to handle database connections, statements and so on, which of course makes this attempt more difficult). However, the first thing which came to my mind was to use XML. In this approach, the client side has to build up an XML document and pass it to the invoked procedure as a replacement for an array of records. This is not difficult as far as the client C# code is concerned and, of course, it is not a big deal to write a PL/SQL procedure which expects a CLOB parameter that actually represents the XML content. However, being a SQL guy, I must admit I don't feel very comfortable manually parsing the XML and doing a lot of DBMS_XML stuff in order to get the data out of it. Instead, I would rather see the XML content as a regular table. Below is how you can do the trick:

SELECT id, value
FROM XMLTABLE('/ROWSET/ROW'
PASSING xmltype('
<ROWSET>
<ROW>
<ID>1</ID>
<VALUE>aaa</VALUE>
</ROW>
<ROW>
<ID>2</ID>
<VALUE>bbb</VALUE>
</ROW>
</ROWSET>')
COLUMNS
id INTEGER PATH 'ID',
value VARCHAR2(30) PATH 'VALUE') src;

If you want to embed the above piece of code within a procedure with the XML taken as a parameter, then it's just a matter of replacing the actual plain XML content with the corresponding CLOB parameter.

SELECT id, value
FROM XMLTABLE('/ROWSET/ROW'
PASSING xmltype(p_clob_xml)
COLUMNS
id INTEGER PATH 'ID',
value VARCHAR2(30) PATH 'VALUE') src;
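
Putting it together, a complete hypothetical procedure (the name load_from_xml and the dbms_output handling are mine) could look like this:

create or replace procedure load_from_xml(p_clob_xml in clob) is
begin
  for r in (select id, value
              from xmltable('/ROWSET/ROW'
                            passing xmltype(p_clob_xml)
                            columns
                              id    integer      path 'ID',
                              value varchar2(30) path 'VALUE')) loop
    -- replace dbms_output with the actual row-by-row processing
    dbms_output.put_line('id=' || r.id || ', value=' || r.value);
  end loop;
end load_from_xml;
/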

I have tested this on an Oracle 10.2.0.2 server. It might work on previous releases but I cannot offer any guarantees. Happy XMLing! :P

Tuesday, January 30, 2007

Get the “tnsnames.ora” from OPENLDAP

One thing which really bothers me is when I have to go to every client machine to edit the local "tnsnames.ora" file. If you have to deal with just a few clients it's not a big issue, but as soon as the number of clients exceeds five or ten, this approach becomes ugly. The most convenient way would be to keep these connection descriptors in a central place and have every client query that repository in order to get the requested descriptor. We have at least two possibilities here: to use "oranames", which is obsolete, or to use an LDAP server, which is the recommended way. Oracle provides its own LDAP server, known as OID (Oracle Internet Directory), but it is part of the Advanced Security option and you have to pay for it. In this case, when the whole goal is to use an LDAP server just as a replacement for the local "tnsnames.ora", it's not affordable to buy such a wide option for such a tiny functionality. We'll use instead a free LDAP server which runs smoothly on every UNIX platform and is called OPENLDAP (there are OPENLDAP distributions for Windows as well).

So, the first thing is to install the required packages. You'll need: openldap, openldap-servers and openldap-clients. On my Fedora server I can query the RPM repository as shown below:

root@ssafe:~# rpm -qa | grep openldap
openldap-2.3.19-4
openldap-devel-2.3.19-4
openldap-clients-2.3.19-4

As you can see the openldap-servers package is missing therefore we have to install it. We’ll use YUM to do this:

root@ssafe:~# yum install openldap-servers
Loading "installonlyn" plugin
Setting up Install Process
Setting up repositories
core [1/3]
extras [2/3]
updates [3/3]
Reading repository metadata in from local files
Parsing package install arguments
Resolving Dependencies
--> Populating transaction set with selected packages. Please wait.
---> Package openldap-servers.i386 0:2.3.19-4 set to be updated
--> Running transaction check

Dependencies Resolved

=============================================================================
Package Arch Version Repository Size
=============================================================================
Installing:
openldap-servers i386 2.3.19-4 core 2.2 M

Transaction Summary
=============================================================================
Install 1 Package(s)
Update 0 Package(s)
Remove 0 Package(s)
Total download size: 2.2 M
Is this ok [y/N]: y
Downloading Packages:
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing: openldap-servers ######################### [1/1]

Installed: openldap-servers.i386 0:2.3.19-4
Complete!

Now we have a nice LDAP server installed on our server. The next step is to configure it. Oracle will need some special LDAP schemas which must be loaded into our server. These schema files are: oidbase.schema, oidrdbms.schema and oidnet.schema. Save them into a newly created directory called, let's say, "oracle-ldap". Below is the content of the needed files:
1. oidbase.schema:


attributetype ( 2.16.840.1.113894.7.1.1 NAME 'orclVersion' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.7.1.2 NAME 'orclOracleHome' SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.7.1.3 NAME 'orclSystemName' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.7.1.4 NAME 'orclServiceType' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.7.1.5 NAME 'orclSid' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.7.1.6 NAME 'orclProductVersion' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' )


objectClass ( 2.16.840.1.113894.7.2.2 NAME 'orclContainer' SUP 'top' STRUCTURAL MUST ( cn ) )


objectClass ( 2.16.840.1.113894.7.2.3 NAME 'orclContext' SUP 'top' STRUCTURAL MUST ( cn ) )


objectClass ( 2.16.840.1.113894.7.2.6 NAME 'orclSchemaVersion' SUP 'top' STRUCTURAL MUST ( cn $ orclProductVersion ) )


attributetype ( 2.16.840.1.113894.3.1.12 NAME 'orclNetDescName' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.13 NAME 'orclNetDescString' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


objectClass ( 2.16.840.1.113894.7.2.1001 NAME 'orclService' SUP 'top' STRUCTURAL MUST ( cn ) MAY ( orclServiceType $ orclOracleHome $ orclSystemName $ orclSid $ orclNetDescName $ orclNetDescString $ orclVersion $ Description ) )


2. oidrdbms.schema file:


attributetype ( 2.16.840.1.113894.2.1.1 NAME 'orclDBtrustedUser' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' )


attributetype ( 2.16.840.1.113894.2.1.2 NAME 'orclDBServerMember' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' )


attributetype ( 2.16.840.1.113894.2.1.3 NAME 'orclDBEntUser' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' )


attributetype ( 2.16.840.1.113894.2.1.4 NAME 'orclDBEntRoleAssigned' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' )


attributetype ( 2.16.840.1.113894.2.1.5 NAME 'orclDBServerRole' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' )


attributetype ( 2.16.840.1.113894.2.1.6 NAME 'orclDBTrustedDomain' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.2.1.7 NAME 'orclDBRoleOccupant' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' )


attributetype ( 2.16.840.1.113894.2.1.8 NAME 'orclDBDistinguishedName' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.2.1.9 NAME 'orclDBNativeUser' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.2.1.10 NAME 'orclDBGlobalName' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


objectClass ( 2.16.840.1.113894.2.2.1 NAME 'orclDBServer' SUP 'orclService' STRUCTURAL MAY ( userCertificate $ orclDBtrustedUser $ orclDBGlobalName ) )


objectClass ( 2.16.840.1.113894.2.2.2 NAME 'orclDBEnterpriseDomain' SUP top STRUCTURAL MUST cn MAY ( orclDBServerMember $ orclDBEntUser $ orclDBTrustedDomain ) )


objectClass ( 2.16.840.1.113894.2.2.3 NAME 'orclDBEnterpriseRole' SUP top STRUCTURAL MUST cn MAY ( orclDBServerRole $ orclDBEntRoleAssigned $ description $ seeAlso $ o $ ou $ orclDBRoleOccupant ) )


objectClass ( 2.16.840.1.113894.2.2.4 NAME 'orclDBEntryLevelMapping' SUP top STRUCTURAL MUST cn MAY ( orclDBDistinguishedName $ orclDBNativeUser ) )


objectClass ( 2.16.840.1.113894.2.2.5 NAME 'orclDBSubtreeLevelMapping' SUP top STRUCTURAL MUST cn MAY ( orclDBDistinguishedName $ orclDBNativeUser ) )


3. oidnet.schema:


attributetype ( 2.16.840.1.113894.3.1.1 NAME 'orclNetSourceRoute' SYNTAX '1.3.6.1.4.1.1466.115.121.1.5' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.2 NAME 'orclNetLoadBalance' SYNTAX '1.3.6.1.4.1.1466.115.121.1.5' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.3 NAME 'orclNetFailover' SYNTAX '1.3.6.1.4.1.1466.115.121.1.5' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.4 NAME 'orclNetSdu' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.5 NAME 'orclNetServer' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.6 NAME 'orclNetServiceName' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.7 NAME 'orclNetInstanceName' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.8 NAME 'orclNetHandlerName' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.9 NAME 'orclNetParamList' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' )


attributetype ( 2.16.840.1.113894.3.1.10 NAME 'orclNetAuthenticationType' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.11 NAME 'orclNetAuthParams' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.14 NAME 'orclNetAddressString' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.15 NAME 'orclNetProtocol' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.16 NAME 'orclNetShared' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.17 NAME 'orclNetAddrList' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' )


attributetype ( 2.16.840.1.113894.3.1.18 NAME 'orclNetProtocolStack' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.19 NAME 'orclNetDescList' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' )


attributetype ( 2.16.840.1.113894.3.1.20 NAME 'orclNetConnParamList' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' )


attributetype ( 2.16.840.1.113894.3.1.21 NAME 'orclNetAuthenticationService' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


objectClass ( 2.16.840.1.113894.3.2.5 NAME 'orclNetService' SUP 'top' STRUCTURAL MUST ( cn ) MAY ( orclNetDescName $ orclNetDescString $ orclVersion $ Description ) )


objectClass ( 2.16.840.1.113894.3.2.4 NAME 'orclNetDescriptionList' SUP 'top' STRUCTURAL MUST ( cn ) MAY ( orclNetDescList $ orclNetSourceRoute $ orclNetLoadBalance $ orclNetFailover $ orclNetShared $ orclVersion $ Description ) )


objectClass ( 2.16.840.1.113894.3.2.3 NAME 'orclNetDescription' SUP 'top' STRUCTURAL MUST ( cn ) MAY ( orclNetAddrList $ orclNetProtocolStack $ orclNetSdu $ orclSid $ orclNetServer $ orclNetServiceName $ orclNetInstanceName $ orclNetHandlerName $ orclOracleHome $ orclNetAuthenticationType $ orclNetAuthenticationService $ orclNetAuthParams $ orclNetParamList $ orclNetConnParamList $ orclNetSourceRoute $ orclNetLoadBalance $ orclNetFailover $ orclNetShared $ orclVersion $ Description ) )


objectClass ( 2.16.840.1.113894.3.2.2 NAME 'orclNetAddressList' SUP 'top' STRUCTURAL MUST ( cn ) MAY ( orclNetAddrList $ orclNetSourceRoute $ orclNetLoadBalance $ orclNetFailover $ orclNetShared $ orclVersion $ Description ) )


objectClass ( 2.16.840.1.113894.3.2.1 NAME 'orclNetAddress' SUP 'top' STRUCTURAL MUST ( cn ) MAY ( orclNetAddressString $ orclNetProtocol $ orclNetShared $ orclVersion $ Description ) )


Then, edit the slapd.conf file; on my installation it can be found under the /etc/openldap directory. Before changing anything, it is wise to make a copy of the original slapd.conf file:
cp /etc/openldap/slapd.conf /etc/openldap/slapd.conf.bak

Now, open the /etc/openldap/slapd.conf file and change it as follows (the lines we add or change are the three Oracle schema includes, the “access to *” rule, and the suffix/rootdn/rootpw settings):

#
# See slapd.conf(5) for details on configuration options.
# This file should NOT be world readable.
#
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/nis.schema


include /oracle-ldap/oidbase.schema
include /oracle-ldap/oidrdbms.schema
include /oracle-ldap/oidnet.schema


# Allow LDAPv2 client connections. This is NOT the default.
allow bind_v2

# Do not enable referrals until AFTER you have a working directory
# service AND an understanding of referrals.
#referral ldap://root.openldap.org

pidfile /var/run/openldap/slapd.pid
argsfile /var/run/openldap/slapd.args

# Load dynamic backend modules:
# modulepath /usr/lib/openldap
# moduleload back_bdb.la
# moduleload back_ldap.la
# moduleload back_ldbm.la
# moduleload back_passwd.la
# moduleload back_shell.la

# The next three lines allow use of TLS for encrypting connections using a
# dummy test certificate which you can generate by changing to
# /etc/pki/tls/certs, running "make slapd.pem", and fixing permissions on
# slapd.pem so that the ldap user or group can read it. Your client software
# may balk at self-signed certificates, however.
# TLSCACertificateFile /etc/pki/tls/certs/ca-bundle.crt
# TLSCertificateFile /etc/pki/tls/certs/slapd.pem
# TLSCertificateKeyFile /etc/pki/tls/certs/slapd.pem

# Sample security restrictions
# Require integrity protection (prevent hijacking)
# Require 112-bit (3DES or better) encryption for updates
# Require 63-bit encryption for simple bind
# security ssf=1 update_ssf=112 simple_bind=64

# Sample access control policy:
# Root DSE: allow anyone to read it
# Subschema (sub)entry DSE: allow anyone to read it
# Other DSEs:
# Allow self write access
# Allow authenticated users read access
# Allow anonymous users to authenticate
# Directives needed to implement policy:
# access to dn.base="" by * read
# access to dn.base="cn=Subschema" by * read

# by self write
# by users read
access to *
        by anonymous auth

#
# if no access controls are present, the default policy
# allows anyone and everyone to read anything but restricts
# updates to rootdn. (e.g., "access to * by * read")
#
# rootdn can always read and write EVERYTHING!

#######################################################################
# ldbm and/or bdb database definitions
#######################################################################

database bdb


suffix "dc=itfits,dc=biz"
rootdn "cn=Manager,dc=itfits,dc=biz"


# Cleartext passwords, especially for the rootdn, should
# be avoided. See slappasswd(8) and slapd.conf(5) for details.
# Use of strong authentication encouraged.


rootpw secret


# rootpw {crypt}ijFYNcSNctBYg

# The database directory MUST exist prior to running slapd AND
# should only be accessible by the slapd and slap tools.
# Mode 700 recommended.
directory /var/lib/ldap

# Indices to maintain for this database
index objectClass eq,pres
index ou,cn,mail,surname,givenname eq,pres,sub
index uidNumber,gidNumber,loginShell eq,pres
index uid,memberUid eq,pres,sub
index nisMapName,nisMapEntry eq,pres,sub

# Replicas of this database
#replogfile /var/lib/ldap/openldap-master-replog
#replica host=ldap-1.example.com:389 starttls=critical
# bindmethod=sasl saslmech=GSSAPI
# authcId=host/ldap-master.example.com@EXAMPLE.COM

The “suffix” and “rootdn” entries should be changed according to your specific domain. It’s time to start the “slapd” daemon by simply typing slapd. Now our LDAP server is ready for incoming requests, but we still have to create the initial structure of the LDAP tree. In order to do this we’ll need some so-called “ldif” files, which are used to describe LDAP entries along with their attributes.
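Before going further, it is worth checking that the daemon is really up and answering queries. A quick smoke test, binding as the rootdn defined above (the slaptest line is optional, if your OpenLDAP build ships that tool):

slaptest -f /etc/openldap/slapd.conf
slapd
ldapsearch -x -D "cn=Manager,dc=itfits,dc=biz" -w secret -b '' -s base '(objectclass=*)' namingContexts

The search should report “dc=itfits,dc=biz” as a naming context.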

Let’s create the following ldif files:
1. base.ldif, with the following content:
dn: dc=itfits,dc=biz
objectclass: dcObject
objectclass: organization
o: ITFITS Network
dc: itfits
2. manager.ldif with the following content:
dn: cn=Manager,dc=itfits,dc=biz
objectClass: organizationalRole
cn: Manager
3. OracleContext.ldif with the following content:
dn: cn=OracleContext,dc=itfits,dc=biz
objectclass: orclContext
cn: OracleContext
4. pdmdb.ldif with the following content:
dn: cn=pdmdb,cn=OracleContext,dc=itfits,dc=biz
objectclass: top
objectclass: orclNetService
cn: pdmdb
orclNetDescString: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ops)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=PDMDB)))
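
Each additional database alias follows exactly the same pattern, so registering another service means just one more ldif file. A hypothetical repdb.ldif, for instance, would look like this:

dn: cn=repdb,cn=OracleContext,dc=itfits,dc=biz
objectclass: top
objectclass: orclNetService
cn: repdb
orclNetDescString: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ops)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=REPDB)))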

After we have created the four files above, we must use them to populate the LDAP tree:

root@ssafe:oracle-ldap# ldapadd -x -D "cn=Manager,dc=itfits,dc=biz" -w secret -f base.ldif
adding new entry "dc=itfits,dc=biz"

root@ssafe:oracle-ldap# ldapadd -x -D "cn=Manager,dc=itfits,dc=biz" -w secret -f manager.ldif
adding new entry "cn=Manager,dc=itfits,dc=biz"

root@ssafe:oracle-ldap# ldapadd -x -D "cn=Manager,dc=itfits,dc=biz" -w secret -f OracleContext.ldif
adding new entry "cn=OracleContext,dc=itfits,dc=biz"

root@ssafe:oracle-ldap# ldapadd -x -D "cn=Manager,dc=itfits,dc=biz" -w secret -f pdmdb.ldif
adding new entry "cn=pdmdb,cn=OracleContext,dc=itfits,dc=biz"

Oookey, the LDAP server is ready to serve our Oracle clients. However, the clients themselves are not yet prepared to talk to an LDAP server, therefore we should spend some time configuring them. It is always recommended to use the provided graphical tools, so we’ll use “netca”: choose “Naming Methods configuration” and move “Directory naming” from the available methods into the selected ones. “Directory naming” should be the only selected method or, at least, the first one in the list.
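
Behind the scenes, netca should do nothing more than rewrite the NAMES.DIRECTORY_PATH parameter in “$ORACLE_HOME/network/admin/sqlnet.ora”; after the change it should contain something like:

NAMES.DIRECTORY_PATH= (LDAP, TNSNAMES, EZCONNECT)
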
Perfect! Now we must create a “$ORACLE_HOME/network/admin/ldap.ora” file with the following content:

DIRECTORY_SERVERS= (ssafe:389:636)
DEFAULT_ADMIN_CONTEXT = "dc=itfits,dc=biz"
DIRECTORY_SERVER_TYPE = OID

“ssafe” is the DNS name of the machine where the LDAP server is installed, while 389 and 636 are the standard LDAP non-SSL and SSL ports.


Now, let’s test the configuration:
C:\Documents and Settings\alec>tnsping pdmdb

TNS Ping Utility for 32-bit Windows: Version 10.2.0.1.0 - Production on 30-JAN-2007 12:07:27

Copyright (c) 1997, 2005, Oracle. All rights reserved.

Used parameter files:
C:\oracle\product\10.2.0\client_1\network\admin\sqlnet.ora

Used LDAP adapter to resolve the alias
Attempting to contact (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ops)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=PDMDB)))
OK (0 msec)



Lovely! From now on our Oracle clients will fetch their network descriptors from the LDAP repository; no tnsnames.ora file is needed anymore.
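
And the ultimate test is, of course, an actual connection through the alias (the credentials below are made up):

sqlplus scott/tiger@pdmdb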

Monday, January 22, 2007

Recreating the Database Control Repository

I don’t know why, but after I upgraded my database to 10.2.0.2, the DB Console started to behave quite unusually. For example, I was not able to submit backup jobs because it kept complaining that the host credentials were not set up. Trying to set those credentials manually within the “Preferences/Host Preferred Credentials” page was unsuccessful because the displayed page was simply empty. So, I decided to recreate the repository and this fixed my problem. Below are the steps I followed:


  1. drop the repository:

emca -deconfig dbcontrol db -repos drop

  2. recreate the repository using dbca.
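
As a side note, dbca is not the only way to get the repository back: as far as I remember, emca itself can recreate it with something along the lines of:

emca -config dbcontrol db -repos create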

Friday, January 12, 2007

Writing PLSQL Developer Plugins in CSHARP

A few months ago I decided to write a plugin for the well-known PL/SQL Developer environment, using C#. From the very beginning, I must admit that I don’t have much experience with this programming language as, “deep inside”, I’m an Oracle guy, but sometimes it’s nice to play around with it, especially if you have prior knowledge of Java. At that time my plan was to develop a Microsoft SourceSafe plugin using InteropSSafeTypeLib. The main problem was the fact that my plugin was not seen by the PL/SQL Developer IDE, despite the fact that from the outside it looked like a regular DLL (it turned out that I was wrong). So, below is the class from which I started my research:

using System;
using System.Collections.Generic;
using System.Text;

namespace test_plugin
{
    public class VSSPlugin
    {
        // PL/SQL Developer calls this function to obtain the plugin's display name.
        public static string IdentifyPlugIn(int ID)
        {
            string result = "Hello from c#";
            return result;
        }
    }
}

This compiles nicely into a self-contained DLL but, as I previously said, it is not suitable to be used as a PL/SQL Developer plugin. The problem actually resides in the way the DLL is built: it contains only managed code, which cannot be called directly by an external application written in C or Delphi. The workaround is to disassemble the DLL into its intermediate language (MSIL), make some changes, and then reassemble the modified MSIL code into the final DLL.

So, to disassemble the DLL you have to use the following command:

ildasm.exe /OUT:test_plugin.il test_plugin.dll

This will create two files in the current directory: “test_plugin.il” and “test_plugin.res”. The one we are interested in is “test_plugin.il”, which contains the MSIL code. You’ll obtain something like this:

// Microsoft (R) .NET Framework IL Disassembler. Version 2.0.50727.42
// Copyright (c) Microsoft Corporation. All rights reserved.



// Metadata version: v2.0.50727
.assembly extern mscorlib
{
.publickeytoken = (B7 7A 5C 56 19 34 E0 89 )
.ver 2:0:0:0
}
.assembly test_plugin
{
...
...bla bla bla...
...
}
.module test_plugin.dll
// MVID: {CA976282-5C4E-46F5-A770-39DF57596ECE}
.imagebase 0x00400000
.file alignment 0x00001000
.stackreserve 0x00100000
.subsystem 0x0003 // WINDOWS_CUI
.corflags 0x00000001 // ILONLY
// Image base: 0x00EB0000


// =============== CLASS MEMBERS DECLARATION ===================

.class public auto ansi beforefieldinit test_plugin.VSSPlugin
extends [mscorlib]System.Object
{
.method public hidebysig static string
IdentifyPlugIn(int32 ID) cil managed
{
// Code size 8 (0x8)
.maxstack 1
.locals init ([0] string result)
IL_0000: ldstr "Hello from c#"
IL_0005: stloc.0
IL_0006: ldloc.0
IL_0007: ret
} // end of method VSSPlugin::IdentifyPlugIn

.method public hidebysig specialname rtspecialname
instance void .ctor() cil managed
{
// Code size 7 (0x7)
.maxstack 8
IL_0000: ldarg.0
IL_0001: call instance void [mscorlib]System.Object::.ctor()
IL_0006: ret
} // end of method VSSPlugin::.ctor

} // end of class test_plugin.VSSPlugin


// =============================================================

// *********** DISASSEMBLY COMPLETE ***********************
// WARNING: Created Win32 resource file test_plugin.res

Now it’s time to make our changes. Below is the IL file, modified to support the export of our “IdentifyPlugIn” function. The changed and added parts are the new .corflags value (the ILONLY flag is cleared) together with the .vtfixup, .data, .vtentry and .export directives.


// Microsoft (R) .NET Framework IL Disassembler. Version 2.0.50727.42
// Copyright (c) Microsoft Corporation. All rights reserved.



// Metadata version: v2.0.50727
.assembly extern mscorlib
{
.publickeytoken = (B7 7A 5C 56 19 34 E0 89 )
.ver 2:0:0:0
}
.assembly test_plugin
{
...
...bla bla bla...
...
}
.module test_plugin.dll
// MVID: {CA976282-5C4E-46F5-A770-39DF57596ECE}
.imagebase 0x00400000
.file alignment 0x00001000
.stackreserve 0x00100000
.subsystem 0x0003 // WINDOWS_CUI



.corflags 0x00000002
.vtfixup [1] int32 fromunmanaged at VT_01
.data VT_01 = int32(0)



// Image base: 0x00EB0000


// =============== CLASS MEMBERS DECLARATION ===================

.class public auto ansi beforefieldinit test_plugin.VSSPlugin
extends [mscorlib]System.Object
{
.method public hidebysig static string
IdentifyPlugIn(int32 ID) cil managed
{


.vtentry 1 : 1
.export [1] as IdentifyPlugIn



// Code size 8 (0x8)
.maxstack 1
.locals init ([0] string result)
IL_0000: ldstr "Hello from c#"
IL_0005: stloc.0
IL_0006: ldloc.0
IL_0007: ret
} // end of method VSSPlugin::IdentifyPlugIn

.method public hidebysig specialname rtspecialname
instance void .ctor() cil managed
{
// Code size 7 (0x7)
.maxstack 8
IL_0000: ldarg.0
IL_0001: call instance void [mscorlib]System.Object::.ctor()
IL_0006: ret
} // end of method VSSPlugin::.ctor

} // end of class test_plugin.VSSPlugin


// =============================================================

// *********** DISASSEMBLY COMPLETE ***********************
// WARNING: Created Win32 resource file test_plugin.res

The .vtfixup and .data directives reserve a v-table slot in the image, while the .vtentry/.export pair inside the method body publishes “IdentifyPlugIn” through that slot as a plain native export, which is exactly what a C or Delphi host can call. It’s time to reassemble the IL code into the target DLL file. You can use the following command:

ilasm /OUT:test_plugin.dll test_plugin.il /DLL
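
If you want to convince yourself that the function really landed in the DLL’s native export table, the dumpbin utility shipped with Visual Studio can list it:

dumpbin /exports test_plugin.dll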

Copy the generated test_plugin.dll file into the PL/SQL Developer “PlugIns” directory and start the IDE. Take a look into Tools/Configure Plugins! Bingo!!! You’ll see there “Hello from c#”, which is the name of our first plugin written in C#.

This approach should be taken as a proof of concept that, basically, you can write a PL/SQL Developer plugin using C#, but I cannot tell you for sure what problems you might run into, or whether you will be able to use forms and other advanced C# programming techniques.