Wednesday, November 07, 2007

Playing with Flashback Archives

Today I've decided to play a little bit with a new Oracle 11g feature called "Flashback Archive". The "flashback" term itself is not new at all: I heard about it for the very first time in Oracle 9i. At that time the feature was in its infancy and only allowed querying a table "AS OF" a previous moment in time, subject to the constraints of the UNDO segments. As far as I remember, there was no way to guarantee that the UNDO would really deliver the requested information, since there was no GUARANTEE option for UNDO tablespaces. Oracle 10g continued to refine this area and brought plenty of improvements: the guarantee option for UNDO tablespaces, flashback database, flashback table, flashback transaction and others. Now, Oracle 11g adds a new component to the "flashback suite": the "flashback archive".
The "flashback archive" is nothing more than a repository which, of course, requires storage provided through one or more tablespaces and it has a well defined retention policy. This archive may be used to keep track of all changes performed on several configured tables. It is important to notice that you can setup more than one flashback archives which, interestingly, may share the same underlying tablespaces.

For example:
SQL> create tablespace test_arch_tbs datafile size 100M;

Tablespace created

SQL> create flashback archive test_archive tablespace test_arch_tbs retention 1 month;

Flashback archive created.

SQL> create flashback archive test_archive2 tablespace test_arch_tbs retention 1 month;

Flashback archive created.

Please notice that both "test_archive" and "test_archive2" share the same "test_arch_tbs" tablespace. In addition, you can configure a quota for every flashback archive you create; if you don't, the archive gets an unlimited quota on its tablespace.
Another important thing to notice is that you can configure a default flashback archive. You can do this at creation time by specifying the "DEFAULT" option, or afterwards using "ALTER FLASHBACK ARCHIVE ... SET DEFAULT". For both operations you need to be logged in as SYSDBA. Of course, you cannot have more than one default flashback archive.
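Both options are just extra clauses on the same statements. A minimal sketch, reusing the tablespace from above ("test_archive3" is a made-up name, and the ALTER must be run as SYSDBA):

create flashback archive test_archive3
  tablespace test_arch_tbs
  quota 10M
  retention 1 month;

alter flashback archive test_archive set default;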
Now, let's create a table with the flashback archive option enabled:
SQL> create table customers (
2 cust_id integer,
3 cust_name varchar2(100),
4 country_id varchar2(2)
5 ) flashback archive test_archive;

Table created

If the archive name is not explicitly specified then the default archive (if previously set up) is used. It's time now to insert some records into our customers table. The question which arises is which timestamp is used when the table is flashback queried: the one corresponding to the INSERT itself or the one corresponding to the COMMIT statement. We'll figure it out shortly:
SQL> insert into customers values (1, 'synapstec', 'ro');

1 row inserted

SQL> select systimestamp from dual;

SYSTIMESTAMP
-------------------------------------------------
07-NOV-07 05.50.01.383321 PM +02:00

SQL> commit;

Commit complete

SQL> select systimestamp from dual;

SYSTIMESTAMP
-------------------------------------------------
07-NOV-07 05.50.15.130301 PM +02:00

SQL> select /* INSERT TIME */ count(1)
2 from customers
3 as of timestamp to_timestamp('07-NOV-07 05.50.01.383321 PM',
4 'dd-mon-yy hh.mi.ss.ff PM');

COUNT(1)
----------
0

SQL> select /* COMMIT TIME */ count(1)
2 from customers
3 as of timestamp to_timestamp('07-NOV-07 05.50.15.130301 PM',
4 'dd-mon-yy hh.mi.ss.ff PM');

COUNT(1)
----------
1

As you can see, the COMMIT time is the one taken into consideration when the table is flashback queried, which is the expected behavior. This means that, for example, if the same column is updated several times within a transaction, only the last committed value ends up in the archive, not every intermediate change.
Likewise, you may encounter problems if you need to flashback query at very precise timestamps. Please have a look at the example below:
6:21:38 PM SQL> update customers set country_id = 'fr';

1 row updated

6:21:38 PM SQL> commit;

Commit complete

6:21:38 PM SQL> select systimestamp from dual;

SYSTIMESTAMP
-------------------------------------------------
07-NOV-07 06.16.16.506341 PM +02:00

6:21:38 PM SQL> update customers set country_id = 'en';

1 row updated

6:21:38 PM SQL> commit;

Commit complete

6:21:38 PM SQL> select systimestamp from dual;

SYSTIMESTAMP
-------------------------------------------------
07-NOV-07 06.16.16.669808 PM +02:00

6:23:09 PM SQL> select /* FIRST COMMIT */ country_id
2 from customers
3 as of timestamp to_timestamp('07-NOV-07 06.16.16.506341 PM',
4 'dd-mon-yy hh.mi.ss.ff PM');

COUNTRY_ID
----------
ro

6:23:10 PM SQL>
6:24:18 PM SQL> select /* SECOND COMMIT */ country_id
2 from customers
3 as of timestamp to_timestamp('07-NOV-07 06.16.16.669808 PM',
4 'dd-mon-yy hh.mi.ss.ff PM');

COUNTRY_ID
----------
ro

6:27:56 PM SQL> select /* AFTER 3 SECONDS */ country_id
2 from customers
3 as of timestamp to_timestamp('07-NOV-07 06.16.19.669808 PM',
4 'dd-mon-yy hh.mi.ss.ff PM');

COUNTRY_ID
----------
ro

6:29:27 PM SQL> select /* AFTER SOME SECONDS */ country_id
2 from customers
3 as of timestamp to_timestamp('07-NOV-07 06.16.21.669808 PM',
4 'dd-mon-yy hh.mi.ss.ff PM');

COUNTRY_ID
----------
en

As you can see, querying at the approximate time of each of the above commits doesn't return the expected flashback data: there is a delay between the COMMIT and the earliest time at which the flashback archive reflects it. This drawback can be alleviated by using SCNs, so we can replace the timestamps with their corresponding, more precise, SCNs:
8:58:50 PM SQL> update customers set country_id = 'fr';

1 row updated

8:58:50 PM SQL> commit;

Commit complete

8:58:50 PM SQL> select dbms_flashback.get_system_change_number from dual;

GET_SYSTEM_CHANGE_NUMBER
------------------------
611944

8:58:51 PM SQL> update customers set country_id = 'en';

1 row updated

8:58:51 PM SQL> commit;

Commit complete

8:58:51 PM SQL> select dbms_flashback.get_system_change_number from dual;

GET_SYSTEM_CHANGE_NUMBER
------------------------
611947

8:58:51 PM SQL>
9:00:13 PM SQL> select /* FIRST COMMIT */ country_id
2 from customers
3 as of scn 611944;

COUNTRY_ID
----------
fr

9:00:13 PM SQL> select /* SECOND COMMIT */ country_id
2 from customers
3 as of scn 611947;

COUNTRY_ID
----------
en

Nice!
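As a side note (not part of the original test), if all you have is a wall-clock time, the built-in TIMESTAMP_TO_SCN and SCN_TO_TIMESTAMP functions can map between the two representations, keeping in mind that this mapping itself is only accurate to within a few seconds:

select timestamp_to_scn(systimestamp) as approx_current_scn from dual;

-- works only while the database still retains the SCN/time mapping for that SCN
select scn_to_timestamp(611944) as approx_commit_time from dual;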
There are, however, some limitations as far as this feature is concerned. The ugliest ones, at least from my point of view, are:
1. you cannot modify the columns of a table which has the flashback archive option enabled, and
2. you cannot perform partition operations on such tables.
Put together with a long retention period, these become a little bit scary: it is quite likely that, in five years for example, the business logic will change and require modifying the type of a column. Such changes are difficult to predict, therefore extreme care should be given to the design of the tables which require flashback archiving. Again, if you have a big transaction table which needs flashback archiving but is also a good candidate for partitioning, which feature do you pick? Hard choice, right?
If you ask me, I would at least like the possibility to temporarily store the flashback archive records for a particular table in a staging area, with something like "create table temp_stage as select * from {table} ... as of ...", then to disable the flashback archive, do the maintenance work and, finally, reload the archive from that staging table. Well, it's just a thought... of course I have no idea about the internals and what this might imply.
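Just to make the idea concrete, here is a minimal sketch of the part of that workflow which is already possible today ("customers_stage" is a made-up name; the final step, reloading the archive from the stage, is precisely what 11g does not offer, so re-enabling simply restarts the history from scratch):

-- snapshot the history of interest into a staging table
create table customers_stage as
  select * from customers as of timestamp systimestamp - interval '1' day;

-- detach the table from its archive so that the restricted DDL becomes possible again
alter table customers no flashback archive;

-- ... perform the maintenance, e.g.:
alter table customers modify (country_id varchar2(3));

-- re-enable archiving; the previously archived history is lost
alter table customers flashback archive test_archive;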

Okay, that's it for the moment. The feature is nice and fits a lot of projects.

Tuesday, November 06, 2007

Preparing Parameters for 12g?

With every Oracle version there have been features about which I heard things like: "well, this feature is not quite ready now, but it will be in the next Oracle version". This was the case for the "Oracle Resource Manager" in 9i, or for the "Oracle Managed Files" feature. Both were available in 9i, but many DBAs, including me, felt they were not really ready for production. Only starting with Oracle 10g did I figure out that the strength of OMF lies especially in its connection with ASM (Automatic Storage Management), and only beginning with that version had the "Resource Manager" component been improved enough to become appealing for a production environment.


Now, I have the same feeling regarding the named and mixed notation for USER DEFINED PL/SQL functions when they are invoked in a SQL context. For me, it was a common error in previous Oracle releases to write something like:


select dbms_random.string(opt => 'L', len => 10) from dual;

Of course, an error was raised because the SQL engine didn't know how to handle named parameters, but it is also true that when you are using an IDE like PL/SQL Developer or another powerful tool with code completion, it is almost impossible not to be tempted by such a feature, and sometimes hard to avoid the above mistake. However, the above code is perfectly valid in Oracle 11g. What really bothers me is that I cannot use this approach for all functions. For example:


SQL> select regexp_count(srcstr => 'tralala', pattern => 'a') from dual;

select regexp_count(srcstr => 'tralala', pattern => 'a') from dual

ORA-00907: missing right parenthesis

Bleah, ugly! At least it works for the DBMS packages, which is a great plus. But, anyway, for me it's a feature which is not "quite ready"; I hope it will be in Oracle 12g... maybe.
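For the record, the named notation does work nicely for plain user-defined functions called from SQL; a minimal, made-up example:

create or replace function add_vat(pi_amount number, pi_rate number default 0.19)
  return number is
begin
  return pi_amount * (1 + pi_rate);
end add_vat;
/

-- named and mixed notation straight from SQL
select add_vat(pi_amount => 100) as gross_1,
       add_vat(100, pi_rate => 0.05) as gross_2
  from dual;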

Monday, November 05, 2007

Describe REFCURSORs

In Oracle 11g, I really like the interoperability between DBMS_SQL cursors and REF CURSORs. Thank God, they are finally friends and can "talk" to each other.
I remember that a few months ago I needed a way to describe a weak ref cursor, that is, to find out how many columns are in its result set along with the corresponding type of every column. Of course, I knew that the DBMS_SQL package provides such a feature, but it didn't apply to ref cursors, so: bad luck!
Now, with Oracle 11g, it is possible to convert a ref cursor into a DBMS_SQL cursor and vice versa, which opens new doors, right?... including for my problem of describing ref cursors.
Let's take a very simple example. Suppose we have a basic procedure which receives a ref cursor as a parameter and has to know the structure of that cursor. Well, not a big deal in 11g. Just see below how it works:
SQL> create or replace procedure desc_refcursor(pi_cursor sys_refcursor) is
2 l_ref sys_refcursor;
3 l_cursor integer;
4 l_cols_count integer;
5 l_desc_tab dbms_sql.desc_tab;
6
7 begin
8 l_ref := pi_cursor;
9 l_cursor := dbms_sql.to_cursor_number(l_ref);
10 dbms_sql.describe_columns(l_cursor, l_cols_count, l_desc_tab);
11 dbms_output.put_line('number of columns = ' || l_cols_count);
12 dbms_output.new_line;
13 dbms_output.put_line('DESCRIBE CURSOR');
14 dbms_output.put_line('===========================');
15 for i in 1..l_cols_count
16 loop
17 dbms_output.put_line(l_desc_tab(i).col_name);
18 end loop;
19 dbms_sql.close_cursor(l_cursor);
20 exception
21 when others then
22 if dbms_sql.is_open(l_cursor) then
23 dbms_sql.close_cursor(l_cursor);
24 end if;
25 raise;
26 end desc_refcursor;
27 /

Procedure created

The whole trick consists in using the new DBMS_SQL.TO_CURSOR_NUMBER function (see line 9). Maybe you wonder why this function was not applied directly to the pi_cursor parameter and why there is that additional assignment at line 8. That's because TO_CURSOR_NUMBER expects an IN OUT argument, and since pi_cursor is read-only (IN) the compiler would complain.
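Just to illustrate (a hypothetical fragment, not something you would actually keep), this is the variant the compiler rejects:

create or replace procedure desc_refcursor_bad(pi_cursor sys_refcursor) is
  l_cursor integer;
begin
  -- compilation error: pi_cursor is an IN (read-only) parameter,
  -- but TO_CURSOR_NUMBER expects an IN OUT actual it can modify
  l_cursor := dbms_sql.to_cursor_number(pi_cursor);
end desc_refcursor_bad;
/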

Now, let's test the above procedure:
SQL> set serveroutput on
SQL> declare
2 l_cursor sys_refcursor;
3 begin
4 open l_cursor for select * from user_tables;
5 desc_refcursor(l_cursor);
6 end;
7 /

number of columns = 50

DESCRIBE CURSOR
===========================
TABLE_NAME
TABLESPACE_NAME
CLUSTER_NAME
IOT_NAME
STATUS
PCT_FREE
PCT_USED
INI_TRANS
MAX_TRANS
INITIAL_EXTENT
NEXT_EXTENT
MIN_EXTENTS
MAX_EXTENTS
PCT_INCREASE
FREELISTS
FREELIST_GROUPS
LOGGING
BACKED_UP
NUM_ROWS
BLOCKS
EMPTY_BLOCKS
AVG_SPACE
CHAIN_CNT
AVG_ROW_LEN
AVG_SPACE_FREELIST_BLOCKS
NUM_FREELIST_BLOCKS
DEGREE
INSTANCES
CACHE
TABLE_LOCK
SAMPLE_SIZE
LAST_ANALYZED
PARTITIONED
IOT_TYPE
TEMPORARY
SECONDARY
NESTED
BUFFER_POOL
ROW_MOVEMENT
GLOBAL_STATS
USER_STATS
DURATION
SKIP_CORRUPT
MONITORING
CLUSTER_OWNER
DEPENDENCIES
COMPRESSION
COMPRESS_FOR
DROPPED
READ_ONLY

PL/SQL procedure successfully completed

As you can see, we were able to obtain the structure of the REF CURSOR with minimal effort. However, there are some limitations to keep in mind, the most important one being that, as soon as you convert the REF CURSOR into its DBMS_SQL cursor representation, you cannot simply switch back: if you describe the REF CURSOR this way and then want to continue using it with REF CURSOR specific operations such as FETCH or CLOSE, those operations will no longer work, and all further cursor handling has to go through the DBMS_SQL package. You can, however, convert back to a REF CURSOR using DBMS_SQL.TO_REFCURSOR. So, the DESC_REFCURSOR procedure can be modified like this:
SQL> create or replace procedure desc_refcursor(pio_cursor in out sys_refcursor) is
2 l_cursor integer;
3 l_cols_count integer;
4 l_desc_tab dbms_sql.desc_tab;
5 begin
6 l_cursor := dbms_sql.to_cursor_number(pio_cursor);
7 dbms_sql.describe_columns(l_cursor, l_cols_count, l_desc_tab);
8 dbms_output.put_line('number of columns = ' || l_cols_count);
9 dbms_output.new_line;
10 dbms_output.put_line('DESCRIBE CURSOR');
11 dbms_output.put_line('===========================');
12 for i in 1..l_cols_count
13 loop
14 dbms_output.put_line(l_desc_tab(i).col_name);
15 end loop;
16 pio_cursor := dbms_sql.to_refcursor(l_cursor);
17 end desc_refcursor;
18 /

Now the invoking anonymous block can continue using ref cursor operations:
SQL> set serveroutput on
SQL> declare
2 l_cursor sys_refcursor;
3 begin
4 open l_cursor for select * from cat;
5 desc_refcursor(l_cursor);
6 close l_cursor; -- REFCURSOR operation
7 end;
8 /

number of columns = 2

DESCRIBE CURSOR
===========================
TABLE_NAME
TABLE_TYPE

PL/SQL procedure successfully completed

Yes, that's it! Nice feature!

Sunday, November 04, 2007

DBMS_XA and Four Eyes Principle

I am simply amazed by the new DBMS_XA package available in Oracle 11g. It offers the possibility to define global transactions and to manage them from different sessions using a 2PC (two-phase commit) approach. One of the great benefits I see is for systems which require a "four eyes principle": somebody enters a transaction into the system, but somebody else has to check it and finally approve it. Without this approval the transaction is not visible, which, from the business point of view, means it's not committed.

Let's walk through a very simple and basic example.
First of all we'll need an Oracle user:
grant connect, resource, force any transaction to fep identified by xxx;

Pay attention to the special privilege "force any transaction". It is needed in order to manage transactions created by other users.
Now, let's create a simple table which is supposed to store invoices.

SQL> connect fep/xxx@testdb
Connected to Oracle Database 11g Enterprise Edition Release 11.1.0.6.0
Connected as fep

SQL>
SQL> create table invoices (
2 inv_number integer,
3 inv_text varchar2(100),
4 inv_date date,
5 inv_value number(20, 6)
6 );

Table created

Now, assume that somebody books a new invoice. This means that a new record has to be added to the "INVOICES" table but, being subject to the "four eyes principle", it must not be committed until somebody else checks it. With the DBMS_XA package this can be done like this:

SQL> DECLARE
2 l_rc PLS_INTEGER;
3 l_oer PLS_INTEGER;
4 e_xae EXCEPTION;
5 BEGIN
6 l_rc := DBMS_XA.XA_START(DBMS_XA_XID(123), DBMS_XA.TMNOFLAGS);
7
8 IF l_rc != DBMS_XA.XA_OK THEN
9 RAISE e_xae;
10 END IF;
11
12 insert into invoices values (1, 'buy a TV', sysdate, 1000);
13
14 l_rc := DBMS_XA.XA_END(DBMS_XA_XID(123), DBMS_XA.TMSUSPEND);
15
16 IF l_rc != DBMS_XA.XA_OK THEN
17 RAISE e_xae;
18 END IF;
19
20 EXCEPTION
21 WHEN OTHERS THEN
22 l_rc := DBMS_XA.XA_END(DBMS_XA_XID(123), DBMS_XA.TMSUCCESS);
23 l_rc := DBMS_XA.XA_ROLLBACK(DBMS_XA_XID(123));
24
25 IF l_rc != DBMS_XA.XA_OK THEN
26 l_oer := DBMS_XA.XA_GETLASTOER();
27 raise_application_error(-20001,
28 'ORA-' || l_oer ||
29 ' error in rolling back a failed transaction');
30 END IF;
31
32 raise_application_error(-20002,
33 'error in transaction processing, transaction rolled back');
34 END;
35 /

PL/SQL procedure successfully completed.

SQL> disconnect
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning and Real Application Testing options

Let's have a look at the above piece of code. At line 6 a new global transaction is started. The 123 is the global transaction identifier; it is provided by the application and is required to be unique. If the transaction has been successfully started then the return value is DBMS_XA.XA_OK, otherwise an error has occurred. Then the insert into our "INVOICES" table is made and the global transaction is simply suspended. Notice that no COMMIT is issued. It is also important to note how the status of each DBMS_XA call is checked and how the exception handler is written. Finally, the session is ended by effectively disconnecting.

Now, suppose that the second person has to take over the previous transaction, check it and approve it. This can be done as shown below:

SQL> connect fep/xxx@testdb
Connected.
SQL> select * from invoices;

no rows selected

SQL> commit;

Commit complete.

SQL> set serveroutput on
SQL>
SQL> DECLARE
2 l_rc PLS_INTEGER;
3 l_oer PLS_INTEGER;
4 e_xae EXCEPTION;
5 l_inv_no invoices.inv_number%type;
6 l_inv_dat invoices.inv_date%type;
7 l_inv_txt invoices.inv_text%type;
8 l_inv_val invoices.inv_value%type;
9 BEGIN
10 -- resume global transaction
11 l_rc := DBMS_XA.XA_START(DBMS_XA_XID(123), DBMS_XA.TMRESUME);
12 IF l_rc != DBMS_XA.XA_OK THEN
13 RAISE e_xae;
14 END IF;
15
16 select *
17 into l_inv_no, l_inv_txt, l_inv_dat, l_inv_val
18 from invoices i
19 where i.inv_number = 1;
20
21 dbms_output.put_line('invoice number = ' || l_inv_no);
22 dbms_output.put_line('invoice text = ' || l_inv_txt);
23 dbms_output.put_line('invoice date = ' || l_inv_dat);
24 dbms_output.put_line('invoice value = ' || l_inv_val);
25
26 -- end global transaction
27 l_rc := DBMS_XA.XA_END(DBMS_XA_XID(123), DBMS_XA.TMSUCCESS);
28 IF l_rc != DBMS_XA.XA_OK THEN
29 RAISE e_xae;
30 END IF;
31
32 -- commit global transaction
33 l_rc := DBMS_XA.XA_COMMIT(DBMS_XA_XID(123), TRUE);
34 IF l_rc != DBMS_XA.XA_OK THEN
35 RAISE e_xae;
36 END IF;
37
38 EXCEPTION
39 WHEN OTHERS THEN
40 l_rc := DBMS_XA.XA_END(DBMS_XA_XID(123), DBMS_XA.TMSUCCESS);
41 l_rc := DBMS_XA.XA_ROLLBACK(DBMS_XA_XID(123));
42
43 IF l_rc != DBMS_XA.XA_OK THEN
44 l_oer := DBMS_XA.XA_GETLASTOER();
45 raise_application_error(-20001,
46 'ORA-' || l_oer ||
47 ' error in rolling back a failed transaction');
48 END IF;
49
50 raise_application_error(-20002,
51 'error in transaction processing, transaction rolled back');
52 END;
53 /
invoice number = 1
invoice text = buy a TV
invoice date = 04-NOV-07
invoice value = 1000

PL/SQL procedure successfully completed.

SQL> disconnect
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning and Real Application Testing options

Pay attention to the COMMIT statement just above the anonymous block. It is needed so that the anonymous block starts within a fresh transaction, otherwise an error may occur.

So, using the DBMS_XA package we were able to resume an uncommitted transaction created in another session, see its data and commit the changes. Of course, we can connect once again to check that the data is really committed.

SQL> connect fep/xxx@testdb
Connected.
SQL> set linesize 120
SQL> column inv_text format a10
SQL> select * from invoices;

INV_NUMBER INV_TEXT INV_DATE INV_VALUE
---------- ---------- --------- ----------
1 buy a TV 04-NOV-07 1000

Lovely! The data is there!

However, there are some additional areas I still have to investigate before validating this model for a possible "four eyes principle" system:
1) where can I see all suspended XA transactions? It seems that DBA_PENDING_TRANSACTIONS, V$GLOBAL_TRANSACTION, DBA_2PC_PENDING and DBA_2PC_NEIGHBORS don't show any suspended transaction (see the sketch after this list);
2) are suspended global transactions persistent across database restarts? Apparently, as far as I could test, they are not;
3) what about the global transaction timeout for a session? If a global transaction should never expire then we have a problem, as I don't see any special value for the timeout parameter of the DBMS_XA.XA_SETTIMEOUT function (e.g. zero meaning forever). So the maximum timeout we can set is the upper bound of the PLS_INTEGER type, 2147483647 seconds, which is roughly 24855 days. However, without persistence across database restarts it's useless anyway;
4) no idea how this component behaves in a RAC configuration, for example starting a transaction on one instance and resuming it on another.
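For point 1, this is the kind of probe one might run from a separate session while the first transaction is suspended (just a sketch; consistent with the observation above, these views did not reveal the suspended branch in my tests):

select formatid, globalid, branchid, state from v$global_transaction;

select local_tran_id, global_tran_id, state from dba_2pc_pending;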