Tuesday, January 30, 2007

Get the “tnsnames.ora” from OPENLDAP

One thing which really bothers me is when I have to go to every client machine and edit the local “tnsnames.ora” file. If you have to deal with just a few clients it is not a big issue, but as soon as the number of clients exceeds five or ten this approach becomes ugly. The most convenient way is to keep these connection descriptors in a central place and have every client query that repository in order to get the requested descriptor. We have at least two possibilities here: to use “oranames”, which is obsolete, or to use an LDAP server, which is the recommended way. Oracle provides its own LDAP server known as OID (Oracle Internet Directory), but it is part of the Advanced Security option and you have to pay for it. When the whole goal is to use an LDAP server just as a replacement for the local “tnsnames.ora”, it’s hard to justify buying such a broad option for such a tiny piece of functionality. We’ll use instead a free LDAP server which runs nicely on virtually every UNIX platform: OpenLDAP (there are OpenLDAP distributions for Windows as well).

So, the first thing is to install the required packages. You’ll need: openldap, openldap-servers and openldap-clients. On my Fedora server I can query the RPM database as shown below:

root@ssafe:~# rpm -qa | grep openldap
openldap-2.3.19-4
openldap-devel-2.3.19-4
openldap-clients-2.3.19-4

As you can see, the openldap-servers package is missing, so we have to install it. We’ll use YUM to do this:

root@ssafe:~# yum install openldap-servers
Loading "installonlyn" plugin
Setting up Install Process
Setting up repositories
core [1/3]
extras [2/3]
updates [3/3]
Reading repository metadata in from local files
Parsing package install arguments
Resolving Dependencies
--> Populating transaction set with selected packages. Please wait.
---> Package openldap-servers.i386 0:2.3.19-4 set to be updated
--> Running transaction check

Dependencies Resolved

=============================================================================
Package Arch Version Repository Size
=============================================================================
Installing:
openldap-servers i386 2.3.19-4 core 2.2 M

Transaction Summary
=============================================================================
Install 1 Package(s)
Update 0 Package(s)
Remove 0 Package(s)
Total download size: 2.2 M
Is this ok [y/N]: y
Downloading Packages:
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing: openldap-servers ######################### [1/1]

Installed: openldap-servers.i386 0:2.3.19-4
Complete!

Now we have a nice LDAP server installed. The next step is to configure it. Oracle needs some special LDAP schemas which must be loaded into our server. These schema files are: oidbase.schema, oidrdbms.schema and oidnet.schema. Save them into a newly created directory called, let’s say, “oracle-ldap” (the slapd.conf shown later includes them from /oracle-ldap). Below is the content of the needed files:
1. oidbase.schema:


attributetype ( 2.16.840.1.113894.7.1.1 NAME 'orclVersion' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.7.1.2 NAME 'orclOracleHome' SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.7.1.3 NAME 'orclSystemName' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.7.1.4 NAME 'orclServiceType' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.7.1.5 NAME 'orclSid' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.7.1.6 NAME 'orclProductVersion' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' )


objectClass ( 2.16.840.1.113894.7.2.2 NAME 'orclContainer' SUP 'top' STRUCTURAL MUST ( cn ) )


objectClass ( 2.16.840.1.113894.7.2.3 NAME 'orclContext' SUP 'top' STRUCTURAL MUST ( cn ) )


objectClass ( 2.16.840.1.113894.7.2.6 NAME 'orclSchemaVersion' SUP 'top' STRUCTURAL MUST ( cn $ orclProductVersion ) )


attributetype ( 2.16.840.1.113894.3.1.12 NAME 'orclNetDescName' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.13 NAME 'orclNetDescString' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


objectClass ( 2.16.840.1.113894.7.2.1001 NAME 'orclService' SUP 'top' STRUCTURAL MUST ( cn ) MAY ( orclServiceType $ orclOracleHome $ orclSystemName $ orclSid $ orclNetDescName $ orclNetDescString $ orclVersion $ Description ) )


2. oidrdbms.schema file:


attributetype ( 2.16.840.1.113894.2.1.1 NAME 'orclDBtrustedUser' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' )


attributetype ( 2.16.840.1.113894.2.1.2 NAME 'orclDBServerMember' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' )


attributetype ( 2.16.840.1.113894.2.1.3 NAME 'orclDBEntUser' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' )


attributetype ( 2.16.840.1.113894.2.1.4 NAME 'orclDBEntRoleAssigned' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' )


attributetype ( 2.16.840.1.113894.2.1.5 NAME 'orclDBServerRole' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' )


attributetype ( 2.16.840.1.113894.2.1.6 NAME 'orclDBTrustedDomain' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.2.1.7 NAME 'orclDBRoleOccupant' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' )


attributetype ( 2.16.840.1.113894.2.1.8 NAME 'orclDBDistinguishedName' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.2.1.9 NAME 'orclDBNativeUser' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.2.1.10 NAME 'orclDBGlobalName' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


objectClass ( 2.16.840.1.113894.2.2.1 NAME 'orclDBServer' SUP 'orclService' STRUCTURAL MAY ( userCertificate $ orclDBtrustedUser $ orclDBGlobalName ) )


objectClass ( 2.16.840.1.113894.2.2.2 NAME 'orclDBEnterpriseDomain' SUP top STRUCTURAL MUST cn MAY ( orclDBServerMember $ orclDBEntUser $ orclDBTrustedDomain ) )


objectClass ( 2.16.840.1.113894.2.2.3 NAME 'orclDBEnterpriseRole' SUP top STRUCTURAL MUST cn MAY ( orclDBServerRole $ orclDBEntRoleAssigned $ description $ seeAlso $ o $ ou $ orclDBRoleOccupant ) )


objectClass ( 2.16.840.1.113894.2.2.4 NAME 'orclDBEntryLevelMapping' SUP top STRUCTURAL MUST cn MAY ( orclDBDistinguishedName $ orclDBNativeUser ) )


objectClass ( 2.16.840.1.113894.2.2.5 NAME 'orclDBSubtreeLevelMapping' SUP top STRUCTURAL MUST cn MAY ( orclDBDistinguishedName $ orclDBNativeUser ) )


3. oidnet.schema file:


attributetype ( 2.16.840.1.113894.3.1.1 NAME 'orclNetSourceRoute' SYNTAX '1.3.6.1.4.1.1466.115.121.1.5' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.2 NAME 'orclNetLoadBalance' SYNTAX '1.3.6.1.4.1.1466.115.121.1.5' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.3 NAME 'orclNetFailover' SYNTAX '1.3.6.1.4.1.1466.115.121.1.5' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.4 NAME 'orclNetSdu' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.5 NAME 'orclNetServer' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.6 NAME 'orclNetServiceName' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.7 NAME 'orclNetInstanceName' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.8 NAME 'orclNetHandlerName' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.9 NAME 'orclNetParamList' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' )


attributetype ( 2.16.840.1.113894.3.1.10 NAME 'orclNetAuthenticationType' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.11 NAME 'orclNetAuthParams' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.14 NAME 'orclNetAddressString' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.15 NAME 'orclNetProtocol' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.16 NAME 'orclNetShared' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.17 NAME 'orclNetAddrList' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' )


attributetype ( 2.16.840.1.113894.3.1.18 NAME 'orclNetProtocolStack' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.19 NAME 'orclNetDescList' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' )


attributetype ( 2.16.840.1.113894.3.1.20 NAME 'orclNetConnParamList' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' )


attributetype ( 2.16.840.1.113894.3.1.21 NAME 'orclNetAuthenticationService' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


objectClass ( 2.16.840.1.113894.3.2.5 NAME 'orclNetService' SUP 'top' STRUCTURAL MUST ( cn ) MAY ( orclNetDescName $ orclNetDescString $ orclVersion $ Description ) )


objectClass ( 2.16.840.1.113894.3.2.4 NAME 'orclNetDescriptionList' SUP 'top' STRUCTURAL MUST ( cn ) MAY ( orclNetDescList $ orclNetSourceRoute $ orclNetLoadBalance $ orclNetFailover $ orclNetShared $ orclVersion $ Description ) )


objectClass ( 2.16.840.1.113894.3.2.3 NAME 'orclNetDescription' SUP 'top' STRUCTURAL MUST ( cn ) MAY ( orclNetAddrList $ orclNetProtocolStack $ orclNetSdu $ orclSid $ orclNetServer $ orclNetServiceName $ orclNetInstanceName $ orclNetHandlerName $ orclOracleHome $ orclNetAuthenticationType $ orclNetAuthenticationService $ orclNetAuthParams $ orclNetParamList $ orclNetConnParamList $ orclNetSourceRoute $ orclNetLoadBalance $ orclNetFailover $ orclNetShared $ orclVersion $ Description ) )


objectClass ( 2.16.840.1.113894.3.2.2 NAME 'orclNetAddressList' SUP 'top' STRUCTURAL MUST ( cn ) MAY ( orclNetAddrList $ orclNetSourceRoute $ orclNetLoadBalance $ orclNetFailover $ orclNetShared $ orclVersion $ Description ) )


objectClass ( 2.16.840.1.113894.3.2.1 NAME 'orclNetAddress' SUP 'top' STRUCTURAL MUST ( cn ) MAY ( orclNetAddressString $ orclNetProtocol $ orclNetShared $ orclVersion $ Description ) )


Then, edit the slapd.conf file. On my installation this file can be found under the /etc/openldap directory. Before changing anything, it is wise to make a copy of the original slapd.conf file:
cp /etc/openldap/slapd.conf /etc/openldap/slapd.conf.bak

Now, open the /etc/openldap/slapd.conf file and change it as follows (the lines to adapt are, at a minimum, the three Oracle schema includes and the “suffix”, “rootdn” and “rootpw” entries, which must match your own domain):

#
# See slapd.conf(5) for details on configuration options.
# This file should NOT be world readable.
#
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/nis.schema


include /oracle-ldap/oidbase.schema
include /oracle-ldap/oidrdbms.schema
include /oracle-ldap/oidnet.schema


# Allow LDAPv2 client connections. This is NOT the default.
allow bind_v2

# Do not enable referrals until AFTER you have a working directory
# service AND an understanding of referrals.
#referral ldap://root.openldap.org

pidfile /var/run/openldap/slapd.pid
argsfile /var/run/openldap/slapd.args

# Load dynamic backend modules:
# modulepath /usr/lib/openldap
# moduleload back_bdb.la
# moduleload back_ldap.la
# moduleload back_ldbm.la
# moduleload back_passwd.la
# moduleload back_shell.la

# The next three lines allow use of TLS for encrypting connections using a
# dummy test certificate which you can generate by changing to
# /etc/pki/tls/certs, running "make slapd.pem", and fixing permissions on
# slapd.pem so that the ldap user or group can read it. Your client software
# may balk at self-signed certificates, however.
# TLSCACertificateFile /etc/pki/tls/certs/ca-bundle.crt
# TLSCertificateFile /etc/pki/tls/certs/slapd.pem
# TLSCertificateKeyFile /etc/pki/tls/certs/slapd.pem

# Sample security restrictions
# Require integrity protection (prevent hijacking)
# Require 112-bit (3DES or better) encryption for updates
# Require 63-bit encryption for simple bind
# security ssf=1 update_ssf=112 simple_bind=64

# Sample access control policy:
# Root DSE: allow anyone to read it
# Subschema (sub)entry DSE: allow anyone to read it
# Other DSEs:
# Allow self write access
# Allow authenticated users read access
# Allow anonymous users to authenticate
# Directives needed to implement policy:
# access to dn.base="" by * read
# access to dn.base="cn=Subschema" by * read

access to *

# by self write
# by users read

by anonymous auth

#
# if no access controls are present, the default policy
# allows anyone and everyone to read anything but restricts
# updates to rootdn. (e.g., "access to * by * read")
#
# rootdn can always read and write EVERYTHING!

#######################################################################
# ldbm and/or bdb database definitions
#######################################################################

database bdb


suffix "dc=itfits,dc=biz"
rootdn "cn=Manager,dc=itfits,dc=biz"


# Cleartext passwords, especially for the rootdn, should
# be avoided. See slappasswd(8) and slapd.conf(5) for details.
# Use of strong authentication encouraged.


rootpw secret


# rootpw {crypt}ijFYNcSNctBYg

# The database directory MUST exist prior to running slapd AND
# should only be accessible by the slapd and slap tools.
# Mode 700 recommended.
directory /var/lib/ldap

# Indices to maintain for this database
index objectClass eq,pres
index ou,cn,mail,surname,givenname eq,pres,sub
index uidNumber,gidNumber,loginShell eq,pres
index uid,memberUid eq,pres,sub
index nisMapName,nisMapEntry eq,pres,sub

# Replicas of this database
#replogfile /var/lib/ldap/openldap-master-replog
#replica host=ldap-1.example.com:389 starttls=critical
# bindmethod=sasl saslmech=GSSAPI
# authcId=host/ldap-master.example.com@EXAMPLE.COM

The “suffix” and “rootdn” should be changed according to your specific domain. It’s time to start the “slapd” daemon, by simply typing slapd.
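On my Fedora box the daemon can also be started through the init script, and a quick way to check that it answers is a base search against the root DSE, binding as the rootdn defined above. A minimal sketch (the init script name and the slapd path may differ on your distribution):

service ldap start
# or, directly:
/usr/sbin/slapd -u ldap

# sanity check: ask the server for its naming contexts
ldapsearch -x -D "cn=Manager,dc=itfits,dc=biz" -w secret -s base -b "" namingContexts

With the daemon up, our LDAP server is ready for incoming requests, but we still have to create the initial structure of the LDAP tree. In order to do this we’ll need some so-called “ldif” files, which will be used to specify the LDAP entries along with their attributes.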

Let’s create the following ldif files:
1. base.ldif, with the following content:
dn: dc=itfits,dc=biz
objectclass: dcObject
objectclass: organization
o: ITFITS Network
dc: itfits
2. manager.ldif with the following content:
dn: cn=Manager,dc=itfits,dc=biz
objectClass: organizationalRole
cn: Manager
3. OracleContext.ldif with the following content:
dn: cn=OracleContext,dc=itfits,dc=biz
objectclass: orclContext
cn: OracleContext
4. pdmdb.ldif with the following content:
dn: cn=pdmdb,cn=OracleContext,dc=itfits,dc=biz
objectclass: top
objectclass: orclNetService
cn: pdmdb
orclNetDescString: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ops)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=PDMDB)))

After we have created the above files we must use them to alter the LDAP tree structure:

root@ssafe:oracle-ldap# ldapadd -x -D "cn=Manager,dc=itfits,dc=biz" -w secret -f base.ldif
adding new entry "dc=itfits,dc=biz"

root@ssafe:oracle-ldap# ldapadd -x -D "cn=Manager,dc=itfits,dc=biz" -w secret -f manager.ldif
adding new entry "cn=Manager,dc=itfits,dc=biz"

root@ssafe:oracle-ldap# ldapadd -x -D "cn=Manager,dc=itfits,dc=biz" -w secret -f OracleContext.ldif
adding new entry "cn=OracleContext,dc=itfits,dc=biz"

root@ssafe:oracle-ldap# ldapadd -x -D "cn=Manager,dc=itfits,dc=biz" -w secret -f pdmdb.ldif
adding new entry "cn=pdmdb,cn=OracleContext,dc=itfits,dc=biz"

Oookey, the LDAP server is ready to serve our Oracle clients. However, our Oracle clients might not be prepared to talk to an LDAP server, therefore we should spend some time configuring them. It is always recommended to use the provided graphical tools, therefore we’ll use “netca”. Choose “Naming Method configuration” and add “Directory naming” from the available methods as a “Selected Naming” method. “Directory naming” should be the only selected method, or at least it should be the first one.
Perfect! Now, we must create an “$ORACLE_HOME/network/admin/ldap.ora” file, with the following content:

DIRECTORY_SERVERS= (ssafe:389:636)
DEFAULT_ADMIN_CONTEXT = "dc=itfits,dc=biz"
DIRECTORY_SERVER_TYPE = OID

“ssafe” is the DNS name of the machine where the LDAP server was installed.
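For the record, what netca essentially changes on the client side is the NAMES.DIRECTORY_PATH entry in sqlnet.ora, so after the step above the file should contain something along these lines (just a sketch; your sqlnet.ora may hold additional settings):

# $ORACLE_HOME/network/admin/sqlnet.ora
NAMES.DIRECTORY_PATH = (LDAP, TNSNAMES, EZCONNECT)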


Now, let’s test the configuration:
C:\Documents and Settings\alec>tnsping pdmdb

TNS Ping Utility for 32-bit Windows: Version 10.2.0.1.0 - Production on 30-JAN-2007 12:07:27

Copyright (c) 1997, 2005, Oracle. All rights reserved.

Used parameter files:
C:\oracle\product\10.2.0\client_1\network\admin\sqlnet.ora

Used LDAP adapter to resolve the alias
Attempting to contact (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ops)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=PDMDB)))
OK (0 msec)



Lovely! From now on our Oracle clients will get the network descriptors from the LDAP repository. No local “tnsnames.ora” is required anymore.

Monday, January 22, 2007

Recreating the Database Control Repository

I don’t know why, but after I upgraded my database to 10.2.0.2 the DB Console started to behave quite strangely. For example, I was not able to submit backup jobs because it complained that the host credentials were not set up. Trying to manually set those credentials under “Preferences/Host Preferred Credentials” was unsuccessful because the displayed page was simply empty. So, I decided to recreate the repository, and this fixed my problem. Below are the steps I followed:


  1. drop the repository:

     emca -deconfig dbcontrol db -repos drop

  2. recreate the repository using dbca (or with emca, as sketched below).
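
For reference, on 10.2 the whole cycle can also be done with emca alone; a minimal sketch (emca prompts interactively for the SID, the listener port and the SYS/SYSMAN/DBSNMP passwords):

emca -deconfig dbcontrol db -repos drop
emca -config dbcontrol db -repos create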

Friday, January 12, 2007

Writing PLSQL Developer Plugins in CSHARP

A few months ago I decided to write a plugin for the well-known PL/SQL Developer environment, using C#. From the very beginning I must admit that I don’t have much experience with this programming language, as “deep inside” I’m an Oracle guy, but sometimes it is nice to play around with it, especially if you have prior knowledge of Java. At that time my plan was to develop a Microsoft SourceSafe plugin using InteropSSafeTypeLib. The main problem was the fact that my plugin was not seen by the PL/SQL Developer IDE, despite the fact that from the outside it looked like a regular DLL (it turned out that I was wrong). So, below is the class from which I started my research:

using System;
using System.Collections.Generic;
using System.Text;

namespace test_plugin
{
    public class VSSPlugin
    {
        public static string IdentifyPlugIn(int ID)
        {
            string result = "Hello from c#";
            return result;
        }
    }
}

This compiles nicely into a self-contained DLL but, as I said before, it is not suitable to be used as a PL/SQL Developer plugin. The problem actually resides in the way the DLL is built: it contains only managed code, which cannot be called directly by an external application written in C or Delphi. The workaround is to disassemble the DLL into its intermediate language (MSIL), make some changes, and then reassemble the modified MSIL code into the final DLL.

So, to decompile you have to use the following command:

ildasm.exe /OUT:test_plugin.il test_plugin.dll

This will create two files in the current directory: “test_plugin.il” and “test_plugin.res”. The file which is interesting for us is “test_plugin.il”, as it contains the MSIL code. You’ll obtain something like this:

// Microsoft (R) .NET Framework IL Disassembler. Version 2.0.50727.42
// Copyright (c) Microsoft Corporation. All rights reserved.



// Metadata version: v2.0.50727
.assembly extern mscorlib
{
.publickeytoken = (B7 7A 5C 56 19 34 E0 89 )
.ver 2:0:0:0
}
.assembly test_plugin
{
...
...bla bla bla...
...
}
.module test_plugin.dll
// MVID: {CA976282-5C4E-46F5-A770-39DF57596ECE}
.imagebase 0x00400000
.file alignment 0x00001000
.stackreserve 0x00100000
.subsystem 0x0003 // WINDOWS_CUI
.corflags 0x00000001 // ILONLY
// Image base: 0x00EB0000


// =============== CLASS MEMBERS DECLARATION ===================

.class public auto ansi beforefieldinit test_plugin.VSSPlugin
extends [mscorlib]System.Object
{
.method public hidebysig static string
IdentifyPlugIn(int32 ID) cil managed
{
// Code size 8 (0x8)
.maxstack 1
.locals init ([0] string result)
IL_0000: ldstr "Hello from c#"
IL_0005: stloc.0
IL_0006: ldloc.0
IL_0007: ret
} // end of method VSSPlugin::IdentifyPlugIn

.method public hidebysig specialname rtspecialname
instance void .ctor() cil managed
{
// Code size 7 (0x7)
.maxstack 8
IL_0000: ldarg.0
IL_0001: call instance void [mscorlib]System.Object::.ctor()
IL_0006: ret
} // end of method VSSPlugin::.ctor

} // end of class test_plugin.VSSPlugin


// =============================================================

// *********** DISASSEMBLY COMPLETE ***********************
// WARNING: Created Win32 resource file test_plugin.res

Now it’s time to make our changes. Below is the IL file, modified to support the export of our “IdentifyPlugIn” function. The changes are: the .corflags value is switched from 0x00000001 to 0x00000002, the .vtfixup and .data VT_01 directives are added right after it, and the .vtentry/.export directives are added at the beginning of the IdentifyPlugIn method body.


// Microsoft (R) .NET Framework IL Disassembler. Version 2.0.50727.42
// Copyright (c) Microsoft Corporation. All rights reserved.



// Metadata version: v2.0.50727
.assembly extern mscorlib
{
.publickeytoken = (B7 7A 5C 56 19 34 E0 89 )
.ver 2:0:0:0
}
.assembly test_plugin
{
...
...bla bla bla...
...
}
.module test_plugin.dll
// MVID: {CA976282-5C4E-46F5-A770-39DF57596ECE}
.imagebase 0x00400000
.file alignment 0x00001000
.stackreserve 0x00100000
.subsystem 0x0003 // WINDOWS_CUI



.corflags 0x00000002
.vtfixup [1] int32 fromunmanaged at VT_01
.data VT_01 = int32(0)



// Image base: 0x00EB0000


// =============== CLASS MEMBERS DECLARATION ===================

.class public auto ansi beforefieldinit test_plugin.VSSPlugin
extends [mscorlib]System.Object
{
.method public hidebysig static string
IdentifyPlugIn(int32 ID) cil managed
{


.vtentry 1 : 1
.export [1] as IdentifyPlugIn



// Code size 8 (0x8)
.maxstack 1
.locals init ([0] string result)
IL_0000: ldstr "Hello from c#"
IL_0005: stloc.0
IL_0006: ldloc.0
IL_0007: ret
} // end of method VSSPlugin::IdentifyPlugIn

.method public hidebysig specialname rtspecialname
instance void .ctor() cil managed
{
// Code size 7 (0x7)
.maxstack 8
IL_0000: ldarg.0
IL_0001: call instance void [mscorlib]System.Object::.ctor()
IL_0006: ret
} // end of method VSSPlugin::.ctor

} // end of class test_plugin.VSSPlugin


// =============================================================

// *********** DISASSEMBLY COMPLETE ***********************
// WARNING: Created Win32 resource file test_plugin.res

It’s time to reassemble the IL code into the target DLL file. You can use the following command:

ilasm /OUT:test_plugin.dll test_plugin.il /DLL

Copy the generated test_plugin.dll file under the PL/SQL Developer “PlugIns” directory and start the IDE. Take a look into Tools/Configure Plugins! Bingo!!! You’ll see there “Hello from c#”, which is the name of our first plugin written in C#.

This approach should be taken as a proof of concept that, basically, you can write a PL/SQL Developer plugin using C#, but I cannot tell you for sure what problems to expect or whether you will be able to use forms or other advanced C# programming techniques.

Thursday, January 11, 2007

The Infamous Update on a Primary Key

Today I was put in front of a new challenging situation. We have a reporting application which is fed from another system using materialized views. In short, on the remote source system, materialized view logs were defined for several “interesting” tables with the “WITH PRIMARY KEY” option. These logs are used to support a fast refresh mechanism for the corresponding materialized views created on the reporting system. Each materialized view was created on a pre-built table. Of course, there’s a refresh job which runs at regular intervals in order to keep both systems in sync. As soon as a new record arrives, or an existing one is updated, this is marked accordingly in a flag column within the pre-built table, through a trigger. Likewise, the operation type (INSERT or UPDATE) is written into the pre-built table as well. Now, the big problem is the fact that some modules from the source system are updating (yes, very ugly!!!) values in columns which are part of the primary key. From the materialized view log perspective this is recorded as a delete plus an insert, therefore as soon as the DBMS_MVIEW.REFRESH procedure is called on the reporting system the BEFORE/AFTER DELETE and BEFORE/AFTER INSERT triggers are executed instead of a BEFORE/AFTER UPDATE trigger. One of the interface requirements was to convert this sequence of one DELETE and one INSERT so that, on the reporting system, the initial UPDATE of the primary key looks like a regular update. Even though at first sight this might appear to be a trivial task, in fact it is quite cumbersome. The solution we found so far relies on storing the ROWIDs of the source records in our pre-built tables.

Okay, because the so-called “too much sound but little sense” is not going to be the subject of this post, let’s delve into a concrete test scenario.

First of all, create the source table:

create table muci (col1 integer, col2 integer, col3 integer);
alter table muci add primary key (col1, col2);

insert into muci values (1, 1, 1);
insert into muci values (2, 2, 2);
insert into muci values (3, 3, 3);

commit;
create materialized view log on muci with primary key;
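
As an aside, this primary-key log is exactly where the “update becomes delete plus insert” behavior can be observed. Once the update from the test below has been issued on the source table (and before the refresh), a peek into the log shows two rows instead of one. A hedged sketch (MLOG$_MUCI, DMLTYPE$$ and OLD_NEW$$ are the standard names Oracle generates for a primary-key materialized view log):

-- run on the source system, after updating the primary key and before the refresh
select col1, col2, dmltype$$, old_new$$ from mlog$_muci;

-- expected shape of the output: a 'D'/'O' row holding the old key
-- and an 'I'/'N' row holding the new key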

Now, let’s create the pre-built table and the materialized view.

create table muci_mv as
select t.rowid rid, t.*, '1' processed, 'INSERT' operation from muci t;

create materialized view muci_mv
on prebuilt table
refresh fast on demand as
select t.rowid rid, t.* from muci t;

It is important to notice that we’ve created the pre-built table with additional columns: the “processed” and the “operation” fields. These fields are going to be filled in by a trigger on the MUCI_MV table. Also, pay attention to the “rid” field, which is supposed to store the ROWID addresses of the records refreshed from the MUCI table (not the ROWIDs of the MUCI_MV records!). I’ll show you in a minute what the reason for this column is.

Let’s go on creating a very simple package:

create or replace package MUCI_TOOLKIT
is

g_old_rid rowid;

end MUCI_TOOLKIT;

This package has a global variable which is intended to store the old value of the “rid” column of a record which is about to be deleted.

Our test scenario is almost finished! The final step is to create a trigger on the MUCI_MV table which actually takes care of updating the additional flags:

create or replace trigger trg_biudr_muci_mv
before insert or update or delete on muci_mv
for each row
declare
begin
if inserting then
dbms_output.put_line('inserting...');
:new.processed := '0';
if muci_toolkit.g_old_rid is not null and :new.rid = muci_toolkit.g_old_rid then
:new.operation := 'UPDATE';
muci_toolkit.g_old_rid := null; -- cleanup
else
:new.operation := 'INSERT';
end if;
end if;
if updating then
dbms_output.put_line('updating...');
:new.processed := '0';
:new.operation := 'UPDATE';
end if;
if deleting then
dbms_output.put_line('deleting...');
muci_toolkit.g_old_rid := :old.rid;
end if;
end trg_biudr_muci_mv;

Now, it’s time to effectively test!

external_interface@pdmdb> set serveroutput on
external_interface@pdmdb> select * from muci;

COL1 COL2 COL3
----- ----- -----
1 1 1
2 2 2
3 3 3

external_interface@pdmdb> select * from muci_mv;

RID COL1 COL2 COL3 PROCESSED OPERATION
------------------ ----- ----- ----- --------- ---------
AACdH+AAFAADheGAAA 1 1 1 1 INSERT
AACdH+AAFAADheGAAB 2 2 2 1 INSERT
AACdH+AAFAADheGAAC 3 3 3 1 INSERT

external_interface@pdmdb> update muci set col1='123' where col1=1 and col2=1;

1 row updated

external_interface@pdmdb> commit;

Commit complete

external_interface@pdmdb> exec dbms_mview.refresh('MUCI_MV', 'f');

deleting...
inserting...

PL/SQL procedure successfully completed

external_interface@pdmdb> select * from muci_mv;

RID COL1 COL2 COL3 PROCESSED OPERATION
------------------ ----- ----- ----- --------- ---------
AACdH+AAFAADheGAAB 2 2 2 1 INSERT
AACdH+AAFAADheGAAC 3 3 3 1 INSERT
AACdH+AAFAADheGAAA 123 1 1 0 UPDATE

As you can see it works as expected… smooth! The whole trick consists in checking if the DELETE and INSERT sequence refers to the same source ROWID. If this is the case then we know that an update on the primary key was involved.

However, there are some open points regarding this approach:
1. additional storage is required for the added RID column;
2. the DBMS_MVIEW.REFRESH mechanism must replay the DML commands in the order in which they were performed on the source system which, as far as the Oracle documentation states, is not guaranteed in all Oracle versions;
3. some MetaLink notes (see Note:67424.1) state that triggers are not supported on read-only materialized views. Yet, you can find a HOWTO on the same MetaLink (176213.1) which advises you to create a trigger on a prebuilt table in order to fill in an additional column (as shown above) with the current SYSDATE value;
4. because we rely on ROWID values, we’ll obviously end up in big trouble if, on the source site, some of the source tables are moved to another tablespace or are re-imported.

Tuesday, January 09, 2007

To DESC or @DESC...

There is one SQL*Plus command which seems to be out of my control as far as its display layout is concerned. As you have already guessed from the post title, it’s about the well-known DESC command. It seems to have a “self-tuning” layout which takes into consideration the current LINESIZE setting.

For example, using a high value for the LINESIZE setting we obtain:

scott@DDB> set linesize 300
scott@DDB> desc emp
Name
Null? Type
------------------------------------------------------------------------------
-------------------------------------------------------------------------------
---------------- -------- -----------------------------------------------------
---------------------------------------------------------------
EMPNO
NOT NULL NUMBER(4)
ENAME
VARCHAR2(10)
JOB
VARCHAR2(9)
MGR
NUMBER(4)
HIREDATE
DATE
SAL
NUMBER(7,2)
COMM
NUMBER(7,2)
DEPTNO
NUMBER(2)

But using a lower value for the LINESIZE setting the output looks fine:

scott@DDB> set linesize 80
scott@DDB> desc emp
Name Null? Type
----------------------------------------- -------- ----------------------------
EMPNO NOT NULL NUMBER(4)
ENAME VARCHAR2(10)
JOB VARCHAR2(9)
MGR NUMBER(4)
HIREDATE DATE
SAL NUMBER(7,2)
COMM NUMBER(7,2)
DEPTNO NUMBER(2)

From my point of view this behavior is quite ugly, especially when I am working in a console with horizontal scrolling capabilities. Using a high value for LINESIZE allows me to execute various 'SELECT * FROM' statements without worrying about output wrapping, but when the DESC command comes into play I always end up scrolling to the right until I reach the column types, which is really, really ugly! So, in the end I’ve decided to create a script which provides the expected nice output. The main idea is to decrease the LINESIZE setting, execute DESC and then restore the old LINESIZE setting. Not a big deal! Below is my script:

store set crr_settings.cfg replace
set linesize 80
desc &1
@crr_settings.cfg

Now, all I have to do is save this script as desc.sql in the current directory (or somewhere on my SQLPATH) and take care to prefix the DESC command with a @:

scott@DDB> set linesize 300
scott@DDB> @desc emp
Wrote file crr_settings.cfg
Name Null? Type
----------------------------------------- -------- ----------------------------
EMPNO NOT NULL NUMBER(4)
ENAME VARCHAR2(10)
JOB VARCHAR2(9)
MGR NUMBER(4)
HIREDATE DATE
SAL NUMBER(7,2)
COMM NUMBER(7,2)
DEPTNO NUMBER(2)

Have fun! :)

Sunday, January 07, 2007

Do Direct Load Inserts Generate Redo?

As with so many Oracle-related questions, the answer is: IT DEPENDS...

If you are in the middle of such a tricky situation where your bulk direct load process performs well in the development environment but, in production, the same direct load process behaves poorly because it generates a lot of redo, then you should start by checking the following (see the queries sketched after this list):
  1. is your development database running in NOARCHIVELOG mode? In NOARCHIVELOG mode direct load operations produce only minimal redo, which is not necessarily the case in a database running in ARCHIVELOG mode. It's not unlikely to have the development environment in NOARCHIVELOG and the production one in ARCHIVELOG mode, therefore this is the first thing to check.
  2. is the destination table on the production site configured with the NOLOGGING attribute? Without this setting the direct load inserts into the table will continue to generate redo information as usual. So, check the LOGGING column from the (USER/ALL/DBA)_TABLES view for the corresponding destination table.
  3. does the production database have the FORCE LOGGING option enabled? This may be required if there are standby databases in place, in which case all direct load operations will produce redo information no matter what. To check this you can query the FORCE_LOGGING column from the V$DATABASE view.
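
A minimal sketch of the corresponding queries (BIG_TAB below is just a placeholder for your destination table, assumed to belong to the connected schema):

-- 1. archiving mode of the database
select log_mode from v$database;

-- 2. is the destination table marked as NOLOGGING?
select table_name, logging from user_tables where table_name = 'BIG_TAB';

-- 3. is FORCE LOGGING enabled at database level?
select force_logging from v$database;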

Thursday, January 04, 2007

Impersonating Of Any Oracle User?

Yes, it is possible! It's like the sudo command in UNIX or “Run As” in Windows (but without prompting for a password). Starting with version 8i, Oracle has a special undocumented package called DBMS_SYS_SQL which provides the functionality needed for impersonating users. It is internally used in database replication and within the XDB platform. Of course, the EXECUTE privilege on this package should be granted with extreme care, as it can easily be exploited by hackers. Furthermore, as an additional precaution, it is advisable to check from time to time to whom the EXECUTE privilege on this package has been granted (see the query below).
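
A simple way to perform this check, run as a DBA (a minimal sketch):

select grantee, privilege
  from dba_tab_privs
 where table_name = 'DBMS_SYS_SQL';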

So, let's try it!

SQL> connect sys as sysdba
Enter password:
Connected.

SQL> grant execute on dbms_sys_sql to alek;

Grant succeeded.

SQL> connect alek
Enter password:
Connected.
SQL> declare
2 l_uid number;
3 l_sqltext varchar2(100) := 'grant dba to super_user identified by xxx';
4 l_myint integer;
5
6 begin
7 select user_id into l_uid from all_users where username like 'SYS';
8
9 l_myint:=sys.dbms_sys_sql.open_cursor();
10 sys.dbms_sys_sql.parse_as_user(l_myint, l_sqltext, dbms_sql.native, l_uid);
11
12 sys.dbms_sys_sql.close_cursor(l_myint);
13 end ;
14 /

PL/SQL procedure successfully completed.

SQL> connect super_user/xxx
Connected.


Brrr!!! Nice but scary!