
Tuesday, December 14, 2010

Kill a Session From Any Node

I really like this new 11g feature which allows the DBA to kill a session even if he is connected to a different instance than the one where the target session resides. The ALTER SYSTEM KILL SESSION statement has been improved and now lets you specify the instance number where the session you want to kill is located:

ALTER SYSTEM KILL SESSION 'sid, serial#, @inst_no';
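Here's a quick sketch of how it plays out in practice (the SID and serial# are made up; GV$SESSION gives you the instance number to plug in):

-- locate the session cluster-wide (SCOTT is just an example user)
SELECT sid, serial#, inst_id FROM gv$session WHERE username = 'SCOTT';

-- say it returns SID 123, SERIAL# 45 on instance 2; then, from any instance:
ALTER SYSTEM KILL SESSION '123,45,@2';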

Great!

Saturday, December 11, 2010

Extending my RAC with a new node

I have an 11.2.0.2 database that consists of one node. I deliberately created it with a single node just so I'd have the chance to add another node later. Why? Because I wanted to play with the new GPnP feature. So, although my RAC consisted of just one node, it was actually a fully functional environment, with GNS, IPMI, CTSS and a policy-managed database. Okay, the process should be straightforward: run some CVU checks to see whether the node to be added is ready, then run the addNode.sh script from the GI home of the existing RAC node. In my case, the existing node was named "owl" and the node to be added was "hen".

First of all, I ran:
[grid@owl bin]$ cluvfy stage -pre nodeadd -n hen

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "owl"


Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"

Node connectivity check passed


Checking CRS integrity...

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
The location "/u01/app/11.2.0.2/grid" is not shared but is present/creatable on all nodes
Shared resources check for node addition passed


Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"

Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"

Node connectivity check passed

Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "owl:/tmp"
Free disk space check passed for "hen:/tmp"
Check for multiple users with UID value 1100 passed
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81( x86_64)"
Package existence check passed for "binutils-2.17.50.0.6( x86_64)"
Package existence check passed for "gcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "glibc-2.5-24 (x86_64)( x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125( x86_64)"
Package existence check passed for "glibc-common-2.5( x86_64)"
Package existence check passed for "glibc-devel-2.5 (x86_64)( x86_64)"
Package existence check passed for "glibc-headers-2.5( x86_64)"
Package existence check passed for "gcc-c++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "libgcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "sysstat-7.0.2( x86_64)"
Package existence check passed for "ksh-20060214( x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
No NTP Daemons or Services were found to be running

Clock synchronization check using Network Time Protocol(NTP) passed


User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all nodes

File "/etc/resolv.conf" is consistent across nodes


Checking GNS integrity...
The GNS subdomain name "vmrac.fits.ro" is a valid domain name
GNS VIP "poc-gns-vip.vmrac.fits.ro" resolves to a valid IP address
PRVF-5229 : GNS VIP is active before Clusterware installation

PRVF-5232 : The GNS subdomain qualified host name "hen.vmrac.fits.ro" was resolved into an IP address

GNS integrity check failed
Pre-check for node addition was unsuccessful on all the nodes.
PRVF-5229 is really a strange error: of course the GNS VIP is active, because my RAC is already installed. The check makes sense when you install a brand new RAC and the GNS VIP should still be unallocated, but otherwise I don't get it. So I decided to go on even though CVU was complaining.
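If you want to convince yourself why cluvfy sees the VIP as "active", a couple of quick checks from the existing node will do (just a sketch, using the names from my setup; output omitted):

[grid@owl ~]$ ping -c 1 poc-gns-vip.vmrac.fits.ro
[grid@owl ~]$ srvctl status gns

Of course both respond happily, because GNS is already up and serving the existing cluster.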

The next step would be to run the addNode.sh script from the [GI_HOME]/oui/bin location. I ran the script and found that it does nothing if the CVU checks do not pass. You can see this if you run the script with shell debugging enabled:

[grid@owl bin]$ sh -x ./addNode.sh -silent "CLUSTER_NEW_NODES={hen}"
+ OHOME=/u01/app/11.2.0.2/grid
+ INVPTRLOC=/u01/app/11.2.0.2/grid/oraInst.loc
+ ADDNODE='/u01/app/11.2.0.2/grid/oui/bin/runInstaller -addNode -invPtrLoc /u01/app/11.2.0.2/grid/oraInst.loc ORACLE_HOME=/u01/app/11.2.0.2/grid -silent CLUSTER_NEW_NODES={hen}'
+ '[' '' = Y -o '!' -f /u01/app/11.2.0.2/grid/cv/cvutl/check_nodeadd.pl ']'
+ CHECK_NODEADD='/u01/app/11.2.0.2/grid/perl/bin/perl /u01/app/11.2.0.2/grid/cv/cvutl/check_nodeadd.pl -pre -silent CLUSTER_NEW_NODES={hen}'
+ /u01/app/11.2.0.2/grid/perl/bin/perl /u01/app/11.2.0.2/grid/cv/cvutl/check_nodeadd.pl -pre -silent 'CLUSTER_NEW_NODES={hen}'
+ '[' 1 -eq 0 ']'

As you can see, the check_nodeadd.pl script ends with a non-zero exit code, which means an error (this Perl script really just runs the cluvfy utility, so it fails because of the GNS check). The only workaround I found was to skip this check via the IGNORE_PREADDNODE_CHECKS environment variable:
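(The variable is read by addNode.sh itself, as you can see in the trace above, so it has to be exported in the same shell session that launches the script.)

[grid@owl bin]$ export IGNORE_PREADDNODE_CHECKS=Y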
After that I was able to run the addNode.sh script successfully:

[grid@owl bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={hen}"
Starting Oracle Universal Installer...

... output truncated ...

Saving inventory on nodes (Friday, December 10, 2010 8:49:27 PM EET)
.                                                               100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/u01/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'hen'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each cluster node.
/u01/app/oraInventory/orainstRoot.sh #On nodes hen
/u01/app/11.2.0.2/grid/root.sh #On nodes hen
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/11.2.0.2/grid was successful.
Please check '/tmp/silentInstall.log' for more details.

Okay, GREAT! Let's run those scripts on the new node:
[root@hen app]# /u01/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[root@hen app]# /u01/app/11.2.0.2/grid/root.sh
Running Oracle 11g root script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0.2/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.2/grid/crs/install/crsconfig_params
Creating trace directory
PROTL-16: Internal Error
Failed to create or upgrade OLR
 Failed to create or upgrade OLR at /u01/app/11.2.0.2/grid/crs/install/crsconfig_lib.pm line 6740.
/u01/app/11.2.0.2/grid/perl/bin/perl -I/u01/app/11.2.0.2/grid/perl/lib -I/u01/app/11.2.0.2/grid/crs/install /u01/app/11.2.0.2/grid/crs/install/rootcrs.pl execution failed

Oops! I did not see that coming! First of all, OLR?! Yeah, it's like the OCR, but local. The only note I found about this error was 1123453.1, and it advises double-checking that all install prerequisites pass with cluvfy. In my case, the only problem I had was the GNS check. Does GNS have anything to do with my error? As it turned out, no, it doesn't! The big mistake I made (and one cluvfy didn't catch) was that the SSH setup between the nodes was wrong: connecting from owl to hen was fine, but not vice-versa. After I fixed the SSH configuration, the root.sh script ran without any problems. Great!
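In hindsight, a dead-simple two-way test would have caught this long before root.sh did (a sketch with my node names; passwordless SSH for the grid user has to work in both directions):

[grid@owl ~]$ ssh hen date     # this direction was fine
[grid@hen ~]$ ssh owl date     # this one was broken in my case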

The next step was to clone the database Oracle home. That was really easy: just run addNode.sh from the database home, the same way I did for GI. So far so good... at this point I was expecting a little magic to happen. Here's what the documentation says:

If you store your policy-managed database on Oracle Automatic Storage Management (Oracle ASM), Oracle Managed Files (OMF) is enabled, and if there is space in a server pool for node2, then crsd adds the Oracle RAC instance to node2 and no further action is necessary. If OMF is not enabled, then you must manually add undo and redo logs.

Hey, that's my case! Unfortunately, the new instance didn't show up. Furthermore, the server pool configuration was already asking for a second node:
[oracle@hen oracle]$ srvctl config srvpool -g poc
Server pool name: poc
Importance: 10, Min: 2, Max: -1
Candidate server names: 
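For the record, the Min and Importance values shown above had been bumped earlier with something along these lines (standard 11.2 srvctl syntax; -l is the minimum pool size and -i the importance):

[oracle@owl ~]$ srvctl modify srvpool -g poc -l 2 -i 10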
Look, I had increased the importance level and set the "Min" property to 2. Damn it! I don't know why the new server was not automatically picked up; maybe it's also my lack of experience with this new server-pool concept. In the end I launched dbca from the newly added node, hoping that some new magic options had been added. But no... even the "Instance Management" option was disabled. However, if you choose "Configure database" and click next, next, next until the SYSDBA credentials are requested, dbca will try to connect to the local instance and will actually create the new instance. I'm sure this is not the way it was supposed to work but, at least, I could see some results. There was another interesting thing, though. Looking into the alert log of the newly created instance I found:
Could not open audit file: /u01/app/oracle/admin/poc/adump/poc_2_ora_18197_1.aud
Retry Iteration No: 1   OS Error: 2
Retry Iteration No: 2   OS Error: 2
Retry Iteration No: 3   OS Error: 2
Retry Iteration No: 4   OS Error: 2
Retry Iteration No: 5   OS Error: 2
OS Audit file could not be created; failing after 5 retries
I hadn't created the /u01/app/oracle/admin/poc/adump directory on my new node, and that was causing the error. So this is another thing to remember: the addNode.sh cloning process does not automatically create the "adump" location.
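The fix is as trivial as it gets; on the new node (assuming the oracle user owns /u01/app/oracle there):

[oracle@hen ~]$ mkdir -p /u01/app/oracle/admin/poc/adump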
And, that's all! Now, my fancy RAC has a new baby node.

Wednesday, December 08, 2010

Upgrade GI to 11.2.0.2: Simply Surprising...

I never thought I'd write a post about such a trivial task... Well, if you are going to upgrade from 11.2.0.1 to 11.2.0.2, be prepared for surprises.

The first surprise comes from the download page on the Oracle support site (formerly known as Metalink). The 11.2.0.2 patch set is 4.8 GB! WTF?! Furthermore, it is split into 7 pieces... Despite this huge size, the good thing is that, unlike previous releases, this patch set can be used as a self-contained Oracle installer, which means you don't have to install a base 11.2.0.1 release and then apply the 11.2.0.2 patch set on top of it; you can simply install 11.2.0.2 directly. There's one more catch: if you only want to upgrade the Grid Infrastructure, you don't need all 7 pieces of the patch set. This is not clearly mentioned on the download page, but if you have the curiosity to open the README (and you should!) you'll find out which piece contains what.

Great! So, to begin with, we only need the third piece in order to upgrade our Grid Infrastructure.

The second surprise is the fact that the GI home cannot be upgraded in place. In previous releases we used to patch by pointing the installer at the existing home location. Starting with 11.2.0.2, in-place upgrades of GI are not supported. According to the "Upgrade" guide:

As of Oracle Database 11g release 2 (11.2), the Oracle Clusterware software must be upgraded to a new home location in the Oracle grid infrastructure home. Additionally, Oracle ASM and Oracle Clusterware (and Oracle Restart for single-instance databases) must run in the same Oracle grid infrastructure home. When upgrading Oracle Clusterware to release 11.2, OUI automatically calls Oracle ASM Cluster Assistant (ASMCA) to perform the upgrade into the grid infrastructure home.

Okay, good to know! Let's start the GI upgrade process. The wizard provided by OUI is quite intuitive, so I will not bother you with screenshots and other obvious things. However, the next surprise comes when you run the rootupgrade.sh script. The error is:
Failed to add (property/value):('OLD_OCR_ID/'-1') for checkpoint:ROOTCRS_OLDHOMEINFO.Error code is 256
 The fixes for bug 9413827 are not present in the 11.2.0.1 crs home
 Apply the patches for these bugs in the 11.2.0.1 crs home and then run 
 rootupgrade.sh /oragi/perl/bin/perl -I/oragi/perl/lib -I/oragi/crs/install /oragi/crs/install/rootcrs.pl execution failed
WTF? You cannot patch unless another stupid patch is already there. Okay, as an Oracle DBA you have to be a patient guy... take a deep breath and start looking for bug 9413827. First of all there is note 10036834.8, which basically says that you might still get this error even if you apply the patch for bug 9413827. As a workaround they suggest also applying the patch for bug 9655006. That's madness! In the end it turns out that patch 9655006 is actually the 11.2.0.1.2 GI PSU from July. Okay, just download the appropriate version for your platform.

Now, another surprise... you need an updated version of the OPatch utility. Damn it! Back to Metalink: search for patch 6880880 and download the 11.2.0.0.0 version for your platform. (Take care not to download the wrong version. By the way, did you notice that you can download a wget script which lets you fetch the patch without using a browser? Yeah, finally something good on that shitty Flash GUI.) According to the README, you should unzip the updated OPatch utility directly into your CRS home, using something like:
unzip [p6880880...zip] -d [your GI home]
... which I did!
Now, you have to unzip the PSU patch into an empty folder, let's say /u01/stage, and run the following command as root:
[your GI home]/OPatch/opatch auto /u01/stage/ -och [your GI home]
In my case, the output was:
Executing /usr/bin/perl /u01/app/11.2.0.1/grid/OPatch/crs/patch112.pl -patchdir /u01 -patchn stage -och /u01/app/11.2.0.1/grid/ -paramfile /u01/app/11.2.0.1/grid/crs/install/crsconfig_params
2010-12-08 12:32:19: Parsing the host name
2010-12-08 12:32:19: Checking for super user privileges
2010-12-08 12:32:19: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0.1/grid/crs/install/crsconfig_params
The opatch Component check failed. This patch is not applicable for /u01/app/11.2.0.1/grid/
The opatch Component check failed. This patch is not applicable for /u01/app/11.2.0.1/grid/
Patch Component/Conflict  check failed for /u01/app/11.2.0.1/grid/
Oops! Another surprise! This patch is not applicable for blah blah blah? Are you serious? Let's check the logs. They should be in your $CRS_HOME/cfgtoollogs directory. Search for a log file named opatchauto[timestamp].log. The important part of the log:
2010-12-08 12:32:19: The component check failed with following error
2010-12-08 12:32:19: bash: /u01/app/11.2.0.1/grid/OPatch/opatch: Permission denied
Huh? I'm root! Aaah... okay! Apparently it tries to run the OPatch tool as the grid user. Okay, let's fix the permissions:
chown root:oinstall /u01/app/11.2.0.1/grid/OPatch -R
chmod g+r /u01/app/11.2.0.1/grid/OPatch/opatch
Now, try again! Yep... now it's working.
After applying the patch we are ready for our rootupgrade.sh. Interestingly, the output still contains the Failed to add (property/value):('OLD_OCR_ID/'-1') message, but the upgrade continues without any other complaints. Okay, let's perform a quick check:
srvctl config asm
ASM home: /u01/app/11.2.0.2/grid
ASM listener: LISTENER

srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: 
  /u01/app/11.2.0.2/grid on node(s) owl
End points: TCP:1521
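Another quick sanity check worth running at this point (a standard crsctl query; it should report 11.2.0.2.0 as the active version once the upgrade has completed on the node):

[grid@owl ~]$ crsctl query crs activeversion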
Great, ASM and the listener have been relocated to the new GI home. The next logical thing to do is to uninstall the old GI home, right? It's as simple as:
[old GI home]/deinstall/deinstall
Oookay, meet SURPRISE number 6:
ERROR: You must delete or downgrade the Oracle RAC databases and de-install the Oracle RAC homes before attempting to remove the Oracle Clusterware homes.
Isn't it great? On Metalink I found Bug 10332736 and, in the WORKAROUND section, it says something about writing a note with a manual uninstall procedure. However, at the time of writing, that note wasn't published yet. Yeah... all I can say is that I'm tired of these stupid issues. What happened to the Oracle testing department? They encourage us to patch frequently but, as far as I'm concerned, I always get this creepy feeling before doing it.