I’m writing this blog during the Halloween season, so several of my colleagues have suggested I keep to that theme, embellishing my entry with ghosts, goblins and ghoulish pranks.

Nothing doing! Besides, my colleague David Ingram already did that last week.

However, I would like to tell you something scary.

If you fill up your car’s gas tank in New Jersey, you already know this one: the price of gas went up 23 cents a gallon on November 1st.

Now that’s scary, but it has nothing to do with software asset management or the products that enable it. So I’ll have to tell you something else that’s scary.

Some months ago I was working with a client on an upgrade of ILMT to version 9.2.1. They run ILMT on Linux. One of the prerequisites for upgrading to that version (really, to any current version of ILMT) is upgrading DB2 to version 10.5 fix pack 5 or later. The DB2 upgrade looked straightforward enough: download the fix pack, unzip it, run an executable, and off it goes. Another prerequisite is that DB2 has to be down for the upgrade to succeed.
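
For the curious, the happy path looks something like this. Treat it as a rough sketch: the archive name, the extraction directory, and the install path below are examples, so check your own download and the location of your DB2 copy.

    # as root, extract the fix pack archive you downloaded (example name shown)
    tar -xzf v10.5fp5_linuxx64_universal_fixpack.tar.gz
    cd universal          # or whatever directory the archive extracts into
    # -b points at the DB2 copy being updated (default 10.5 path shown; yours may differ)
    ./installFixPack -b /opt/ibm/db2/V10.5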

And that’s when things began to get scary.

The upgrade process is supposed to stop DB2 for you if it finds it running. Good enough, we figured; that should cover us. Just the same, we’d enter the “db2 stop database manager” and “db2stop” commands under the DB2 instance owner’s user ID to stop the database manager ourselves, so we’d be doubly covered. Right?
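
(If you’re following along at home, that looks roughly like this. I’m assuming the usual db2inst1 instance owner here, which may not match your environment.)

    su - db2inst1               # switch to the DB2 instance owner
    db2 stop database manager   # stop the instance via the command line processor ...
    db2stop                     # ... and again with the system command, for good measure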

Wrong! We learned this the hard way, but thank our lucky stars (mine are in the gamma quadrant, yours may vary) that we had cloned our database server before doing anything.

The DB2 upgrade process started smoothly enough. It got to the part where it said it was stopping DB2, which appeared to be successful. Then a few more steps … and then a failure: something about the db2fmcd command not having write permission to a directory. Yet when we checked, the permissions were set correctly. We were dead in the water. We couldn’t even start DB2, no matter what we tried.

So we went back to our backup and restored the VM. We started over, and once again issued the commands to stop DB2. Then we really made sure DB2 was stopped by issuing the “ps -ef | grep -i db2” command (which means: list the running processes, then show only the ones with db2 in them, whether uppercase or lowercase).
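
(A slightly tidier version of that check filters out the grep command itself. The processes to watch for include db2sysc, the database engine, and db2fmcd, the fault monitor coordinator, which will matter in a moment.)

    # show anything DB2-related that is still running, case-insensitive,
    # without listing the grep command we just typed
    ps -ef | grep -i db2 | grep -v grep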

It’s as if IBM was saying “it’s alive, IT’S ALIVE!” about DB2 … because we found several DB2 processes still running, and if we waited long enough, we’d see DB2 get restarted as well!

So what happened? Why didn’t the upgrade process stop DB2 – even though it said it did?

It turns out that it was, uh, the fault of the fault monitor. Really, it wasn’t the fault monitor’s fault – it was doing what it was designed to do, which was to keep DB2 up … but the fault monitor had to be shut down to allow the upgrade to complete.

So at least we understood the flaw: the upgrade process couldn’t stop the fault monitor either. We would have to stop it ourselves and make sure it stayed stopped. We did some research and found two IBM technotes with instructions for stopping and restarting the fault monitor.
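
I won’t reproduce the technotes here, but on Linux the stop sequence generally boils down to something like this. Treat it strictly as a sketch: the path and the db2inst1 instance name are assumptions on my part, and the exact steps vary by DB2 version and platform, so follow the technotes for your environment.

    # as root: keep the fault monitor coordinator (db2fmcd) from being
    # respawned automatically (path to db2fmcu is an example)
    /opt/ibm/db2/V10.5/bin/db2fmcu -d

    # as the instance owner: bring the fault monitor daemon down and
    # turn it off for the instance
    db2fm -i db2inst1 -D
    db2fm -i db2inst1 -f off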

In fact, after we stopped the fault monitor, we rebooted the server to make sure DB2 would stay down … which it did. We were then able to complete the upgrade successfully, and afterwards we followed the instructions to re-enable the fault monitor.
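
(Re-enabling it is essentially the reverse; again, a sketch with an assumed path and instance name.)

    # as root: let db2fmcd start automatically again
    /opt/ibm/db2/V10.5/bin/db2fmcu -u -p /opt/ibm/db2/V10.5/bin/db2fmcd

    # as the instance owner: turn the fault monitor back on and bring the daemon up
    db2fm -i db2inst1 -f on
    db2fm -i db2inst1 -U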

So don’t be scared if you have to upgrade DB2.  Just make sure it’s NOT alive when you upgrade it.

And did you get your gas in New Jersey before November 1st?

I’d love to hear war stories from you if you ran into anything “really fun” trying to upgrade the components of the IBM License Metric Tool and BigFix.  Sharing is caring!