I am not sure about you, but keeping my IBM i LPARs updated to the latest PTF level has always been a challenge! This means I tend to let things fester for some time before I even bother to log in to Fix Central and check for the latest levels to download.
With our Windows, Linux and Apple systems we always keep them at the latest update level available. Not so with the IBM i: we only check for updates when we see problems and IBM suggests we get to the latest PTF level before they engage on a support call. We seemed to be under the impression that IBM i security was bulletproof and that PTFs tended to be about adding new features or fixing non-security problems, so we left things alone.
How times have changed. We are now seeing fixes released that patch very severe security holes, such as the recent Log4j problem. Significant new features are also being released regularly, and taking advantage of them brings a lot of benefits to the platform and our solutions. We now look a lot more closely at what PTF level our systems are at and make sure we stay closer to the latest level.
Figuring out which PTFs are available from IBM and what level is installed on our systems had to be managed manually. This required us to sign on to each LPAR (we have 12 now), look at the installed PTF groups and compare them against what is available in Fix Central; a very slow and tedious task even with only a couple of installs to manage. We decided to look at the APIs available in the OS that expose the installed PTF groups and then compare their output against some kind of list. The API was an easy find, but the list from IBM did not seem to be readily available without manually scribbling it down and adding it to a file for comparison with the API output.
I decided to ask IBM for assistance and emailed Tim Rowe, asking if IBM exposed any services on the web that would provide that information so I could retrieve it and use it to compare installed levels. He came back with something I had not realized existed: an SQL service written by Scott Forstie that does exactly what I was looking for. It even works out whether a PTF group has updates available and flags that in the output. We had our solution for getting the information, so it should have been a simple task of adding it to our AAG product so we could extract the data and send any notifications required.
I needed to embed the SQL request provided by IBM into a C service program, because we use a TCP/IP based responder to feed data back to the AAG monitor. This presented some interesting challenges (how to build an embedded SQL program is not that well documented, in my opinion) that required raising PMRs with IBM to find out why things were not working as we expected. While waiting for the responses, I decided to pull the SQL service apart to figure out what was going on and to see if we could create a C program to natively read the file without going through the SQL process. This resulted in some interesting finds.
Here is the code we got from IBM; we will go through each part to show what we found and why we were forced down the embedded SQL route.
WITH iLevel(iVersion, iRelease) AS (
    SELECT OS_VERSION, OS_RELEASE
      FROM sysibmadm.env_sys_info
)
SELECT P.*
  FROM iLevel, systools.group_ptf_currency P
 WHERE ptf_group_release = 'R' CONCAT iVersion CONCAT iRelease CONCAT '0'
 ORDER BY ptf_group_level_available - ptf_group_level_installed DESC
The first WITH statement just gets the version of the OS installed. We asked why this was necessary, because we removed it (plus the WHERE clause that uses it) and the query still worked. We were told it is required particularly where users have upgraded the OS from one release to another: the system still holds onto the old PTF Group information, which would pollute the results of the request.
We wanted to just pull back the data and read through it, so we thought that if we could open the file we could read through it in C just like any other file. We created a program which opened the file and displayed each record to the screen. It all appeared to work, but after the program ran we found that any file access returned a disconnected message! We had to sign off and back on again before we could access any files in the database. This is apparently due to problems with activation groups and the way the file is built under the covers, so we decided this approach was not going to work.
The file was an SQL view. We pulled apart the SQL used to build its content and found that it pulls in an XML file from IBM support; the file content is extracted into a DB table which the query runs against. We thought, OK, if we can get the XML file from IBM and do the extraction ourselves, we can use that instead. Unfortunately that would require an XML parser, which we did not have on the systems, so that appeared to be a dead end.
We embedded the SQL in a *SRVPGM and, after a few tweaks, managed to get the PTF comparison process working for all of our installed LPARs. However, when we looked at the target programs on the IBM i we noticed a few strange things. Firstly, the programs showed PGM-jvmStartPa as the function being called after the request, and it did not change once it had been called. Secondly, the CPU was through the roof! We saw 62% CPU utilization on a single request. I asked IBM about the first issue and was told it probably leaves the JVM initialized to save resources, as the startup and load of the JVM is very heavy. Even running the request many times again showed high CPU utilization (35 - 50%). This was not what we wanted to see; having the system use this much CPU just to pull back the data was beyond what we could accept. It's not that we expect this to be run frequently, but having that much CPU taken and the JVM left in memory was not a good thing.
We knew that we could get the XML file from IBM, and we had already built a program to extract the current data from the system; all we needed was some way to compare the content of the two. Doing it on the IBM i seemed the most appropriate option, but that came with issues such as how to parse the XML file, because writing our own parser just for this task was not worth it. After some research we found a package for Linux (libxml2) which can also run on the IBM i. The documentation did state it would run natively on the IBM i, but it seemed to run in PASE and require the XML file to be presented from the IFS. After some thought, we decided to do the comparison on the Linux side: help for libxml2 is easier to find on Linux, and avoiding the ASCII-to-EBCDIC conversion the IBM i would need removed complexity, which made it the right way to go in our minds.
The libxml2 documentation is not that great and even the examples needed some effort to get working, but in the end we found a way to extract the data from the XML file and compare it against the data we had pulled from the IBM i.
The programs on the IBM i no longer use excessive CPU, and the JVM is not required, making it a lot easier on the remote system's resources. Moving the comparison from the IBM i to the Linux box made sense and follows our mantra of using the right platform to meet the solution's needs.
AAG is constantly improving; as we see new requirements from our internal needs and from customers running the product, we add features that fill those needs. Being able to check that the PTF Groups are relatively up to date, or that the latest HIPERs are installed, saves us a lot of time and effort. Maybe before too long IBM will add the ability to automatically download and install the latest fixes; having this part already working may help with managing that process if and when it's available. All Shield products have already moved in this direction, allowing you to automatically retrieve and install the latest updates without a lot of manual effort.
If you are interested in what AAG can do for you today, or want to suggest a feature you would like to see, let us know. A list of the data points we check today is on the product page of our website, and we will be updating it as we build more features. The fact that the product runs on a Raspberry Pi or any Debian Linux instance makes it a very cost-effective solution for any size of company.