Job Tracking Update.

When IBM announced the availability of a new Job Tracking capability for IBM i in Version 7.4, we were excited and despondent at the same time. Despondent because the JT4i product could become obsolete now that IBM was offering its capabilities in a simplified form, yet excited because we could enhance JT4i to encompass these new capabilities while keeping the additional capabilities of JT4i that do all the heavy lifting when you need to recover.

Turns out the new Exit Points (QIBM_QWT_SBMJOB and QIBM_QWT_CHGJOB) are not as helpful as we had hoped. The biggest problem, in my opinion, is their inability to capture any jobs submitted via the IBM i JOBSCDE process or any system jobs that submit other jobs (tracking system-submitted jobs may not be such a big deal, but JOBSCDE-based submissions certainly are). JT4i already tracks JOBSCDE-submitted jobs.

We installed 7.4 on one of our hosted partitions and ran a couple of tests to see what the new features actually offer. The initial problem we came across was working out whether the Exit Point ever called the program. In our case we built a program that simply updated a data area when called, but it was never called. We started a trace and opened a PMR with IBM, and it turns out that the documentation is incorrect: entering *ANY *ANY for the job queues to track is invalid. Once we changed the library variable to blanks, the exit point started to call the program. Another requirement that kept us guessing for a long time is that you need to sign off once you have set up the exit point; any requests made in the same session used to set up the exit point will not have the desired effect. IBM said this is expected behavior but would consider making a note of it in the documentation.
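For anyone who wants to reproduce that first test, below is a minimal sketch of the kind of exit program we describe above: it ignores whatever the exit point passes and simply stamps a data area so you can tell whether it was ever called. The data area name MYLIB/JTTRACE is just a placeholder, and the program is assumed to be bound against QC2LE so that system() is available.

    /* TRKEXIT - minimal test exit program: prove the exit point fired  */
    /* by changing a data area. Assumes a *DTAARA called MYLIB/JTTRACE  */
    /* (placeholder name) already exists.                               */
    #include <stdlib.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        /* CHGDTAARA is an ordinary CL command; any visible change to   */
        /* the data area tells us the exit point actually called us.    */
        if (system("CHGDTAARA DTAARA(MYLIB/JTTRACE) VALUE('CALLED')") != 0)
        {
            printf("CHGDTAARA failed\n");   /* non-zero = CL command failed */
        }
        return 0;
    }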

The initial tests also identified another issue, which is on an open PMR. The documentation states that two parameters are passed to the program: the data, which has all the information about the job, and a second parameter describing the length of that data. In the tests we ran, the data length always appears to be 0. It could be that we have the wrong information about how that length is provided (it should be an integer?), which would be a simple fix on our side. If the length is simply not being set, that will obviously require a PTF from IBM, which may take some time to get.
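This is roughly how we are picking up the two parameters, on the assumption, which IBM has not confirmed, that the second one is a 4-byte binary integer; the parameter names are our own.

    /* Sketch of the exit interface as we currently understand it:     */
    /* parameter 1 is the job data, parameter 2 its length. Whether    */
    /* the length really arrives as a 4-byte integer is exactly what   */
    /* the open PMR is about, so treat this as a guess.                */
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        char *jobData = NULL;
        int   dataLen = 0;

        if (argc >= 3)
        {
            jobData = argv[1];            /* job information           */
            dataLen = *(int *)argv[2];    /* assumed binary(4) length  */
        }

        /* Log what we actually received; in our tests this prints 0.  */
        printf("Data length received = %d\n", dataLen);
        if (jobData != NULL)
        {
            printf("First bytes of data: %.16s\n", jobData);
        }
        return 0;
    }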

Having gone through the initial trials, we decided to add a data queue to the mix, to which we would simply copy the data from the first parameter. This would allow us to see the information deposited by the exit point and test our mapping structures over that data (the fact that we did not know the actual data length sent meant we had to pad to a length that should be more than enough to send to the data queue). After the data was sent, we used DMPOBJ to examine the content of the data queue and figure out what data is actually sent. Unfortunately, it appears that some of the data is not presented to the exit program, which again requires a discussion with IBM about why.
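The data queue test looks roughly like the sketch below. The queue name MYLIB/JTDQ and the 4,096-byte pad length are our own choices for the test, not anything IBM documents; QSNDDTAQ is the standard send-to-data-queue API.

    /* Sketch of the data queue test: blank-pad the first parameter     */
    /* into a fixed buffer and push it to a *DTAQ so DMPOBJ can be      */
    /* used to look at whatever the exit point actually sent.           */
    /* Assumes CRTDTAQ DTAQ(MYLIB/JTDQ) MAXLEN(4096) has been run.      */
    #include <string.h>
    #include <decimal.h>

    /* QSNDDTAQ: queue name, library, packed(5,0) length, data.         */
    #pragma linkage(QSNDDTAQ, OS)
    void QSNDDTAQ(char *, char *, decimal(5, 0), char *);

    #define PAD_LEN 4096   /* generous pad because the real length is unknown */

    int main(int argc, char *argv[])
    {
        char buffer[PAD_LEN];
        decimal(5, 0) dqLen = PAD_LEN;

        memset(buffer, ' ', PAD_LEN);
        if (argc >= 2 && argv[1] != NULL)
        {
            /* This may read past the real parameter data, but with the  */
            /* reported length always 0 we have nothing better to go on. */
            memcpy(buffer, argv[1], PAD_LEN);
        }

        QSNDDTAQ("JTDQ      ", "MYLIB     ", dqLen, buffer);
        return 0;
    }

With those names, DMPOBJ OBJ(MYLIB/JTDQ) OBJTYPE(*DTAQ) then produces a spooled dump of the queue entries to pick through.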

So while we were pretty excited to get our hands on the new features of 7.4, at the moment this one appears to be a bust. We will continue to test and work with IBM to figure out the issues with the exit points, and we may be able to take some of the features they offer and put them into our products in the future.

We will keep posting our findings as we figure out just what this new feature offers; at the moment we feel it's not something that comes close to the abilities we have built into JT4i.

Chris..
