JobQGenie Version 5.1 is now GA

We have been very busy getting this release of JobQGenie and a new version of RAP out to the customer base, which is one of the reasons we have not been posting much on the Blog! Version 5.1 of JobQGenie is now up and available for download from both the download page of the Blog and the Associate member pages on the website.

This version of JobQGenie has a totally new data capture and storage engine built in. Initially we looked at a low-level solution to be packaged with the RAP product that would offer similar technology, but at a much higher level and with less functionality. After some deliberation we decided to completely rewrite the capture technology, as that would be far easier than trying to bring elements of JobQGenie's very complex capture technology into RAP. This proved to be a very worthwhile exercise: we were able to change some of our original concepts and use much faster, newer APIs. The new process turned out to be more efficient and gave us a 100% capture rate for any jobs which went through the job queues.

We had been struggling with one of our current clients due to the very complex and intensive environments they run (they readily accept that they stress all of their suppliers' products beyond what most others do), which caused the data capture process to run into timing issues, with some data being lost or inadvertently overwritten by a slower process. They needed something which could handle many thousands of jobs per hour without missing a single job, yet still provide an efficient collection process. We tried to get IBM to give us more access to the job data, but any changes they agreed to would only be implemented in a future release of the OS, and this customer needed the solution now! So we took the new technology from RAP and started adding to it the same functionality we had provided in previous versions.

The customer has been working with us on testing the solution, and we have finally managed to reach a stage where they are happy with the results and hope to move it into production as soon as possible. We still have some final testing to do before sign-off, but these are the best results we have seen so far, with no missed data at all! They understand that recovery from an unplanned outage is not going to be easy without this piece of the puzzle, which is why it is now getting so much attention.

The data capture is only part of the story. Once you have the data, you need to be able to work with it and determine which jobs affected which data, and how that data can be rolled back to a suitable starting position. The tools provided with JobQGenie allow you to take a job and look in all of the journals on the system to see what data, if any, was affected by the job. This is very important where you are using a High Availability tool for data replication which applies data as it is received. The apply process only knows about the data and where it needs to be applied, not how or when it was created. If you have to restart a job, you need to know whether any data needs to be rolled back before it is restarted; otherwise it will either fail or corrupt the data itself.
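The idea of using journal entries to undo a job's changes before restarting it can be sketched roughly as follows. This is a minimal illustration in Python, not JobQGenie's actual implementation: on IBM i the product works against real system journals, and the entry fields and helper names below are assumptions made for the example.

```python
# Illustrative sketch: find the journal entries written by one job and
# derive the roll-back work needed before that job can be restarted.
# The JournalEntry fields and function names are hypothetical.
from dataclasses import dataclass

@dataclass
class JournalEntry:
    seq: int          # journal sequence number
    job: str          # name of the job that made the change
    obj: str          # file/object affected
    before: bytes     # before-image of the record
    after: bytes      # after-image of the record

def entries_for_job(journal: list[JournalEntry], job: str) -> list[JournalEntry]:
    """All changes made by one job, in the order they were journaled."""
    return [e for e in journal if e.job == job]

def rollback_plan(journal: list[JournalEntry], job: str) -> list[tuple[str, bytes]]:
    """Undo the job's changes by applying before-images in reverse order."""
    return [(e.obj, e.before) for e in reversed(entries_for_job(journal, job))]
```

Applying the before-images newest-first returns each affected record to the state it held before the job touched it, which is the "suitable starting position" the restart needs.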

After you have cleaned up the data, the jobs can be resubmitted using the data collected from the source system; in some cases this allows a batch stream to be restarted mid-stream instead of from the beginning. A simple option from the list of jobs is all that's needed to resubmit the job.
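Resubmission works because the capture process records the job's submission attributes on the source system. As a rough sketch, those attributes are enough to rebuild a SBMJOB command on the target; the dictionary keys and command layout below are illustrative assumptions, not JobQGenie's actual captured format.

```python
# Illustrative sketch: rebuild an IBM i SBMJOB command string from
# captured job-submission attributes. The attribute names are
# hypothetical; a real capture record holds many more attributes.
def build_sbmjob(captured: dict) -> str:
    """Compose a SBMJOB command from a captured attribute dictionary."""
    return ("SBMJOB CMD({cmd}) JOB({job}) JOBQ({jobq}) USER({user})"
            .format(**captured))

cmd = build_sbmjob({
    "cmd": "CALL PGM(PAYROLL)",
    "job": "NIGHTPAY",
    "jobq": "QBATCH",
    "user": "BATCHUSR",
})
```

Because every attribute was captured when the job first entered the queue, the resubmitted job runs with the same command, queue, and user profile it had on the source system.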

JobQGenie was written to work with any of the HA solutions and requires little configuration to start the capture process. Replicating the data to the remote system requires configuring the files into the HA solution, and the whole install should take less than a day to complete. If you need any help, we can provide remote support at very low cost.

Please feel free to contact us about any question you have regarding JobQGenie or any other product we support.

Chris..
