Why you should look at JobQGenie

OK, so this is my sales pitch! I have been trying to get people to at least look at a product I developed. The problem seems to be a lack of understanding about the product and what it does, so I hope that after reading this post you will have a better understanding of why such a solution is important.

If you have any of the High Availability solutions installed on your i5, or are even considering one, you should know one thing: none of the current technologies, including IBM's clustering and Cross-Site Mirroring, support replication of job queue content.

Why is this important, you ask? Well, when a system fails you have no idea what was running, or waiting to run, on the failed system. As I indicated above, the replication products do not replicate the job queue contents, only changes to the job queue object itself (description, owner, authorities, etc.), so if you look at the object on the target side you will not see any content. This is not a problem under normal circumstances, as you don't want content in the queues; otherwise those jobs would start and run on the target, which would create a real mess of the data and objects!

And that's important because…?

Let's assume that you have had a crash and have switched to the target system. The first thing you have to do is check the integrity of the data and bring it to a state where there are no partial transactions in the database. This can be a difficult task unless you happen to be running commitment control (most applications don't implement commitment control, even banking applications), because you can't see any information that would tell you which transactions belong to which job and what state that job was in when the source system failed.

Using Remote Journal technology, JobQGenie will help you identify which data belongs to which job and what the state of that job was. It even shows the jobs whose data is interspersed with that of the open jobs (this is important when you have to run RMVJRNCHG). Using the tools provided with the product you can identify jobs which were open, jobs which ended abnormally (could they be the reason for the crash?) and jobs which were sitting on the job queue waiting to run. This allows you to clean up the data using RMVJRNCHG commands or your in-house data removal tools, then load the job queues in the correct order with the jobs that have to be resubmitted, plus the jobs which never ran. You can then make the system available to the users with data integrity maintained.

Remember, an HA product will replicate every change as it happens and immediately apply that change to the remote system. That is as it should be: the HA product has no concept of job state or of the association between data and jobs, so it does not align the data in any way. I have heard of a new concept which holds transactions for a period of time and then applies them up to the next checkpoint, but the problem still exists: at checkpoint time it does not consider which jobs were running, how they ended, and so on…
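To make the triage idea concrete, here is a minimal sketch (not JobQGenie's actual implementation, and the entry format is purely hypothetical) of how a replayed journal stream could be used to separate jobs that completed from jobs that were still open at the crash, and to flag the open jobs' data changes as removal candidates:

```python
from collections import defaultdict

def triage_jobs(journal_entries):
    """Classify jobs from a replayed journal stream.

    Each entry is a (job, kind) tuple, where kind is one of:
      'START'  - the job began
      'CHANGE' - a data change written by the job
      'END'    - the job finished cleanly

    A job seen with START but no END was in flight when the source
    system failed; its changes are candidates for removal.
    """
    changes = defaultdict(list)          # job -> journal sequence numbers
    started, ended = set(), set()
    for seq, (job, kind) in enumerate(journal_entries, start=1):
        if kind == 'START':
            started.add(job)
        elif kind == 'END':
            ended.add(job)
        elif kind == 'CHANGE':
            changes[job].append(seq)
    open_jobs = started - ended          # in flight at the crash
    completed = started & ended          # ran to a clean end
    # Note: completed jobs' changes may be interspersed with these,
    # which is exactly why the removal step has to be job-aware.
    removal_candidates = {j: changes[j] for j in open_jobs}
    return completed, open_jobs, removal_candidates
```

For example, if job B's change sits between two changes from job A, the sketch still attributes each sequence number to the right job, so only B's entries are flagged when B never reaches its END entry.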

The product is available for a free trial here. We also have a couple of white papers which should be of interest; they can be downloaded here. You will have to register some information with us to get the downloads, but this information is only used for tracking within the site. You can review our privacy policy here.

If you are an IBM Business Partner involved in the System i5 marketplace and would like to distribute the product, please contact us.

If you need more information, please let us know.
