HA4i and IFS Journal entry replay


One of the features we have been working on recently is the ability to replay journal entries generated by IFS activity. HA4i already supports the replay of IFS journal entries, but only via IBM's apply process (APYJRNCHG). Having seen the improvements we gained by implementing our own apply process for database journal entries, we decided we needed to offer the same capability for IFS journal entries.

The technology for applying updates to IFS objects using journal entries had been partially developed for some time, but development stopped when we found that the JID and Object ID could not be maintained between systems using IBM APIs. Why does that matter? Some of the journal entries deposited into the journal do not contain the path to the object, only the Object ID, so we needed some method of extracting the true path to the object from the content of the journal entry. Because the JID and Object ID differ between systems, we could not use the available APIs to convert those IDs into a path.

We asked IBM if they would provide a solution in much the same manner as they do for database files (QDBRPLAY) and data areas/data queues (QjoReplayJournalEntry), which preserve the JID of the created object; that in turn would have allowed us to use the APIs to extract the actual path of the object from the JID contained in the journal entry. They said it could not be done (they already do it for APYJRNCHG, but would not expose it to others) and suggested we come up with a table or other technology to track each and every IFS object. We thought that would be a nightmare to handle, especially as one of our clients had to split his IFS across three journals simply because he hit the maximum number of objects that can be journaled to a single journal! Still, when push came to shove we bit the bullet and built a technology to track the IFS objects, which then allows us to manipulate the IFS objects using the journal entries.
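As a conceptual illustration only (this is not HA4i's actual implementation, which avoids a lookup table entirely), the core tracking problem can be sketched as follows: journal entries for newly created objects carry a full path, later entries carry only the JID, and the replay process must resolve that JID back to a path. The `JournalEntry` and `JidTracker` names and the entry-type codes below are hypothetical stand-ins, not IBM or HA4i APIs.

```python
# Conceptual sketch of JID-to-path tracking during journal replay.
# All names and entry-type codes here are illustrative, not product APIs.
from dataclasses import dataclass
from typing import Optional

@dataclass
class JournalEntry:
    code: str                   # journal code (IFS entries use code 'B')
    entry_type: str             # entry type, illustrative values here
    jid: bytes                  # journal identifier deposited with the entry
    path: Optional[str] = None  # full IFS path, present only on some entries

class JidTracker:
    """Remember each object's path the first time an entry reveals it."""
    def __init__(self):
        self._paths: dict[bytes, str] = {}

    def resolve(self, entry: JournalEntry) -> Optional[str]:
        # Entries that carry the path (e.g. object creation) register it.
        if entry.path is not None:
            self._paths[entry.jid] = entry.path
        # Path-less entries are resolved from what we have seen so far.
        return self._paths.get(entry.jid)

tracker = JidTracker()
tracker.resolve(JournalEntry('B', 'CREATE', b'\x01', '/home/data/orders.txt'))
path = tracker.resolve(JournalEntry('B', 'WRITE', b'\x01'))  # no path in entry
```

A real implementation also has to handle renames, deletes, and objects created before journaling started; the sketch shows only the happy path of create-then-update.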

We faced a number of challenges with the replication technology, such as security and CCSID conversion, but we eventually got to the bottom of the pile and the apply of IFS-generated updates now works. We are still surprised people use the IFS, especially with the abundance of better storage solutions out there, but we can now provide our own apply process for IFS journal entries. Tracking of the JID and Object ID is carried out very effectively without the use of a DB table; it is very fast and has a very low CPU impact.
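To illustrate the CCSID conversion challenge mentioned above: stream file data journaled under an EBCDIC CCSID must be converted before it can be compared with or written as UTF-8. On IBM i this would typically go through iconv(); the sketch below uses Python's built-in cp037 codec as a stand-in for CCSID 37 (US EBCDIC), purely for demonstration.

```python
# Illustrative CCSID conversion: cp037 stands in for CCSID 37 (EBCDIC).
# A real apply process would use iconv()/QtqIconvOpen on the target system.
ebcdic_bytes = "Hello IFS".encode("cp037")  # bytes as journaled under CCSID 37
text = ebcdic_bytes.decode("cp037")         # interpret with the source CCSID
utf8_bytes = text.encode("utf-8")           # re-encode for a UTF-8 target file
```

The same text yields different byte sequences under the two encodings, which is exactly why a byte-for-byte apply without conversion would corrupt the target object.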

We are not finished yet, though. We are now working on support for some of the more obscure DB2 capabilities, plus the journal minimized entry-specific data option with *FLDBDY and *FILE support. We are also experimenting with identity columns and user-defined types, to name a few. You may not use these capabilities now, but having built the tests that allow us to exercise them within HA4i, I must admit I am going to use them a lot more in the future.

HA4i continues to improve as a solid high availability solution. Having already built a lot of new features into the latest version (7.1), we now have another set of features ready for release in the next PTF. If you are looking at HA, or want to reduce the cost of your current implementation, give us a call; we may surprise you with what we can offer. We might be small, but that does not stop us from developing first-class solutions at a cost you can afford.