I seem to get less and less time these days to do all the things that need to be done, and I am sure everyone out there feels the same. We have just started final testing of the next PTF for HA4i, which adds some pretty nice features. This will probably be the last enhancement package for the current version, as the new version is already under development and will hopefully appear before the end of the year. So what's in this PTF? Here is a quick summary.
As usual we have added a number of bug fixes; none of them are major issues, but customers always push the product into territory we never imagined. The new features from the last PTF seem to have been a success, with a number of new enhancements built on top of that technology.
Sync Manager is the process that synchronizes a file, using the journal as a marker for bringing the save copy into play. Previously we locked the object from the time the request started right up until the object was restored. One customer in particular had a major issue with this: his 2MB link was so slow it was taking days (literally) to get files across the network. So one of the enhancements in this PTF is a shorter lock time; the object is locked only for the duration of the save operation and is then released back to the users. HA4i now manages the lock status for the apply process to ensure the object is available when required on the target system. As part of this update we also added reloading of failed requests and a much better interface into the process, so you can now see what is being processed and what stage it is at. If the customer wants to bypass the network and use a save and restore process via tape etc., HA4i now provides a sync point process that allows this to be carried out easily and effectively, and new compression features significantly reduce the size of the save files we create. We also added a new feature to the status screens on the target that not only allows submission of failed objects via the source sync manager but also provides more data about the object in error, such as its size; this is very important in determining what type of recovery to use when re-syncing the object.
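The shorter-lock idea can be sketched in a few lines. This is a toy illustration only, assuming a lock/save/send split along the lines described above; the class and method names are made up and are not the HA4i API.

```python
from contextlib import contextmanager

class SyncManager:
    """Toy sketch of the shorter-lock sync: the object is locked only
    while it is being saved, not for the whole transfer/restore cycle.
    All names here are illustrative, not the HA4i API."""

    def __init__(self):
        self.events = []                  # trace of the steps, for illustration

    @contextmanager
    def locked(self, obj):
        self.events.append(("lock", obj))
        try:
            yield
        finally:
            self.events.append(("unlock", obj))

    def save(self, obj):
        self.events.append(("save", obj))
        return f"SAVF({obj})"             # pretend save-file name

    def send(self, savf):
        self.events.append(("send", savf))

    def sync(self, obj):
        with self.locked(obj):            # lock held only for the save itself
            savf = self.save(obj)
        self.send(savf)                   # network transfer happens unlocked
        return savf

mgr = SyncManager()
mgr.sync("MYLIB/MYFILE")
# the unlock is recorded before the send: users get the object back
# while the save file is still shipping across the network
assert mgr.events.index(("unlock", "MYLIB/MYFILE")) < \
       mgr.events.index(("send", "SAVF(MYLIB/MYFILE)"))
```

The point of the design is visible in the trace: on a slow link the expensive step is the transfer, and that now happens with no lock held.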
Auditing is a safeguard that lets users verify the replication processes are doing their job and the configurations are actually replicating everything they should. Because HA4i applies data at the receiver level, any data held in the attached receiver on the target system must be applied before the audit starts. A new audit process automatically applies the attached receiver's data before the audit begins, and it allows a more granular approach to file auditing by letting source and data files be isolated in the audit. Because of the new features added to the sync process, you can now sync audited objects via the sync manager with a single option, so an out-of-sync object is very easily brought back into a synchronized state.
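The drain-then-compare ordering matters, so here is a minimal sketch of it. The data structures are invented for illustration (simple dictionaries standing in for file state and the attached receiver), not the real audit internals.

```python
def audit(files, pending, source_state, target_state):
    """Apply any data still sitting in the attached receiver to the
    target, THEN compare each file against the source.  Returns the
    files that are out of sync, ready to be resynced with a single
    sync-manager option.  All structures here are illustrative."""
    for f, data in pending.items():        # drain the attached receiver first
        target_state[f] = data
    return [f for f in files
            if source_state.get(f) != target_state.get(f)]

src = {"A": "v2", "B": "v5", "C": "v1"}
tgt = {"A": "v1", "B": "v5", "C": "v0"}
pending = {"A": "v2"}                      # A's update is still in the receiver
out_of_sync = audit(["A", "B", "C"], pending, src, tgt)
print(out_of_sync)  # → ['C']
```

Without the drain step, "A" would be flagged as a false mismatch even though its update was already on the target system, just not yet applied.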
Object replication requires objects to be available for saving, and one of the major issues we see is the lock state of newly created objects. Some objects have notifications stating that the new object exists which are available to be processed before the object itself is available. Even with delay processing, the object cannot always be locked for the save, so HA4i logs a failure. Over the course of a day on highly active systems this can result in a very large number of failures that need to be retried. This PTF introduces a number of new commands that allow automated retry processing for the object and spool file replication processes. When used via job schedule entries, they ensure every attempt is made to replicate objects without impacting current object processing, and they log those entries that were not processed because the object no longer exists; this can be important when deciding which objects to exclude from processing. Previously we added an entry for every object failure, so an object that changed constantly could generate multiple failure requests; a new filtering feature ensures we record only one entry per object, and if it is a command we even check the command string to ensure it is unique.
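The one-entry-per-object filtering can be shown with a short sketch. The entry layout here is hypothetical; the real failure records will carry more detail.

```python
def filter_failures(failures):
    """Collapse repeated failures to one entry per object; for command
    entries, keep one entry per distinct command string instead.
    The dictionary layout is illustrative, not the HA4i record format."""
    seen = set()
    retry_list = []
    for entry in failures:
        # commands are keyed on the full command string, objects on name
        key = entry["command"] if entry["type"] == "CMD" else entry["object"]
        if key not in seen:
            seen.add(key)
            retry_list.append(entry)
    return retry_list

failures = [
    {"type": "OBJ", "object": "MYLIB/ORDERS"},
    {"type": "OBJ", "object": "MYLIB/ORDERS"},   # same object, dropped
    {"type": "CMD", "object": "*N", "command": "CLRPFM MYLIB/WORK"},
    {"type": "CMD", "object": "*N", "command": "CLRPFM MYLIB/WORK2"},  # distinct
]
retries = filter_failures(failures)
print(len(retries))  # → 3
```

A constantly changing object now produces a single retry entry instead of one per failed save, which is what keeps the retry queue manageable on busy systems.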
Role swaps are one area every HA solution has to perform effectively. HA4i has provided a single-command role swap capability for each system for some time, but this PTF brings a new feature that allows the role swap to be started from the source system, automatically carrying out the required processes on the target system. If the process fails, you can restart it from the source system or, if needed, start each side individually. We have also added exit points to the process that allow the customer to run additional processes which are not part of the HA4i role swap, such as starting subsystems, varying on communications lines, etc.
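The exit-point pattern is worth a small sketch. The hook names and step names below are invented for illustration; the actual HA4i exit points and role swap steps are product-specific.

```python
def role_swap(swap_steps, exit_before=None, exit_after=None):
    """Run the role-swap steps in order, calling optional customer exit
    programs around them (e.g. to quiesce applications beforehand, or
    start subsystems and vary on lines afterwards).  Names are made up."""
    trace = []
    if exit_before:
        trace.append(exit_before())    # customer code runs before the swap
    for name, step in swap_steps:
        step()                         # the product's own swap processing
        trace.append(name)
    if exit_after:
        trace.append(exit_after())     # customer code runs after the swap
    return trace

trace = role_swap(
    [("end_source", lambda: None), ("start_target", lambda: None)],
    exit_before=lambda: "EXIT:quiesce_apps",
    exit_after=lambda: "EXIT:start_subsystems",
)
print(trace)
```

The design choice here is that customer-specific work stays outside the product's swap logic, so a failed swap can be rerun without re-entangling it with site procedures.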
HA4i is expected to run 24×7 without a break, and that can result in some very big job logs and message queues. Even though job logs and message queues can be wrapped to provide some size limitation, trawling through a job log of 6,000-plus pages can be a real drag! So we have added a new feature that automatically restarts the processes every 24 hours, keeping those job logs to a manageable size. Messaging has also been reduced further to weed out many of the unnecessary messages sent to the message queues and processes, but putting the product into debug mode brings those messages back so we can see exactly what is going on when required.
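Conceptually the restart feature is just a supervisor loop: end the job cleanly on a schedule and start a fresh one, so each job (and its job log) lives at most one cycle. A minimal sketch, with wholly invented names and a cycle count in place of the real 24-hour wait:

```python
def supervise(start_job, end_job, wait, cycles):
    """Restart a long-running replication job each cycle so its job log
    never grows unmanageably large.  In production the wait would be
    ~24 hours; here it is a stub so the sketch runs instantly."""
    for _ in range(cycles):
        job = start_job()   # fresh job, fresh (small) job log
        wait()              # run for one cycle
        end_job(job)        # controlled end, then loop to restart

started, ended = [], []
supervise(
    start_job=lambda: started.append("job") or len(started),  # returns job id
    end_job=lambda j: ended.append(j),
    wait=lambda: None,
    cycles=3,
)
print(started, ended)  # three jobs started and each one cleanly ended
```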
The next release is going to be a major change to the product: we have decided to provide our own apply process that reads the remote journal and applies the entries directly to the objects. The APYJRNCHG command does a great job, but there are times when some of its behaviors affect the customer experience, and IBM is either unwilling or unable to make a change that would render the process more acceptable. The IBM apply process will still be available, so when you set up the product and configure the journals you will be able to choose which apply process to use for the journal in question. This will open up a number of new options for auditing and object control; because we will be reading through the journal one entry at a time, we will be able to provide a command processing feature that allows commands to be submitted in sequence with the data changes. We will also provide audit details at the record level without having to check that all data has been applied to a particular point; each entry will carry the source audit information, which should match exactly with the target data at that point in time.
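The single-pass apply described above can be sketched as a loop over journal entries. The entry layout and journal codes below are simplified stand-ins, not the real journal entry format, and the command code is invented for illustration.

```python
def apply_journal(entries, apply_record, run_command):
    """Sketch of a one-entry-at-a-time apply: data changes are applied to
    their objects, and command entries are executed in sequence with
    them, preserving the original ordering from the journal."""
    for entry in entries:
        if entry["code"] == "CMD":                 # command queued in the journal
            run_command(entry["data"])
        else:                                      # record-level data change
            apply_record(entry["object"], entry["data"])

log = []
entries = [
    {"seq": 1, "code": "PT", "object": "ORDERS", "data": "row1"},
    {"seq": 2, "code": "CMD", "object": None, "data": "CALL PGM(ENDOFDAY)"},
    {"seq": 3, "code": "PT", "object": "ORDERS", "data": "row2"},
]
apply_journal(
    entries,
    apply_record=lambda obj, data: log.append(f"apply {obj}:{data}"),
    run_command=lambda cmd: log.append(f"cmd {cmd}"),
)
print(log)  # the command runs between the two record applies, in sequence
```

This in-sequence property is what an APYJRNCHG-style bulk apply cannot offer, and it is also what makes record-level auditing possible: each entry is seen individually as it is applied.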
We still have a long way to go with the next release, but early testing of the new apply process is going well. The feedback from customers is positive, and we hope the new features add even more value to their investment in our products. HA4i is a very viable HA solution and easily fits into the most constrained budget. If you need an HA solution or want to replace an existing one, let us know; we are sure the price and functionality will meet with your approval. For customers looking to replace an existing solution, we have an offer on our JobQGenie product that makes the upgrade to HA4i even more attractive.