In-Depth

Building the Bridge: Online Communication Between Heterogeneous Applications Helps Fight Welfare Fraud

The ability of different departments within the Commonwealth of Massachusetts to query each other's data online is helping to fight welfare fraud. A store-and-forward messaging system based on IBM's MQSeries provides realtime and batch communications to a wide range of agencies running the MVS, AIX, Windows NT, Windows 95 and HP-UX operating systems. In the past, matching welfare applications against wage data required two time-consuming manual tape mounts, as well as delivery of the tapes across town. Now, batch queries run automatically, and realtime validation will be possible once application updating is completed. The system was architected and designed with the assistance of systems integrator Systems Engineering Inc. (SEI).

The various agencies of Massachusetts enjoy a great deal of autonomy in their IT operations. The advantage of this approach is that each agency has developed systems well suited to its particular needs. The drawback, until very recently, has been the difficulty of communicating between the heterogeneous systems operated by the different agencies. For example, federal regulations require that the Department of Transitional Assistance (DTA) frequently match welfare recipients against wage data maintained by the Department of Revenue (DOR). In the past, this meant mounting a tape on the DTA system, downloading the welfare recipient database, delivering the tape to the DOR office, mounting it on a DOR drive, and finally running the matching application. The resulting matches were then delivered back to DTA and reloaded to complete the eligibility cycle.

Manual Drawbacks

This was a lengthy process that required a lot of human intervention at both agencies involved. Sometimes, tapes were mislabeled or lost, which meant that the matching operation was delayed. Another problem was that the data centers for the two agencies were located miles apart and there was only one delivery run between the two centers each day. If the tape was not ready by the time the driver left, then matching would have to wait for the next day. These same problems occurred whenever data communications were required between different Massachusetts agencies. For example, when children are placed in foster care by the Department of Social Services (DSS) due to abuse or neglect, the welfare department must be informed to reduce payments to their parents, and the DSS financial system must be notified to begin payments to their new foster parents.

About two years ago, the IT Division (ITD) began exploring various approaches to automating data-sharing among the different departments. This project gained impetus when it was determined that the main data center would move from Boston to Chelsea, making it even more difficult to share data with the other data centers located in Boston. In addition, various agencies had identified the need for realtime data-sharing, which was impossible under the existing system. ITD staff members identified two candidate technologies: CORBA object technology and IBM's MQSeries message brokering approach. According to Anna dos Santos, Director of the Enterprise Applications Bureau of the ITD, the ITD sought a consulting partner to conduct the studies, develop the plans and requirements, and design the architecture. "We selected SEI because they did an earlier study for the Human Services Secretariat on systems integration which was very impressive and, in fact, was an important precursor to the new implementation," dos Santos says.

Asynchronous Messaging

ITD and SEI staff members selected the store-and-forward asynchronous messaging approach because it had a considerably stronger track record in similar applications. A key advantage of asynchronous messaging in this application is that it allows the various agencies within the Commonwealth to maintain their closely guarded independence in application development. "Asynchronous messaging provides a unique architecture that allows developers to write applications without consideration for the eventual message destination," dos Santos notes. "It is very unobtrusive from the application programmer's standpoint, requiring only very minimal adjustments to the individual application. Another very important plus was that MQSeries ran on each of the operating systems that we use."

Architecturally, CommBridge, the new asynchronous messaging application, is divided into two components: information requesters and information services. In the realtime mode, the interface to CommBridge is a set of functions called from a business application; in the batch mode, there are executable programs that can be distinct from the business application creating the data files. The realtime mode uses a dynamic response queue, which is created with the initial request and deleted when the service response is delivered or the session ends. The batch mode uses static queues created during initial interface setup and configuration. Requests created by the business application are placed on the transmission queue and handled later, according to rules defined during initial setup.
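
The realtime request/reply pattern described above maps naturally onto the MQSeries MQI. The C sketch below shows one plausible shape for such a requester, assuming nothing about CommBridge's actual internals: it opens a model queue so the queue manager creates a dynamic response queue, sends a request whose descriptor names that queue as the reply destination, and waits for the correlated response. The queue manager name, queue names and payload are all hypothetical.

    /* Hypothetical realtime requester using the MQSeries MQI.
     * All queue and queue manager names are invented for illustration. */
    #include <stdio.h>
    #include <string.h>
    #include <cmqc.h>   /* MQSeries MQI definitions */

    int main(void)
    {
        MQHCONN hConn;
        MQHOBJ  hRequestQ, hReplyQ;
        MQOD    odRequest = {MQOD_DEFAULT};
        MQOD    odReply   = {MQOD_DEFAULT};
        MQMD    md        = {MQMD_DEFAULT};
        MQPMO   pmo       = {MQPMO_DEFAULT};
        MQGMO   gmo       = {MQGMO_DEFAULT};
        MQLONG  compCode, reason, dataLength;
        char    qmName[MQ_Q_MGR_NAME_LENGTH] = "COMMBRIDGE.QM";
        char    request[] = "WAGE-MATCH REQUEST (hypothetical payload)";
        char    reply[1000];

        /* Connect to the local queue manager. */
        MQCONN(qmName, &hConn, &compCode, &reason);
        if (compCode == MQCC_FAILED) {
            printf("MQCONN failed, reason %ld\n", (long)reason);
            return 1;
        }

        /* Open the service's request queue for output. */
        strncpy(odRequest.ObjectName, "DOR.WAGE.MATCH.REQUEST", MQ_Q_NAME_LENGTH);
        MQOPEN(hConn, &odRequest, MQOO_OUTPUT, &hRequestQ, &compCode, &reason);

        /* Open a model queue; the queue manager creates a dynamic
         * response queue that exists only for this exchange. The
         * generated name is returned in odReply.ObjectName. */
        strncpy(odReply.ObjectName, "SYSTEM.DEFAULT.MODEL.QUEUE", MQ_Q_NAME_LENGTH);
        strncpy(odReply.DynamicQName, "COMMBRIDGE.REPLY.*", MQ_Q_NAME_LENGTH);
        MQOPEN(hConn, &odReply, MQOO_INPUT_EXCLUSIVE, &hReplyQ, &compCode, &reason);

        /* Mark the message as a request and point the service at the
         * dynamic queue for its reply. */
        md.MsgType = MQMT_REQUEST;
        memcpy(md.Format, MQFMT_STRING, sizeof(md.Format));
        memcpy(md.ReplyToQ, odReply.ObjectName, MQ_Q_NAME_LENGTH);
        MQPUT(hConn, hRequestQ, &md, &pmo, (MQLONG)strlen(request), request,
              &compCode, &reason);

        /* Wait up to 30 seconds for a reply correlated to our request:
         * by convention the service echoes our MsgId as its CorrelId. */
        memcpy(md.CorrelId, md.MsgId, sizeof(md.CorrelId));
        memcpy(md.MsgId, MQMI_NONE, sizeof(md.MsgId));
        gmo.Options      = MQGMO_WAIT;
        gmo.WaitInterval = 30000;
        MQGET(hConn, hReplyQ, &md, &gmo, (MQLONG)sizeof(reply), reply,
              &dataLength, &compCode, &reason);
        if (compCode == MQCC_OK)
            printf("Response: %.*s\n", (int)dataLength, reply);

        /* A temporary dynamic queue is deleted automatically on close. */
        MQCLOSE(hConn, &hReplyQ, MQCO_NONE, &compCode, &reason);
        MQCLOSE(hConn, &hRequestQ, MQCO_NONE, &compCode, &reason);
        MQDISC(&hConn, &compCode, &reason);
        return 0;
    }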

Transparent to Application

The result is that business programmers are isolated from the complexities that middleware normally requires. The programmer includes simple verbs in the application's native programming language, and the service-and-requester framework routes responses to the appropriate requester program invisibly. A key element of CommBridge is a set of program templates, developed by SEI, that standardize the structure of new application interfaces and simplify their development. Templates were created for file transfers, realtime transactions and asynchronous, event-driven transactions. Once a template is created, it can be reused on other platforms by other users that require access to the same information.
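
On the service side, the invisible routing the framework performs is conventional MQSeries request/reply plumbing: read the request, do the work, and send the reply to whatever queue (and queue manager) the request named, echoing the request's message ID as the correlation ID. A hedged sketch, continuing the hypothetical names from the requester example above:

    /* Hypothetical service-side routine: the framework reads a request,
     * invokes the business logic, and routes the reply to the queue the
     * requester named. Connection setup is as in the requester sketch. */
    #include <string.h>
    #include <cmqc.h>

    void serve_one_request(MQHCONN hConn, MQHOBJ hServiceQ)
    {
        MQMD    md      = {MQMD_DEFAULT};
        MQGMO   gmo     = {MQGMO_DEFAULT};
        MQPMO   pmo     = {MQPMO_DEFAULT};
        MQOD    odReply = {MQOD_DEFAULT};
        MQHOBJ  hReplyQ;
        MQLONG  compCode, reason, dataLength;
        char    request[1000];
        char    reply[] = "MATCH RESULT (hypothetical payload)";

        /* Block until a request arrives on the service queue. */
        gmo.Options      = MQGMO_WAIT;
        gmo.WaitInterval = MQWI_UNLIMITED;
        MQGET(hConn, hServiceQ, &md, &gmo, (MQLONG)sizeof(request), request,
              &dataLength, &compCode, &reason);
        if (compCode != MQCC_OK)
            return;

        /* Business logic would run here; the service neither knows nor
         * cares which platform sent the request. */

        /* Route the reply wherever the requester asked, echoing the
         * request's MsgId as the reply's CorrelId so the requester
         * can match it. */
        memcpy(odReply.ObjectName, md.ReplyToQ, MQ_Q_NAME_LENGTH);
        memcpy(odReply.ObjectQMgrName, md.ReplyToQMgr, MQ_Q_MGR_NAME_LENGTH);
        memcpy(md.CorrelId, md.MsgId, sizeof(md.CorrelId));
        memcpy(md.MsgId, MQMI_NONE, sizeof(md.MsgId));
        md.MsgType = MQMT_REPLY;

        MQOPEN(hConn, &odReply, MQOO_OUTPUT, &hReplyQ, &compCode, &reason);
        MQPUT(hConn, hReplyQ, &md, &pmo, (MQLONG)strlen(reply), reply,
              &compCode, &reason);
        MQCLOSE(hConn, &hReplyQ, MQCO_NONE, &compCode, &reason);
    }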

Configuration files resident on both the requester and service hosts define the parameters for the specific interface, and are accessed by CommBridge functions. A configuration generator was developed to eliminate the repetitive hand work involved in implementing the various platform-independent MQSeries environments. The analyst describes to the generator the key information about a proposed service or requester, and the generator creates configuration files and installation scripts for MVS, NT, AIX and HP-UX servers, as well as Windows 95 clients. This saves time and eliminates the potential errors of hand coding. The business programmer does not need to know the details of the configuration files.
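
The article does not reproduce the generated files, but a requester-side configuration of this kind might plausibly carry entries along the following lines. Every name and keyword here is hypothetical, invented to illustrate the idea rather than taken from CommBridge:

    # Hypothetical requester configuration (illustrative only)
    INTERFACE       = DTA-DOR-WAGE-MATCH
    MODE            = REALTIME
    QUEUE_MANAGER   = COMMBRIDGE.QM
    REQUEST_QUEUE   = DOR.WAGE.MATCH.REQUEST
    MODEL_QUEUE     = SYSTEM.DEFAULT.MODEL.QUEUE
    REPLY_Q_PREFIX  = COMMBRIDGE.REPLY.*
    TIMEOUT_MS      = 30000
    PLATFORM        = MVS

Generating such files per platform, rather than writing them by hand, is what keeps the business programmer out of the queue-naming details.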

Pilot Project

CommBridge was developed within the context of a pilot project involving the DTA and DOR. The focus of the pilot was to eliminate the tapes altogether and establish robust electronic communication links between the two agencies. MQSeries for MVS/ESA was implemented on ITD's IBM 3090 mainframe used by some DTA applications, and was linked to DOR's Unisys 2200 platform via MQSeries running on a Windows NT server at the front end. "The successful pilot proved that CommBridge provides a unified architecture that allows platforms to communicate in the same way without knowing what system is at the other end," says Mark Heumann, ITD's CommBridge Project Manager. "A data matching process that previously took several days was reduced to a matter of minutes. Larger monthly data exchanges can now be run overnight and results returned the next morning."

Anna dos Santos and her team are now extending the CommBridge concept across the state. "All of the Commonwealth's systems will be able to communicate with each other in a standardized fashion using standard interfaces, eliminating the need to write a custom interface for each system in the network. We want other departments to identify additional services they can take advantage of. For example, our new Department of Social Services system, Family Net, now in production on an HP 9000, needs to be able to send payment requests to our mainframe accounting system. Other systems also need to access that same accounting system service, and CommBridge will be deployed for those in phases."

Unduplicated Lists

The new asynchronous messaging system also met a long-standing need: the ability to produce an unduplicated list of clients served by more than one agency. Legislators frequently inquire about the needs of individual constituents. In the past, determining the services provided to such a person could take an hour or more of phone calls to six different Massachusetts social services agencies. In addition, in the development of legislation, questions frequently arise as to how many people in a certain area are being served by the state. Answering them used to require manually collecting files from all of the agencies involved and running an unduplicated count. With CommBridge, this information can be generated in batch mode; realtime mode may be used in the future for other transactions.
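
Once CommBridge can collect the agency extracts in one place, the unduplicated count itself is simple. A minimal sketch of that counting step, assuming each extract reduces to a list of client identifiers (the IDs below are invented):

    /* Minimal sketch of an unduplicated count over client identifiers
     * gathered from several agency extracts; the data layout is assumed. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int cmp(const void *a, const void *b)
    {
        return strcmp(*(const char * const *)a, *(const char * const *)b);
    }

    /* Count distinct client IDs across the combined extracts.
     * Note: sorts the caller's array in place. */
    static size_t unduplicated_count(const char **ids, size_t n)
    {
        size_t i, count;
        if (n == 0) return 0;
        qsort((void *)ids, n, sizeof *ids, cmp);
        for (i = 1, count = 1; i < n; i++)
            if (strcmp(ids[i], ids[i - 1]) != 0)
                count++;  /* a client not seen in an earlier extract */
        return count;
    }

    int main(void)
    {
        /* Hypothetical IDs from DTA, DOR and DSS extracts combined. */
        const char *ids[] = {"C1001", "C2007", "C1001", "C3002", "C2007"};
        printf("Unduplicated clients: %zu\n",
               unduplicated_count(ids, sizeof ids / sizeof *ids));
        return 0;
    }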

Overall, the Commonwealth is very pleased with the performance of CommBridge. "The store-and-forward concept behind MQSeries makes a lot of sense when you're dealing with decentralized systems," dos Santos says. "They're not always going to be available, so you want to make sure you don't lose your messages. It gives us the best of all worlds. CommBridge will also enable the state to implement a more coordinated Electronic Commerce system by integrating Web servers or other interfaces. It will also enable the exchange of data with other states in the United States, and with the federal government. SEI has done an excellent job in developing an architecture and design that can be expanded almost infinitely."

"CommBridge has allowed us to build a framework for a more transaction-oriented, online environment," dos Santos concludes. "As new applications come on stream, the online and realtime characteristics of MQSeries will become more important to us in our future plans. Once systems and departments are able to share data, that opens up a whole range of things they can do. The system will save us considerable tax dollars by maximizing our revenue, streamlining operations and reducing fraud."

About the Author: Jack Hornfeldt is the CIO and Director of IT Services at the Division of Medical Assistance for the Commonwealth of Massachusetts.

Batch Management and Parallel Sysplex

Adoption and implementation of Parallel Sysplex clustering technology will likely be one of the largest industrywide projects in the immediate post-Y2K period. In early 1999, only 500 of the 4,500 OS/390 installations in North America were poised to fully exploit this technology; an additional 3,000 were just beginning to adopt it. Exploitation of this technology promises higher availability and scalability, lower total maintenance costs, and better service reliability for the end user. Unfortunately, achieving these results doesn't happen quickly or without considerable planning. Exploiting Parallel Sysplex is comparable in scale and complexity to other large technology implementations, such as application frameworks, ERP and client/server. To date, realization of the benefits of Parallel Sysplex has primarily involved parallel processing of online, CICS, DB2 and e-commerce applications. Because of the increasing demands of online systems, the growth in batch workload (5 percent to 10 percent per year, on average) must now be managed under pressure to decrease or eliminate the batch window.

A New Way to Look at Batch Management

Batch management in a Parallel Sysplex environment requires the adoption of new ideas and a fresh mindset. Traditional management methodology dictated a separation of systems by function or application, and further separated batch from online. This allowed work to be managed by type, and ensured that these 'virtual silos' did not impact each other's performance. Today, in a Parallel Sysplex environment, we need to view the individual OS/390 images as completely integrated components of a single entity. One of the benefits of Parallel Sysplex is that each of the individual images can, and should, be managed in the same way. Doing so reduces the cost of maintaining your systems: differences no longer need to be accounted for, system programmer time and effort are minimized, and all of your applications, both online and batch, become more 'mobile' and less susceptible to an unexpected outage. On the other hand, if you apply a change to one image, it is likely to affect all other images. The old assumption that batch or online work runs in one fixed place no longer holds. You have at your control a single, very large entity that can manage all of your work, regardless of type, faster and more reliably than ever before.

What does this mean for batch processing in these huge, mission-critical Parallel Sysplex environments? Foremost, a job scheduling/workload management system that works in the traditional way (i.e., Job B runs after the successful completion of Job A) will be ill-prepared for this new environment: batch management tools must be scalable, extensible and resource-based.
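
The article doesn't define "resource-based" in code, but the distinction can be made concrete. A traditional scheduler releases Job B when Job A completes; a resource-based scheduler also verifies, against the current state of the Sysplex, that every dataset, device, license and image the job needs is available before dispatching it. A simplified sketch, with every structure invented for illustration:

    /* Hypothetical readiness test contrasting dependency-only scheduling
     * with resource-based scheduling. All structures are invented and
     * stand in for a real scheduler's state. */
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct Job Job;
    struct Job {
        const char  *name;
        Job        **predecessors;   /* jobs that must complete first  */
        size_t       n_predecessors;
        const char **resources;      /* datasets, drives, licenses...  */
        size_t       n_resources;
        bool         completed_ok;
    };

    /* Stub standing in for the scheduler's realtime resource tracker;
     * a real tracker consults Sysplex-wide state. */
    static bool resource_available(const char *resource)
    {
        (void)resource;
        return true;
    }

    /* Traditional test: Job B runs after Job A completes successfully. */
    static bool dependencies_met(const Job *job)
    {
        for (size_t i = 0; i < job->n_predecessors; i++)
            if (!job->predecessors[i]->completed_ok)
                return false;
        return true;
    }

    /* Resource-based test: dependencies AND every required resource,
     * checked against the realtime state of each image and resource. */
    static bool ready_to_dispatch(const Job *job)
    {
        if (!dependencies_met(job))
            return false;
        for (size_t i = 0; i < job->n_resources; i++)
            if (!resource_available(job->resources[i]))
                return false;
        return true;
    }

    int main(void)
    {
        Job a = {"JOBA", NULL, 0, NULL, 0, true};
        Job *preds[] = {&a};
        const char *needs[] = {"WAGE.EXTRACT.FILE", "TAPE.DRIVE.POOL"};
        Job b = {"JOBB", preds, 1, needs, 2, false};
        return ready_to_dispatch(&b) ? 0 : 1;
    }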

OS/390 and Automation Tools as Partners

OS/390 Workload Manager is a key player in a Parallel Sysplex: it allows you to specify service goals for individual jobs, groups of jobs or business applications, and it dynamically alters the service priority of a job based on those goals. However, OS/390 Workload Manager can only act upon work that has been physically initiated. A comprehensive strategy in which OS/390 Workload Manager and the job scheduler share responsibility therefore becomes a requirement.

At all times, the job scheduler knows all of the pending work, the relative priority of each element of the workload, which images are most available to do work, and which resources are required for the successful completion of the work.

The job scheduler is also aware of the current status of all of the workload anywhere in the Sysplex, even spanning multiple Sysplexed environments.

Ideally, the job scheduler can analyze the most critical jobs in an application, or accept user input identifying them, and calculate the critical path through the application. The job scheduler can then manage the workload based upon realtime processing conditions, accommodating variances due to abends or unexpected increases in workload. Workload management focuses on critical jobs or applications to better meet goals and priorities.
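
At bottom, the critical-path calculation is a longest-path computation over the job dependency graph, weighted by expected run times. A minimal sketch under that assumption (a production scheduler would also fold in the realtime conditions just described; all durations and dependencies below are invented):

    /* Minimal critical-path sketch: jobs are processed in an assumed
     * topological order (dependencies before dependents), and each
     * job's earliest finish is its duration plus the latest finish of
     * its predecessors. All data is hypothetical. */
    #include <stdio.h>

    #define N_JOBS 5

    /* duration[i] = expected minutes for job i; dep[i][j] != 0 means
     * job j must finish before job i starts. Jobs 0..4 are assumed
     * to already be in topological order. */
    static const int duration[N_JOBS] = {10, 25, 5, 40, 15};
    static const int dep[N_JOBS][N_JOBS] = {
        {0, 0, 0, 0, 0},
        {1, 0, 0, 0, 0},   /* job 1 follows job 0         */
        {1, 0, 0, 0, 0},   /* job 2 follows job 0         */
        {0, 1, 1, 0, 0},   /* job 3 follows jobs 1 and 2  */
        {0, 0, 0, 1, 0},   /* job 4 follows job 3         */
    };

    int main(void)
    {
        int finish[N_JOBS];
        int critical = 0;
        for (int i = 0; i < N_JOBS; i++) {
            int start = 0;
            for (int j = 0; j < i; j++)   /* latest predecessor finish */
                if (dep[i][j] && finish[j] > start)
                    start = finish[j];
            finish[i] = start + duration[i];
            if (finish[i] > critical)
                critical = finish[i];
        }
        printf("Critical path length: %d minutes\n", critical);
        return 0;
    }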

The job scheduler and OS/390 Workload Manager work as a team to ensure that your workload can complete successfully even when problems occur. The two products are integrated to allow priority assignment in realtime. When a particular job or suite of jobs is not likely to meet the service level agreement, the job scheduler and OS/390 Workload Manager can initiate remedial action. Ideally, this capability can be driven by the service goals of the installation, but should also allow realtime intervention by the operations group where some special circumstance may exist.
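
The remedial-action trigger reduces to comparing a projected completion time against the service-level deadline. A simplified sketch, in which both the projection and the escalation hook are assumed stand-ins rather than real product interfaces:

    /* Hypothetical SLA check: if the projected completion of a job
     * suite slips past its deadline, request remedial action (for
     * example, a higher service priority). The projection and the
     * escalation hook are stand-ins, not product APIs. */
    #include <stdio.h>
    #include <time.h>

    /* Stand-in: a real scheduler projects completion from realtime
     * progress, run history and the critical path. */
    static time_t projected_completion(time_t now, long remaining_seconds)
    {
        return now + remaining_seconds;
    }

    /* Stand-in for the realtime intervention described in the text. */
    static void escalate(const char *suite)
    {
        printf("Escalating priority for %s\n", suite);
    }

    int main(void)
    {
        time_t now      = time(NULL);
        time_t deadline = now + 3600;   /* SLA: one hour from now        */
        long   remaining = 4500;        /* projected 75 minutes of work  */

        if (projected_completion(now, remaining) > deadline)
            escalate("NIGHTLY-WAGE-MATCH");  /* not likely to meet the SLA */
        return 0;
    }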

Resource-Based Management

Considerable financial incentives were offered to upgrade hardware and software to Parallel Sysplex-compliant levels. Despite these incentives, not all companies have licensed key software products across the entire Sysplex. Companies must decide how to manage the cost of software that is now potentially priced on the MIPS rating of the entire Sysplex. A good strategy allows the company to control costs, but it places additional requirements on batch, workload management and automation tools.

Workload management and automation tools must dynamically manage hardware and software as resources. These tools must know where each resource resides and whether it is available to batch work. Batch management must dynamically adapt to the realtime state of each image and resource. Workload can then be balanced across available images based on resource availability: your work processes more quickly and reliably without seriously impacting the performance of any one image, service reliability increases, and set service levels are met more consistently.
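
One plausible reading of resource-based balancing in code: for each ready job, dispatch to the image that carries the required software license and currently has the most headroom. A hypothetical sketch (image names, MIPS figures and the licensing flag are all invented):

    /* Hypothetical image-selection sketch: dispatch each ready job to
     * the image that has the required software licensed and currently
     * has the most spare capacity. All structures are invented. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    typedef struct {
        const char *name;
        bool        has_license;   /* key software licensed here?        */
        int         spare_mips;    /* realtime headroom, refreshed often */
    } Image;

    /* Return the index of the best image for the job, or -1 if none. */
    static int choose_image(const Image *images, size_t n,
                            int needed_mips, bool needs_license)
    {
        int best = -1;
        for (size_t i = 0; i < n; i++) {
            if (needs_license && !images[i].has_license)
                continue;               /* software not licensed here */
            if (images[i].spare_mips < needed_mips)
                continue;               /* image too busy right now   */
            if (best < 0 || images[i].spare_mips > images[best].spare_mips)
                best = (int)i;          /* prefer the most headroom   */
        }
        return best;
    }

    int main(void)
    {
        Image sysplex[] = {
            {"SYSA", true,  120},
            {"SYSB", false, 300},   /* more headroom, but not licensed */
            {"SYSC", true,  220},
        };
        int pick = choose_image(sysplex, 3, 100, true);
        if (pick >= 0)
            printf("Dispatch to %s\n", sysplex[pick].name);
        return 0;
    }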

Decreasing Batch Windows

Pressure to decrease the batch window and increase online availability has increased the need for parallelism in batch processes. Various 'piping' solutions have emerged to allow jobs or job steps to execute and process data in parallel, reducing the overall elapsed time of batch processing. A key consideration in the use of pipes is error recovery and management. With pipes, all of the jobs must start at approximately the same time, and all must complete successfully; if any job in the pipe fails, you may need to recover and restart several jobs. To ensure that the jobs in the pipe can execute successfully to completion, the job scheduler must evaluate the requirements for all of the jobs as if they were a single process, including the total resource requirements for all jobs in the pipe.
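
Treating the pipe as a single process suggests two aggregate checks: the pipe starts only when the combined needs of every member can be satisfied at once, and any member's failure marks the whole set for recovery. A hypothetical sketch with invented structures:

    /* Hypothetical sketch: jobs connected by a pipe are validated as a
     * single unit. The pipe starts only if the combined resource needs
     * of every member can be met simultaneously; if any member fails,
     * the whole set is marked for restart. Structures are invented. */
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct {
        const char *name;
        int         mips_needed;
        int         tape_drives_needed;
        bool        failed;
    } PipedJob;

    /* The pipe can start only when the sum of all members' needs fits
     * the resources available right now. */
    static bool pipe_can_start(const PipedJob *jobs, size_t n,
                               int mips_free, int drives_free)
    {
        int mips = 0, drives = 0;
        for (size_t i = 0; i < n; i++) {
            mips   += jobs[i].mips_needed;
            drives += jobs[i].tape_drives_needed;
        }
        return mips <= mips_free && drives <= drives_free;
    }

    /* One member failing means every member must be recovered and
     * rerun together; a partial restart would leave the pipe broken. */
    static bool pipe_needs_full_restart(const PipedJob *jobs, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (jobs[i].failed)
                return true;
        return false;
    }

    int main(void)
    {
        PipedJob pipe[] = {
            {"EXTRACT", 80, 1, false},
            {"SORT",    60, 0, false},
            {"LOAD",    40, 1, false},
        };
        /* Combined needs: 180 MIPS and 2 drives; assume 200 and 2 free. */
        return (pipe_can_start(pipe, 3, 200, 2) &&
                !pipe_needs_full_restart(pipe, 3)) ? 0 : 1;
    }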

As pipe processing becomes more widespread, new requirements will arise for rerun/restart products. They must have the ability to gather information about a particular job, which may have been split into steps and processed on several logical or physical machines, possibly out of logical job step sequence.

Moving Forward

Clearly, each of these issues must be carefully considered when choosing the automation tools to fully exploit a Parallel Sysplex environment. To get the most benefit, ensure that tight integration of automation and operating system tools is possible. Ensure that the tools are scalable and flexible enough to accommodate very dynamic environments. Most importantly, make sure the tools act upon the availability of the logical, data and system resources required to successfully meet your batch processing needs.

About the Author: Francis Meckevech is Product Planner at Cybermation Inc. in Markham, Ontario, Canada. He can be reached at [email protected].

