In-Depth

The Integration Dilemma: Solving Legacy Application Compatibility

Not long ago industry analysts were predicting the steady demise of the mainframe. By the year 2000, many applications running on MVS were expected to be re-hosted on UNIX. The few legacy applications that remained on the mainframe would represent a small technology niche. Clearly this is one technology trend that did not turn out as expected. The MVS market has not declined but rather experienced a very healthy resurgence. Older applications revamped for Year 2000 compliance are often kept on the mainframe and now function as back-end servers for newly implemented Windows and HTML clients. Market leaders in packaged applications, such as SAP and PeopleSoft, which five years ago were expected to speed up the mainframe’s demise, now have mainframe initiatives planned or in the works.

Why the resurgence? Simple economic and technical common sense. System/390 prices have been dropping at an average rate of 25 percent while system performance and reliability have steadily improved. No one scoffs at annual price/performance improvements that remain steady between 30 and 35 percent. In addition, enhanced System/390 support for open network standards such as TCP/IP and support for CORBA, DCOM and Java object technologies make the mainframe a viable and highly reliable server solution for the most technologically advanced enterprise.

Why Integrate?

The glowing future of mainframes should not be mistaken for a return to the time when mainframes functioned as isolated standalone systems. IS departments will continue to run legacy applications on mainframes and in some cases roll out new applications on MVS. But as companies increasingly buy rather than build their application solutions, the majority of new enterprise applications will be rolled out on either UNIX or NT. Thus, the long-term viability of mainframe applications requires that they operate in highly heterogeneous and increasingly distributed environments.

Defining the Scope of the Integration

The first step in any integration project is to clearly define the business problems you are seeking to solve. Most integration projects are initiated as part of an effort to streamline a core business process that touches many areas of the enterprise.

Let’s assume you are a manufacturer and you’ve determined that there is tremendous advantage in improving how you process customer returns by automating the process end-to-end. To process a return in its entirety requires coordination between three different organizations: customer service, finance and manufacturing. All three organizations manage their part of the business process using applications geared specifically to their organizational needs. Let’s further assume that your customer service department has purchased an NT-based packaged help desk application. The finance department runs its end of the business on a packaged ERP solution that runs on UNIX. Manufacturing and inventory management processing, however, is done on an internally developed IMS application running on MVS. Your goal is to integrate these three disparate systems so that when necessary they function as a single logical system.

To fully integrate business processing requires integration on three levels:

  • Shared information needs to be kept consistent.
  • Information pertinent to multiple organizations must be readily accessible across organizations regardless of who actually manages it.
  • A truly integrated solution will ensure that business transactions that logically span the enterprise are processed across the underlying applications as a single unit of work.

Returning to our example of returns processing, customer, product and parts information must be kept consistent across all three applications. Your service application must have up-to-date information on part and product IDs in order to successfully coordinate with the inventory management system. Customer information across all three systems must be consistent if the three systems are to work successfully together on a single customer transaction.

Your integration strategy should provide all three organizations with the ability to view information relevant to the return without compromising information security. If a return cannot be processed because of a problem with the customer’s account or because an essential part is on back order, the service representative needs to be provided with this information while the customer is on the phone. This means the customer service application needs to be able to readily retrieve information managed in the finance and inventory management systems.

If from a business practice perspective a customer return is viewed as a single unit of work, then it should be processed as such by the three applications. The business process "customer return" may translate to an RMA in the customer service application, an accounts receivable transaction in the finance system and a work order in manufacturing. But at an integrated enterprise level, it is a single customer return process.

Integrating Incompatible Systems

Once the integration requirements are well known, the next step is defining your implementation strategy. Two requirements that should be at the top of your list are:

1) The solution should be flexible enough that if you need to upgrade an existing application or replace it with a new one, you will not be forced to rewrite large portions of your integration logic.

2) The solution should be as non-intrusive to existing applications as possible. This is particularly true for legacy applications that have performance and reliability requirements that cannot be compromised.

Integration Server Technologies vs. Point-to-Point Solutions. A point-to-point solution directly ties one application to another. It requires minimal abstraction of process and data and can initially appear to be the cleanest and most straightforward approach. In the long run this is rarely the case. Your integration requirements will grow with your enterprise, and point-to-point integration provides no growth path. As the number of systems you need to integrate increases, a point-to-point solution, as Figure 1 shows, becomes too complex to implement or manage over time.

The growing market for message broker technologies and packaged integration, or processware, solutions is a direct response to the need to move away from point-to-point solutions. These solutions leverage distributed object-oriented technology because it is a good fit for supporting many-to-many integration topologies. Such an approach adds a layer of abstraction that decouples a function request from the application that services the request. When an object-oriented solution is used, the customer service application submits a request for a work order ID to a central integration processor rather than requesting the information directly from the manufacturing system. The local application is shielded from the complexity of the larger enterprise.

Integration Server - Message Brokers. It’s the job of a message broker, also referred to as an integration server, to function as a central integration coordinator and distributed transaction manager. Applications send service requests to the integration server rather than directly to other applications. The server determines where the request is to be routed and makes sure that the request and associated data are transformed into a format readable by the destination application. The server handles service replies in a similar manner. Thus, integration servers fulfill the first requirement to maximize flexibility: they provide a solution that minimizes dependencies between specific applications. Figure 2 depicts the reduced complexity resulting from centralized integration management.
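To make the routing-and-transformation idea concrete, the following sketch shows, in simplified Java, the core responsibility of an integration server: callers address a logical service name, the server looks up which application handles that service, and a destination-specific transformation reshapes the request before delivery. The class, interface and service names are illustrative only and do not correspond to any particular product’s API.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a broker that routes requests by logical
// service name and applies a per-destination data transformation.
public class IntegrationBroker {

    // Reshapes a request into the layout the destination application expects.
    public interface Transformer {
        Map<String, String> transform(Map<String, String> request);
    }

    // Knows how to deliver a request to one specific application.
    public interface Endpoint {
        Map<String, String> invoke(Map<String, String> request);
    }

    private final Map<String, Endpoint> routes = new HashMap<>();
    private final Map<String, Transformer> transformers = new HashMap<>();

    // Register the application (and transformation) that services a request type.
    public void register(String service, Endpoint endpoint, Transformer transformer) {
        routes.put(service, endpoint);
        transformers.put(service, transformer);
    }

    // Callers name the service (e.g. "getWorkOrderId"), never the application.
    public Map<String, String> submit(String service, Map<String, String> request) {
        Endpoint target = routes.get(service);
        if (target == null) {
            throw new IllegalArgumentException("No route registered for " + service);
        }
        return target.invoke(transformers.get(service).transform(request));
    }
}
```

The customer service application only ever calls submit(); if manufacturing later replaces its system, only the registered endpoint and transformation change, not the requesting applications.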

Integration servers are also designed to fulfill the second requirement: minimizing impact on local applications. They do so through what is known as publish-and-subscribe event handling. Most integration servers leverage two communication technologies, request brokering and intelligent message routing. Request brokering provides a naming service that allows software entities or components within a distributed environment to be location independent. Components register themselves and the interfaces they support with the broker. Clients and applications communicate with the broker to find out what services are available and then submit all distributed requests to the broker for processing.

Most brokers are built on either the CORBA standard defined by the Object Management Group (OMG) or on the Distributed Component Object Model (DCOM) defined by Microsoft. The newly defined Java standard that provides the same services is Enterprise JavaBeans.

Object broker technologies provide integration servers with a distributed framework, but they generally don’t provide assured delivery and don’t include asynchronous support. Message Oriented Middleware (MOM) is used to deliver this functionality. The asynchronous nature of MOM is a key factor in providing an integration strategy with minimal impact on the local applications. The message sender hands a message to the MOM component and then moves on to the next task. The MOM delivers the message to a queue at the receiving end. The message remains in the queue until the receiver processes it. The integration framework provided by integration servers includes a publish-and-subscribe notification system that runs on top of these technologies. Business events processed by one division’s application that are relevant to other parts of the enterprise are published via the integration server to all interested applications.
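The minimal Java sketch below illustrates the decoupling that publish-and-subscribe over queued messaging provides. In a real deployment the queues would live inside a MOM product offering assured delivery and persistence; here, in-memory queues simply show that the publisher hands off the event and moves on, while each subscriber drains its own queue on its own schedule. All names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative in-memory publish-and-subscribe: one queue per subscriber,
// no assured delivery, used only to show the asynchronous hand-off.
public class EventBus {

    private final List<BlockingQueue<String>> subscribers = new ArrayList<>();

    // A subscribing application registers interest and gets its own queue.
    public BlockingQueue<String> subscribe() {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        subscribers.add(queue);
        return queue;
    }

    // The publisher does not wait for any subscriber to process the event.
    public void publish(String event) {
        for (BlockingQueue<String> queue : subscribers) {
            queue.offer(event);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        EventBus bus = new EventBus();
        BlockingQueue<String> financeQueue = bus.subscribe();

        // The manufacturing application publishes and immediately continues.
        bus.publish("RETURN_RECEIVED:RMA-1234");

        // The finance application consumes the event on its own schedule.
        System.out.println("Finance picked up: " + financeQueue.take());
    }
}
```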

For applications that don’t natively support an externalized object interface, an application connector or adapter must be built that will perform that function for the application. Tools used to build application connectors must support data transformation procedures, message routing rules and workflow. Integration servers also provide connector interfaces to some of the leading packaged applications.

Processware – Packaged Application Integration. The most recent addition to the integration arena is the development of packaged integration solutions. In addition to the infrastructure just described, processware vendors develop the application logic required to provide a complete integration solution. Processware vendors select integration areas such as front office to back office, human resources to finance, or ERP to forecast planning. They then develop integration applications that address cross-application process requirements for the selected areas. These pre-built processes may integrate returns processing, order generation, inventory management and sales history with demand planning.

Processware also includes development tools for customizing pre-built components and for building custom application connectors. These products also include connection interfaces to packaged applications. But because their offerings include fully developed integration solutions, the connector interfaces for packaged applications such as SAP R/3 come ready-built to support these processes.

Choosing a Solutions Strategy

Opting for a solution that avoids a point-to-point strategy will significantly reduce the often very high maintenance overhead associated with application integration. Unless you have a large development organization populated with systems programmers well versed in middleware and object-oriented programming, consider a pre-built solution whenever possible. Building your own infrastructure from the ground up is feasible but rarely cost effective.

If your integration requirements include integrating your legacy applications with packaged applications, the processware approach is even more cost effective. You will still have to determine what calls need to be invoked in your legacy system, but the access logic and data transformations associated with the packaged applications will be pre-built for you. In addition, processware delivers the integration logic that ties the applications together. That can result in tremendous time and cost savings.

Distributed event-based strategies are excellent for handling integration that needs to occur in real time. But they do not eliminate the need for batch processing. For integration scenarios that can be handled offline and that involve large sets of events, batch is hard to beat. Batch is particularly optimal for integrating operational systems with analytical systems such as data warehouses. Since batch interfaces are well supported on the mainframe, this approach does not require Herculean efforts on the legacy system side. No enterprise integration solution is complete without it.

Selecting the Right Interface to your Legacy Application

Regardless of whether you choose a packaged solution or decide to build your own, the portions of your solution that interact with your legacy applications will be customized components. These components will need to provide methods for accessing transactions and data on the mainframe. They must handle data conversion issues and simulate event notification if event-based messaging is part of your integration strategy.

Packaged integration providers will usually supply you with a means for building interfaces between their integration frameworks and your mainframe transaction systems. Supported interfaces generally come in three flavors:

*One approach involves providing an object interface to mainframe transactions and data. This is called object wrapping.

*The second approach involves mapping mainframe specific access methods to an external interface through metadata. Interfaces that use XML to provide external access are examples of this strategy.

*The third approach uses standard database access interfaces such as SQL and ODBC.

These are all viable strategies even if you’re not opting for a packaged solution. They leverage standardized technologies that are widely supported, thus providing you with interfaces that can be reused to connect your mainframe with many different external systems. Object wrappers, for instance, are particularly effective for integrating legacy applications with Web applications. ODBC and SQL provide excellent means for connecting mainframe applications with two-tier client/server applications. In the case of large enterprises, a strategy that leverages all three approaches is appropriate.
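As a simple illustration of object wrapping, the Java sketch below exposes a legacy inventory inquiry as an ordinary method, hiding the transaction name and fixed-format record layout behind it. The transaction code "INVQ", the record layout and the LegacyGateway interface are hypothetical; whatever middleware you choose would supply the actual mechanism for reaching the mainframe.

```java
// Illustrative object wrapper: external systems call a Java method and
// never see the underlying IMS/CICS transaction or its record format.
public class InventoryWrapper {

    // Stand-in for whatever mechanism actually reaches the mainframe
    // (RPC, queued message, 3270 bridge); supplied by your middleware.
    public interface LegacyGateway {
        String call(String transactionId, String inputRecord);
    }

    private final LegacyGateway gateway;

    public InventoryWrapper(LegacyGateway gateway) {
        this.gateway = gateway;
    }

    // The wrapped operation: an ordinary method signature on the outside.
    public int getQuantityOnHand(String partId) {
        // Build the fixed-length input record the legacy transaction expects.
        String input = String.format("%-10s", partId);
        String output = gateway.call("INVQ", input);
        // Parse the quantity field out of the fixed-format reply record.
        return Integer.parseInt(output.substring(0, 7).trim());
    }
}
```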

Native Access Methods. How you access process logic and data in your legacy applications will of course depend on the type of legacy application you are integrating. Most middleware products that interface with IBM legacy systems that don’t natively support open interfaces do so through one of two methods: remote procedure calls and 3270 terminal emulation.

*Remote Procedure Calls - If your system is either an IMS or a CICS transaction system, the most logical access method to use is an RPC interface. Although this approach is generally assumed to be highly intrusive and to require major enhancements to the local applications, this is often not the case for IBM legacy applications, where RPC interfaces are very mature. In fact, MQSeries, which is the cornerstone of IBM’s message broker solution, relies heavily on RPC interfaces for accessing CICS and IMS transaction systems.

RPC interfaces tend to be the most easily maintained and deliver the best performance. You do, however, need to know a considerable amount about the internal processing logic of your legacy system: enough to map specific RPC calls to the methods or functions your external systems need to access.

*3270 Terminal Emulation - In cases where detailed information and knowledge of the legacy application is not readily available, 3270 terminal emulation is a reasonable option. The External Presentation Interface (EPI) processes requests via a 3270 terminal data stream. EPI essentially allows your system to mimic a 3270 terminal. The advantage of such an approach is that it requires minimal knowledge of the internal processing logic of the CICS system. It is also a simpler interface to use.
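The screen-scraping style of access looks roughly like the following Java sketch: the external system fills fields on the legacy application’s inquiry screen and reads fields off the reply screen, exactly as a terminal operator would. The ScreenSession interface and the field positions are hypothetical; real EPI toolkits provide their own equivalents.

```java
// Illustrative screen-scraping access: drive the legacy application
// through its 3270 screens rather than its internal program logic.
public class ReturnStatusScraper {

    // Hypothetical terminal-session abstraction; an EPI toolkit would
    // supply the real one.
    public interface ScreenSession {
        void putField(int row, int col, String value); // type into a field
        void sendEnter();                              // submit the screen
        String getField(int row, int col, int length); // read a field back
    }

    private final ScreenSession session;

    public ReturnStatusScraper(ScreenSession session) {
        this.session = session;
    }

    // Navigate the inquiry screen exactly as a terminal operator would.
    public String lookupOrderStatus(String orderNumber) {
        session.putField(3, 20, orderNumber); // order-number input field (assumed position)
        session.sendEnter();
        return session.getField(10, 20, 12);  // status field on the reply screen (assumed position)
    }
}
```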

Event Simulation. One of the toughest hurdles you’ll have to face is compensating for the fact that legacy systems were not built to work in a heterogeneous, distributed runtime environment. Legacy systems were built to process scheduled batch requests or requests on demand. They were not built to participate in publish-and-subscribe, event-based messaging systems. Your integrated solution will need to know when a relevant business event occurs within the legacy application. You will need to develop a way of simulating event notification within your legacy system.
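One common way to simulate event notification, sketched below in Java, is to have the legacy application (or a trigger or batch step) write each business event to a staging table that the distributed side polls and publishes into the integration server. The table name, column names and JDBC connection details are assumptions for illustration only.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Illustrative event simulation: poll a staging table the legacy system
// writes to, publish each new row, then mark it as processed.
public class LegacyEventPoller {

    // Assumed connection details; substitute your own database and credentials.
    private final String url = "jdbc:db2://mainframe:446/PRODDB";

    public void pollOnce() throws SQLException {
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             PreparedStatement select = con.prepareStatement(
                 "SELECT EVENT_ID, EVENT_TYPE, PAYLOAD FROM EVENT_STAGING " +
                 "WHERE PROCESSED = 'N'");
             ResultSet rs = select.executeQuery()) {

            while (rs.next()) {
                // Hand the event to the integration server here (publish step).
                System.out.println("New legacy event: " + rs.getString("EVENT_TYPE"));

                // Mark the row so the same event is not published twice.
                try (PreparedStatement update = con.prepareStatement(
                         "UPDATE EVENT_STAGING SET PROCESSED = 'Y' WHERE EVENT_ID = ?")) {
                    update.setLong(1, rs.getLong("EVENT_ID"));
                    update.executeUpdate();
                }
            }
        }
    }
}
```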

Final Comments

Developing an enterprise-wide integration strategy is probably the most difficult undertaking IS managers face today. The good news is there are currently many integration products on the market that can help alleviate some of the complexity. When evaluating potential solutions for your enterprise, keep the following considerations in mind.

When developing your integration plan, always keep all your business objectives at the forefront and keep them well prioritized. This is particularly crucial when you are integrating mainframe transactional systems with applications that are not as mission critical.

Beware of opting for a "one-size-fits-all" approach. It may simplify implementation and on-going system management, but it may also result in sub-optimal performance for your integrated processes as well as your standalone applications. An overly homogeneous strategy can also undermine functionality. If you decide to opt for a single-vendor solution, choose one that provides the breadth of technology you need. If that means synchronous component processing, asynchronous message processing and record-oriented batch processing, look for a vendor that can provide you with the right technology mix.

Of course the opposite is also true. Don’t over complicate your solution. Go with the simplest approach that delivers the functionality you need.

Make sure your plan takes long-term issues into account. Think about how upgrades to the underlying applications will affect your integration solution.

A point-to-point solution may be easier to implement, but if it cannot be readily upgraded to accommodate changes in the underlying applications, it could compromise your company’s ability to keep applications up-to-date.

Leverage open standards as much as possible. Integrated processes by definition operate across heterogeneous applications. Standards help minimize the cost of that diversity.

Whenever possible, avoid integration strategies that require massive changes to the applications you are integrating. The time and resources required to implement the changes may be difficult to estimate and may have to be re-applied with every upgrade to any of the systems involved. This is a particularly salient point for companies that are integrating legacy applications with packaged applications, which generally rev at least once or twice a year.

Don’t underestimate the complexity involved in defining the process logic specific to the integration. If a processware solution is available that provides that functionality, make sure you include it in your evaluation process.


ABOUT THE AUTHOR:

Rachel Helm is the Director of Technology Strategy at CrossWorlds Software (Burlingame, Calif.), a software company that develops processware solutions. She has been involved in the client server software industry for over 18 years working in product planning, product development and customer support.
