In-Depth

E-Business Integration Solutions

E-business is radically changing the way business is done. The new e-business models are enabled by emerging technologies that allow companies to more effectively unite with customers, partners and suppliers in an "extended enterprise."

Companies that can successfully leverage these new technologies and new ways of doing business will position themselves for unprecedented growth. However, one of the key challenges of rapidly delivering e-business solutions is building new systems and integrating them with existing systems.

e-Business Integration Solutions (e-BIS) is the process of rapidly delivering scalable integration solutions that enable e-business to deliver on its promises. e-BIS solves the problem of integrating business processes and getting critical information to the right place at the right time in the right format to support the fast-paced demands of e-business.

e-BIS is broadly applicable to e-business initiatives ranging from consumer-oriented e-commerce and business-to-business data exchange to customer relationship management and supply chain integration.

Delivering e-Business Integration Solutions

First-generation integration technology included message queuing, message brokers and data transformation engines. Now a newer, more powerful technology, the integration broker, can be combined with innovative integration techniques, captured as integration patterns, to deliver value-added e-business integration solutions. Integration patterns provide proven techniques and reusable designs for solving common integration problems. Integration brokers provide the technical infrastructure for integrating systems and implementing those patterns.

Integration Brokers. Integration brokers serve as intermediaries in handling integration tasks, just as message brokers serve as intermediaries in message-oriented communication and object request brokers serve as intermediaries in object-oriented communication.

Integration brokers provide a wide range of integration services for business process integration, data transformation, data routing, and integration with messaging systems, databases, directory services, file systems and ERP packages. Modern integration brokers also support client integration (as well as server integration) and a wide variety of interaction patterns (synchronous, asynchronous, publish/subscribe, etc.).
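
To make one of these interaction patterns concrete, the sketch below publishes an event through the standard Java Message Service (JMS) API, which most integration brokers can interoperate with. It is a minimal illustration: the JNDI names ("TopicConnectionFactory", "price.updates") are assumptions standing in for whatever names a broker administrator actually registers.

    import javax.jms.*;
    import javax.naming.InitialContext;

    public class PriceUpdatePublisher {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();
            // Assumed JNDI names; use whatever the broker administrator registered.
            TopicConnectionFactory factory =
                (TopicConnectionFactory) jndi.lookup("TopicConnectionFactory");
            Topic topic = (Topic) jndi.lookup("price.updates");

            TopicConnection connection = factory.createTopicConnection();
            TopicSession session =
                connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            TopicPublisher publisher = session.createPublisher(topic);

            // The publisher never needs to know who subscribes; the broker
            // delivers the event to every current subscription.
            publisher.publish(session.createTextMessage("IBM,104.50"));

            connection.close();
        }
    }

The decoupling in the last comment is the point of the pattern: new systems can subscribe later without any change to the publisher.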

An integration broker moves well beyond first-generation integration technology. It removes earlier technical constraints, allowing business managers to focus on providing value to customers, partners, suppliers and, ultimately, shareholders.

Integration Patterns. Integration patterns are reusable software architectures and designs that serve as templates for solving common integration problems.

Integration patterns provide guidance to software architects in solving common integration problems and checklists to ensure that all important integration issues have been considered and resolved. When it comes to business-to-business integration, integration patterns make it easier for multiple organizations to coordinate e-business projects by providing pre-defined integration solutions.

Web-Enabling Legacy Systems

The most common emerging e-BIS integration pattern is Web-enabling legacy systems. Billions of dollars have been spent making hundreds of thousands of legacy mainframe systems Year 2000-compliant. As IT departments around the globe emerge from that task, they face an economically driven challenge: finding ways to leverage the large investment in legacy systems by prolonging their productive life cycle.

One of the best ways to leverage this investment in legacy systems is to Web-enable these mainframe systems so that users can access, update or retrieve legacy data from a Web browser. But Web-enabling legacy systems is a complex undertaking since there are many different elements involved; these include Web browsers, Web servers, message queuing middleware as a transport mechanism, and an integration broker for tying all the pieces together. Typically, these individual pieces have been built using numerous, incompatible designs and technologies.

An integration broker for Web-enabling legacy systems needs built-in support for Web-oriented technologies and communication mechanisms such as XML, HTML, HTTP, FTP, e-mail, Java, COM and message queuing. It also needs to support data translation, security, session management, failover and load balancing.
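
The heart of this pattern can be sketched in a few lines: an HTTP request arrives at a Java servlet, is translated into the record format the host expects, and is handed to message queuing for delivery to the mainframe. This is a minimal sketch assuming a JMS-style queue; the queue name and the "INQ:" record format are invented for the example, and a production broker would wrap security, session management, failover and load balancing around this core.

    import java.io.IOException;
    import javax.jms.*;
    import javax.naming.InitialContext;
    import javax.servlet.ServletException;
    import javax.servlet.http.*;

    public class AccountInquiryServlet extends HttpServlet {
        public void doGet(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            try {
                InitialContext jndi = new InitialContext();
                // Assumed JNDI names for the queue manager and the host queue.
                QueueConnectionFactory factory =
                    (QueueConnectionFactory) jndi.lookup("QueueConnectionFactory");
                Queue hostQueue = (Queue) jndi.lookup("legacy.account.inquiry");

                QueueConnection connection = factory.createQueueConnection();
                QueueSession session =
                    connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                QueueSender sender = session.createSender(hostQueue);

                // Translate the Web request into the flat record the host
                // application expects (a stand-in for real data translation).
                String account = req.getParameter("account");
                sender.send(session.createTextMessage("INQ:" + account));
                connection.close();

                res.setContentType("text/html");
                res.getWriter().println("<html><body>Inquiry queued for account "
                    + account + "</body></html>");
            } catch (Exception e) {
                throw new ServletException(e);
            }
        }
    }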

Multi-Client Systems

Another common e-BIS pattern is the implementation of multi-client systems. Multi-client systems are characterized by incorporating several different types of client applications: for example, a PC application for customer service representatives working in the call center, a laptop application for sales agents working in the field, a Web-based application for customers accessing services over the Internet, and an interactive voice response client for customers accessing the same services by telephone.

Multi-client systems are becoming more prevalent as companies strive to compete by offering more services to customers through more channels. Multi-client systems are difficult to build when each type of client system employs a different mechanism for interacting with other systems (RPC, CORBA, COM/DCOM, HTTP, TCP/IP, messaging).

An integration broker for building multi-client systems needs to support numerous APIs and communication mechanisms (so it is easy to add new types of client systems without affecting the existing client systems or the existing server systems) and data translation features (for translating requests from different formats into a common format so they can be processed in a uniform manner). It also needs to provide a consistent, yet flexible architecture so that all client systems are treated in a uniform manner.
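
The common-format idea is easy to make concrete: each channel supplies a small adapter that translates its native input into one canonical request object, so everything downstream is channel-neutral. The class, field names and code values below are illustrative assumptions rather than any particular broker's API.

    public class CanonicalRequest {
        public final String channel;     // "WEB", "CALL_CENTER", "IVR", "FIELD"
        public final String operation;   // e.g. "BALANCE_INQUIRY"
        public final String customerId;

        public CanonicalRequest(String channel, String operation, String customerId) {
            this.channel = channel;
            this.operation = operation;
            this.customerId = customerId;
        }

        // Adapter for the telephone channel: a keypad entry such as
        // "1#123456" becomes the same canonical request a Web client produces.
        public static CanonicalRequest fromIvr(String keypadInput) {
            String[] parts = keypadInput.split("#");
            String op = parts[0].equals("1") ? "BALANCE_INQUIRY" : "TRANSFER";
            return new CanonicalRequest("IVR", op, parts[1]);
        }

        // Adapter for the Web channel: query parameters map onto the same fields.
        public static CanonicalRequest fromWeb(String op, String customerId) {
            return new CanonicalRequest("WEB", op, customerId);
        }
    }

Adding a new client type then means writing one more adapter; the server systems and the existing clients are untouched.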

Multi-Server Systems

Another common e-BIS pattern is multi-server systems. Multi-server systems need to access data from multiple sources: not just many different databases but, equally importantly, multiple types of data sources, such as databases, messages, directory services, host systems, files and ERP packages.

Multi-server systems are becoming more prevalent as companies try to leverage stored information as a strategic corporate asset, while simultaneously taking advantage of new e-commerce and e-business opportunities. This requires integrating the diverse information resources that have evolved inside corporations over the last 30 years.

For example, an insurance company may want to build applications that show a single view of policyholder information. However, the information about each customer is spread among three mainframes, two databases from different vendors and a customer relationship management packaged application.

Another example might be an investment management company wanting to build an online securities trading system and an order management system. In this case, the multi-server system would need to access databases that maintain trade state information, receive real-time market data from multiple vendors (each using proprietary data formats), communicate with customers via e-mail, and exchange orders, executions and routing instructions with third-party systems.

An integration broker for building multi-server systems needs to support data transformation, request routing, publish and subscribe communication, request and reply communication, asynchronous communication, and a variety of formatters.

Data transformation requires a rule-driven mechanism for converting data among the different formats used by different systems, along with the ability to perform syntactic conversions such as field-to-field mapping, one-to-many, many-to-one and many-to-many mappings, message joining and message splitting. It also requires the ability to perform semantic conversions – conversions that depend on the meaning of the data being converted, such as translating one system’s product codes into another’s.
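
A minimal sketch of rule-driven, field-to-field mapping shows the principle: the mapping table, not hand-written conversion code, decides how source fields become target fields. The field names are invented for illustration.

    import java.util.HashMap;
    import java.util.Map;

    public class FieldMapper {
        // Rule table: target field name -> source field name.
        private final Map<String, String> rules = new HashMap<String, String>();

        public void addRule(String targetField, String sourceField) {
            rules.put(targetField, sourceField);
        }

        // Builds a target record by applying every mapping rule to the source.
        public Map<String, String> transform(Map<String, String> source) {
            Map<String, String> target = new HashMap<String, String>();
            for (Map.Entry<String, String> rule : rules.entrySet()) {
                target.put(rule.getKey(), source.get(rule.getValue()));
            }
            return target;
        }

        public static void main(String[] args) {
            FieldMapper mapper = new FieldMapper();
            mapper.addRule("policyHolder", "CUST_NAME"); // mainframe field -> canonical field
            mapper.addRule("policyNumber", "POL_NO");

            Map<String, String> legacyRecord = new HashMap<String, String>();
            legacyRecord.put("CUST_NAME", "J. Smith");
            legacyRecord.put("POL_NO", "A-1047");
            System.out.println(mapper.transform(legacyRecord));
            // prints {policyNumber=A-1047, policyHolder=J. Smith} (order may vary)
        }
    }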

An integration broker for building multi-server systems also needs to provide a mechanism for defining business flows. Business flows are a rule-driven mechanism for processing requests and sequencing actions that can handle the sophisticated information processing requirements found in most multi-server systems. Business flows need to be capable of accessing/updating databases, sending/receiving messages, beginning/committing transactions, publishing/subscribing to events, etc. Business flows make it easier for developers to create sophisticated applications that combine data from many sources without having to write the system and communication code that is usually required.
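
The shape of such a mechanism can be sketched as follows: a flow is an ordered list of steps sharing a common context, and the engine runs them in sequence inside a single transaction boundary. The step interface and flow class are illustrative assumptions, not a specific product's API.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    interface FlowStep {
        void execute(Map<String, Object> context) throws Exception;
    }

    public class BusinessFlow {
        private final List<FlowStep> steps = new ArrayList<FlowStep>();

        // Steps are appended in the order they should run.
        public BusinessFlow then(FlowStep step) {
            steps.add(step);
            return this;
        }

        // Runs every step against a shared context. A real broker engine
        // would begin a transaction here and roll it back if a step fails.
        public void run(Map<String, Object> context) throws Exception {
            for (FlowStep step : steps) {
                step.execute(context);
            }
        }
    }

An order-processing flow might then be assembled as new BusinessFlow().then(lookupCustomer).then(priceOrder).then(publishConfirmation), where each step wraps a database access, a message send or an event publication.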

Clearly, multi-server systems and multi-client systems are not mutually exclusive. In fact, many modern systems combine elements of both.

Supply Chain Integration

A supply chain is a network of facilities and distribution options that performs the functions of procurement of materials, transformation of these materials into intermediate and finished products, and the distribution of these finished products to customers. Supply chains exist in both service and manufacturing organizations, and the complexity of the chain may vary from industry to industry and firm to firm.

Two common e-BIS patterns frequently occur in supply chain management: the consolidator and the distributor.

The consolidator pattern involves collecting information from many sources for further analysis. The simplest form of this pattern is used for building data warehouses. More dynamic versions are event-driven and support online collection and analysis of the information. These versions are more functionally sophisticated and require message queuing and dynamic data routing.
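
The event-driven form of the consolidator can be sketched with a JMS-style subscriber that folds each arriving event into a running aggregate rather than waiting for a nightly batch load. The topic name and the "storeId,amount" message format are invented for the example.

    import java.util.HashMap;
    import java.util.Map;
    import javax.jms.*;
    import javax.naming.InitialContext;

    public class SalesConsolidator implements MessageListener {
        private final Map<String, Double> totalsByStore = new HashMap<String, Double>();

        public void onMessage(Message message) {
            try {
                // Each event is assumed to carry "storeId,amount".
                String[] parts = ((TextMessage) message).getText().split(",");
                Double total = totalsByStore.get(parts[0]);
                totalsByStore.put(parts[0],
                    (total == null ? 0.0 : total) + Double.parseDouble(parts[1]));
            } catch (JMSException e) {
                e.printStackTrace();
            }
        }

        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();
            // Assumed JNDI names, as in the earlier publisher sketch.
            TopicConnectionFactory factory =
                (TopicConnectionFactory) jndi.lookup("TopicConnectionFactory");
            Topic topic = (Topic) jndi.lookup("store.sales");

            TopicConnection connection = factory.createTopicConnection();
            TopicSession session =
                connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            session.createSubscriber(topic).setMessageListener(new SalesConsolidator());
            connection.start(); // events now flow in as they occur
        }
    }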

The distributor pattern is used to move information to multiple sites. Some of these sites may be regional offices and individual stores; other sites might be trading partners.

All supply chain integration systems share many e-BIS patterns. An integration broker for supply chain integration needs to support data extraction (collecting data from a wide variety of sources), data transformation (performing transformations based on database schemas and developer-defined rules) and data cleansing (applying relevant business rules when storing, updating and moving information).
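
Data cleansing in particular lends itself to a rule-chain sketch: every record passes through a list of business rules before it is stored or forwarded. The rule shown (rejecting a non-positive quantity) and the field names are invented for illustration.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    interface CleansingRule {
        // Returns null if the record passes, or a description of the problem.
        String apply(Map<String, String> record);
    }

    public class Cleanser {
        private final List<CleansingRule> rules = new ArrayList<CleansingRule>();

        public void addRule(CleansingRule rule) { rules.add(rule); }

        // A record is clean only if every rule in the chain passes.
        public List<String> check(Map<String, String> record) {
            List<String> errors = new ArrayList<String>();
            for (CleansingRule rule : rules) {
                String error = rule.apply(record);
                if (error != null) errors.add(error);
            }
            return errors;
        }

        public static void main(String[] args) {
            Cleanser cleanser = new Cleanser();
            cleanser.addRule(new CleansingRule() {
                public String apply(Map<String, String> r) {
                    return Integer.parseInt(r.get("QTY")) > 0 ? null : "non-positive QTY";
                }
            });

            Map<String, String> record = new HashMap<String, String>();
            record.put("QTY", "-4");
            System.out.println(cleanser.check(record)); // prints [non-positive QTY]
        }
    }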

About the Authors: Greg Lomow is Product Manager for Geneva Integrator, Level 8’s enterprise application integration technology. He can be reached at glomow@level8.com.

Ivan Casanova is Product Manager for FalconMQ, a message-oriented middleware product from Level 8. He can be reached at icasanova@level8.com.


***

SIDEBAR

Baptism by Fire: Disaster Moves Case Western Reserve to Distributed Storage

By Bob Jefferis

Case Western Reserve University (CWRU) in Cleveland didn’t realize how much critical information had migrated out of its mainframes until a fire gutted historic Adelbert Hall a few years ago and destroyed 172 desktop computers. These distributed systems housed irreplaceable works in progress from various executive, administrative and faculty research projects. But – unlike the mainframes in the campus computing center – the PCs were not protected by regular backups to offsite storage.

While a handful of the computers actually melted in the fire, the majority were ruined by water damage that shorted out the motherboards but spared the hard drives in their enclosures. The university eventually recovered most of the data on those drives, but it was a tedious, time-consuming and very expensive process. As the reconstruction of Adelbert Hall began, university officials gave the IS department its marching orders: make the implementation of a world-class, highly scalable backup and recovery system for CWRU’s increasingly distributed computing environment a top priority.

The fire and its aftermath gave the IT department a wake-up call about backing up a lot more than the mainframes, and got the staff to approach backup as an organizational issue, not just an IS problem.

Users were supposed to be saving their files to servers, but – human nature being what it is – that didn’t happen very often. And even when files did make it to the servers, backups were not handled in a consistent and disciplined way by individual departments. Tape-rotation schemes didn’t get followed, and tapes frequently didn’t get moved out of the departments to offsite storage.

"It became apparent that a lot of hidden people costs were being incurred with the client/server approach. It was very labor-intensive, with people going out to do maintenance on all the distributed servers."

The staff realized that having users move files to intermediate servers just so that they would get backed up didn’t make a lot of sense. Critical files were going to end up on PCs, period, and had to be backed up from there, so a centralized system that leveraged the university’s existing mainframe storage infrastructure was the way to go.

CWRU spent about two years evaluating solutions and narrowed the choices down to two: IBM’s ADSM and BETA Systems Software’s HARBOR Network Storage Manager. After attending a BETA user conference and talking to customers, CWRU went with BETA’s HARBOR NSM because of its flexibility, platform support and ease of use.

In a university environment, however, there is no way to enforce backup policy. Departments have to be interested in backup services and willing to pay for them, and the services have to be transparently easy to use or users won’t bother with them. Departmental freedom has also led to a proliferation of different server and desktop computing platforms that IS solutions have to support: Windows, Macintosh, Solaris, Irix, NetWare, HP-UX, AIX and Digital UNIX.

Requiring no additional hardware, the HARBOR NSM software runs on the CWRU mainframe and integrates with the existing mainframe backup system. This enables the distributed desktops and servers to back their data up to mainframe storage subsystems, including robotic tape drives and offsite disaster-recovery storage.

This offsite protection helps to sell the faculty and students on the service and get them to use it. Individual users can back up or restore data at any time, and desktops and servers can be scheduled to automatically back up at specified intervals. Departmental administrators are left in charge of their servers, but their backup processes are assessed daily, and they get notified by e-mail if anything is found lacking.

The implementation of the BETA solution coincided with and facilitated a campus-wide server consolidation, as multiple departments were put on one server, and the HARBOR NSM software let each department have its own backup image. This means the departments can restore their files independently even when they are sharing a server.

HARBOR NSM is a highly intelligent system that can be configured to back up only the portions of data that have changed since the last session, and to automatically archive files that haven’t been accessed for a certain period of time. Word processing documents can be handled file by file, and databases can be dealt with at the record level.

The IS staff also discovered that their databases had different backup characteristics. They didn’t want to have to transfer a 10-gigabyte database to tape again just because one record had changed. By interfacing with database utilities, HARBOR NSM can perform backups down to the individual record level for the university’s Oracle, Microsoft SQL Server and Exchange databases.
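
The incremental idea itself is simple enough to sketch generically. The following Java fragment selects only files modified since the previous session; it illustrates the concept, not BETA Systems’ implementation, and the directory and cutoff shown are invented for the example.

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    public class IncrementalSelector {
        // Collect files whose last-modified time is newer than the last session.
        public static List<File> changedSince(File dir, long lastBackupMillis) {
            List<File> changed = new ArrayList<File>();
            File[] entries = dir.listFiles();
            if (entries == null) return changed;   // unreadable directory
            for (File f : entries) {
                if (f.isDirectory()) {
                    changed.addAll(changedSince(f, lastBackupMillis));
                } else if (f.lastModified() > lastBackupMillis) {
                    changed.add(f);
                }
            }
            return changed;
        }

        public static void main(String[] args) {
            // Pretend the last backup session ran 24 hours ago.
            long lastBackup = System.currentTimeMillis() - 24L * 60 * 60 * 1000;
            for (File f : changedSince(new File("."), lastBackup)) {
                System.out.println("would back up: " + f.getPath());
            }
        }
    }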

By raising automation capabilities and ease of use to a new level and streamlining backup operations, HARBOR NSM is reducing IS costs significantly; CWRU estimates that cutting the people time spent on backups in half could save $1.3 million per year.

Support was another factor that tipped the scales in HARBOR NSM’s favor. The vendor provides customers with a Web-based window into its support operations so that they can check the resolution status of any outstanding trouble tickets. There is no need to track down account representatives by phone.

Other Factors

There are several other factors to consider when evaluating enterprise backup systems:

• Focus on total cost of ownership, not entry costs. Solutions that look like a bargain up front can end up costing a lot more in the long run. Think, instead, of how many people it will take to perform various backup tasks and evaluate products accordingly.

• Look for systems that let you monitor the backup process end to end from a central console, so that you can tell where something went wrong if a backup doesn’t occur. The backup software should be logging activities at various points in the network, and it needs to include intelligent and configurable retry capabilities.

• Check the vendor’s track record for keeping software revisions up with new versions of the operating systems and databases you have to support. Make sure that the client agents you use are getting updated in a timely fashion.

• Consider the partnerships that the backup solution provider has with other key vendors. These can determine whether you will get the software tools and hardware support you need down the road.

Most organizations never see their backup systems go through such a literal baptism by fire; but ongoing mini-disasters – such as hard-disk crashes, corrupted software and human errors – actually pose an even bigger threat. In fact, studies show that far more critical data is lost through accidental deletions than from computer viruses or major disasters combined.

Mainframes have always defined the state of the art in backup and disaster-recovery technology, and they should be leveraged to the fullest as backup solutions for distributed systems are designed and implemented. HARBOR NSM lets enterprises like CWRU bring all their computing systems and data under the protective umbrella of existing mainframe infrastructures.

About the Author: Bob Jefferis is the Assistant Director for Data Services at Case Western Reserve University.
