In-Depth

All Roads Lead to the Middle

With the rapid evolution of computing architectures to three-tiered networking, enterprise IT departments are finding that integrating legacy systems with new platforms, and new applications with one another, is a major challenge. In the last 10 years, the architectural state of the art has changed from host-based to client/server to client/server/server. Though each has proven benefits, they’re not so compelling that customers totally jettison previous systems. As a result, enterprises evolve hybrid systems where newer architectures are layered over older ones. Often in a client/server environment, mainframes are absorbed from the old host-based system as servers in the new one. Similarly, adding Internet access and Web browsers to client/server systems extends them into client/server/server structures.

However the architectures change, though, the data they process comes predominantly from the same places – customers might use an intranet instead of the phone to place the same orders; companies might use an extranet instead of an order form to request the same inventory. Likewise, the same historical data, once exclusively stored on mainframes, gets migrated to relational databases.

But these distinctions about where data originates and is stored are rarely unambiguous. A customer service rep might assemble a query based on a complaint made in a phone call, transaction records gathered from a Web site, account information stored on a relational database and service level definitions kept on a mainframe. Information in different places is necessarily stored in different forms and accessed by different semantic rules. To efficiently move data around such heterogeneous systems, customers need third-party software that lets the different environments seamlessly communicate. And as new applications are added to the latest layer of the system, they must communicate with each other, as well as with the existing platform. Software that enables this activity is called middleware.

The Problem

Middleware’s first task is accessing data in its myriad forms on different architectures, and data held on a host-based backend is the hardest to reach.

In client/server environments, all data is stored in relational databases, but those databases are made by different vendors. Retrieving data from DB2 and Oracle requires different processes. For instance, similar data in tables of different databases is stored in different forms. Data about a person’s bank account might be stored as a six-character string in one database and as a nine-character string in another. The middleware has to know this in advance to select and combine similar data, despite its dissimilar formats.
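
To make the point concrete, here is a minimal sketch (in Java, with invented class and field names; it is not drawn from any IBM product) of the kind of format reconciliation middleware performs, assuming the six-character key is simply the nine-character key with its leading zeros stripped:

    // Hypothetical sketch: reconciling account numbers that two databases
    // store in different formats (a six-character and a nine-character string)
    // so that middleware can match records describing the same account.
    public class AccountKeyNormalizer {

        // Pad a short account number with leading zeros to a canonical nine characters.
        static String normalize(String rawAccountNumber) {
            StringBuffer padded = new StringBuffer(rawAccountNumber.trim());
            while (padded.length() < 9) {
                padded.insert(0, '0');   // assumes shorter keys are zero-stripped forms
            }
            return padded.toString();
        }

        public static void main(String[] args) {
            String fromOneDatabase = "123456";      // six-character form
            String fromAnother = "000123456";       // nine-character form
            // After normalization the two keys compare as equal.
            System.out.println(normalize(fromOneDatabase).equals(normalize(fromAnother)));
        }
    }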

Because many client/server systems evolved from mainframe environments, a mainframe can act as a data source in a heterogeneous backend with relational databases linked to UNIX and NT servers. That multiplatform server environment, furthermore, often interfaces with or is based on increasingly popular ERP systems, like SAP. Middleware links these legacy configurations with ERP systems as well. It then presents the data to client/server or Web apps in protocols they understand, such as CORBA or HTTP, which involves further translation.

IBM’s New Middleware Offerings

All middleware is designed to unlock data from disparate backend sources and repackage and distribute it to various applications tapped by fat or thin clients. But some middleware is optimized to interact with Web applications and some with legacy applications. Some is designed for quick, low-cost, opportunistic deployment, and some for gradual, expensive, strategic deployment.

To establish an Internet presence, most organizations want to quickly supply their customers and partners with tactical legacy data from some of their data sources – this is relatively easy. To conduct enterprisewide electronic commerce, organizations must supply customers and partners with tactical and strategic legacy and Web data from most of their data sources – this is relatively difficult. Many organizations want to accomplish both and, therefore, need Web- and legacy-oriented middleware that complement each other. IBM is evolving products that suit the needs of all of these customers.

Web-Up

IBM calls its Web-oriented middleware "Web-Up Majority" products, and its premier Web-Up offering is WebSphere. Yesim Natis, Research Director at GartnerGroup, says customers buy Web-Up middleware "to address an immediate need – an opportunity that they have to capture, competitive pressure that they have to respond to." This type of product, he continues, is characterized by "modern features, produces quick results, has low cost for software and requires few people" to deploy and manage its installation. Though Web-Up products usually improve productivity quickly, they aren’t necessarily built to scale up over time. Long-term viability is nice to have, but not necessary, says Natis, because "it’s not clear how long these applications will be important – some will transition into mission-critical applications, others will not." WebSphere could be deployed alone and tactically, for instance, to quickly create a Web presence that never evolves into mission-critical enterprise e-commerce. It could also Web-enable a mission-critical enterprise application integration effort.

Nigel Beck, IBM’s Manager of Market Management at WebSphere Products, describes WebSphere as a "line of products" targeted at customers processing HTTP-based transactions for Internet-oriented solutions, from simple Web presences to complicated e-business applications.

The Application Server contains a Java servlet run-time engine, database connectors and application services. Beck says it lets developers "write Java code to connect Web pages to data source content." The run-time engine executes Java servlets, the connectors access data from various databases for presentation on the Web and the services are frameworks that, says Beck, accomplish common tasks like keeping track of a Web session over time on a "stateless" Web. Beck says the Application Server runs with any HTTP server and comes packaged with two of them, Apache and Lotus Go.
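
As a rough illustration of the session-tracking service Beck describes, the following toy servlet (not IBM sample code) uses the standard HttpSession API to remember a visitor from request to request on an otherwise stateless Web:

    // Minimal servlet sketch: HTTP itself is stateless, so the servlet
    // container keeps per-visitor state in an HttpSession between requests.
    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    public class VisitCounterServlet extends HttpServlet {
        public void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            HttpSession session = request.getSession(true);   // created on first visit
            Integer visits = (Integer) session.getAttribute("visits");
            int count = (visits == null) ? 1 : visits.intValue() + 1;
            session.setAttribute("visits", new Integer(count));

            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            out.println("<html><body>You have viewed this page " + count
                    + " time(s) this session.</body></html>");
        }
    }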

The Performance Pack lets you optimize operational aspects of Web sites like load balancing, caching, filtering and file replication. It also lets site managers offer differentiated quality of service to customers with different needs. For instance, if a site is managed from diverse geographical locations, Beck claims Performance Pack lets you reroute site traffic to servers closest to each customer to improve performance. This quicker response costs the customer a premium. Managers could also reroute traffic to servers with specialized uses; customers would pay more for access to a server providing real-time stock updates, says Beck, than to one hosting hobby discussion groups. The Performance Pack also lets you scale up site services to more customers without interrupting service to any.

Beck recommends WebSphere to application developers who want to move beyond the Web publishing now done with CGI scripts. He says WebSphere’s Java servlets let developers create application logic, connect to data sources, use any HTTP server and access more powerful programming models than CGI scripts to transition a Web site into an e-business system.

Enterprise-Down

Natis says "Enterprise-Down Minority" products are heavily researched before they’re purchased, expensive, used for building mission-critical systems that are scalable over the long term, and require many people to deploy and manage. Their benefits are strategic and long term. IBM’s two new enterprise-down offerings are Component Broker and MQ Integrator.

Whereas WebSphere connects application logic on servers to Web browsers, these two products link legacy data to server applications. Natis claims there are several major differences between the two. Though both are application servers, IBM positions Component Broker as an application server for new systems and MQ Integrator as one for existing systems. Component Broker repackages legacy data into a distributed object environment whereas MQ Integrator is a "message broker" that repackages legacy data into a messaging environment. Component Broker is also a less complete product than MQ Integrator, and therefore less suitable for integration of existing applications.

Alistair Rennie, IBM’s Marketing Executive for Component Broker, defines the product as an "integration system for developing a set of distributed components and deploying them on top of distributed servers on an enterprise scale." To develop components, says Rennie, Component Broker will first use its "Resource Managers" to access legacy data from multiple sources, such as different relational databases, ERP systems (like SAP) and mainframes, as well as other middleware environments, like CICS and MQ Integrator. He says it will then unite that data with applications by means of its "Application Adapters," which repackage and represent it as objects on the server. Rennie says Component Broker is best used by enterprises that are building new component-based business applications and must access data in multiple legacy systems over which they have tight control. Component Broker is also ideal for projects that must scale. Rennie says it will scale from NT up to S/390 systems, if necessary.
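
The following miniature sketch suggests what an "Application Adapter" does; the class, method names and record layout are invented for illustration and are not Component Broker APIs:

    // Hypothetical adapter: a flat, fixed-format legacy record is wrapped so
    // that server applications see it as an ordinary business object.
    public class LegacyAccountAdapter {

        // Imagine this record came back from a mainframe resource manager:
        // positions 0-8 hold the account number, 9-24 the owner name,
        // and 25 onward the balance.
        private final String legacyRecord;

        public LegacyAccountAdapter(String legacyRecord) {
            this.legacyRecord = legacyRecord;
        }

        public String getAccountNumber() { return legacyRecord.substring(0, 9).trim(); }
        public String getOwnerName()     { return legacyRecord.substring(9, 25).trim(); }
        public double getBalance()       { return Double.parseDouble(legacyRecord.substring(25).trim()); }

        public static void main(String[] args) {
            LegacyAccountAdapter account =
                new LegacyAccountAdapter("000123456Jane Q. Customer1500.75");
            System.out.println(account.getAccountNumber() + " / "
                    + account.getOwnerName() + " / " + account.getBalance());
        }
    }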

Rennie says MQ Integrator, on the other hand, "works well when you don’t have tight control over your legacy data – it’s an expeditious way of getting applications talking to one another." He recommends it to enterprises that will "use messaging to bring together a wide range of [existing] systems – it’s very straightforward and speedy."

That’s because, says Natis, it’s "a more complete application integration product." Whereas Component Broker only has access to DB2 legacy data, MQ Integrator can access data in CICS, IMS and elsewhere – Rennie, in fact, claims it supports 20 platforms. It also has a transformation engine, which, says Natis, is a built-in tool that automatically lets applications with different semantics talk to one another once the programmer configures the system. Component Broker lacks such a tool, so programmers have to program all the semantic transformations themselves. Natis says MQ Integrator also offers a flow control engine which, based on conditions, expedites retrieval of data from multiple legacy sources. Without flow control, Component Broker takes longer to coordinate data from multiple sources in the application integration process.
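
For a sense of what a transformation engine automates, here is a toy example (the field names are invented and this is not the MQ Integrator API) that re-maps an order message from one application’s vocabulary into another’s:

    // Toy message transformation: the source application calls a field "custno"
    // while the target expects "customerId"; a broker applies this kind of
    // mapping so the two can exchange messages without being rewritten.
    import java.util.HashMap;
    import java.util.Map;

    public class OrderMessageTransformer {

        static Map transform(Map sourceMessage) {
            Map targetMessage = new HashMap();
            targetMessage.put("customerId", sourceMessage.get("custno"));
            targetMessage.put("quantity",   sourceMessage.get("qty"));
            targetMessage.put("sku",        sourceMessage.get("item"));
            return targetMessage;
        }

        public static void main(String[] args) {
            Map source = new HashMap();
            source.put("custno", "000123456");
            source.put("qty", "12");
            source.put("item", "WIDGET-99");
            System.out.println(transform(source));   // message in the target's vocabulary
        }
    }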

Benefits/Vertical Markets

Natis believes middleware offers three major benefits to customers. Whereas enterprises will use Web-Up products, "to offer functionality that already exists (like business services) to a new class of users who are accessible through the Web," they’ll use enterprise-down application integration products like MQ Integrator to achieve "closer integration with partners and suppliers" by linking applications across disparate legacy environments. To create platforms for launching new functionality tied to legacy applications, he thinks they’ll deploy enterprise-down application server products like Component Broker that do more than link applications – they "take ownership of applications" by controlling their availability, integrity and performance. These products will be critical in addressing the needs of new users who gain access to previously unavailable services and create unforeseen opportunities for new types of business like affinity marketing.

Though middleware is generally implemented horizontally, due to the type of data it deals with primarily – transactions and messages – it’s well suited to the financial and telecommunication markets where it has gained a foothold. Jim Johnson, Chairman, Standish Group International, says financial organizations need to process transactions quickly but with maximum integrity – they can’t lose buy/sell requests from stock brokers, for instance. On the other hand, he says telecommunication companies must process messages as quickly as possible and put less premium on integrity – a dropped call here and there is acceptable per their service level agreements.

The Competition

Johnson says the quality of a middleware product is based upon how well it balances performance (transactions/messages per second) with integrity (percentage of lost or wrong transactions/messages) to best address your application. The number of concurrent users the product supports is also an important aspect of performance, but only as it’s relevant to your application. At the same time, Johnson admits that it’s almost impossible to benchmark these middleware products. If a vendor claims a high-end system absolutely processes 100 TPS and supports 5,000 concurrent users, Johnson says they’re just inventing numbers. Every application environment is unique, and performance measurements are relative to it – some messages/transactions are longer than others, some route to more data sources, some update databases and so on, and all of these factors affect the TPS rating. Generally, the higher the number of users a system supports, the lower the TPS. But even this is misleading – some systems are optimized both to be fast and to support many users.

Also, though most middleware deals with transactions or messages, some like Component Broker deal with objects. A useful way to categorize these products would be according to the type of content they address and whether they’re predominantly Web-up or enterprise-down products.

Component Broker, of course, is enterprise-down and deals with objects. It’s generally agreed that no comparable product exists in this category. Johnson says Inprise makes an object request broker, but not on the scale of Component Broker, and BEA Systems’ M3 application integration product offers some object functionality but is far less mature than Component Broker. He adds that Microsoft’s MTS object transaction monitor might also qualify for this category.

MQ Integrator is an enterprise-down message broker. Johnson says notable products in this competitive space are BEA Systems’ MessageQ, PeerLogic’s Pipes and Talarian’s SmartSockets, but neither IBM nor its competitors match up closely on a feature-by-feature comparison – rather, some of the products accomplish some of the same things. He includes Microsoft’s MSMQ here, but dubs it merely an "MQSeries knock-off." Natis says MQSeries was the precursor to MQ Integrator – IBM combined it with Neon’s NeoNet to create MQ Integrator – and is still the fastest growing middleware product on the market.

CICS deserves mention as enterprise-down, mainframe-based middleware that deals with transactions. Johnson calls it "the father of all middleware" and lists BEA Systems’ Tuxedo and Top End (the latter recently acquired from NCR), as well as Siemens’ Open UTM, as comparable products. Natis, however, includes Tuxedo and Top End, along with Iona’s Orbix, as enterprise-down competition for MQ Integrator as well.

In the Web-up transaction space, Natis says WebSphere competes with other application servers like Netscape Application Server, Sun’s NetDynamics, Bluestone’s Sapphire Web, Oracle Application Server and Microsoft’s Site Server. Of these, he thinks the Netscape product (formerly Kiva) is the most scalable and mature, while Oracle’s is also scalable and maturing and has the distinct advantage of being optimized for Oracle databases. Sun and Bluestone, like IBM, offer extensive toolkits for use in Integrated Development Environments (IDEs), where multiple developers build and integrate large applications concurrently.

Market Size and Growth

Different consulting groups categorize these product types in slightly different ways, but all the figures indicate the middleware market is healthy. GartnerGroup predicts the market for application servers like WebSphere and Component Broker will double from $300 million in 1997 to $600 million in 2001, and that the market for message brokers like MQ Integrator will grow from $300 million in 1997 to $1.3 billion in 2001.

Standish Group uses different categories for more granular analysis. It groups products according to whether they deal with objects, messages or transactions. Object-oriented products are either object request brokers like Component Broker, or less robust object management products, which Standish calls object monitors. The first will jump at 58 percent CAGR, from $116 million in 1997 to $727 million in 2001; the second at 174 percent, from $7 million in 1997 to $368 million in 2001. Message brokers like MQ Integrator will climb at 49 percent, from $221 million in 1997 to $1.083 billion in 2001. Transaction processing middleware (mostly CICS), in comparison, will average 9 percent from $1.414 billion in 1997 to $2.033 billion in 2001.
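
For readers checking the arithmetic, a compound annual growth rate is (ending value / starting value) raised to the power of one over the number of years, minus one. Standish’s object request broker figures, for example, work out to (727/116)^(1/4) - 1 over the four years from 1997 to 2001, or roughly 58 percent a year.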

About the Author: John Harney is a freelance writer based in Washington, D.C., who specializes in technology reporting.
