Big Iron Coup: IBM Touts New Mainframe Data Integration Software

Information Server seems tailor-made for Big Blue’s bring-it-all-back-home to System z philosophy—particularly with respect to data processing workloads.

Earlier this month, IBM Corp. trumpeted the arrival of yet another new System z workload: in this case, its IBM Information Server data integration platform.

Big Blue first announced Information Server last October, with initial availability slated for distributed Windows, Linux, and Unix.

Availability for Linux-on-System z is far from an afterthought, however; in fact, Information Server seems tailor-made for Big Blue’s bring-it-all-back-home to System z philosophy—particularly with respect to data processing workloads.

Fully loaded, Information Server is a data processing workhorse. It combines extraction, transformation, and loading (ETL) capabilities with data quality (or data cleansing) and data profiling functionality, metadata management features, and data federation (or enterprise information integration) capabilities.
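For readers unfamiliar with the pattern, the extract-transform-load pipeline at the heart of the product can be sketched in a few lines. This is a generic illustration, not IBM DataStage code; every name and data value below is hypothetical.

```python
# A minimal sketch of the extract-transform-load (ETL) pattern.
# All function names, fields, and sample data here are hypothetical.
import csv
import io

def extract(source: str) -> list[dict]:
    """Read rows from a CSV source (an in-memory string stands in for a real feed)."""
    return list(csv.DictReader(io.StringIO(source)))

def transform(rows: list[dict]) -> list[dict]:
    """Cleanse and standardize: trim whitespace, uppercase country codes."""
    return [
        {"name": r["name"].strip(), "country": r["country"].strip().upper()}
        for r in rows
    ]

def load(rows: list[dict], target: list) -> None:
    """Append cleansed rows to the target store (a list stands in for a warehouse table)."""
    target.extend(rows)

warehouse: list[dict] = []
raw = "name,country\n Alice , us \n Bob , de \n"
load(transform(extract(raw)), warehouse)
print(warehouse)
```

The data quality and profiling modules the article describes essentially industrialize the `transform` step: detecting, standardizing, and de-duplicating messy values at scale.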

"Information Server is an information integration software platform that enables global organizations to … derive more value from information spread across complex heterogeneous systems," comments Sean Crowley, program manager for Information Server with IBM.

Crowley says Information Server can help drive data governance, improve accuracy and insight, increase application responsiveness (primarily by means of automating otherwise manual—or script-driven—integration tasks) and improve scalability, thanks in part to its support for parallel processing. Crowley maintains that Big Blue’s data integration workhorse is built to scale. "Information Server leverages parallel processing technology to handle massive data volumes while accommodating real-time, event-driven data integration."
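The partition-parallel technique Crowley alludes to can be illustrated generically: split the data into partitions, transform each partition in a separate worker process, then merge the results. This is a toy sketch of the general idea, not IBM's parallel engine; the function names and workload are invented.

```python
# Generic partition-parallel transformation sketch (hypothetical names).
from multiprocessing import Pool

def transform_partition(rows):
    # Stand-in transformation: normalize each value.
    return [r.strip().lower() for r in rows]

def parallel_transform(rows, workers=4):
    # Split the input into roughly equal partitions, one per worker.
    size = max(1, len(rows) // workers)
    partitions = [rows[i:i + size] for i in range(0, len(rows), size)]
    with Pool(workers) as pool:
        results = pool.map(transform_partition, partitions)
    # Merge the per-partition results back into one stream.
    return [row for part in results for row in part]

if __name__ == "__main__":
    print(parallel_transform([" A ", " B ", " C ", " D "]))
```

Because each partition is processed independently, throughput scales with the number of workers until the merge or the data source becomes the bottleneck.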

IBM sells these features—and more—as Information Server modules: the core ETL module (DataStage); a core data quality module (QualityStage); a data profiling module (Information Analyzer); a metadata management module (Metadata Workbench); and an EII module (Federation Server).

A full-blown Information Server configuration—complete with Business Glossary and Information Services Director modules—may be pricey (IBM hasn't yet disclosed pricing information), but, officials claim, Information Server’s modular sales model makes it easier for organizations to purchase only the functionality they need.

In this respect, IBM officials say, some of its features—such as DB2 connectivity (via the universal connector) and DRDA—can take advantage of Big Blue’s low-cost System z Integrated Information Processor (zIIP) engine.

In addition to its zIIP capabilities, Information Server for System z boasts support for a number of other zLinux amenities—such as HiperSockets connectivity. "Communication between native z/OS data sources and the Information Server parallel processing engine is via TCP/IP, which can be implemented on top of a HiperSockets connection," says Michael Curry, director of product strategy and management at IBM.

Information Server doesn’t currently support zAAP—or the System z Application Assist Processor (a specialty engine that lets organizations inexpensively run Java workloads on z/OS)—in part because its WebSphere Application Server-based services framework runs on a Linux IFL. Elsewhere, Curry says, Information Server can store its metadata repository in either DB2 on zLinux or DB2 on z/OS.

Nor does Information Server provide out-of-the-box support for typical mainframe data sources (CICS, IMS, VSAM, Software AG’s Adabas, and the like); instead, IBM sells its WebSphere Data Integration Connector for z/OS, which provides connectivity (along with changed-data-capture feeds) for IMS, VSAM, CA-IDMS, CA-Datacom, Adabas, and sequential data sets. "Connectors for DB2, Oracle, SQL Server, and other relational data sources are included with the Information Server platform at no charge," adds Curry.

Speaking of changed-data capture, or CDC, the latest rev of IBM’s DataStage module for Information Server also incorporates CDC capabilities from DataMirror, which Big Blue acquired earlier this year. DataMirror specialized in connections to mainframe and minicomputer (System i) environments, and its acquisition was seen as quite a feather in Big Blue’s host integration and connectivity cap.
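The appeal of CDC is that only rows that changed since the last sync are shipped downstream, rather than a full re-extract. The snapshot-diff sketch below conveys the idea in miniature; it is a hypothetical illustration, and production CDC tools such as DataMirror's read database transaction logs rather than comparing table snapshots.

```python
# Toy changed-data-capture illustration: diff two table snapshots
# (keyed dicts) into a stream of change records. Hypothetical names.
def capture_changes(old: dict, new: dict) -> list[tuple]:
    changes = []
    for key, value in new.items():
        if key not in old:
            changes.append(("insert", key, value))
        elif old[key] != value:
            changes.append(("update", key, value))
    for key in old:
        if key not in new:
            changes.append(("delete", key, None))
    return changes

before = {1: "Alice", 2: "Bob"}
after = {1: "Alicia", 3: "Carol"}
print(capture_changes(before, after))
# Emits an update for row 1, an insert for row 3, and a delete for row 2.
```

Log-based CDC avoids even this snapshot comparison: changes are harvested from the database's own journal, so the source tables are never re-scanned.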

IBM’s Information Access Renaissance

Not that IBM’s data integration portfolio (which it markets under the "Information On Demand," or IOD, brand) needs much in the way of cosseting.

According to market watcher Gartner Inc., for example, Big Blue is the overall data integration market leader: IBM bested rival (and long-time champ) Informatica in Gartner’s most recent "Magic Quadrant" survey, and also outstripped iWay Software (a subsidiary of Information Builders Inc.), SAS Institute Inc. (whose Enterprise ETL tool enjoys an enormous customer base), Business Objects SA (which was recently acquired by SAP AG), Microsoft Corp., and Oracle Corp. The salient point, many industry watchers say, is that Big Blue’s IOD stack is really starting to come into its own.

For example, says James Kobielus, a principal analyst for data management with consultancy Current Analysis, Big Blue not only announced a host of new data integration features (including its DataMirror CDC technology), but also introduced a brand new metadata management offering, Metadata Server. Elsewhere, Kobielus points out, IBM touted a number of vertical-specific data integration project accelerators and broadened its data management professional services, channel-support, and customer-engagement offerings.

"IBM continues to enhance, extend, and develop new go-to-market implementations of IBM Information Server," he comments, citing the availability of Information Server for zLinux and a Fast Track quick-deployment version of that product geared to the mid-market, as well as a blade-based appliance version.

Indeed, if there was any single drawback to this month’s IOD event, it was that IBM might have announced too much at once, industry watchers say. After all, Big Blue also touted a new version of its DB2 Warehouse (release 9.5), in addition to new Web 2.0 development and management capabilities. With so much released at one time, Kobielus and other analysts say, it’s tough to keep track of everything.

"IBM is implementing a disruptive vision for data design, mashup, deployment, administration, and governance across heterogeneous SOA and Web 2.0 environments," he points out, citing Big Blue’s forthcoming IBM Data Studio, an Eclipse-based tool that will be available by the end of this month. IBM Data Studio boasts plug-in support for IBM’s Rational tools and facilitates lifecycle data management across IBM and non-IBM DBMSs. "IBM also presented a new vision—[dubbed] 'Info 2.0'—announced new tooling [such as IBM Mashup Starter Kit, IBM Mashup Hub], and a community [namely, QEDwiki] for Web 2.0-oriented data mashup," Kobielus points out.

About the Author

Stephen Swoyer is a Nashville, TN-based freelance journalist who writes about technology.
