Workloads Return Home to the Mainframe
Workloads are coming back to the mainframe, as ETL powerhouse Informatica demonstrated last month
The mainframe's fortunes have risen and fallen over the years, but a development last month underscores, once again, how enterprises are turning to the mainframe to process heavy workloads.
Last month, Informatica Corp., a leading provider of extraction, transformation, and loading (ETL) technologies, announced a native version of its flagship PowerCenter ETL tool for zSeries mainframes. Informatica becomes the latest in a slew of ISVs that have gone native, so to speak, on zSeries.
In 2003, for example, IBM counted 50 new ISVs that released as many as 150 applications for the mainframe, said zSeries marketing manager David Mastrobattista in an interview last year. By October 2004, Mastrobattista noted, an additional 40 ISVs had embraced Big Iron for the first time.
Equally striking is Informatica’s justification for finally embracing PowerCenter on zSeries. That company has, after all, marketed a version of PowerCenter for Unix and Windows environments for some time now, and—until last month, anyway—had been content merely to provide gateway access (via its PowerExchange solution) to mainframe data.
PowerCenter for Mainframe ups the ante, letting customers do ETL processing natively on zSeries. This is a reversal of no small fortune for Big Iron, which—for more than a decade—has been hemorrhaging workloads. But according to Don Tirsell, director of product marketing with Informatica, a lot of the workloads that moved off the mainframe ten years ago are once again returning home—in droves, even. And of course, Tirsell notes, the mainframe is also the locus of new workloads, thanks in no small part to the prominence of Big Iron Linux.
“We hear from a lot of people who don’t necessarily want to bring that [mainframe data] down. They want their data warehouse right on the mainframe,” he says, citing the preponderance of business-critical data that's still housed in VSAM, IMS, or mainframe DB2 data stores. “For a lot of [customers], it just makes more sense to do [ETL processing] natively on the mainframe, because that’s where most of the data is, anyway.”
This is a realization to which a lot of long-time Big Iron customers have cottoned, says Colette Martin, zSeries program director with IBM. “It does start with the desire … and the efficiency of wanting to be able to run those workloads close to the data, because it really helps them integrate their business better, and they find that it’s very efficient when they can do it,” she says.
Like Informatica’s PowerCenter product for distributed platforms, PowerCenter for Mainframe is written in C++ and exploits object-oriented programming technologies and concepts. So it’s not as if Informatica had to earn its COBOL chops—or port its existing PowerCenter assets to Java and J2EE—to retool that product for zSeries. Nevertheless, Tirsell claims, PowerCenter for Mainframe represents a significant investment of resources on Informatica’s part.
“We’ve actually transitioned some of our resources from the core PowerCenter development team into the mainframe team,” he says. “We’ve actually developed a complete technical overlay in our field organizations to support mainframe engagements. So the folks who were helping take PowerExchange into the mainframe world now have another weapon to talk that talk, to know how to sell these products into the mainframe environment.”
Of course, it used to be that one of the drawbacks of doing ETL—or processing of any kind, for that matter—on the mainframe was the historically high cost of Big Iron compute cycles. For this reason, Tirsell concedes, mainframe-based ETL may not be for everybody. “It’s really intended for [customers] who can spare the horsepower and want to do that [ETL processing] natively on zSeries. But if you have the capacity to spare, there are a lot of good reasons to do your ETL on the mainframe,” he points out, noting that some customers prefer to minimize the risk associated with extracting business-critical data sources for transformation and consumption in an operational data store, for example.
This may be changing, however. IBM’s Martin points to several new deliverables that reduce the cost of mainframe compute capacity. For example, she notes, Big Blue has on several occasions reduced the price of its Integrated Facility for Linux (IFL) on zSeries hardware, and last year, IBM introduced its zSeries Application Assist Processor (zAAP), a specialty engine that lets customers run Java workloads on the mainframe without incurring IBM software charges on zAAP capacity.
“Whereas in the past, the choice they had on the mainframe was to run [Java-based workloads] on z/OS on a standard engine, they were then being charged for all of those MIPS on a standard engine at standard engine prices,” Martin explains. “Now they have the ability to run half the workload on the zAAP engine and the other half on the standard engine, and there are no IBM software charges on that zAAP engine.”
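The economics Martin describes can be sketched with a bit of back-of-the-envelope arithmetic. The figures below are purely hypothetical (not IBM list prices), but they illustrate the mechanism: software charges apply only to the MIPS that remain on the standard engine, so offloading half of a Java workload to a zAAP engine roughly halves the software bill for that workload.

```python
# Illustrative sketch with hypothetical figures: IBM software charges
# apply to standard-engine MIPS, but not to capacity running on a zAAP.

def monthly_software_charge(total_mips, zaap_fraction, price_per_mips):
    """Charge only the MIPS left on the standard (general-purpose) engine."""
    standard_mips = total_mips * (1 - zaap_fraction)
    return standard_mips * price_per_mips

# A hypothetical 200-MIPS Java workload at a notional $100 per MIPS per month:
before = monthly_software_charge(200, 0.0, 100)  # everything on the standard engine
after = monthly_software_charge(200, 0.5, 100)   # half offloaded to the zAAP

print(before, after)  # 20000.0 10000.0
```

Both the MIPS figure and the per-MIPS rate are placeholders; the point is only that the chargeable base shrinks in proportion to whatever share of the workload the zAAP can absorb.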
One upshot, says Martin, is that customers are increasingly willing to implement Java-based workloads, running on IBM’s WebSphere Application Server, on the mainframe. “We are now very, very competitive for some of those more Web-serving-type workloads, those WebSphere-type workloads, whereas in the past it might not have been clear that this was something that customers would do on the mainframe,” she explains.
Elsewhere, Martin says, customers have become more accepting of mainframe capacity pricing, thanks in part to IBM’s sub-capacity pricing initiative, which, she says, has seen uptake from a number of ISVs, including prominent vendors such as BMC Software Corp. and Computer Associates International Inc.
“A few years ago, if you asked any [customers], one of the top things on their list would have been ISV software pricing,” she says. “That issue has moved significantly down on their list. Of course, pricing is always going to be a factor in anything that we do, so they’re going to continue to watch it for sure, but it’s definitely not as big a concern, and part of that is because the ISVs are participating in the programs that we’re kind of leading the way on.”
Stephen Swoyer is a Nashville, TN-based freelance journalist who writes about technology.