The Mainframe Capacity Conundrum: Getting Better All the Time

What’s not to like about z/Linux and other cheap mainframe workloads?

Mainframe shops have embraced z/Linux, J2EE, and other next-generation workloads largely out of necessity, and it's no wonder. After all, IBM Corp. prices z/Linux and z/WebSphere capacity at a fraction of the cost of full-blown z/OS capacity (for COBOL and Assembler applications), so, on paper at least, it's a no-brainer: switching (where possible) to next-generation workloads can result in large savings.

Some mainframe vets take the opposite view, however. As far as they’re concerned, Big Blue’s next-generation push is actually a Big Iron bait and switch (see http://www.esj.com/Enterprise/article.aspx?EditorialsID=1608).

What’s not to like about cheap mainframe workloads? For one thing, skeptics argue, z/Linux and Big Iron WebSphere can’t husband system capacity and resources as efficiently as z/OS, TPF, VSE, and other traditional mainframe operating environments. Organizations save money in the short term but end up paying more in the long term by purchasing bigger, more powerful mainframes. This shouldn’t and doesn’t disqualify IBM’s next-generation push, however.

For one thing, mainframe hardware has grown ever more powerful with each new generation. When it debuted five years ago, the z900 mainframe was the largest and most powerful Big Iron system IBM had ever developed. But the z900 was, in turn, dwarfed by the z990 (T-Rex) systems that Big Blue announced just 30 months later. T-Rex was itself humbled last summer when IBM again unveiled its latest and greatest mainframe, System z9.

The point, officials argue, is that customers who stick with the platform will reap the rewards—primarily in the form of more Big Iron capacity for the buck. “As customers move from z900 to z990 to z9, and as they move from z/OS 1.4 to 1.5 to 1.7 … those customers who stay the most current [on mainframe hardware and operating environments] have seen very dramatic price/performance improvements in z/OS workloads of all kinds,” said Colette Martin, zSeries program director for IBM, in an interview earlier this year.

Of course, customers that aggressively pursue next-generation workloads will probably need all the mainframe bang-for-the-buck they can get, at least in the short term.

“I think the doubling of MIPS, the growth of the MIPS capacity of the machines, that is encouraging—but the software that consumes the new MIPS is typically in the WebSphere and the Linux workloads,” says Andre den Haan, CIO of mainframe ISV Seagull Software Inc. “To support 1,000 users, you have X [times] the number of MIPS that you require in a traditional workload environment. It is not unrealistic that you might need at least 20 times as many MIPS to support the same number of users in a Java environment.”

Don’t call den Haan a next-gen skeptic, however. He thinks the emergence of Big Iron Linux and J2EE is an indication of a healthy, if evolving, mainframe market.

“It is essentially a fact of life. It is the case with every computing platform that new methods of building software require you to double or quadruple your computing capacity,” he comments. “You saw it in the PC world. We ran PC applications in the early 1980s with 128 KB floppy disks and 16 KB of main memory. Of course, you’ve had this huge growth of hardware capacity since then—it’s probably 1,000-fold or more of what it was 20 years ago.”

What’s more, says Ken Sharpe, an operating systems specialist with a southwestern state government, there’s a capacity trickle-down effect that benefits all mainframe shops. “We do have [Big Iron] Linux but we [have] not shift[ed] any COBOL workloads from MVS to it,” Sharpe observes, noting that, for this reason, he won’t comment on the relative efficiency of z/OS and z/Linux.

Even so, he says, his and other mainframe shops are benefiting from Big Blue’s next-gen push, which has, arguably, helped spur enormous expansions in mainframe capacity from one hardware generation to the next.

“Just watch out for the sales [and] enhancement changes IBM is doing on the new machines [such as System z9],” he says, noting that Big Blue’s new MSU pricing structure at first threw him for the proverbial loop. “The MSU rating for the new boxes is discounted. This got me very confused until I read the fine print. I told our managers the migration … would only give us an increase of 3 percent CPU and I was very wrong.” Instead, Sharpe explains, a migration from his organization’s existing 2064-2C4 systems to new 2094-702 mainframes would result in an 11 percent improvement in mainframe price/performance.
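
Sharpe’s confusion is easy to reproduce on paper. Because newer processors carry a discounted MSU rating for software-pricing purposes, comparing MSU figures across hardware generations understates the real capacity gain, while the cost per unit of work actually falls. The short Python sketch below illustrates the arithmetic; every number in it is invented for the example and has nothing to do with the actual ratings of the 2064-2C4 or 2094-702.

def pct_change(old, new):
    """Percentage change from old to new, in percent."""
    return (new - old) / old * 100.0

# Raw processing capacity of each box (e.g., in MIPS) -- hypothetical figures.
old_mips = 1000.0
new_mips = 1300.0      # the newer box really is about 30 percent bigger

# Software-pricing rating (MSUs). The newer box carries a "discounted"
# MSU rating, so the paper figure grows far more slowly than real capacity.
old_msu = 200.0
new_msu = 206.0        # only about 3 percent higher on paper

# Monthly software bill, assuming a flat (hypothetical) price per MSU.
price_per_msu = 100.0
old_cost_per_mips = (old_msu * price_per_msu) / old_mips
new_cost_per_mips = (new_msu * price_per_msu) / new_mips

print(f"MSU rating change:    {pct_change(old_msu, new_msu):+.1f}%")
print(f"Real capacity change: {pct_change(old_mips, new_mips):+.1f}%")
print(f"Cost per MIPS change: {pct_change(old_cost_per_mips, new_cost_per_mips):+.1f}%")

Read only the first figure and the upgrade looks like a rounding error; read all three and the kind of price/performance improvement Sharpe eventually found in the fine print comes into view.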

In many cases, Big Iron vets who have deployed Linux- or J2EE-based workloads have pragmatic takes on the pluses and minuses of the next-gen value proposition. “We've been working with Linux on z/Series for approximately four years. Two years ago we piloted a z/VM [and] WebSphere trial, and that has gone quite well. We have been in production with z/VM and Linux on z/Series for about 16 months. We have [a] couple [of] dozen applications coming down the [pike], and about 10 or so in production. One of these is a fairly heavy hitter,” says Jim Melin, a mainframe systems programmer with a Minnesota county government. “That said we have [two IFLs] with 7 [GB] of memory driving the environment [6 GB central, 1 GB expanded]. At peak loads, we're seeing 25 percent of processor capacity being used on average.”

“What have we learned about Java efficiency [versus] 'traditional' workloads such as COBOL? We've learned that WebSphere is resource-intensive, and Java, even though it is byte code that is 'compiled' by the just-in-time compiler, is not as efficient as traditional workloads on z/OS,” Melin comments. “Why is that? Well, for one, z/OS compiler design, and as a result, COBOL coding design, comes from the very old-school, resource-constrained environments of the early mainframe world. Every iota of efficiency you could get out of something the better you were. A simple coding change to a program can vastly alter the performance in the real world—sometimes for the worse.”
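
Melin’s observation that “a simple coding change to a program can vastly alter the performance” is not specific to COBOL or to the mainframe. Here is a deliberately generic Python toy (nothing drawn from the county’s actual code) in which swapping one data structure for another, in an otherwise unchanged lookup loop, changes the cost of every lookup:

import timeit

values = range(100_000)
as_list = list(values)   # membership tests scan the list: O(n) per lookup
as_set = set(values)     # membership tests hash the key: O(1) per lookup

def count_hits(container):
    # The loop is identical either way; only the container type differs.
    return sum(1 for x in range(0, 100_000, 1_000) if x in container)

slow = timeit.timeit(lambda: count_hits(as_list), number=10)
fast = timeit.timeit(lambda: count_hits(as_set), number=10)
print(f"list: {slow:.3f}s   set: {fast:.3f}s   (roughly {slow / fast:.0f}x apart)")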

He contrasts this approach with traditional J2EE application development. “Traditional Java development is done in the Wintel world, or even if it is on Linux, it is largely done on Intel hardware where memory is plentiful, cycles are cheap, and efficiency isn't a huge concern. If it's not behaving well, people tend to throw more processing power or memory at it. It's a cheap way of getting more out of the application,” he explains.

“In the Linux on z/Series under z/VM environment, inefficiency is multiplied, not masked. If you have an application on four server regions and it unnecessarily uses more memory than it needs, and is inefficient in its design, you will consume memory and CPU resources that in the Intel world affect a single machine, but in the virtualized world—be it z/VM or VMware—this consumes resources that could otherwise be used by other virtual machines.”
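
The multiplication Melin describes is easy to put on paper. The short Python sketch below uses invented per-region numbers (only the 7 GB pool size echoes the environment he describes) to show how the memory an untuned application wastes in each of several server regions comes straight out of the pool that every other guest shares:

# Hypothetical figures; only the 7 GB pool echoes Melin's environment.
shared_pool_gb = 7.0          # total memory behind the z/VM guests
server_regions = 4            # instances of the application running as guests
needed_gb_per_region = 0.75   # what a well-tuned instance might need
actual_gb_per_region = 1.25   # what the untuned instance actually grabs

used = server_regions * actual_gb_per_region
wasted = server_regions * (actual_gb_per_region - needed_gb_per_region)
left_for_other_guests = shared_pool_gb - used

print(f"Pool consumed by the application: {used:.2f} GB of {shared_pool_gb:.2f} GB")
print(f"Memory lost to inefficiency:      {wasted:.2f} GB")
print(f"Left for every other guest:       {left_for_other_guests:.2f} GB")

# On four dedicated Intel boxes the same half gigabyte of waste per instance
# would be contained to each box; in the shared pool it is capacity that the
# other virtual machines can no longer use.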

This doesn’t have to be a showstopper, Melin argues. “The lesson to be learned here is that there are always tuning opportunities in the environment, the configuration and the applications themselves.”

More to the point, he says, some solutions to next-gen capacity woes can be found in last-gen best practices. “There are application-design methods that are time tested in the z/OS environment that have bearing in the J2EE world,” he explains. “Case in point—our heavy-hitter application recently ran into some problems. [We] rolled out a new version and it was not behaving well. We examined everything we could tune in the z/VM, z/Linux environment and asked some questions of IBM. What came out of it was that the application was changed to be more efficient. This solved that issue for them.”

About the Author

Stephen Swoyer is a Nashville, TN-based freelance journalist who writes about technology.