
Big Blue's Intel Server Line Gets Virtualization Overhaul

How well does Big Blue's highly virtualized Intel server vision compare with mature implementations available on System z or RISC/Unix?

IBM Corp. last week rebranded its xSeries Intel server line, announced a new server virtualization software tool, and hyped the benefits of commodity server virtualization technology. But just how well does Big Blue’s highly virtualized Intel server vision compare with mature implementations available on System z, System i, or competitive RISC/Unix platforms?

Like its System z (nee zSeries), System i (iSeries), and System p5 (pSeries) brethren, xSeries is losing its suffix and getting a new “System” prefix. Officials hope the rebranding will draw attention to the virtualization strategy IBM plans to use to differentiate its technology in the commodity server space.

Market watchers say 2006 is the year in which server virtualization goes mainstream. A confluence of factors—including commodity 64-bit processors, dual-core chips, and the maturation of commodity virtualization solutions—will help fuel a virtualization renaissance of sorts, both International Data Corp. (IDC) and Forrester Research predict.

The “mainstreaming” of virtualization is by no means a slam dunk (much less a fait accompli), however. The big server vendors must first give customers a reason to care about virtualization—particularly in the low-end, or commodity, server space. After all, the commodity chips from Intel Corp. and Advanced Micro Devices (AMD) Inc.—Xeon and Opteron, respectively—probably aren’t the first designs that come to mind when one thinks of highly scalable, virtualization-friendly microprocessors (or their associated chipset logic), especially compared with the proprietary processor and chipset CMOS designs developed by Hewlett-Packard Co., IBM, and Sun Microsystems Inc., among others.

That’s why companies such as HP and IBM have been working overtime to sell prospective customers on the benefits of commodity server virtualization.

Take Big Blue, for example, which last week announced its new Consolidated Discovery and Analysis Tool (CDAT), a utility that’s designed to help identify ways in which customers can consolidate and virtualize existing x86 systems.

“It basically goes out on your network to every IP address it can find, tells you how many servers you have—which is often an enlightening thing for many IT managers—and it tells you what OSes are there. You can [also] determine what software [is installed], look for security issues, and stuff like that,” explains Jay Bretzman, director of IBM System x servers. “When all is said and done, it comes back with a recommendation on where you can virtualize and why.”
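To make the discovery step concrete, here is a minimal sketch, in Python, of the kind of sweep Bretzman describes: walk an address range, note which hosts answer, and guess at what is running on them. This is not IBM’s CDAT; the subnet, the probe ports, and the OS guesses are all assumptions made purely for illustration.

    # Illustrative only: a bare-bones discovery sweep in the spirit of what
    # Bretzman describes. This is NOT CDAT; the subnet, ports, and OS
    # heuristics below are assumptions.
    import socket

    SUBNET = "192.168.1."          # hypothetical /24 to sweep
    PROBE_PORTS = {
        22:   "likely Unix/Linux (SSH)",
        135:  "likely Windows (RPC)",
        3389: "likely Windows (RDP)",
    }

    def probe(host, port, timeout=0.3):
        """Return True if `host` accepts a TCP connection on `port`."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    inventory = {}
    for last_octet in range(1, 255):
        host = SUBNET + str(last_octet)
        hints = [desc for port, desc in PROBE_PORTS.items() if probe(host, port)]
        if hints:
            inventory[host] = hints

    print(f"{len(inventory)} responsive hosts found")
    for host, hints in sorted(inventory.items()):
        print(host, "->", "; ".join(hints))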

Big Blue isn’t just giving CDAT away, of course. Instead, Bretzman pitches it as an IBM business partner-oriented offering, designed to connect customers with business partners who can educate them about the virtues of virtualization. “We’ve trained 600 business partners on how to use the tool, and when they actively engage with their clients, we see about a 90 percent success rate.”

The impetus to virtualize, Bretzman claims, is real and substantive. Big-Iron shops are accustomed to squeezing 90 percent, 95 percent, or even 100 percent utilization out of their (considerably pricier) mainframe CMOS, but utilization rates are typically much lower in the commodity server space.

“[The mainframe is] a little different in that it’s back-office—you’re doing a lot of batch processing, there aren’t so many huge, unanticipated spikes in demand. In the [commodity] server [segment], you’re aiming for 50 percent utilization, if you’re lucky, because that allows for a lot of spiky traffic.”
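Bretzman’s utilization argument boils down to simple arithmetic. The back-of-the-envelope Python sketch below uses the 50 percent target he cites; the fleet size, the per-box utilization, and the relative capacity of the consolidation hosts are assumed values, not figures from IBM.

    # A hypothetical back-of-the-envelope consolidation estimate. The 50
    # percent utilization target echoes the quote above; the fleet size,
    # average per-box utilization, and relative host capacity are assumptions.
    import math

    servers            = 40      # assumed number of lightly loaded commodity boxes
    avg_utilization    = 0.10    # assumed average CPU utilization per box
    target_utilization = 0.50    # the "if you're lucky" target Bretzman cites
    relative_capacity  = 4.0     # assumed: each consolidation host has ~4x the CPU of a legacy box

    total_demand = servers * avg_utilization   # demand in "legacy-server" units
    hosts_needed = math.ceil(total_demand / (relative_capacity * target_utilization))

    print(f"Aggregate demand: {total_demand:.1f} legacy-server equivalents")
    print(f"Consolidation hosts needed: {hosts_needed}")
    # Under these assumptions, 40 physical servers collapse onto 2 larger,
    # virtualized hosts, which is the consolidation math behind the pitch.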

Thanks to virtualization-friendly hardware enhancements, as well as the maturation of virtualization software from EMC Corp.’s VMware subsidiary, Microsoft Corp., and other vendors, virtualization is now a considerably more feasible proposition on commodity servers, Bretzman maintains. He cites IBM’s own Enterprise X architecture—now in its third revision—as a virtualization-friendly case in point: Enterprise X, the chipset logic IBM designs for Intel’s 32- and 64-bit Xeon processors, is built around a proprietary memory controller that circumvents the limitations of Intel’s commodity memory controller (which is, in effect, separate from the front-side bus).

This lets Big Blue’s System x servers address more memory more rapidly—that is, with greater memory bandwidth—than competitive designs. And memory, Bretzman argues, is the lifeblood of virtualization technology.

“With our [Enterprise] X 3 architecture, our experience has been that customers can aim a bit higher, so they can probably go for a 65 percent utilization [level], because the scale up is more structured [and] they’re able to pool that excess [memory] capacity across all those processors,” he observes.

In part, Bretzman says, this is because Enterprise X revision 3 can support as many as 64 processor cores in a single system image.

“You can plug 32 processors into one single instance [System x3950], and if you have dual-core processors, that gives you 64-way SMP. But you get the same amount of memory from 32- to 64-way—512 GB—so for virtualization [scenarios] that might not make a lot of sense [deploying so large a system image] because memory is so important.”
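The arithmetic behind that caveat is straightforward. The short Python calculation below uses the 512 GB ceiling quoted above; the 4 GB-per-virtual-machine figure is an assumption chosen only to illustrate why the VM count is capped by memory rather than by cores.

    # Using the figures quoted above: the complex tops out at 512 GB whether it
    # runs 32 or 64 cores, so doubling the core count halves memory per core and
    # adds no room for additional virtual machines. The 4 GB-per-VM figure is an
    # assumption for illustration only.
    MAX_MEMORY_GB     = 512
    ASSUMED_GB_PER_VM = 4

    max_vms = MAX_MEMORY_GB // ASSUMED_GB_PER_VM   # capped by memory, not cores

    for cores in (32, 64):
        print(f"{cores}-way: {MAX_MEMORY_GB / cores:.0f} GB per core, "
              f"still ~{max_vms} VMs at {ASSUMED_GB_PER_VM} GB each")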

Nevertheless, he argues, the availability of mature, scalable virtualization technology provides a plausible response to a long-standing meme—namely, that even if commodity server technology has grown more scalable, commodity operating system technology is unable to take advantage of that scalability. To some extent, Bretzman concedes, this is true: the sweet spot for the straight-up (i.e., conventional) server space is still in the two- to four-way bracket. But virtualization—which lets customers wring unprecedented utilization levels out of their commodity hardware—is a game-changing technology.

“We see the sweet spot in the virtualization space being 8-way servers with 16 processor cores; that’s considerably more” than in the conventional server space, he says. “In a virtual environment, he who deploys the fewest servers wins, so you want to pick a platform that has a little bit more capability, uses that excess capacity, and can implement more processors than you can on the smaller [conventional] server platform.”

There’s another strong selling point associated with virtualization, Bretzman argues: the transition to a highly portable application infrastructure. Now, instead of re-deploying and re-installing an operating system and its associated application (in the event of a server outage, migration, or replacement scenario), one has only to copy a virtual operating system image over to a new destination platform. It can be as simple as dragging and dropping, he says. Ideally, vendors will at some point improve interoperability between and among competing virtualization schemes—such that a user of VMware’s ESX Server product could move virtual system images to Microsoft’s Virtual Server platform, and vice versa—although that’s a pipe dream at present, Bretzman acknowledges. “In VMware ESX, however, you can have a pending failure event kick off the migration of [virtual system image] software, and that is [happening] on the fly.”
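Bretzman’s portability point can be sketched in a few generic lines. The Python fragment below is not VMware’s or Microsoft’s tooling; the hostnames, image path, health check, and the start-vm command are hypothetical stand-ins for whatever product-specific machinery actually performs the copy and restart.

    # A deliberately generic sketch of the "portable image" idea: when a host
    # reports a pending fault, copy the virtual machine's disk image to a
    # standby host and start it there. Hostnames, the image path, the health
    # check, and the start-vm command are hypothetical placeholders.
    import subprocess

    FAILING_HOST  = "host-a.example.com"
    STANDBY_HOST  = "host-b.example.com"
    VM_IMAGE_PATH = "/vms/erp-frontend.img"

    def host_reports_pending_failure(host):
        # Placeholder: in practice this signal would come from hardware
        # monitoring (predictive failure alerts, service-processor events).
        return True

    if host_reports_pending_failure(FAILING_HOST):
        # Copy the virtual machine image to the standby host...
        subprocess.run(
            ["rsync", "-a",
             f"{FAILING_HOST}:{VM_IMAGE_PATH}",
             f"{STANDBY_HOST}:{VM_IMAGE_PATH}"],
            check=True,
        )
        # ...then start it there (the command is illustrative, not any
        # specific product's CLI).
        subprocess.run(["ssh", STANDBY_HOST, "start-vm", VM_IMAGE_PATH], check=True)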

Bretzman stops short of comparing commodity server virtualization technology with the mature implementations available on System z, System i, and other platforms. “It’s different than the mainframe technology, in part because with the mainframe, you’re aiming for much higher utilization [levels]—and you’ve got [batch-oriented] workloads that let you do that,” he comments. “But it also doesn’t make sense to compare the investment we have in [mainframe virtualization] with x86 [commodity server] virtualization. It’s unfair, considering how much effort went into developing that [mainframe virtualization], and because we design the [mainframe] hardware ourselves.”

Big Blue Still Cool Toward Opteron

Bretzman and IBM have been ambivalent about AMD’s Opteron processor, which was the first 64-bit x86 chip and the Ur-productization of the 64-bit extensions now used in both AMD’s and Intel’s 64-bit processors. IBM currently markets an Opteron-based workstation, along with an Opteron-powered 1U rack system, which it typically pitches for high-performance technical computing (HPTC) scenarios.

At this point, Bretzman says, Big Blue has no plans to incorporate Opteron into its System x line. For starters, he argues, AMD’s HyperTransport-based bus design (in which the memory controller is integrated into the Opteron chip itself) is mostly incompatible with Enterprise X, whose most salient value-add is its proprietary (faster and more scalable) memory controller design.

“Enterprise X is pretty much a completely different architecture [than HyperTransport]. AMD, of course, has the integrated memory controller, and because we invested in our own memory technology, it didn’t make as much sense for us to ship high performance AMD systems because we’ve closed that [performance] gap [with our Intel-based Enterprise X technology].”

There’s another reason IBM isn’t hell-bent on Opteron, says Bretzman—one that’s particularly apposite to the virtualization problem. Because AMD’s Opteron design uses an integrated memory controller, it has much tighter memory tolerances than Intel’s or IBM’s approaches. As a result, Opteron (and Athlon 64) systems can typically be populated with fewer DIMMs (or with DIMMs running at slower speeds) than comparable Intel-based systems.

“In an AMD system with one processor plugged into it, the maximum DIMMs I could address would be six to eight, depending on the speed [at which the memory was running]. Compare that with an Enterprise X configuration, where you can have 16 [DIMM] slots,” he comments. “If the biggest DIMMs you can fit [in those slots] are 4 GB, that’s … 24 to 32 GB of memory for the Opteron and 512 GB for Enterprise X, in a four-chassis configuration.”
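That comparison reduces to slots times DIMM size. The small Python calculation below uses only the slot counts and the 4 GB DIMM size from the quote, and restates the 512 GB figure for a fully scaled Enterprise X complex as given in the article rather than deriving it.

    # Memory headroom per box, using the figures Bretzman quotes. The slot
    # counts and 4 GB DIMM size come from the article; nothing here is taken
    # from a vendor spec sheet.
    DIMM_GB = 4

    opteron_slots      = (6, 8)   # single-socket Opteron box, per the quote
    enterprise_x_slots = 16       # DIMM slots in an Enterprise X chassis, per the quote

    low, high = (slots * DIMM_GB for slots in opteron_slots)
    print(f"Opteron box:          {low}-{high} GB")
    print(f"Enterprise X chassis: {enterprise_x_slots * DIMM_GB} GB per chassis "
          f"(512 GB quoted for a fully scaled multi-chassis complex)")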
