In-Depth

Virtual Systems of Tomorrow Could Take Cues from Today's Mainframes

The Big, Fast virtual systems of tomorrow might look a lot like today's Biggest, Fastest, and most eminently Virtual system: IBM's System z mainframe.

Given virtualization's many virtues -- e.g., the ability to rapidly deploy new operating system (OS) or application instances, or to provision additional OS or application resources in response to demand -- it's tempting to think of it as a turnkey proposition: you buy a hypervisor, you install it on your hardware, and -- voila! -- you've virtualized your infrastructure.

Not quite. Virtualization is both a software and a hardware play, but -- chiefly because software hypervisors from VMware Inc., Citrix Systems Inc., and Microsoft Corp. garner the lion's share of attention (virtual and otherwise) -- you almost wouldn't know it. Nonetheless, experts say, successfully realizing an infrastructure in which all (or almost all) IT assets are treated as virtual resources will require a big helping hand on the hardware side, too.

OEMs, thankfully, are already building the Big, Fast virtual servers of tomorrow. To a surprising degree, tomorrow's highly virtualized systems sound like (and might actually look like) the most highly virtualized platform of today: IBM Corp.'s System z mainframe.

Virtualization Changes Everything

Pervasive virtualization will change just about everything, says Gordon Haff, a principal IT advisor with consultancy Illuminata -- starting first and foremost with how shops go about operating or optimizing their data centers. That's because virtualization in its many flavors (e.g., server, storage, network) tends to encourage the shifting of workloads from one physical resource to another.

To do this, or to do it more effectively than is possible today, virtualization must also change how these systems are designed, Haff points out.

"[A] system intended to run a dynamic mix of mobile workloads doesn't necessarily have the same characteristics as one oriented toward running a modest number of static applications," he says. "We're seeing different tradeoffs in system specifications -- such as increased memory capacities and a requirement for processors with virtualization assists built into their instruction sets -- as a result."

The upshot, Haff continues, is truly the stuff of sea change: system design is shifting away from a physical conceptual model and toward a virtual one. That's as it should be, according to Haff.

"[V]irtualization changes how physical servers are used," he asserts. "[T]hat's basically the point. It lets you run a variety of workloads on a single system, increase hardware utilization, and shift around workloads in response to changes in demand," Haff continues. "Thus, it would hardly be surprising if servers optimized for virtualization didn't necessarily mimic designs favored for running a modest number -- or even just one -- application."

Even so, he concedes, the virtual-friendly servers of today still look a lot like their predecessors -- for good reason. "The differences aren't necessarily dramatic. They don't -- so far -- result in servers that are unrecognizable, or that aren't also suitable for running un-virtualized workloads as well," Haff stresses, "but we're clearly starting to see changes -- both in the way that virtualization is being woven more tightly into the system's fabric and in the way that other aspects of the hardware are evolving in response to the differing -- and often more demanding -- requirements of virtualized workloads."

Borrowing from Big Iron

One refinement we're already starting to see is the embedded hypervisor: VMware, for example, touts ESXi, a hypervisor that runs in 32 MB of flash memory (see http://www.esj.com/news/article.aspx?EditorialsID=2823); ditto for Citrix and its XenExpress technology, which it picked up with its acquisition of XenSource.

It's an idea with mainframe-esque roots, according to Haff, who cites as an example Start Interpretive Execution (SIE), a specialized virtualization instruction that IBM first enabled on its System/370 mainframes back in the early 1980s. The embedded hypervisors of today -- which many x86 hardware OEMs now offer as options -- are somewhat similar, according to Haff.

"The idea is that you buy a server with an embedded hypervisor sitting somewhere on a flash memory card or a USB key. Booting the server for the first time then kicks off a menu-driven configuration process that would end up with an installed hypervisor ready for guest operating systems to be loaded on top. Effectively, the base platform exposed to the administrator becomes the hypervisor rather than the hardware."

It's in this respect and others that the ideal "virtual-ready" server of tomorrow might look a lot like an existing archetype -- the old mainframe. Of course, even with embedded hypervisor support, x86 virtualization is still a far cry from the mainframe gold standard, Haff points out.

"[W]hen a vendor controls the whole technology stack [as IBM does] from processor to operating system, that control can be leveraged to make virtualization really hum," notes Haff, who adds that "System z remains the gold standard in this regard" -- although Big Blue's POWER systems -- with their PowerVM implementation -- aren't slouches, either, thanks to their hypervisor decrementer (HDECR) technology, which enables mainframe-like granularity.

Ditto for the question of system or image size: the virtual-ready servers of tomorrow could resemble -- in sheer mass, if not in internal CMOS -- the highly virtualized Big Iron systems of today. The sheer horsepower of today's systems all but demands that resources be virtualized, Haff argues. Many SMP systems are simply too powerful to play host to the discrete applications or services that once constituted their raisons d'être. That raises a question: why buy big SMP servers at all? If it's truly a virtual world, why not just go with dozens (or even hundreds) of comparatively inexpensive blades?

Here, too, the mainframe leads by example. "[T]here are offsetting advantages to using fewer, but larger, physical servers," comments Haff. "Big Iron has customized internal connections that let a system communicate internally at memory access speeds," he explains. "Such interconnects are more expensive than the networks used to coordinate a collection of scale-out boxes. It's also several orders of magnitude faster … than networking gear."
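
To put rough numbers on "orders of magnitude," the back-of-the-envelope sketch below compares two assumed ballpark latencies (illustrative figures, not measurements cited here): a memory-speed internal interconnect at roughly 100 nanoseconds per access versus a commodity gigabit Ethernet round trip at roughly 100 microseconds:

    import math

    # Assumed ballpark latencies, for illustration only -- not benchmarks.
    MEMORY_ACCESS_NS = 100            # ~100 ns: memory-speed internal interconnect
    GIGABIT_ROUNDTRIP_NS = 100_000    # ~100 microseconds: gigabit Ethernet round trip

    ratio = GIGABIT_ROUNDTRIP_NS / MEMORY_ACCESS_NS
    print(f"A network hop is roughly {ratio:,.0f}x slower "
          f"(~{math.log10(ratio):.0f} orders of magnitude)")

Under those assumptions the gap works out to roughly three orders of magnitude -- the kind of difference Haff is describing.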

Even in virtual space, large trumps small. "Specialized high-end servers carry premium price tags, and administrators may need to learn new tools and acquire some new skills to operate them," Haff concedes. "[O]ther things being equal, large is better than small when it comes to hardware for virtualized environments."

Other virtual-ready accoutrements include support for considerably more memory and fatter pipes. Big Commodity Memory is coming: high-speed memory specialist Hynix recently demonstrated three 16 GB DDR3 memory modules running in a tri-channel configuration -- 48 GB in total, and a 300 percent improvement over previous module densities. Furthermore, pipes don't get much fatter than virtualized Ethernet, Haff notes.

It's in this respect, too, that the virtual server of tomorrow smacks of the mainframe system of today. "[L]arge numbers of Linux guests [running on z/VM] don't need to communicate with each other over a standard network interface. Oh, they think that's what they are doing. However, … the traffic never enters any physical networking hardware," he points out.

These days, x86 virtualization players are employing similar design tactics. "VMware and XenServer provide analogous capabilities on x86 servers. In addition to traversing interconnects that are faster than a 'real' network, this virtual Ethernet can do some other optimizations" -- including the elimination of CPU-intensive TCP checksum processing.
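
For a sense of the work being skipped, here is an illustrative sketch of the standard Internet checksum (RFC 1071) that a TCP/IP stack computes over every segment headed for a physical network -- per-packet arithmetic a virtual NIC can omit or defer when guest-to-guest traffic never leaves host memory. (The code is a generic illustration, not taken from any vendor's stack.)

    # Illustrative only: the RFC 1071 Internet checksum computed per segment on
    # a real network. When traffic stays inside the host, a virtual NIC can
    # skip or defer this work.
    def internet_checksum(data: bytes) -> int:
        if len(data) % 2:                            # pad odd-length data
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]    # sum 16-bit words
        while total >> 16:                           # fold carries into 16 bits
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF                       # one's complement

    # Per-segment cost on the wire; skippable when the "wire" is host memory.
    print(hex(internet_checksum(b"example TCP segment payload")))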
