The Infrastructure Strikes Back: Virtualizing Everything Else

If server virtualization dominated the news in 2007, this year expect a focus on infrastructure virtualization.

by Kevin Epstein

If 2007 was the year of (x86) server virtualization, then 2008 is the year we'll all spend on the infrastructure: virtualizing, automating, and managing everything else those servers need to function in a real, production-ready, heterogeneous data center.

This is not a pretty picture. If it were a film, it would be the middle installment of the original Star Wars trilogy. The initial excitement of the new rebellion is over, and we're all staring at the massive, existing, embedded dark side of the Force, as embodied by the rest of our (physical) data center.

What were we thinking? We fell prey to one of the oldest traps -- thinking inside the box. Server virtualization is great, but it takes place largely within a single computer. Hypervisors allowed us to run multiple, complete servers inside that single physical computer, enabling consolidation and boosting utilization. However, every server still needs connectivity to data and storage networks, and that means looking at the physical world again.

That physical world is an ugly one, too: full of un-virtualized heavy-load servers running production databases or remote presentation servers, or systems on SPARC or PowerPC. It has a mix of hypervisors, which can't trade virtual machine files (yet). Worst of all, it has static IP addresses and 16-hex-digit World Wide Names (WWNs) burned into ROM on Fibre Channel HBA cards, quite effectively locking servers to specific storage zones. Unless, that is, you'd like to re-key all of those numbers yourself.
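To see why static WWNs pin servers to storage, consider a minimal sketch of a Fibre Channel zoning table. The WWNs and zone names here are hypothetical illustrations, not any fabric vendor's format; the point is that moving a workload to new hardware means re-keying every zone that referenced the old HBA.

```python
# Minimal sketch of Fibre Channel zoning keyed by static WWNs.
# WWNs and zone names below are hypothetical illustrations.

# Each zone grants a set of HBA WWNs access to a storage target.
zones = {
    "zone_db_storage": {"10:00:00:00:c9:aa:bb:01"},   # old server's HBA
    "zone_web_storage": {"10:00:00:00:c9:aa:bb:02"},
}

def rekey_zones(zones, old_wwn, new_wwn):
    """Replace old_wwn with new_wwn in every zone that references it.

    With WWNs burned into HBA ROM, moving a workload to different
    hardware forces exactly this kind of re-keying in the fabric.
    """
    touched = []
    for name, members in zones.items():
        if old_wwn in members:
            members.discard(old_wwn)
            members.add(new_wwn)
            touched.append(name)
    return touched

# Move the database workload to a replacement server's HBA:
changed = rekey_zones(zones, "10:00:00:00:c9:aa:bb:01",
                      "10:00:00:00:c9:cc:dd:03")
```

Multiply this by every zone, every LUN mask, and every VLAN that names the old hardware, and the lock-in becomes clear.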

Adding insult to injury, server virtualization compounded the physical issue. Now, each physical machine has to have access to all of the networks and all of the storage zones required, in aggregate, by all the virtual machines it runs. If you have two physical machines and want to move a virtual machine between them, be sure they're on the same subnet and have access to shared storage. Otherwise, you risk a much better and more painful understanding of the phrase "no man (or computer) is an island."
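Those two prerequisites can be captured in a short pre-migration check. This is a hedged sketch, not any hypervisor's actual API; the host records and field names are invented for illustration.

```python
import ipaddress

def can_migrate(src, dst):
    """Check the two classic live-migration prerequisites:
    the hosts share a subnet and at least one datastore."""
    same_subnet = (
        ipaddress.ip_network(src["subnet"]) ==
        ipaddress.ip_network(dst["subnet"])
    )
    shared_storage = bool(set(src["datastores"]) & set(dst["datastores"]))
    return same_subnet and shared_storage

# Hypothetical hosts: A and B share a subnet and a SAN LUN; C shares neither.
host_a = {"subnet": "10.1.0.0/24", "datastores": ["san_lun_7", "san_lun_9"]}
host_b = {"subnet": "10.1.0.0/24", "datastores": ["san_lun_9"]}
host_c = {"subnet": "10.2.0.0/24", "datastores": ["local_disk"]}
```

A VM can move from host_a to host_b, but not to host_c: the island problem in two lines of arithmetic.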

Adding to our challenges, enterprises now operate under even greater financial and regulatory constraints. Even as everything is driven by the need to do more with less and do it safely, we must also implement failover/disaster recovery capability.

What does that look like in 2008? You guessed it: The Revenge of Infrastructure (scored by John Williams).

In 2008, "do more with less, and do it safely" translates into green data centers (less power), automation (fewer people), server repurposing (less space, more resiliency), and the next wave of virtualization: infrastructure virtualization (less power, money, and space, with fewer people and greater resiliency).

That last term is worth a closer look. I've heard it called "real-time infrastructure" by Donna Scott at Gartner, "Virtualization 2.0" by Jean Bozman at IDC, and "what do we do now" by Rachel Chalmers of The 451 Group. Infrastructure virtualization is all about what happens once you've installed a critical mass of hypervisors such as VMware or Xen (or, soon, Microsoft's).

Hypervisors are just operating systems, a fact many forget. With a hypervisor, you can run many servers on a single physical computer -- like a Windows e-mail server and a Linux Web server -- instead of the current "one computer = one server" model.

Underneath it all is still a hypervisor, and most of them are still on x86 machines, too.

What happens to your network, your connections to storage, and the interaction among network, single-OS machines, hypervisor machines, and storage in a dynamic data center environment (where you're trying to test, deploy, and run) when you need to shift those systems to meet your business's changing needs?

The answer is infrastructure virtualization, where servers and their connections to the network and access to storage are not tied to specific hardware. At 8 a.m., one rack might be running a Microsoft e-mail environment on bare-metal blades; by noon, that environment is running in virtual machines on rack-mounts while the blades run bare-metal Linux transaction processing. Of course, this concept has to extend across x86, SPARC, and PowerPC platforms, too (unlike today's x86-only hypervisors).
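That time-based repurposing can be sketched as a tiny scheduler. This is a hypothetical illustration of the idea, not any vendor's product; the role names and hours are invented.

```python
# Hypothetical repurposing schedule for one rack: each entry maps a
# start hour (24h clock) to the personality the rack should boot into.
SCHEDULE = [
    (8,  "exchange_bare_metal_blades"),    # morning: e-mail on blades
    (12, "linux_txn_processing_blades"),   # midday: blades repurposed
]

def role_at(hour, schedule=SCHEDULE):
    """Return the most recently started role for the given hour,
    or None if nothing is scheduled yet."""
    active = None
    for start, role in sorted(schedule):
        if hour >= start:
            active = role
    return active
```

The hard part, of course, is not the schedule but making the network and storage connections follow the role change automatically, which is exactly what infrastructure virtualization software takes on.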

The goal, of course, is the elusive "Rack once, Cable once; Repurpose Infinitely" model.

It sounds like a dream, but it exists in running production environments today, at some of the biggest companies in the world. Those enterprises are implementing software from VMware and Xen, as well as from EMC, Unisys, and Scalent Systems, to tie together the networking, storage, and bare-metal aspects.

This mobility of servers and their associated network and storage connections lets you turn off unused servers (green data centers), automate capacity and load shifting (automation), change which servers are running and which groups are using them (server repurposing), and keep things running elsewhere easily when servers fail (failover/disaster recovery).

In 2008, the infrastructure may, indeed, strike back, with green data centers, automation, and server repurposing.

Like any good trilogy, there's a third -- and hopefully upbeat -- conclusion. We'll call it Infrastructure Virtualization: The Return of the (Virtual) Data Center.

- - -

Kevin Epstein is the VP of marketing and products for Scalent Systems, makers of infrastructure virtualization software. Kevin served as a director for VMware, Inc. from 2002 until 2006, and previously for Inktomi Corporation's Network Products division, RealNetworks, Netscape, and others. Kevin holds a BS degree in High Energy Physics from Brown University and an MBA from Stanford University. He is the author of "Marketing Made Easy" (2006, Entrepreneur Magazine Press/McGraw Hill).