Virtualization Best Practices: Tweaked Proverbs

Despite the novelty of virtualization (along with its unforeseen applications), the tried-and-true axioms and sage wisdom of the past -- albeit with new twists -- can guide best practices while treading new ground.

By Kenneth Klapproth

The rapid pace of technological change affects nearly every facet of our lives -- particularly for those of us in the IT industry -- forcing us to continually learn new ways of accomplishing the same familiar tasks. The milk crates that neatly housed your collection of vinyl LPs gave way to a suitcase of eight-track tapes, only to be replaced by a valise of cassette tapes, which was supplanted by a rack of compact discs, to be eventually outdone by a portable flash memory device chock full of every digital music file ever to have crossed your path.

Instead of having to rouse yourself from your chair every 32 minutes to flip the record, swab the platter, gently run your thumb over the needle to dislodge any accumulated dust, adroitly set the arm to preclude skipping, and reset the volume to ensure consistent sound levels, you can now listen for days or even weeks, interrupted only by the eventual need to recharge the permanently sealed lithium-ion battery. Songs are no longer just audio, albums are now playlists, and managing your collection requires a computer, an organizational application, a backup/restore policy, and the better part of an evening spent arranging tracks to achieve "killer" status.

As we race headlong into the next life-changing technological revolution, such as virtualization, is it possible to stem the bleeding of trial and error at technology's cutting edge, or are we doomed to reinvent every process and re-experience every growing pain we survived in previous iterations? Despite the novelty of virtualization (along with its unforeseen applications), the tried-and-true axioms and sage wisdom of the past -- albeit with new twists -- can guide best practices while treading new ground.

One Good Turn Deserves Another, and Another, and Possibly Another

One of the key benefits of virtualization technology is the ability to consolidate servers. Not only does server virtualization enable rapid deployment of various operating systems, but it also reduces maintenance costs, administrative time, energy consumption, and capital expenditures, while mitigating security risks. Because each virtual machine (VM) is encapsulated as a self-contained instance, multiple virtual servers can be deployed on a single physical server, allowing organizations to achieve higher, more cost-effective utilization rates -- particularly on x86 platforms.

More than 80 percent of server shipments today use x86 architectures. High-density x86 rack servers make physical deployments easy and allow for better real estate management in the data center. However, a typical x86 server uses only 5 to 10 percent of its available computational capacity on any given day. Consolidating multiple virtual servers onto a single host can boost utilization into the 60 to 70 percent range recommended by capacity-planning experts, reducing power and cooling requirements as well as the need to acquire new hardware as the business adds new enterprise applications.
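To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. Every figure in it -- the 8 percent per-workload utilization, the 65 percent target, the 100-server starting point -- is an illustrative assumption, not a measurement from any particular data center:

    # Back-of-the-envelope consolidation estimate. All figures are
    # illustrative assumptions, not measurements from a real environment.

    avg_utilization_per_workload = 0.08   # assume each workload uses ~8% of one server
    target_utilization = 0.65             # midpoint of the 60-70 percent range

    # How many such workloads can share one physical host?
    vms_per_host = int(target_utilization / avg_utilization_per_workload)

    physical_servers_before = 100
    # Ceiling division: hosts needed to carry all the workloads
    physical_servers_after = -(-physical_servers_before // vms_per_host)

    print(f"VMs per host: {vms_per_host}")                         # -> 8
    print(f"Hosts after consolidation: {physical_servers_after}")  # -> 13

Under these assumptions, roughly eight workloads can share each host, collapsing 100 physical servers to about 13.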

Nothing Exceeds Like Excess -- Or Too Many VMs

There is a point of diminishing returns when it comes to virtual server deployments. A balance must be struck between physical server utilization rates and the perceived performance of the applications or services delivered to end users. If the consolidation ratio is too low, you limit the overall potential savings from the new technology; if it is too high, users will experience degraded performance and complain to the help desk.

Getting the mix just right requires an understanding of operational characteristics across the physical servers, the virtual servers, and the network connecting them all to end users. With the right management application -- one providing visibility into all three areas -- you can collect statistics that provide insight into the complex interaction between these tiers and make well-informed decisions. Only by monitoring, collecting, and alerting on the operation and performance of the virtual machines, the physical servers that host them, and the networks over which they deliver applications and services can operations personnel ensure that mission-critical business services remain continually available.
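A minimal sketch of what such cross-tier alerting might look like follows. The metric names, the check() helper, and the threshold values are hypothetical stand-ins for whatever your management application actually exposes:

    # Minimal sketch of cross-tier threshold alerting. Metric names and
    # threshold values are hypothetical, for illustration only.

    THRESHOLDS = {
        ("host", "cpu_utilization"): 0.70,      # physical server ceiling
        ("vm", "cpu_ready_ms"): 2000,           # VM waiting on the scheduler
        ("network", "link_utilization"): 0.80,  # segment carrying the app traffic
    }

    def check(tier, metric, value):
        """Return an alert string when a metric crosses its ceiling."""
        limit = THRESHOLDS[(tier, metric)]
        if value > limit:
            return f"ALERT {tier}/{metric}: {value} exceeds {limit}"
        return None

    # Example poll results (fabricated for illustration):
    samples = [
        ("host", "cpu_utilization", 0.82),
        ("vm", "cpu_ready_ms", 350),
        ("network", "link_utilization", 0.91),
    ]

    for tier, metric, value in samples:
        alert = check(tier, metric, value)
        if alert:
            print(alert)

The point is less the code than the shape of it: one view spanning all three tiers, so a hot host, a starved VM, and a saturated link show up in the same place.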

Out of Sight, Out of Mind, and Out of Control

Another key benefit of virtualization technology is the speed and ease of deploying new instances for a broad range of uses. Test environments can be quickly implemented to validate new enterprise applications, isolated environments can be secured to quarantine sensitive projects or contain virus outbreaks, and requisite operating systems can be configured despite a lack of supporting physical hardware. Its range of application, ease of use and deployment, and versatility have IT organizations adopting it faster and more broadly than even the virtualization solution providers could have imagined. This explosion (and the operational difficulties that result) has spawned the term "VM sprawl."

It's not enough to know that a certain number of VMs are deployed, or even where they are deployed. Technologies such as VMware VMotion can dynamically move them between physical servers when VM performance degrades. In doing so, however, the route from the user to the business application also changes -- possibly negating the intended effect. The number of network device hops could increase; the throughput of those devices, the capacity of that network segment, or the makeup of the overall traffic could be negatively affected. The result: users perceive degraded performance. Only a management solution that can track VMs as they move about the network and assess the network's contribution to overall performance gives operations personnel the ability to study, understand, and document compliance, and to provision capacity that delivers optimum performance to users.
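As a simple illustration of the hop-count concern, the following sketch compares the user-to-application path before and after a hypothetical migration. The topology and hop counts are invented; a real tool would pull them from network discovery data:

    # Sketch: flag migrations that lengthen the user-to-application path.
    # The topology below is invented purely for illustration.

    from collections import deque

    # Nodes are network devices and hosts; values are directly linked neighbors.
    LINKS = {
        "user_lan": ["core1"],
        "core1":    ["user_lan", "rack_a", "rack_b"],
        "rack_a":   ["core1", "host1"],
        "rack_b":   ["core1", "agg1"],
        "agg1":     ["rack_b", "host2"],
        "host1":    ["rack_a"],
        "host2":    ["agg1"],
    }

    def hops(src, dst):
        """Breadth-first search: number of network hops between two nodes."""
        seen, queue = {src}, deque([(src, 0)])
        while queue:
            node, dist = queue.popleft()
            if node == dst:
                return dist
            for nxt in LINKS[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, dist + 1))
        return None

    before = hops("user_lan", "host1")  # VM originally on host1 -> 3 hops
    after = hops("user_lan", "host2")   # after a migration to host2 -> 4 hops
    if after > before:
        print(f"Migration added {after - before} hop(s); review placement.")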

To Err is Human; Excuses Require Documentation

In the words of Henry David Thoreau, "It's not what you look at that matters, it's what you see." Making informed decisions about the correct mix and deployment of virtualization technology requires more than the reams of data points you can collect about its operation. Too much data can obscure what may be whittling away at your ROI. The thousands of dollars in hardware costs saved by virtualizing servers can easily be negated if every user accessing the enterprise application hosted on that virtual instance now has to wait for transactions to complete.
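A quick worked example shows how easily the ledger can flip. All of the figures below are made-up assumptions chosen only to illustrate the trade-off:

    # Illustrative ROI check: hardware savings vs. the cost of slower
    # transactions. Every number here is a made-up assumption.

    hardware_savings_per_year = 50_000      # consolidated-server savings ($)

    users = 500
    extra_wait_seconds_per_transaction = 2  # added delay on the overloaded host
    transactions_per_user_per_day = 40
    working_days = 250
    loaded_cost_per_hour = 60               # $/hour of employee time

    lost_hours = (users * transactions_per_user_per_day * working_days
                  * extra_wait_seconds_per_transaction) / 3600
    lost_dollars = lost_hours * loaded_cost_per_hour

    print(f"Hours lost to waiting: {lost_hours:,.0f}")
    print(f"Productivity cost: ${lost_dollars:,.0f} "
          f"vs. hardware savings ${hardware_savings_per_year:,}")

Under these assumptions, two extra seconds per transaction costs roughly $167,000 a year in lost productivity -- more than triple the $50,000 saved on hardware.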

Simple reports and tabulated data, however, are of minimal value in analyzing complex, multi-variable, interdependent data sets. Graphical, interactive, Web-based dashboards deliver the flexibility needed to select, explore, analyze, and visualize data -- uncovering previously unknown trends and patterns and empowering strategic business decisions.

That Which Doesn't Kill You Will Eventually Stop Hurting

Despite the complexities that new technologies such as virtualization can introduce, common-sense approaches can minimize their negative implications and set you on the road to success. What you lose in the comfort of the status quo can be more than offset by the speed, versatility, and economic and operational benefits the technology brings. Current-generation management solutions provide visibility into and manageability of contemporary technologies such as virtualization in the context of the overall network, balancing ROI against end-user service satisfaction.

Kenneth Klapproth is vice president of marketing at Entuity. You can reach the author at [email protected]