In-Depth

Virtualization: Five Key Decisions You Must Make

You can increase your chances of success by answering these five questions early in your virtualization project.

By Kevin Epstein

There's a massive shift occurring in data centers, driven by a disruptive technology known as infrastructure virtualization. It fundamentally changes the way data centers use physical assets, improving performance and efficiency while reducing costs.

Achievable? Absolutely. But before you start, we suggest you consider the following five decisions that you and your IT team should make as part of your virtualization plan.

1. Decide which virtualization products will fit

Let's start with an analogy of physical servers and real estate. Imagine a neighborhood of ten houses, where each house has its own foundation, utility lines, wiring for cable and Internet access, mortgage, property taxes, landscaping, and maintenance. To save capital costs (the cost of buying the house) and operational costs (the utilities and maintenance costs), you might consider razing all ten houses and building a high-rise apartment building on one of the lots.

Similarly, if you own ten physical servers in your data center and want to consolidate them to one physical server, server virtualization software allows you to do just that. The leading hypervisor companies -- VMware, Xen/Citrix, and, later this year, Microsoft -- provide software that allows a single server to run multiple operating system instances. In addition, IBM, Oracle, Red Hat, Sun Microsystems, and Symantec are all investing in the open-source Xen hypervisor project.

For enterprises starting to look at virtualization software, the most important question to consider is: which virtualization product is appropriate for my environment? It turns out that there are good and bad targets for server virtualization software. The best candidates are lightly loaded boxes without heavy disk or network I/O requirements. Directory-based services, for example, are low-I/O applications. Other good candidates to run as virtual machines on a physical server include LDAP and Active Directory servers, print servers, and mail servers (provided you aren't an email hosting provider like Yahoo!).

Conversely, high-I/O applications, such as databases (including SQL servers), data-mining applications, and streaming multimedia, often aren't good candidates to run together as virtual machines on a physical server, because they compete for the same limited resources and generate high overhead.
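
One quick way to gauge whether a given box falls on the "low I/O" or "high I/O" side of this line is simply to sample its disk and network counters over time. Below is a minimal sketch in Python using the psutil library; the sampling window and thresholds are illustrative assumptions, not recommendations, and in practice you'd sample during peak hours over several days before drawing conclusions.

```python
# io_candidate.py -- rough check of whether a server is a low-I/O
# consolidation candidate. Thresholds are illustrative, not prescriptive.
import time

import psutil

SAMPLE_SECONDS = 60           # length of the sampling window
DISK_MB_PER_SEC_LIMIT = 5.0   # illustrative "low I/O" ceilings
NET_MB_PER_SEC_LIMIT = 2.0

def sample_io(seconds):
    """Return average disk and network throughput (MB/s) over the window."""
    d0, n0 = psutil.disk_io_counters(), psutil.net_io_counters()
    time.sleep(seconds)
    d1, n1 = psutil.disk_io_counters(), psutil.net_io_counters()
    disk_bytes = (d1.read_bytes + d1.write_bytes) - (d0.read_bytes + d0.write_bytes)
    net_bytes = (n1.bytes_sent + n1.bytes_recv) - (n0.bytes_sent + n0.bytes_recv)
    mb = 1024 * 1024
    return disk_bytes / seconds / mb, net_bytes / seconds / mb

if __name__ == "__main__":
    disk_rate, net_rate = sample_io(SAMPLE_SECONDS)
    print(f"disk: {disk_rate:.2f} MB/s, network: {net_rate:.2f} MB/s")
    if disk_rate < DISK_MB_PER_SEC_LIMIT and net_rate < NET_MB_PER_SEC_LIMIT:
        print("Low I/O: a reasonable candidate for server virtualization.")
    else:
        print("High I/O: consider dedicated hardware or infrastructure virtualization.")
```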

Therefore, if you are running high-I/O applications, consider virtualization technologies beyond the hypervisor, such as infrastructure virtualization software. While hypervisors extend the power of a single physical machine by letting you run multiple virtual server instances, infrastructure virtualization encompasses the entire data center. It enables rapid re-provisioning of physical machines as well as virtual machines, dynamically shifting workloads as needed while provisioning the appropriate network and storage connectivity.

There are very few infrastructure virtualization vendors: the leading candidates include Scalent Systems' V/OE and Unisys's uAdapt. There's also a proprietary hardware-based solution from Egenera. The software-based infrastructure virtualization solutions (uAdapt and V/OE) support existing bare-metal servers running Windows, Linux, Solaris, and AIX, as well as providing full VMware ESX, Xen, and Microsoft hypervisor support. Each product installs on a management server outside the data path and works with the existing servers, storage, and network in the data center.

Comparing server and infrastructure virtualization reveals that the two solutions provide distinct, complementary functionality in the data center. The former creates "apartment buildings" of servers; the latter dynamically shifts occupants between buildings, along with the associated network and storage connectivity. Depending on the workloads you manage, you probably need both.

2. Decide what infrastructure changes are necessary

Let's examine the housing analogy from a different angle. Think about the physical infrastructure required by a neighborhood of ten houses, compared to the infrastructure required by a ten-story apartment building. Creating a neighborhood of single-family homes requires that each house have basic utilities and a separate foundation to support its load. This is very different from the infrastructure required for a ten-story apartment building, where a single foundation must support the load of a far heavier structure, along with higher-capacity utilities (electrical, plumbing, cable TV) designed to serve the varying needs of all the tenants.

Not surprisingly, the underlying hardware and connectivity in the data center becomes much more important in the context of virtual machines. Consider these infrastructure changes:

  • Attach servers to network storage to avoid running images from an over-taxed local disk; the physical machine will then be able to access the multiple network and storage zones required by the hosted virtual machines.
  • Do not rely on single points of failure. When you choose a physical server to host ten virtual machines, understand that you have exposed those ten virtual machines to the same physical points of failure. Depending on the service being delivered, it is wise to create multiple paths to network and storage, as well as a plan for rapid failover in the event of server downtime (see the sketch following this list).
  • Attach physical servers to central storage rather than local disk. Many companies are moving toward a network-boot (NAS or SAN) environment to increase performance, reduce storage management overhead, and simplify server repurposing. Although this isn't a requirement for implementing virtualization, combining a central repository of images with server virtualization creates a very flexible infrastructure for supporting computing needs.
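
To make the single-point-of-failure concern concrete, here is a minimal sketch that audits a VM-to-host inventory and flags physical hosts that concentrate too many virtual machines or lack redundant network or storage paths. The inventory structure, host names, and the consolidation ceiling are all hypothetical; a real audit would pull this data from your management tool.

```python
# spof_check.py -- flag physical hosts that concentrate risk.
# The inventory structure and thresholds here are hypothetical.
from collections import defaultdict

# host -> (hosted VMs, number of network paths, number of storage paths)
INVENTORY = {
    "esx-host-01": (["ldap-01", "print-01", "mail-01"], 2, 2),
    "esx-host-02": (["web-01", "web-02", "web-03", "web-04", "qa-01"], 1, 1),
}

MAX_VMS_PER_HOST = 4  # illustrative consolidation ceiling

def audit(inventory):
    """Return {host: [problems]} for hosts that look like single points of failure."""
    findings = defaultdict(list)
    for host, (vms, net_paths, storage_paths) in inventory.items():
        if len(vms) > MAX_VMS_PER_HOST:
            findings[host].append(f"{len(vms)} VMs share this hardware")
        if net_paths < 2:
            findings[host].append("single network path")
        if storage_paths < 2:
            findings[host].append("single storage path")
    return findings

if __name__ == "__main__":
    for host, problems in audit(INVENTORY).items():
        print(f"{host}: " + "; ".join(problems))
```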

The good news is that even with additional infrastructure costs, server virtualization will help drive up the utilization of your existing hardware, while server consolidation will reduce the server hardware footprint, resulting in overall cost savings.

3. Manage your new environment effectively

Let's look at the difference between managing ten houses and managing a ten-unit apartment building. The ten homeowners each have a different phone line, water supply, furnace, gas service, and other utilities to obtain and manage. The apartment complex may not need ten water heaters, but the water heater it does deploy will have to be larger and more reliable than the water heater for any individual house. The infrastructure in the apartment building's machine room must be appropriately sized to meet the demands of all of the units. The key difference between the two scenarios is that when the apartment complex's water heater goes out, ten families will be calling, not just one.

It should come as no big surprise that a purely physical hardware environment must be managed differently than one with a mix of physical and virtual servers. Take care that the management of a hypervisor or underlying physical machine does not affect the other virtual machines hosted on that server: an inadvertent network configuration change or a pulled cable, for example, can now bring down a number of virtual machines.

In a non-virtualized environment, if a server caused problems on the network, you could simply unplug the machine and be done with it. That's not a great management policy, but it's certainly an option. In a virtualized data center, if a server causes problems, you might not know which physical machine hosts the offending virtual server; even if you do, that physical machine may be hosting other mission-critical servers, so you can't afford to simply cut the power.
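
This is why an up-to-date mapping of virtual machines to physical hosts is essential. The sketch below assumes your management tool can export a simple two-column CSV inventory (the file name and format are hypothetical) and answers two questions: where does this virtual server live, and what else goes down with that box?

```python
# find_host.py -- look up which physical machine hosts a given VM,
# and what else would be affected by powering that machine off.
# Assumes a hypothetical CSV export from your management tool, one
# "vm_name,physical_host" pair per line.
import csv
import sys

def load_map(path):
    """Build {vm: host} and {host: [vms]} from a two-column CSV."""
    vm_to_host, host_to_vms = {}, {}
    with open(path, newline="") as f:
        for vm, host in csv.reader(f):
            vm_to_host[vm] = host
            host_to_vms.setdefault(host, []).append(vm)
    return vm_to_host, host_to_vms

if __name__ == "__main__":
    vm = sys.argv[1]  # e.g. "mail-01"
    vm_to_host, host_to_vms = load_map("vm_inventory.csv")
    host = vm_to_host.get(vm)
    if host is None:
        sys.exit(f"{vm}: not found in inventory")
    neighbors = [v for v in host_to_vms[host] if v != vm]
    print(f"{vm} runs on {host}")
    print(f"Powering off {host} would also take down: {neighbors or 'nothing else'}")
```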

Therefore, the more complex an environment becomes, the more important it is to migrate to a software-based management tool that protects against inadvertent configuration changes while enabling rapid configuration and problem resolution. In a virtualized environment, you can no longer identify and fix problems through physical intervention, because any given machine or cable may have any number of virtual servers relying on it.

At the end of the day, the right virtualization solution should be able to tell you which physical machine is down and then automatically fail over to another. Crucially, it should be able to manage your infrastructure adaptively.

4. Decrease risk, plan for recovery

Virtualization technologies can actually put your data center at risk unless your design addresses points of failure and includes continuity plans for server recovery. Everyone knows that hardware fails and downtime happens, so your data center managers must figure out how to back up the primary machine and plan for rapid restoration to a new one. This is where virtualization software can help.

Disaster recovery planning is like buying insurance for your apartment building -- more expensive than insurance for a single home, but less expensive than insurance for ten individual homes. Just as a lender will always require an insurance policy on a building, many companies are now audited for regulatory compliance in disaster recovery planning, their insurance against downtime. Data center availability and management have become serious issues; companies are recognizing the need for disaster recovery planning in the data center, and the smart ones don't wait until the auditor comes knocking on the front door.

The good news is that virtualization technology can help. Server virtualization masks the hardware differences between machines, simplifying restoration of a virtual machine backup. Infrastructure virtualization, in turn, can enable you to quickly boot and run your servers, with their associated network and storage connectivity, from remote copies of the same images running on your production servers. When the production data center fails, infrastructure virtualization software can automatically allocate and boot an appropriate replacement machine for each server. You can adjust network connectivity and storage access in the remote data center via a GUI, so the topology remains identical to that of the production data center, without changing IP addresses, remapping storage, or intervening manually.
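
To illustrate the flow that such software automates (this is not any vendor's actual API), here is a skeleton of a detect-and-failover loop. The TCP health check is real; the provisioning step is a clearly labeled stub standing in for your management layer, and all host names are made up.

```python
# dr_failover.py -- skeleton of an automated detect-and-failover loop.
# The health check is a real TCP connect; boot_replacement() is a
# hypothetical stub for your management tool's provisioning API.
import socket
import time

PRODUCTION_SERVERS = ["app-01.example.com", "db-01.example.com"]
RECOVERY_POOL = ["spare-01.example.com", "spare-02.example.com"]
POLL_SECONDS = 30

def is_healthy(host, port=22, timeout=3.0):
    """Crude liveness test: can we open a TCP connection to the host?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def boot_replacement(failed):
    """Stub: a real system would ask the management layer to network-boot
    a spare from the failed server's central image, with the same network
    and storage connectivity."""
    if not RECOVERY_POOL:
        raise RuntimeError("recovery pool exhausted")
    spare = RECOVERY_POOL.pop(0)
    print(f"{failed} is down; booting its image on {spare}")
    return spare

def failover_loop():
    while True:
        for server in list(PRODUCTION_SERVERS):
            if not is_healthy(server):
                PRODUCTION_SERVERS.remove(server)
                PRODUCTION_SERVERS.append(boot_replacement(server))
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    failover_loop()
```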

By applying server and infrastructure virtualization software to the disaster recovery process, you can avoid the expense of keeping your backup data center fully pre-deployed and synchronized, and avoid any midnight scrambling to fix physical configuration problems. These technologies reduce the potential for human error and can turn disaster recovery from an unreliable problem into an efficient, dependable system with testable, repeatable results. How cool is that?

5. Decide what you will do with the leftovers

By deploying virtualization software, most companies can decrease capital and operational costs and simultaneously increase scalability and utilization on their remaining assets. After a company consolidates its data center infrastructure through virtualization, there are often spare IT components left over that are no longer necessary in the more efficient, virtualized environment. You can repurpose the leftover assets to accommodate growth in other business areas, such as test and development, QA, staging, or pre-production.

Returning to our housing analogy, the purpose of constructing an apartment building is to increase the population of the neighborhood, not merely to tear down houses. The building allows more people to use the same piece of land efficiently, increasing utilization and opening up land for parks or other development. Likewise, by increasing the efficiency and utilization of the data center, companies reduce costs and achieve a greater return from their data center assets.

Infrastructure Virtualization: The Time is Now

Achieving data center flexibility and performance while decreasing cost is well within your reach. Implementing both server and infrastructure virtualization is the foundation of an adaptive, real-time data center. By selecting the right virtualization technologies, companies can increase the effectiveness of their data center infrastructure and transform it from an under-utilized "necessary evil" into a competitive weapon.

- - -

Kevin Epstein is the vice president of marketing and products for Scalent Systems, makers of infrastructure virtualization software. Kevin served as a director for VMware, Inc. from 2002 until 2006, and previously for Inktomi Corporation's Network Products division, RealNetworks, Netscape, and others. Kevin holds a BS degree in High Energy Physics from Brown University and an MBA from Stanford University. He is the author of Marketing Made Easy (2006, Entrepreneur Magazine Press/McGraw Hill).
