Best Practices to Avoid Virtual Sprawl

Despite the challenges of virtualization, organizations can have a smooth deployment and achieve a more cost-effective and highly available server infrastructure.

by Pam Snaith

Until recently, each business application required its own dedicated server—plus backups in case of failure. A server was rarely used for anything other than its intended application. The result: many physical servers running at less than 10 percent capacity, wasting computing power and associated time and money.

It may be harmless folklore that human beings use just 10 percent of their brainpower, but leveraging a data center investment at such a low rate is unquestionably poor business practice. As a result, many organizations are turning to virtualization to improve utilization rates and increase the return on their IT investments.

In a nutshell, with server virtualization, each physical server can host multiple virtual servers. Each virtual server acts as its own independent device. Thus, organizations can continue to deploy applications on a dedicated server—and that dedicated server may now be virtualized, running alongside other virtual servers, each supporting its own applications. The result is one physical server playing host to multiple virtual servers—enhancing overall utilization and improving ROI.

Server virtualization holds great promise, but it can also lead to greater complexity. Compared with their physical siblings, virtual machines (VMs) are easy to build, duplicate, and deploy. A seasoned IT professional can "clone" an existing configuration, install it, and have it ready for action with a few mouse clicks. As a result, some IT shops are finding themselves under pressure to deploy a new virtual server for nearly any pet project that comes along. The unintended consequence: virtual server sprawl.

A large health-care provider, for example, recently consolidated 450 physical servers to just 45 physical servers—each running 10 virtual machines—and achieved essentially the same computing power and flexibility with one-tenth the required physical space. Seeing the speed with which VMs could be deployed, business users began urging the IT organization to make servers available for a whole host of new test applications and for other development purposes. Within three months, the company's virtual infrastructure grew to 1,350 VMs, a three-fold increase over its peak physical-server count.

While the company benefited from the additional computing capacity, the IT organization had begun to lose track of where VMs were being added and by whom. The benefits of virtualization were quickly eroded by data center complexity and skyrocketing support costs. Given this phenomenon, is it still possible to achieve the benefits of virtualization without suffering the unintended consequence of virtual server sprawl?

Absolutely—with good planning and the right management approach. To avoid sprawl, data center managers should keep these four best practices in mind:

  • Evaluate the strengths and weaknesses of the current infrastructure
  • Ensure that the efficiency of management solutions will not decline in virtualized environments
  • Carefully select which business processes to virtualize
  • Automate processes whenever possible

Best Practice #1: Evaluate the state of the current infrastructure

The best way to achieve the dynamic infrastructure virtualization promises is to be methodical from the start. Data center managers must thoroughly evaluate what makes the current physical server infrastructure work well, as well as what causes it to function poorly. These same concepts should then be applied to virtual servers. Key questions to ask include:

  • Does your staff have visibility into server utilization and available capacity?
  • Are they notified when server performance begins to degrade?
  • Is it easy for them to isolate faults, or do people waste time pointing fingers?

The problems of managing a physical infrastructure will be multiplied by the number of virtual systems an IT organization runs on each host.
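To make the visibility question concrete, here is a minimal sketch (not any vendor's product API) of the kind of check an IT staff should be able to run: flag servers whose average CPU utilization falls outside healthy bounds. The thresholds, server names, and sample readings are all illustrative assumptions.

```python
# Illustrative sketch: flag servers whose average CPU utilization is
# outside assumed healthy bounds. Thresholds and data are hypothetical.

LOW, HIGH = 0.10, 0.80  # assumed under-/over-utilization thresholds


def flag_utilization(samples: dict[str, list[float]]) -> dict[str, str]:
    """Map each server name to 'under', 'over', or 'ok'."""
    report = {}
    for server, readings in samples.items():
        avg = sum(readings) / len(readings)
        if avg < LOW:
            report[server] = "under"
        elif avg > HIGH:
            report[server] = "over"
        else:
            report[server] = "ok"
    return report


samples = {
    "web01": [0.05, 0.07, 0.06],  # the classic <10 percent physical server
    "db01":  [0.85, 0.92, 0.88],  # nearing saturation
    "app01": [0.40, 0.55, 0.35],
}
print(flag_utilization(samples))
```

A real deployment would pull these readings from a monitoring system rather than a literal dictionary, but the point stands: if staff cannot produce a report like this for the physical estate, they will not be able to for a virtual one either.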

Best Practice #2: Ensure that efficiency of management solutions will not decline in virtualized environments

By the same token, management strategies that work well with physical infrastructures may not extend to virtual infrastructures. With virtualization, for example, hardware no longer defines a tangible entity since a single server can host many VMs. Therefore, VMs must be mapped to physical servers in a way that is visually understandable and monitored for utilization and capacity. If a physical server management solution has been effective, the organization must determine if the solution provides the same degree of management information and visualization for virtual systems.
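The VM-to-host mapping described above can be sketched as a simple inventory that answers two questions: which VMs run where, and how full is each host? This is a hypothetical illustration, not a management product; host names and slot capacities are assumptions.

```python
# Hypothetical sketch of a VM-to-physical-host mapping with capacity
# tracking. Names and capacities are illustrative assumptions.

from collections import defaultdict


class HostMap:
    def __init__(self):
        self.capacity = {}                  # host -> max VM slots
        self.placement = defaultdict(list)  # host -> list of VM names

    def add_host(self, host: str, slots: int) -> None:
        self.capacity[host] = slots

    def place(self, vm: str, host: str) -> bool:
        """Place a VM only if the host has a free slot."""
        if len(self.placement[host]) < self.capacity.get(host, 0):
            self.placement[host].append(vm)
            return True
        return False  # host full: a visible signal, not silent sprawl

    def utilization(self, host: str) -> float:
        return len(self.placement[host]) / self.capacity[host]


hosts = HostMap()
hosts.add_host("esx01", slots=10)  # e.g., 10 VMs per host, as in the article
for i in range(10):
    hosts.place(f"vm{i:02}", "esx01")
print(hosts.place("vm10", "esx01"))  # False: capacity is enforced
print(hosts.utilization("esx01"))    # 1.0
```

The design choice worth noting is that placement is refused, visibly, when a host is full; sprawl takes root precisely where such limits are invisible or unenforced.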

Best Practice #3: Carefully select which business processes to virtualize

Organizations must determine the appropriate business processes to virtualize. New applications or business processes are good candidates, since their usage may initially be low but may increase rapidly. If the server environment is large, virtualization may give business processes a great deal of "headroom," since a virtual environment allows for rapid growth without locking up an entire server that might otherwise sit idle until needed. Virtualization also provides tremendous flexibility in terms of resources that can be made available to support a business application from within a single physical server or across a class of similar machines.

Non-critical business processes are also good candidates for virtualization. Because these processes don't justify dedicated resources, they are often left susceptible to degradation at peak usage times. Virtualization may result in better performance because of the additional processing power that can quickly be made available, and because they can share a host server with processes that have different peak periods.

Best Practice #4: Automate processes whenever possible

Once an organization determines what is going on in its infrastructure, it is important to keep up with it, especially because change occurs so quickly with virtualization. There are many reasons to automate, including:

  • Automated discovery keeps track of VMs being deployed so that an organization can evaluate each new VM and ensure that it makes sense and is cost-effective.
  • Automation makes more efficient use of resources, accurately assessing situations for over-utilization and under-utilization and making quick corrections.
  • Automation enables dynamic resource allocation, essential for realizing the benefits of virtualization.
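The automated-discovery bullet above reduces to a simple comparison: the VMs actually found running on hosts versus the VMs the organization has approved. Here is a minimal sketch under that assumption; all VM names and the registry itself are hypothetical.

```python
# Minimal sketch of automated discovery: compare VMs actually running
# against the approved registry and flag the strays that quietly
# become sprawl. All names are hypothetical.

registered = {"crm-prod", "erp-prod", "hr-test"}


def find_strays(discovered: set[str]) -> set[str]:
    """VMs running but never approved -- candidates for review or reclaim."""
    return discovered - registered


print(sorted(find_strays(
    {"crm-prod", "erp-prod", "demo-clone-7", "pet-project"}
)))
# ['demo-clone-7', 'pet-project']
```

In practice the `discovered` set would come from scanning hypervisors on a schedule; the evaluation of whether each stray "makes sense and is cost-effective" remains a human decision, but discovery puts the list in front of someone.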

Business processes' demand for resources fluctuates constantly. Some processes may require additional resources once a month or once a quarter, while others are less predictable. Because resource allocation is a continuous process, it should be automated to keep pace with business requirements. Manual allocation is simply not fast enough.

Once resources made available from a virtual server "pool" are no longer needed by a particular application, it is also important that they be automatically returned. Otherwise, like inactive VMs, they simply become unproductive parts of the virtual sprawl. In resource allocation, remember that both CPU and memory may be needed. The ability to allocate them and reclaim them independently and automatically is very important. Business-based policies drive successful resource allocation.
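The return-to-pool behavior described above can be sketched as follows. The key point from the text is that CPU and memory are tracked independently, so either can be reclaimed on its own. Pool sizes, application names, and the lease model are illustrative assumptions, not any vendor's mechanism.

```python
# Illustrative sketch of a virtual server resource "pool" in which CPU
# and memory are allocated and reclaimed independently. Sizes and
# names are hypothetical.

class ResourcePool:
    def __init__(self, cpus: int, mem_gb: int):
        self.free = {"cpu": cpus, "mem": mem_gb}
        self.leases = {}  # app name -> {"cpu": n, "mem": n} allocated

    def allocate(self, app: str, cpu: int = 0, mem: int = 0) -> bool:
        if cpu > self.free["cpu"] or mem > self.free["mem"]:
            return False
        self.free["cpu"] -= cpu
        self.free["mem"] -= mem
        lease = self.leases.setdefault(app, {"cpu": 0, "mem": 0})
        lease["cpu"] += cpu
        lease["mem"] += mem
        return True

    def reclaim(self, app: str, cpu: int = 0, mem: int = 0) -> None:
        """Return unused resources to the pool, independently per type."""
        lease = self.leases[app]
        cpu = min(cpu, lease["cpu"])
        mem = min(mem, lease["mem"])
        lease["cpu"] -= cpu
        lease["mem"] -= mem
        self.free["cpu"] += cpu
        self.free["mem"] += mem


pool = ResourcePool(cpus=32, mem_gb=256)
pool.allocate("month-end-batch", cpu=8, mem=64)
pool.reclaim("month-end-batch", mem=32)  # give back memory only
print(pool.free)  # {'cpu': 24, 'mem': 224}
```

In a real system the `reclaim` call would be triggered automatically by a business-based policy (e.g., after the month-end batch window closes) rather than invoked by hand, which is exactly the automation the text argues for.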

Despite these challenges, organizations can achieve a smooth deployment and a more cost-effective, highly available server infrastructure. Virtualization demands rigorous discipline, however, to ensure that its potential benefits aren't cancelled out by the downside of virtual sprawl. With careful planning and best practices such as the four suggested above, organizations can avoid the pitfalls and reap the true benefits of virtualization.

Pam Snaith is a product marketing manager in the enterprise systems management business unit at CA. She’s had extensive experience in the industry at companies such as Avaya, Lucent, Digital Equipment Corporation, Xyplex, and Agile Networks. You can reach the author at