Four Ways IT Can Avoid Wasting Resources

As your data center embraces private clouds and advanced virtualization, do not overlook the threat of wasted resources these initiatives can introduce. These four steps can help you avoid depleting resources.

By Cadman Chui

The shift to virtualization and private cloud computing is helping IT departments become more efficient, but that efficiency is masking a less visible waste of resources that threatens return on investment as well as performance levels. Sources of that resource drain include virtual machine (VM) sprawl, excessive proliferation of snapshots, and over-provisioned machines.

IT professionals know this is happening, and they want to preserve their resources. How can they do that? Here are four steps IT administrators should take to avoid depleting resources.

Step #1. Set standards for VM configurations to avoid performance issues when provisioning

Having a set of standards for VM configurations is an effective way to ensure that machines don't cause compatibility or performance issues from the moment they are provisioned. IT administrators can create templates from established, tested configurations, and users can then base their VMs on pre-approved operating system, application, and data configurations. For finer-grained control, IT can grant users role-based access to applications and services. Finally, users can access those templates through a self-service catalog on a self-service portal.

After VMs have been provisioned, IT administrators still need the ability to change resource allocations such as central processing unit (CPU), disk, memory, and network configurations to accommodate the data center's changing workloads. Workloads can shift significantly over time, creating imbalances between workload demand and resource supply. Provisioning based on standards and monitoring for deviations from those standards allows IT administrators to bring the data center to a desired state and curb waste.
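A minimal sketch of this idea: a pre-approved template catalog plus a check that reports where a running VM has drifted from its template. The template names, sizes, and field names here are illustrative assumptions, not any particular product's API.

```python
from dataclasses import dataclass

# Hypothetical VM template standard; names and resource limits are illustrative.
@dataclass(frozen=True)
class VMTemplate:
    name: str
    vcpus: int
    memory_gb: int
    disk_gb: int

# Pre-approved catalog that self-service users would pick from.
CATALOG = {
    "web-small": VMTemplate("web-small", vcpus=2, memory_gb=4, disk_gb=40),
    "db-medium": VMTemplate("db-medium", vcpus=4, memory_gb=16, disk_gb=200),
}

def drift(template: VMTemplate, actual: dict) -> dict:
    """Report each field where a running VM deviates from its template,
    mapping the field name to (expected, actual)."""
    return {
        field: (getattr(template, field), actual.get(field))
        for field in ("vcpus", "memory_gb", "disk_gb")
        if actual.get(field) != getattr(template, field)
    }

vm = {"vcpus": 2, "memory_gb": 8, "disk_gb": 40}  # memory was bumped by hand
print(drift(CATALOG["web-small"], vm))  # {'memory_gb': (4, 8)}
```

A real cloud management platform would pull the `actual` values from its inventory; the point is that a standards check is cheap once templates exist.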

Step #2. Understand how to dodge VM over-population or virtual sprawl

Virtualization has made it extremely easy for VMs to proliferate uncontrollably. Giving users the ability to provision their own VMs exacerbates the problem, even if there are approval controls in place. The resulting creation of VMs without an established path to decommissioning is a recipe for VM over-population or virtual sprawl.

IT can control the "death rate" for VMs through a combination of three approaches:

  • Pre-scheduled decommissioning
  • Periodic review and deletion of dormant or orphaned VMs
  • Ad hoc requests by users who realize they no longer require a particular machine

Additionally, automatic decommissioning based on VM life cycles can save organizations hundreds of thousands of dollars in waste that can be caused by the unnecessary physical hardware upgrades data centers make to support unused VMs.
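The three controls above can be combined into one periodic review pass. The sketch below is a simplified assumption of how such a policy might look; the VM record fields (`expires`, `last_used`, `owner_requested_removal`) are hypothetical stand-ins for data a cloud management platform would supply.

```python
from datetime import date, timedelta

def vms_to_decommission(vms, today, dormancy_days=90):
    """Flag VMs for decommissioning using the three controls:
    pre-scheduled expiry, dormancy review, and ad hoc owner requests."""
    doomed = []
    for vm in vms:
        expired = vm.get("expires") is not None and vm["expires"] <= today
        dormant = (today - vm["last_used"]) >= timedelta(days=dormancy_days)
        requested = vm.get("owner_requested_removal", False)
        if expired or dormant or requested:
            doomed.append(vm["name"])
    return doomed

fleet = [
    {"name": "build-01", "expires": date(2012, 1, 1), "last_used": date(2012, 5, 1)},
    {"name": "test-02", "expires": None, "last_used": date(2012, 1, 15)},
    {"name": "web-03", "expires": None, "last_used": date(2012, 5, 20),
     "owner_requested_removal": True},
]
print(vms_to_decommission(fleet, today=date(2012, 6, 1)))
# ['build-01', 'test-02', 'web-03']
```

In practice the output would feed an approval queue rather than immediate deletion, but even a report like this surfaces the unused machines driving unnecessary hardware spend.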

Step #3. Limit the snapshots per VM

IT loves snapshots because they let development and operations staff make frequent adjustments to VMs while maintaining the option to easily roll back to previous versions whenever necessary. The downside of those snapshots, though, is the creation of numerous and often unnecessary backups that are rarely deleted.

Snapshots are a major culprit behind tremendous amounts of wasted disk space, and they introduce risk into the data center. Why risk? Because reverting to an outdated snapshot could lead users to unapproved or obsolete configurations. That's not to say IT should do away with snapshots; they are useful tools when managed properly. With the appropriate feedback, administrators can quickly see where they have an overabundance of snapshots and take action to free up disk space, which can then be devoted to higher priorities.
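One way to act on that feedback is a simple retention policy: keep only the newest N snapshots per VM and flag the rest for deletion. This is a sketch under the assumption of a per-VM snapshot list with `id` and `created` fields; the limit of three is illustrative, not a recommendation.

```python
from datetime import datetime

MAX_SNAPSHOTS = 3  # illustrative policy; choose a limit that fits your rollback needs

def snapshots_to_prune(snapshots, limit=MAX_SNAPSHOTS):
    """Keep the `limit` newest snapshots for a VM; return the IDs of the rest."""
    by_age = sorted(snapshots, key=lambda s: s["created"], reverse=True)
    return [s["id"] for s in by_age[limit:]]

snaps = [
    {"id": "s1", "created": datetime(2012, 1, 1)},
    {"id": "s2", "created": datetime(2012, 2, 1)},
    {"id": "s3", "created": datetime(2012, 3, 1)},
    {"id": "s4", "created": datetime(2012, 4, 1)},
    {"id": "s5", "created": datetime(2012, 5, 1)},
]
print(snapshots_to_prune(snaps))  # ['s2', 's1'] -- the two oldest
```

An age-based cutoff (delete snapshots older than, say, 30 days) is an equally common variant; either way, the space reclaimed goes back to higher-priority workloads.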

Step #4. Optimize the virtual infrastructure based on feedback

Feedback mechanisms should be at the top of every IT manager's checklist when it comes to selecting a cloud management platform. To optimize the virtualization infrastructure, feedback options should include human-based controls (in which IT administrators make manual adjustments based on system feedback) and fully automated controls. The first is necessarily periodic in nature, and the second takes the burden off IT staff and ensures efficiency.

An autonomic control system relies on steady-state ranges that are pre-determined by an IT administrator, and the system migrates and stabilizes at those levels automatically. For instance, an IT administrator might want the system to tolerate no more than 10 VMs powered off for a month. When that number surpasses the threshold, the system automatically begins decommissioning VMs to maintain the desired steady state. Similarly, when particular VMs run at 90 percent of CPU, memory, or disk capacity, the system can allocate additional resources to bring utilization back down to acceptable levels.
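The two examples above can be sketched as one pass of a threshold-driven control loop. The thresholds mirror the article's figures (at most 10 powered-off VMs, react at 90 percent CPU); all function and VM names are illustrative assumptions.

```python
def control_actions(powered_off_vms, utilization, max_off=10, cpu_high=0.90):
    """One pass of a steady-state check: emit (action, vm_name) pairs when
    the fleet drifts outside administrator-defined ranges."""
    actions = []
    if len(powered_off_vms) > max_off:
        # Decommission the longest-powered-off VMs until back under the limit
        # (the list is assumed to be ordered oldest-first).
        excess = len(powered_off_vms) - max_off
        for name in powered_off_vms[:excess]:
            actions.append(("decommission", name))
    for name, cpu in utilization.items():
        if cpu >= cpu_high:
            actions.append(("add_cpu", name))
    return actions

off = [f"vm-{i:02d}" for i in range(12)]   # 12 powered-off VMs, 2 over the limit
busy = {"db-01": 0.95, "web-01": 0.40}
print(control_actions(off, busy))
# [('decommission', 'vm-00'), ('decommission', 'vm-01'), ('add_cpu', 'db-01')]
```

A production system would run this loop continuously against live telemetry and gate destructive actions behind approvals, but the steady-state logic itself stays this simple.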

Summary

As your data center embraces private clouds and advanced virtualization, do not overlook the threat of wasted resources these initiatives can introduce. To avoid the potential drain on cloud return on investment (ROI), IT should seek out private cloud vendors who can deliver self-service provisioning, change and configuration management, and performance dashboards in an integrated package. Those combined capabilities will help IT leaders minimize waste while maximizing the performance of their data centers.

Cadman Chui is the vice president of marketing at Embotics. With over 15 years of experience in the technology space, Chui previously served as the vice president of marketing at Platespin, a virtualization software vendor, where he was responsible for building and leading the marketing team. Chui has also held senior marketing positions at Cybermation (now CA) and DataMirror (now IBM). You can contact the author at cchui@embotics.com.