In-Depth

3 Tips for Predicting Your Way to a More Efficient Data Center

The full benefits of virtualization technologies can only be realized by integrating predictive analytics into IT operations so managers can gain greater control over capacity, efficiency, and risk.

By Andrew Hillier

Virtualization and cloud computing models have in some ways obscured the need to accurately predict future requirements. The dynamic nature of these environments tends to lead organizations to believe that a purely reactive operational model is now possible, but in practice many of these organizations soon realize that this is untrue. Too much reactivity creates volatility, not agility, and forces organizations to overprovision virtual and cloud environments in order to be prepared for upcoming demands. This soon leads to the realization that efficiency and agility are better achieved through proper planning, including detailed contemplation of potential future scenarios. Managers of these environments would do well to adopt the Boy Scout motto: Be prepared.

The challenge in being prepared is that IT still relies on the approaches used to manage "old school" physical infrastructure. Although sometimes built into tools masquerading as "virtualization ready" and shrouded in sophisticated algorithms, these utilization-trending, growth-centric models cannot accurately predict the future for "new school" environments. The approach itself is straightforward: watch the utilization trends for a given period of time, then estimate capacity requirements based on the trend and anticipated growth factors.

Unfortunately, trending utilization results in horribly inaccurate predictions of infrastructure requirements in virtual and cloud environments, where the largest impact on capacity tends to come from incoming demand: new infrastructure requests, application deployments, and workloads being transformed into the environment. Understanding "organic" workload growth trends is still important, but it is now just one factor in establishing a safe balance of workload demand and resource supply and making sure an environment has the right amount of capacity.

To that end, organizations need to make three major changes in the way they view and manage capacity.

Change #1: Accurately model upcoming demand

Most IT organizations cannot model inbound capacity requirements from users. Self-service portals capture basic requests and can usually fulfill them immediately, but they typically can't model the true resource requirements, and they certainly don't "hold" or reserve capacity for the future.

These portals also don't reflect the inbound demand coming from large transformation projects, which are still underway in the majority of environments and are a key step in populating new internal clouds. This creates a massive gap in forecasting and results in one of two problems: falling short on capacity and being unable to respond quickly to the business; or wildly over-provisioning capacity.

To solve this challenge, shift your focus from trending to modeling capacity reservations from all sources of demand, including the lines of business, transformation programs, and application development teams.
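
As a concrete illustration, the Python sketch below (using hypothetical sources, figures, and field names rather than any particular product's data model) projects demand as organic growth plus booked reservations and flags the month in which a shortfall would appear.

    from dataclasses import dataclass

    @dataclass
    class Reservation:
        source: str       # e.g., a line of business, transformation wave, or dev team
        month: int        # months from now when the capacity must be available
        cpu_ghz: float
        ram_gb: float

    def forecast(cpu_now, ram_now, growth, reservations, months, cpu_supply, ram_supply):
        """Project demand as organic growth plus booked reservations."""
        for m in range(1, months + 1):
            cpu = cpu_now * (1 + growth) ** m + sum(r.cpu_ghz for r in reservations if r.month <= m)
            ram = ram_now * (1 + growth) ** m + sum(r.ram_gb for r in reservations if r.month <= m)
            flag = "  <-- shortfall" if cpu > cpu_supply or ram > ram_supply else ""
            print(f"month {m}: {cpu:.0f}/{cpu_supply} GHz, {ram:.0f}/{ram_supply} GB{flag}")

    reservations = [
        Reservation("ERP transformation wave 2", month=2, cpu_ghz=120, ram_gb=512),
        Reservation("new customer portal", month=3, cpu_ghz=40, ram_gb=128),
    ]
    forecast(cpu_now=300, ram_now=1024, growth=0.02, reservations=reservations,
             months=4, cpu_supply=450, ram_supply=2048)

Note that the organic trend alone stays comfortably below the supply; it is the booked reservations that expose the month-three shortfall.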

Change #2: Optimize workload placements to make the best possible use of infrastructure

Managing virtualized infrastructure looks more like a complex game of Tetris than something that can be solved in a spreadsheet. There are literally billions of permutations and combinations when considering how applications and VMs can be placed on a given host infrastructure, and doing the math properly pays huge dividends in density and efficiency. Add to this the fact that the workloads are growing, shrinking, appearing, and disappearing over time, and it is easy to see why it is so tough to make predictions.

Traditional trending approaches and "incremental thinking" simply cannot deal with this, and they tend to create jumbled environments with stranded capacity and wasted IT assets. On average, organizations can increase density by 48 percent just by improving the placement of workloads within an environment.
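
The effect is easy to see even in a toy version of the problem. The Python sketch below, with made-up VM memory footprints and host sizes, packs the same workloads first in arrival order and then largest-first; real placement engines also weigh CPU, I/O, time-varying utilization profiles, and policy, but even this simplification frees a host.

    # Toy bin-packing sketch: the same VMs packed in arrival order vs. largest-first.
    def first_fit(demands_gb, host_gb):
        hosts = []                         # each host is a list of VM memory sizes
        for d in demands_gb:
            for h in hosts:
                if sum(h) + d <= host_gb:  # fits on an existing host
                    h.append(d)
                    break
            else:                          # no host had room; stand up a new one
                hosts.append([d])
        return hosts

    vm_ram_gb = [44, 44, 44, 84, 84, 84]   # hypothetical VM memory footprints
    host_ram_gb = 128                      # hypothetical host capacity

    print(len(first_fit(vm_ram_gb, host_ram_gb)), "hosts in arrival order")
    print(len(first_fit(sorted(vm_ram_gb, reverse=True), host_ram_gb)), "hosts sorted largest-first")

Here the ordering alone reclaims one host in four; the gap grows with scale and with the number of resource dimensions being balanced.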

Change #3: Incorporate management criteria and policies into operational systems

Resource sharing also introduces new constraints on which applications and VMs can share infrastructure from a business perspective, and these constraints must be respected. Often these management criteria take precedence over optimal workload "dovetailing," because many business constraints are absolute in nature and must be adhered to even if doing so increases operational costs.

These constraints tend to take the form of affinities (what needs to stay together), anti-affinities (what needs to be kept apart), and containment (restricted mobility for groups of things). Affinities are useful for optimizing performance by allowing components to communicate more efficiently. Anti-affinities tend to be more common and are often used to keep application components, data, or end users from "intermingling" on the same infrastructure, ensuring compliance with operational and business policies.

Containment has yet another purpose: it typically helps control licensing costs by limiting the number of physical servers used to host a certain software package. Regardless of the specifics, these are all components of an overall policy for managing shared environments, and combined with permutation-based placement, they ensure environments are not only efficient but also safe and compliant.
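
A minimal sketch of how such rules can be checked against a proposed placement is shown below in Python; the VMs, hosts, and rules are hypothetical examples rather than any product's policy syntax.

    def violations(placement, affinity, anti_affinity, containment):
        """placement maps VM -> host; returns human-readable policy violations."""
        problems = []
        for a, b in affinity:                         # must share a host
            if placement[a] != placement[b]:
                problems.append(f"affinity: {a} and {b} must stay together")
        for a, b in anti_affinity:                    # must never share a host
            if placement[a] == placement[b]:
                problems.append(f"anti-affinity: {a} and {b} must be kept apart")
        for vm, allowed in containment.items():       # restricted mobility (e.g., licensing)
            if placement[vm] not in allowed:
                problems.append(f"containment: {vm} limited to {sorted(allowed)}")
        return problems

    placement = {"app1": "hostA", "db1": "hostA", "db2": "hostA", "oracle1": "hostC"}
    print(violations(
        placement,
        affinity=[("app1", "db1")],                   # chatty tiers stay together
        anti_affinity=[("db1", "db2")],               # HA pair kept apart
        containment={"oracle1": {"hostA", "hostB"}},  # licensed hosts only
    ))

In practice, a placement optimizer treats these checks as hard constraints, discarding otherwise denser packings that violate them.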

Leverage the Right Data

Being able to address these factors is critical to managing modern IT infrastructure. It isn't enough to collect data in monitoring tools, CMDBs, and other management systems; organizations need to leverage the right data in order to effectively plan for the future. The ability to release stranded capacity for productive use, to quantify the true spare capacity and application headroom, and to proactively book that capacity against incoming demand to achieve optimal density can only come from predictive analytics.
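
For example, "true" spare capacity is not simply the free memory on a hypervisor. A rough sketch, with illustrative reserve percentages and figures, might look like this:

    # Rough sketch: usable capacity after an HA/failover reserve, minus what is
    # already used and what upcoming reservations have already booked.
    def headroom_gb(raw_gb, ha_reserve_pct, used_gb, booked_gb):
        return raw_gb * (1 - ha_reserve_pct) - used_gb - sum(booked_gb)

    # 4096 GB raw, 25% held for failover, 1800 GB in use, 768 GB already reserved
    print(headroom_gb(4096, 0.25, 1800, [512, 256]))   # -> 504.0 GB genuinely available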

Predictive analytics enables IT managers to make good choices about workload placements, sizing, and infrastructure requirements. As history has proven, the data center continues to evolve, and although today's technologies bring the promise of agility and efficiency, they also bring new layers of complexity. The benefits of predictive analytics can only be realized by integrating it into operations, so IT managers can gain greater control over capacity, efficiency, and ultimately risk in their environments.

Andrew Hillier is the co-founder and CTO of CiRBA, a provider of data center intelligence software. You can contact the author at marketing@cirba.com.