Choosing the Right Key Performance Indicators for Data Centers

Data center personnel desperately need a more granular view into the data center that encompasses all aspects of performance.

By David Appelbaum

Today's data centers are experiencing rapid change as enterprises modernize and evolve to meet new business requirements. The original capacity and utilization profiles of these facilities are largely obsolete, so the challenge becomes how to provide state-of-the-art levels of availability, agility, and performance while minimizing risk and without increasing capital expenditures. That's a formidable task.

What are the most critical key performance indicators (KPIs) for personnel striving to achieve the most efficient management of their data centers? A September survey my company conducted of 5,000 data center professionals highlighted the metrics these professionals deal with daily. The survey asked:

  • How do you report metrics?
  • What metrics are most important to you?
  • What results are you achieving?

The responses to these straightforward questions revealed a lack of clarity about vital data center performance metrics. What are the key issues, and which metrics matter most for optimal performance? Let's examine the survey responses.

Virtualization: The primary initiative across all survey respondents, regardless of data center size, was virtualization, now considered a table-stakes technology for eliminating redundant servers and increasing overall data center capability. Highly efficient data centers have implemented significant virtualization initiatives across their server, storage, and network environments. However, the average level of virtualization within each facility surveyed varied from 40 percent up to 60 percent. Although the level of virtualization did not differ greatly in statistical terms, a wide range of virtualization technologies was in play across a broad variety of data centers.

Virtual OS per host: The surveyed companies reported running anywhere from one to 15 virtual operating systems per host, with the greatest number of respondents falling within the 6-10 range. Respondents also reported a wide variety of uses and guest operating systems, showing that neither Microsoft Windows nor Sun's operating system any longer dominates the data center.

CPU utilization: When evaluating data center capacity, the ability to draw on additional compute resources becomes indispensable. Although a great many of those surveyed (89 percent) did not know their average CPU utilization, a reasonable number (21 percent) reported utilization in the 41 to 60 percent range, for an average capacity utilization of roughly 50 percent.

This poses important questions that data center professionals need to ask themselves: Of that 50 percent of CPU capacity in use, how much is being consumed by zombie servers? (Zombie servers run at very low utilization and are thus candidates for consolidation or decommissioning.) Understanding the actual level of CPU utilization is key to true performance optimization of the data center -- and, more important, to getting the maximum use of computing resources out of the facility at the lowest cost.
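To make the zombie question concrete, the minimal sketch below (an illustration, not drawn from the survey) flags hosts whose average CPU utilization falls below a cutoff; the host names, the sample readings, and the 5 percent threshold are all assumptions chosen only to show the idea.

    # Illustrative sketch: flag low-utilization "zombie" servers as
    # candidates for consolidation or decommissioning.
    # The threshold and sample data below are hypothetical.

    ZOMBIE_THRESHOLD = 5.0  # average CPU utilization, in percent

    # Hypothetical utilization readings from monitoring (percent busy)
    cpu_samples = {
        "web-01": [2.1, 1.8, 3.0, 2.5],
        "db-01":  [55.0, 61.2, 48.9, 70.4],
        "app-07": [0.4, 0.9, 0.2, 0.6],
    }

    def zombie_candidates(samples, threshold=ZOMBIE_THRESHOLD):
        """Return hosts whose average CPU utilization is below the threshold."""
        return [
            host for host, readings in samples.items()
            if sum(readings) / len(readings) < threshold
        ]

    print(zombie_candidates(cpu_samples))  # ['web-01', 'app-07']

In practice the readings would come from whatever monitoring system the facility already runs; the point is simply that a consistent utilization feed makes consolidation candidates easy to surface.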

Power density: A critical component of understanding overall data center performance is visibility into power density. Among the survey respondents, average power use fell in a reasonable range of 101 to 150 watts per square foot. However, a significant number did not know the power density figure for their facilities.
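As a rough illustration of the arithmetic (the facility size and load here are hypothetical), a 10,000-square-foot data hall whose equipment draws 1.2 megawatts operates at 1,200,000 W / 10,000 sq ft = 120 watts per square foot, squarely within that 101-to-150 range.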

To ensure the correct power density and simultaneously maximize productivity, data center professionals must understand where that power density lies. They need to know whether the computing load can be spread across multiple locations to ease cooling and power demands. In the quest for data center optimization, understanding what drives the power density numbers and how much of that power is applied to useful work is exceedingly helpful.

Power usage effectiveness (PUE): A reasonable PUE is anywhere from 1.0 to 1.9. In this survey, a large percentage (42 percent) of data centers fell in the 1.6-1.9 range, indicating a relatively high PUE (and thus lower efficiency). However, a surprising 38 percent of respondents did not know their PUE at all. This underscores a problem in managing data center costs and energy consumption. Gaining control over how much energy is dedicated to productive work in the data center is the decisive measure for achieving true understanding of a facility's overall efficiency.
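By way of a worked example with illustrative figures: PUE is the ratio of total facility energy to the energy delivered to IT equipment, so a facility drawing 1.7 megawatts overall while its IT gear consumes 1.0 megawatt has a PUE of 1.7, with the remaining 0.7 megawatts going to cooling, power distribution losses, and other overhead rather than to computing.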

Memory utilization: Much like CPU utilization, memory utilization was between 25 and 75 percent for about half of the survey respondents. That wide range suggests there is significant headroom within memory pools, thanks to the low cost of memory chips and the relative ease of upgrading memory in existing servers. However, the same range also indicates that data centers are gradually nearing maximum capacity, a fact not to be ignored.

Disk space allocation: Along with power, perhaps the most valuable commodity within the data center is disk space. Among the companies surveyed, a significant amount of space was pre-allocated. Although the largest group of survey respondents (37 percent) was unaware of their consumption, 29 percent indicated that over three-quarters of their disk space was already allocated -- not necessarily used, merely allocated.
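To illustrate the allocated-versus-used distinction with hypothetical numbers: a 100 TB array on which 80 TB has been allocated to application volumes is 80 percent allocated even if only 30 TB of data has actually been written, so reports based on allocation alone can overstate how full the storage really is.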

Survey respondents clearly anticipate growth, with data center personnel planning for additional applications and solutions and, more important, holding disk space in reserve to accommodate that growth. The more applications and services that run in the data center, the more data will grow. It is therefore crucial that data center professionals have an in-depth understanding of their overall storage profile and disk availability.

Space: Among the finite resources affecting all data centers, space is clearly a pivotal issue. More than half of the survey respondents were "crunched" for data center space. Only 15 percent indicated their rack space utilization was under 50 percent, which would leave room for growth. At the top of the range, about 35 percent had utilization rates as high as 70 percent, and an additional 17 percent were at 80 percent utilization.
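As a hypothetical illustration of the arithmetic behind those figures, a standard 42U rack with 34U of equipment installed is roughly 81 percent utilized (34 / 42), leaving little contiguous space for new gear.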

As IT organizations fill the physical space in their data centers, they must either invest in additional colocation facilities or build new sites -- an expensive proposition. Data center professionals need a clear understanding not only of what is in use in each rack but also of how productive the servers in those racks are. Is there an opportunity to consolidate and thereby free up more space? A granular level of visibility into the data center is necessary to know what is actually in use versus what is simply running.

Conclusion

Today's data centers are ill equipped to handle the current state of the art in IT. Given increasing demand for data services combined with capacity limitations and flat budgets, IT organizations are looking for new ways to extend the lives of their current infrastructure and facilities, yet real visibility into data center performance is still far from widespread. In this survey, the largest percentage of responses to nearly every one of these KPI questions was "I don't know." That is not an answer to instill confidence in a CIO looking for greater operational efficiency.

Data center personnel desperately need a more granular view into the data center that encompasses all aspects of performance at both the facility and IT level, including assets, applications, and racks. Clearly, there is still work to be done.

David Appelbaum is vice president of marketing at Sentilla Corporation, headquartered in Redwood City, Calif. Prior to joining Sentilla, David worked in software marketing roles at Borland, Oracle, Autonomy, Salesforce.com, BigFix, and Act-On. You can contact the author at david@sentilla.com.