Top 10 Questions Data Center Operators Must Answer in 2011

With the rising costs of energy and increasing pressure on organizations to be more "green," implementing a DCIM and/or DPO solution is inevitable.

By Clemens Pfeiffer, Co-founder and CTO, Power Assure

Several recent changes are putting a spotlight on power utilization and capacity management in data centers. Electricity now represents about 25 to 40 percent of the operational expenditures for the typical data center. The U.S. Department of Energy predicts that, within a few years, the cost to power a server over its useful life will exceed its original purchase price. Gartner estimates that, on average, 40 to 60 percent of server rack space is underutilized or wasted, and that up to 30 percent of available power is similarly stranded -- both of which can lead to organizations outgrowing their data centers prematurely.

These forces, along with the advent of new tools for data center infrastructure management and dynamic power optimization, built-in hardware power management capabilities, attractive utility incentives, and highly publicized case studies, will have CEOs asking some basic questions that facilities managers and data center operators will need to answer.

This article identifies the top 10 questions that are likely to be asked as organizations increase their focus on minimizing energy costs, and offers guidance for how to find and provide the "right" answers -- those that will best satisfy the CEO's concerns.

The Top 10 Questions

Here are the ten questions, in order of increasing difficulty, that a savvy CEO is likely to ask in 2011. The wording may vary, but how many of them could you answer well today?

1. Are we effectively measuring power utilization in our data center(s) today?

2. What's our overall efficiency rating (PUE, DCIE, and CADE), and how does that compare to the industry norm?

3. Are we able to accurately determine which equipment is being utilized the least and/or is the least efficient?

4. Can we determine energy consumption by application, transaction, or department, and does it make sense to allocate those costs accordingly?

5. How much is this year's technology refresh cycle expected to improve overall data center efficiency and capacity?

6. What gains did we realize (or do we expect to realize) from consolidation and virtualization of server and storage resources?

7. Can we optimize those gains with "what-if" analyses of all available options or through other measures?

8. Where are we most likely to experience a serious limitation, and are we at risk of having a data center exceed its available capacity?

9. How can we minimize energy consumption (and costs) in our data center(s)?

10. What will be the return on investment for improving overall efficiency in our data center(s)?

Getting the "Right" Answers

No one wants to say "No" when the CEO asks, "Are we using power as efficiently as possible in the data center?" Most operators can answer the first two questions with a few strategically placed power meters that separately measure IT loads (servers, storage, and networking equipment), recognizing that the remainder goes to cooling, lighting, power distribution, and other "overhead" in the data center. With these basic measurements, operators can calculate the data center's baseline Power Usage Effectiveness (PUE) or Data Center Infrastructure Efficiency (DCIE) rating and compare it with the industry averages of 2.0 and 0.5, respectively.
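To make the arithmetic concrete, here is a minimal Python sketch of the two ratios. The kilowatt figures are hypothetical examples, not measurements from any particular facility.

    # Computing PUE and DCIE from two strategically placed meters.
    # The kW readings below are hypothetical sample values.
    total_facility_kw = 1200.0  # utility feed: IT load plus cooling, lights, etc.
    it_load_kw = 600.0          # measured at the UPS/PDU output feeding IT gear

    pue = total_facility_kw / it_load_kw   # Power Usage Effectiveness
    dcie = it_load_kw / total_facility_kw  # DCIE is the reciprocal of PUE

    print(f"PUE:  {pue:.2f}")   # 2.00 -- the industry average
    print(f"DCIE: {dcie:.2f}")  # 0.50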

The U.S. Environmental Protection Agency (EPA) has established a target PUE for data centers of 1.1 to 1.4 (equivalent to a DCIE of 0.9 to 0.7, since DCIE is simply the reciprocal of PUE). Achieving this level of improvement will likely involve a range of initiatives in most organizations, including right-sizing UPS and power distribution equipment, raising operating temperatures, eliminating cooling inefficiencies, and improving server utilization.

Achieving the EPA's target and answering the remaining questions will require more granular measurements, such as those provided by a Data Center Infrastructure Management (DCIM) solution. Most DCIM solutions can measure power utilization down to the individual outlet, enabling operators to determine which resources are consuming the most energy. Some can also monitor environmental conditions (temperature, humidity, and airflow) at a similarly granular level, enabling operators to identify "hot spots" and other inefficiencies.
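As a simple illustration of what outlet-level visibility enables, the following sketch ranks equipment by measured draw. The readings and device names are invented for the example; real DCIM products expose this data through their own consoles and APIs.

    # Ranking equipment by outlet-level power draw (sample data only).
    outlet_readings_watts = {
        "rack01-outlet03 (web-srv-07)": 310,
        "rack01-outlet04 (web-srv-08)": 95,
        "rack02-outlet01 (db-srv-02)": 480,
        "rack02-outlet02 (old-file-srv)": 210,
    }

    # Sort descending to surface the biggest consumers first.
    for outlet, watts in sorted(outlet_readings_watts.items(),
                                key=lambda kv: kv[1], reverse=True):
        print(f"{watts:>4} W  {outlet}")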

This granular information will help answer some of the questions and allow for some improvements, but to do more, operators will need an even more capable DCIM solution -- one that can assess the efficiency (not just the power consumption) of individual servers. The enterprise will also need a DCIM solution that can determine how much power is required for individual applications and/or transactions.

The lack of visibility into individual servers and applications is why some DCIM solutions are unable to calculate a Corporate Average Datacenter Efficiency (CADE) rating, which takes into account the biggest source of waste in data centers today: low server utilization. CADE is the product of Facility Efficiency (similar to DCIE) and IT Asset Efficiency, with the latter being a measure of server utilization.
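Using the simplified two-factor definition above (the full CADE metric breaks each factor down further), the calculation looks like this; the percentages are hypothetical:

    # CADE = Facility Efficiency x IT Asset Efficiency (simplified form).
    facility_efficiency = 0.50  # comparable to a DCIE of 0.5 (PUE of 2.0)
    it_asset_efficiency = 0.10  # ~10% average server utilization

    cade = facility_efficiency * it_asset_efficiency
    print(f"CADE: {cade:.0%}")  # 5% -- low server utilization dominates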

Consolidation and virtualization, especially in combination with a server refresh, inevitably improve a data center's CADE rating, but some organizations find the results disappointing. They begin with an average utilization rate of about 10 percent for dedicated servers and achieve only 20 to 30 percent utilization afterwards -- far short of the 60 percent some vendors claim is possible. Others refresh their oldest servers yet see little or no improvement in PUE, DCIE, or CADE ratings. In effect, these organizations lack the ability to perform the what-if analyses needed to make optimal choices, prompting the CEO to ask some even tougher questions.
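A what-if analysis need not be elaborate to be useful. The sketch below compares hypothetical consolidation scenarios for a fleet of lightly loaded servers; every input is an assumption chosen for illustration, not vendor data.

    import math

    # What-if: consolidating 200 legacy servers running at ~10% utilization.
    legacy_servers = 200
    legacy_utilization = 0.10
    watts_per_server = 400  # assumed average draw per server

    total_work = legacy_servers * legacy_utilization  # normalized workload

    for target_utilization in (0.25, 0.40, 0.60):
        servers_needed = math.ceil(total_work / target_utilization)
        kw_saved = (legacy_servers - servers_needed) * watts_per_server / 1000
        print(f"target {target_utilization:.0%}: {servers_needed} servers, "
              f"~{kw_saved:.1f} kW saved")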

Getting the Best Answers

Peak efficiency is possible only by moving beyond the traditional (and wasteful) "always on" mode of operating a data center to an "on demand" mode where servers are powered up only as needed. The rationale for this change is that server load is variable. Every data center experiences a peak demand, whether on a daily, weekly, monthly, or annual basis. Every data center is configured with the server capacity required to accommodate that peak demand with an acceptable level of performance. During all the off-peak periods, when demand can be as much as 80 percent lower, those excess servers do little more than waste power -- and money.

Dynamic Power Optimization (DPO), a new capability found only in sophisticated DCIM solutions, works in cooperation with load-balancing or virtualization systems to dynamically match server capacity with demand, typically cutting power consumption by 50 percent or more. To achieve this dramatic improvement, DPO employs a real-time calculation engine that continuously assesses server demand, taking into account both current demand and its trend (whether it is rising or falling, and how quickly), along with historical or anticipated patterns. When the engine detects an impending mismatch between anticipated demand and current capacity, it automatically directs the virtualization system or load balancer to make the appropriate adjustments.
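The following is a conceptual sketch of such a control loop, not any vendor's implementation. The capacity figure, headroom factor, and callback functions are placeholders for the demand feeds and load-balancer integrations a real DPO product would provide.

    import math
    import time

    SERVER_CAPACITY = 100  # requests/sec one server can handle (assumed)
    HEADROOM = 1.2         # keep 20% spare capacity above forecast demand

    def control_loop(read_current_demand, forecast_demand,
                     active_servers, set_active_servers):
        """Continuously match powered-on server capacity to demand."""
        while True:
            demand_now = read_current_demand()         # requests/sec
            demand_soon = forecast_demand(demand_now)  # trend + history
            needed = max(1, math.ceil(demand_soon * HEADROOM / SERVER_CAPACITY))
            if needed != active_servers():
                # Direct the load balancer / virtualization layer to power
                # servers up or down ahead of the anticipated change.
                set_active_servers(needed)
            time.sleep(60)  # re-evaluate every minute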

Conclusion

With the rising costs of energy and increasing pressure on organizations to be more "green," implementing a DCIM and/or DPO solution is inevitable. Indeed, Gartner predicts that 60 percent of all organizations will utilize a DCIM solution by 2014. The longer an organization waits, the greater the risk of facing a costly upgrade or outgrowing the capacity of a data center. With some of the most powerful DCIM solutions now available in the cloud as software-as-a-service (SaaS) offerings, the return on the modest investment required can begin accruing in less than a month.

Clemens Pfeiffer is a 22-year veteran of the software industry, where he has held leadership roles in process modeling and automation, software architecture and database design, and data center management and optimization technologies. Before co-founding Power Assure, he served as founder, president, and CEO for 10 years at International SoftDevices Corporation, where he focused on designing and implementing a multi-industry smart process automation platform and integration agents. Pfeiffer holds an MS degree in Information Technology from Furtwangen University, Germany. You can contact the author at clemens@powerassure.com.