In-Depth

Treat Energy as a Key IT Asset

Treat energy with the same care and attention that your other data center assets receive.

by Joe Polastre, Ph.D.

IT managers have a busy job. Managing assets such as servers, storage, applications, and networking is an all-consuming endeavor. They have to keep applications performing up to service level agreements (SLAs), employ the right management tools, track usage and efficiency patterns, plan for future growth of services, and determine what works and what needs to be replaced.

What about energy management? Energy accounts for at least half of a data center's operating costs and, of course, you can't run any of your assets without it. Doesn't it make sense to treat energy with the same care and attention that your other data center assets receive? Instead of looking at power as part of the infrastructure, view it as a key asset: you can reduce the potential for downtime and ensure business continuity, cut costs, improve overall performance, and free up capacity to run even more applications within your data center. Let's take a closer look at each of these benefits.

Energy Management Helps Mitigate Risk

It's always best to solve small problems before they become big ones. With a well-planned energy management strategy, companies can anticipate problems lurking around the corner before they become huge. Just as you work proactively to prevent application bottlenecks, you need the same approach for power bottlenecks, imbalances, and provisioning. If a power issue occurs and servers go down, all of the applications running on them go down as well. Trace the power flow back to the source, and stay on top of it with dashboards and alerts, just as you do for system and application management.

Think of energy as having an SLA of its own; you will need tools to ensure the availability of energy. For comparison, look at the application pools set up to run critical business services. Pools are replicated for production, development, QA, and testing. Each is structured as a pool to provide availability and redundancy in the event that any one server goes down or demand goes up. Sometimes the pools are tiered as well, such as Web server, application server, and database pools all serving one service (see Note 1).

The same principle applies to power: redundant and resilient power is delivered via UPSes, multiple utility feeds (analogous to multiple network feeds), and generators, and it is delivered over multiple phases to multiple power supplies. Any single power failure is designed to be benign, but if power is poorly provisioned, one failure can have a cascading effect that takes out an entire service or, worse, an entire data center. Tracking energy risk metrics provides real-time insight into potential problems, unearthing them before they become disasters.
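To make this concrete, here is a minimal sketch, in Python, of the kind of failover-headroom check an energy dashboard might run. The rack names, loads, and feed rating are hypothetical: if the combined load on a dual-fed rack exceeds what a single feed can safely carry, losing either feed would cascade, and an alert should fire.

    # A minimal sketch of a power-redundancy risk check for dual-fed racks.
    # Loads, capacities, and names are illustrative assumptions, not readings
    # from any particular monitoring product.

    FEED_CAPACITY_KW = 17.3   # assumed per-feed rating
    DERATING = 0.80           # keep 20 percent headroom, a common rule of thumb

    racks = {
        "rack-01": {"feed_a_kw": 6.2, "feed_b_kw": 5.9},
        "rack-02": {"feed_a_kw": 9.8, "feed_b_kw": 9.1},  # risky: one feed can't carry both
    }

    def failover_risk(rack):
        """Return True if losing either feed would overload the surviving feed."""
        total = rack["feed_a_kw"] + rack["feed_b_kw"]
        return total > FEED_CAPACITY_KW * DERATING

    for name, rack in racks.items():
        if failover_risk(rack):
            print(f"ALERT {name}: combined load {rack['feed_a_kw'] + rack['feed_b_kw']:.1f} kW "
                  f"exceeds the single-feed limit of {FEED_CAPACITY_KW * DERATING:.1f} kW")

In practice the load figures would come from metered PDUs or branch-circuit monitoring rather than a hard-coded table, but the check itself stays this simple.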

Energy Management Helps Control Data Center Costs

According to the joint Server and Energy Efficiency Report from the Alliance to Save Energy and 1E, as of 2009 there were approximately 4.75 million servers worldwide being run, managed, and upgraded without being actively used on a daily basis. Those unused servers cost $20.7 billion to run, and they consume another $3.7 billion in energy costs.

Here's another eye-opening statistic: About $21.4 billion is wasted each year on hardware, maintenance, management, energy, and cooling for unused servers. This is roughly equal to the cost of the Apollo space program (see Note 2).

Actively managing energy can result in energy savings of 40 percent or more in the data center. When you consider that a data center can consume 10 to 100 times more energy per square foot than the average office building and, in some cases, account for up to 40 percent of an organization's carbon footprint, you begin to understand the importance of managing energy in the data center.

Here's an example from the realm of storage: data deduplication improves data protection and ultimately lowers costs by eliminating redundant data, which reduces the required storage capacity because only the original data is stored. You can manage energy the same way: by identifying and eliminating unneeded or redundant power consumers and tracking unused power capacity, you can significantly reduce costs, cut the hardware required, and optimize operations.
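As an illustration, the following sketch flags servers that keep drawing power while doing almost no work, the "unused servers" the report describes, and estimates the energy spend that retiring or consolidating them would reclaim. The utilization figures, power draws, and the $0.10/kWh rate are assumptions for the example; real numbers would come from your monitoring and metering tools.

    # A minimal sketch of flagging idle ("comatose") servers as candidates for
    # decommissioning or consolidation. All figures are hypothetical.

    IDLE_CPU_THRESHOLD = 0.05   # assume <5% average CPU over the window means unused
    PRICE_PER_KWH = 0.10        # assumed blended electricity rate, $/kWh
    HOURS_PER_YEAR = 8760

    servers = [
        {"name": "web-14",   "avg_cpu": 0.42, "avg_watts": 310},
        {"name": "app-07",   "avg_cpu": 0.03, "avg_watts": 220},  # idle but still drawing power
        {"name": "batch-02", "avg_cpu": 0.01, "avg_watts": 180},
    ]

    idle = [s for s in servers if s["avg_cpu"] < IDLE_CPU_THRESHOLD]
    reclaimable_kw = sum(s["avg_watts"] for s in idle) / 1000.0
    annual_savings = reclaimable_kw * HOURS_PER_YEAR * PRICE_PER_KWH

    for s in idle:
        print(f"candidate for retirement: {s['name']} ({s['avg_watts']} W at {s['avg_cpu']:.0%} CPU)")
    print(f"reclaimable load: {reclaimable_kw:.2f} kW, roughly ${annual_savings:,.0f} per year in energy alone")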

Energy Management Helps Improve Performance in the Data Center

When you buy a car, one of the things you look at is its efficiency, typically measured in miles per gallon. Efficiency is a ratio of the useful work done to the amount of energy required to accomplish that work. Performance is a similar metric: useful work compared to the amount of resources required to do it. Thus, you can look at the energy performance of your data center as well as the assets inside it. By merging energy data with IT management data, you can track how much work is done, how much energy is consumed, and what resources are required. With this information, you can identify the best place to run applications, find where computing resources are abundant but unused, and better manage your infrastructure and applications to achieve greater utilization without increasing energy consumption or cost. That truly is the definition of a performance increase.
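Here is a small sketch of that idea. It assumes requests per second as the work metric and uses illustrative power readings; it compares work per watt across two clusters and picks the one where a new workload would get the most performance for the energy consumed, weighted by how much room each cluster has to grow.

    # A minimal sketch of comparing "useful work per watt" across clusters and
    # choosing where to place a new workload. The work metric and figures are
    # illustrative assumptions, not measurements.

    clusters = {
        "cluster-a": {"requests_per_sec": 12_000, "avg_kw": 18.0, "spare_capacity": 0.15},
        "cluster-b": {"requests_per_sec": 9_500,  "avg_kw": 10.5, "spare_capacity": 0.40},
    }

    def perf_per_watt(c):
        return c["requests_per_sec"] / (c["avg_kw"] * 1000)

    for name, c in clusters.items():
        print(f"{name}: {perf_per_watt(c):.2f} req/s per watt, {c['spare_capacity']:.0%} spare capacity")

    # Prefer the cluster that does the most work per watt *and* has room to grow.
    best = max(clusters, key=lambda n: perf_per_watt(clusters[n]) * clusters[n]["spare_capacity"])
    print(f"place the new workload on {best}")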

When it comes to measuring the carbon impact of your data center, it's crucial that energy be treated the same as any other asset. Some companies overlook energy as an asset because it doesn't come in a box like an application or a server; instead, it comes from a utility company. That's one of the reasons people don't generally think about energy the way they think about servers and storage, but in reality it affects your operations just the same. By setting up a reporting structure and developing a company-wide plan, you can better manage your data center's performance and establish chargebacks to each line of business for the resources it uses.
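A chargeback report can be as simple as rolling metered consumption up to the line of business that owns each rack. The sketch below assumes per-rack metering, a hypothetical rack-to-business-unit mapping, and a blended rate of $0.10/kWh.

    # A minimal sketch of an energy chargeback report by line of business,
    # assuming each metered rack has already been mapped to the business unit
    # that owns it. The kWh figures and rate are hypothetical.

    from collections import defaultdict

    PRICE_PER_KWH = 0.10   # assumed blended rate, $/kWh

    # monthly kWh by rack, and the line of business that owns the rack
    rack_usage = [
        ("rack-01", "e-commerce", 4_300),
        ("rack-02", "e-commerce", 3_900),
        ("rack-03", "analytics",  6_100),
        ("rack-04", "hr-systems", 1_200),
    ]

    by_lob = defaultdict(float)
    for rack, lob, kwh in rack_usage:
        by_lob[lob] += kwh

    for lob, kwh in sorted(by_lob.items(), key=lambda kv: -kv[1]):
        print(f"{lob:12s} {kwh:8,.0f} kWh  ${kwh * PRICE_PER_KWH:10,.2f}")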

Energy Management Helps Plan for Future Capacity in the Data Center

By efficiently managing your data center's energy as an asset, you can maximize both computing and energy capacity. You need accurate information about exactly where and when your energy is used in order to plan for expansion. You need to ask: where in the data center is the optimal place, in terms of power, cooling, and space, to add a new server? When you provision new IT equipment using standard management tools, provision the energy and environmental components as well. This seems obvious, but it's often overlooked.
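One way to answer that question is to score candidate racks on their power, cooling, and space headroom at provisioning time. The sketch below uses hypothetical rack data and an assumed 0.5 kW, 2U server; a real deployment would pull the same inputs from power meters, temperature sensors, and the asset inventory.

    # A minimal sketch of choosing where to place a new server based on power,
    # cooling, and space headroom. All capacities and readings are hypothetical.

    NEW_SERVER_KW = 0.5   # assumed draw of the server being provisioned
    NEW_SERVER_U = 2      # rack units required

    racks = [
        {"name": "rack-01", "spare_kw": 2.4, "inlet_temp_c": 22.0, "free_u": 10},
        {"name": "rack-02", "spare_kw": 0.4, "inlet_temp_c": 26.5, "free_u": 16},
        {"name": "rack-03", "spare_kw": 3.1, "inlet_temp_c": 24.0, "free_u": 1},
    ]

    def can_host(rack):
        # hard constraints: enough spare power, enough space, inlet air within a safe range
        return (rack["spare_kw"] >= NEW_SERVER_KW
                and rack["free_u"] >= NEW_SERVER_U
                and rack["inlet_temp_c"] <= 27.0)

    candidates = [r for r in racks if can_host(r)]
    if candidates:
        # among feasible racks, prefer the one with the most power headroom left afterward
        best = max(candidates, key=lambda r: r["spare_kw"] - NEW_SERVER_KW)
        print(f"provision the new server in {best['name']}")
    else:
        print("no rack has enough headroom; time to plan new capacity")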

At Sentilla, we look at energy management from an IT enterprise management perspective. We merge and correlate IT performance with energy consumption, which provides you with tools that reduce cost, increase performance, optimize capacity, and mitigate risk. With visibility into energy consumption and energy management as a key component of their IT software portfolios, data center executives can deliver exceptional IT performance at minimal cost and without disruption.

Notes

  1. See the "Scaling X" series at James Hamilton's blog for more examples
  2. See http://history.nasa.gov/Apollomon/Apollo.html

Dr. Joe Polastre is co-founder and chief technology officer at Sentilla, a company that provides demand-side energy management solutions for data centers and commercial facilities. Joe is responsible for defining and implementing the company's global technology and product strategy. Joe holds M.S. and Ph.D. degrees in Computer Science from the University of California, Berkeley, and a B.S. in Computer Science from Cornell University. You can contact the author at joe@sentilla.com.
