
Minimize Energy Costs by Following the Moon

Exploiting lower nighttime energy prices can help your data center cut its utility bill.

By Clemens Pfeiffer

Many organizations operate redundant data centers to satisfy business continuity needs, but very few take full advantage of this powerful configuration. Having multiple, strategically located data centers enables application workloads to be shifted to wherever power is currently more stable and less expensive. Because power is typically most abundant and least expensive at night, such a “follow the moon” strategy can result in considerable savings.

The cost of electricity now represents 25-40 percent of all operational expenditures in data centers, so CIOs are doing everything they can to minimize the utility bill. Or are they? There is only one way to fully minimize energy spend in data centers, and that involves the convergence of two capabilities that are routinely implemented for entirely different reasons.

Before examining both of these capabilities and the benefits of converging them, we offer a perspective on power. Both the U.S. Department of Energy and Gartner have observed that the cost to power a typical server over its useful life can now exceed the original capital expenditure. Gartner also notes that it can cost over $50,000 annually to power a single rack of servers. It is not just the servers consuming power. Every kilowatt-hour of electrical energy consumed by IT equipment produces heat that must be removed by the power-hungry cooling system. Attribute this change to the dramatic improvement in server price and performance (think Moore’s Law) or to increasing energy costs; either way (or both), it has certainly elevated the relative importance of power in the total cost of ownership equation.
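As a rough illustration of the arithmetic, the sketch below estimates the annual power bill for a single rack. The IT load, PUE, and utility rate are assumed figures chosen for illustration, not measurements from any particular facility.

```python
# Back-of-the-envelope sketch of the annual power cost for one rack.
# All figures below are assumptions for illustration, not measured values.

rack_it_load_kw = 10.0      # assumed average IT load of a populated rack
pue = 1.8                   # assumed facility PUE (cooling and other overhead)
price_per_kwh = 0.12        # assumed blended utility rate in $/kWh
hours_per_year = 24 * 365

annual_cost = rack_it_load_kw * pue * price_per_kwh * hours_per_year
print(f"Estimated annual power cost per rack: ${annual_cost:,.0f}")
# Roughly $19,000 at these assumptions; high-density racks in the 25-30 kW
# range and higher rates can push the figure toward and past $50,000.
```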

Capability #1: The Tool

The first capability exists as a means of minimizing the growing energy spend within a data center, and involves the use of a Data Center Infrastructure Management (DCIM) system to improve power usage effectiveness (PUE) ratings in two ways: increasing IT asset efficiency and reducing “overhead” power consumption, particularly for cooling. The typical DCIM system continuously measures power consumption at the building, circuit, and device levels. Most DCIM systems also measure environmental conditions, such as temperature, humidity, and airflow, throughout the data center.
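The PUE calculation a DCIM system performs from those meter readings is simple in principle; the sketch below uses placeholder readings, not data from any real facility, to show it.

```python
# Minimal sketch of the PUE calculation made from DCIM meter readings.
# The readings below are placeholder values for illustration only.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

building_meter_kw = 1_800.0   # assumed utility feed measured at the building level
it_circuits_kw = 1_000.0      # assumed sum of circuit-level readings feeding IT gear

print(f"PUE: {pue(building_meter_kw, it_circuits_kw):.2f}")   # 1.80
```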

Some offer advanced features such as auto-discovery, capacity planning, building management system (BMS) integration, sophisticated yet intuitive dashboards, and comprehensive reporting. The best ones provide real-time monitoring, advanced analytics, and the ability to automate processes in cooperation with load-balancing or virtualization systems.

With little gain to be had in the data center’s core and storage area networks, the best way to improve IT asset efficiency is to increase server utilization. Dedicated servers have average utilization rates as low as 10 percent. Consolidation and virtualization can improve overall server utilization to as much as 50 percent. Could organizations do better? Definitely, by using the means available in some DCIM systems to continuously match server capacity with actual demand. Every data center experiences a peak demand, whether daily, weekly, monthly, or annually, and every data center is configured with the server capacity needed to accommodate that peak with an acceptable level of performance. The only thing all that excess server capacity does during non-peak periods, when demand can be as much as 80 percent lower, is waste energy and money.
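The following sketch, using assumed numbers rather than data from any real deployment, illustrates how much provisioned capacity can sit idle off-peak when a fleet is sized for peak demand.

```python
# Illustrative sketch (assumed numbers) of how much server capacity sits idle
# when a fleet is sized for peak demand but demand drops off-peak.

peak_servers = 200             # servers provisioned to cover peak demand (assumed)
offpeak_demand_fraction = 0.2  # demand roughly 80 percent lower off-peak
target_utilization = 0.8       # utilization achievable with dynamic clusters

servers_needed_offpeak = int(peak_servers * offpeak_demand_fraction / target_utilization)
idle_servers = peak_servers - servers_needed_offpeak

print(f"Servers actually needed off-peak: {servers_needed_offpeak}")   # 50
print(f"Servers that could be capped or powered down: {idle_servers}") # 150
```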

A DCIM system with the ability to shed and shift loads will normally also be able to power-cap or even power down those servers that are not needed to satisfy the current level of demand. Runbooks (pre-defined operating procedures) are used to automate the steps involved in removing or adding capacity by deactivating or reactivating servers and/or resizing server clusters, whether on a predetermined schedule or dynamically in response to changing loads. Such dynamic or “stretchable” cluster configurations can keep running servers at utilization rates of around 80 percent and reduce power consumption without adversely impacting application performance or service levels. In fact, with a dynamic configuration a single application can get significantly more capacity when it needs it than it could in a static cluster, because all of the “spare” capacity can be allocated wherever it is needed at any given time.
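The sketch below shows what one such runbook step might look like in principle. The power_up/power_down calls are placeholders for whatever DCIM, load-balancing, or virtualization interface an actual deployment would use, and the sizing rule is an assumption, not a description of any specific product.

```python
import math

TARGET_UTILIZATION = 0.8   # keep running servers around 80 percent busy (assumed)

def power_down(count: int) -> None:
    # Placeholder for the DCIM/runbook action that caps or powers off servers.
    print(f"runbook: powering down {count} server(s)")

def power_up(count: int) -> None:
    # Placeholder for the DCIM/runbook action that brings servers back online.
    print(f"runbook: powering up {count} server(s)")

def servers_required(current_load: float, capacity_per_server: float,
                     min_servers: int = 2) -> int:
    """Servers needed to carry the current load at the target utilization."""
    needed = math.ceil(current_load / (capacity_per_server * TARGET_UTILIZATION))
    return max(min_servers, needed)

def resize_cluster(current_load: float, capacity_per_server: float,
                   active_servers: int) -> int:
    """One runbook pass: grow or shrink the cluster to track demand."""
    target = servers_required(current_load, capacity_per_server)
    if target < active_servers:
        power_down(active_servers - target)
    elif target > active_servers:
        power_up(target - active_servers)
    return target

# Example: demand drops to 1,200 transactions/s; each server handles 100 at full tilt.
resize_cluster(current_load=1200, capacity_per_server=100, active_servers=40)
```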

Capability #2: The Configuration

The second basic capability normally has nothing to do with power consumption, but instead involves business continuity and disaster recovery preparedness. The UPS and generator(s) can protect a single data center from the typical utility power outage (often with the DCIM system shedding load to match generator capacity). What about those natural disasters that can have significantly more destructive and enduring consequences? Organizations must be prepared for the hurricane, tornado, earthquake, or fire that wipes out the data center itself or merely prevents the delivery of diesel fuel for the generator. For business continuity under these circumstances, it is necessary to operate multiple data centers, preferably at geographically dispersed locations.

Although choosing the location of a separate data center has historically not involved power as a factor, this, too, is changing. Power availability, reliability, and pricing are now prominent factors for deciding where to locate data centers. Google, Facebook, and others are locating new data centers in remote areas such as Finland and the Pacific Northwest that offer cheap and abundant power (and, of course, high-speed fiber optic communications).

Following the Moon

How can DCIM systems and geographically dispersed data centers converge to further minimize energy expenses? Here is where the moon (actually nighttime) comes into play, but it is important to understand another consideration first.

The wholesale price of electricity becomes extraordinarily high during periods of peak demand, which typically occur late in the afternoon on hot summer days. For example, the wholesale price of electricity in Texas is usually in the $30-$60 per MWh range, but on June 26 this year it spiked to $3,000 per MWh. During these periods of peak demand, utilities have two options: charge a much higher rate for electricity or actually pay commercial and industrial customers to temporarily reduce their demand. At a minimum, every data center should be able to react effectively to these temporary “demand response” events, with options ranging from letting the temperature rise to power capping or powering down servers with excess capacity.
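One way to think about such a demand-response hook is as a simple escalation policy keyed to the wholesale price or the utility’s event signal. The thresholds and actions in the sketch below are illustrative assumptions only, not recommendations for any particular facility.

```python
# Hypothetical demand-response escalation keyed to the wholesale price.
# Thresholds (in $/MWh) and actions are assumptions for illustration.

def respond_to_price(price_per_mwh: float) -> str:
    """Pick an escalation step for the current wholesale price."""
    if price_per_mwh < 100:
        return "normal operation"
    if price_per_mwh < 500:
        return "raise cooling setpoint"            # let the temperature rise a bit
    if price_per_mwh < 1500:
        return "power-cap lightly loaded servers"
    return "power down excess-capacity servers"

for price in (45, 250, 900, 3000):
    print(f"${price}/MWh -> {respond_to_price(price)}")
```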

High daytime demand for electricity is leading some utilities to make another change to their rates: lowering them at night to encourage consumption when baseload generation is under-utilized. This allows a data center, for example, to charge its batteries overnight and draw on them during high-price peak periods, or to shift and shed loads across data centers so that power consumption rises where electricity is cheap and falls where it is expensive. Driven by forecasts or by predefined, regular schedules, the runbooks involved can be fine-tuned over time to capture these savings while also improving application reliability, because every step is automated.
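In principle, a “follow the moon” placement decision reduces to picking the site where power is currently cheapest, which is usually where it is night. The sketch below is a simplified illustration; the site names, UTC offsets, and day/night tariffs are all assumptions.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sites with assumed UTC offsets and assumed day/night rates in $/kWh.
SITES = {
    "us-west":  {"utc_offset": -8, "day_rate": 0.14, "night_rate": 0.08},
    "eu-north": {"utc_offset":  2, "day_rate": 0.12, "night_rate": 0.07},
    "ap-east":  {"utc_offset":  8, "day_rate": 0.15, "night_rate": 0.09},
}

def current_rate(site: dict, now_utc: datetime) -> float:
    """Return the assumed tariff in effect at the site's local time."""
    local_hour = (now_utc + timedelta(hours=site["utc_offset"])).hour
    is_night = local_hour < 6 or local_hour >= 22
    return site["night_rate"] if is_night else site["day_rate"]

def cheapest_site(now_utc: datetime) -> str:
    """Pick the site with the lowest rate right now (usually where it is night)."""
    return min(SITES, key=lambda name: current_rate(SITES[name], now_utc))

print("Shift flexible workloads to:", cheapest_site(datetime.now(timezone.utc)))
```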

It is also significant that outside air temperatures are at their lowest at night, which can substantially cut cooling costs. In fact, a well-designed data center operated exclusively at night during a finite “shift” may not even need a power-hungry air-conditioning system, and could instead use an airside economizer. The combination of lower rates and a significant reduction in overall power consumption can result in considerable savings.

There is more, of course, that CIOs can do to reduce energy costs, such as using energy-efficient servers with high transactions-per-second-per-watt ratings as measured by the UL2640 standard. However, following the moon using virtualized clusters is the best way to both minimize energy costs and get a better return on the existing investment for any organization with geographically dispersed data centers.

Clemens Pfeiffer is the CTO of Power Assure and is a 25-year veteran of the software industry, where he has held leadership roles in process modeling and automation, software architecture and database design, and data center management and optimization technologies. You can contact the author at [email protected].
