The Power to Control Power: An IT Priority Now and Next Year

During 2012, data centers became more virtualized in a migration to cloud-based architectures, and power consumption became a key consideration for IT departments. In 2013, these trends will accelerate, further elevating the importance of power for IT.

By Clemens Pfeiffer

The days of IT departments taking power for granted came to an end in 2012, when CIOs began taking steps to reduce power consumption. A detailed study conducted by the U.S. Environmental Protection Agency in August 2007 revealed that data centers in the U.S. consumed 61 billion kilowatt-hours in 2006, adding some 40 million tons of CO2 to the atmosphere. That was twice what they consumed just six years earlier, and consumption is estimated to have doubled again by 2012.

Here is my take on the key trends the industry witnessed in 2012, along with my predictions for the three biggest trends IT departments will begin experiencing in 2013.

A Look Back at 2012

The need to control costs while satisfying new requirements will remain a work in progress at most organizations through 2013 and beyond. This transition is embodied in three major trends that emerged in 2012.

2012 Trend #1: Virtualization and "the cloud" became the new norm

The move to virtualized environments and private clouds is occurring faster than most industry analysts predicted. The large players are now moving into the public and private cloud market as evidenced by recent offerings from IBM, EMC, and VMware. Numerous start-ups are also competing aggressively for market share, and their many innovations will further accelerate this trend. The result will be the widespread adoption of virtualization, with its ultimate implementation as a cloud-based infrastructure.

Growth in virtualized and cloud-based infrastructures is often accompanied by higher rack densities, causing power to grow as a percentage of overall IT spending. Both the U.S. Department of Energy and Gartner have observed that the cost to power a typical server over its useful life can now exceed the original capital expenditure. Gartner also notes that it can now cost over $50,000 annually to power a single rack of servers, and this cost pressure drove the year's second major trend.
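
As a rough illustration of how an annual figure of that size adds up (the rack load, PUE, and electricity price below are assumptions for the sketch, not figures from Gartner or the DOE), consider a densely populated rack:

    # Back-of-the-envelope annual power cost for one rack (all values assumed)
    rack_draw_kw = 25.0      # average IT load of a densely packed rack, in kW
    pue = 1.8                # facility overhead multiplier (cooling, power distribution)
    price_per_kwh = 0.12     # electricity price in $/kWh
    hours_per_year = 24 * 365

    annual_kwh = rack_draw_kw * pue * hours_per_year
    annual_cost = annual_kwh * price_per_kwh
    print(f"Annual energy: {annual_kwh:,.0f} kWh, cost: ${annual_cost:,.0f}")
    # With these assumptions: 25 kW * 1.8 * 8,760 h = 394,200 kWh, roughly $47,000 per year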

2012 Trend #2: Power became an increasingly important consideration in IT decisions

One obvious way to reduce power consumption is to buy more energy-efficient equipment. In 2012, most IT departments began factoring energy efficiency into purchases when adding capacity or during refresh cycles. To help IT managers make fully informed decisions, Underwriters Laboratories (UL) created a new performance standard (UL2640) based on the PAR4 Efficiency Rating. PAR4 provides an accurate method for determining both absolute and normalized (over time) energy efficiency for both new and existing equipment.

To calculate server performance under the UL2640 standard, a series of standardized tests is performed, including a boot cycle and a vendor-independent benchmark. The test results are used to determine the server's idle and peak power consumption, along with transactions/second/watt and other useful metrics. These metrics enable IT managers to compare legacy servers with newer ones, and new models with one another, when making purchasing decisions.
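
The arithmetic behind those metrics is straightforward. A minimal sketch (the benchmark figures below are invented for illustration, not actual UL2640 test data):

    # Derive efficiency metrics from benchmark results (illustrative values only)
    idle_watts = 180.0                   # measured power draw with no load
    peak_watts = 420.0                   # measured power draw at full benchmark load
    transactions_per_second = 95_000.0   # throughput achieved during the benchmark run

    tps_per_watt = transactions_per_second / peak_watts
    print(f"Idle: {idle_watts} W, peak: {peak_watts} W")
    print(f"Efficiency: {tps_per_watt:.1f} transactions/second/watt")
    # Comparing this number across legacy and current servers shows which models
    # deliver the most work per watt consumed.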

2012 Trend #3: Operating applications across multiple data centers became more common

Virtualization has gained popularity primarily because of its cost savings, but it can also enhance business continuity within and across data centers. Best practice in disaster recovery is to have multiple, geographically dispersed data centers so that downtime is minimized during widespread disasters or outages. It is therefore not surprising that more organizations are operating multiple data centers (whether owned or hosted) with centralized monitoring and management across facility and IT components.

Having multiple data centers under central control also minimizes disruption during demand response events, when utilities request (or require) a temporary reduction in power consumption during peak periods or ahead of natural disasters such as Hurricane Sandy. These peaks typically occur around the same local time of day, but with data centers in different time zones, one site's load can be shifted to another data center, reducing risk and energy consumption as needed. By contrast, organizations with only a single data center may be forced either to pay more or to shed (rather than shift) load during a demand response event, and they remain exposed to prolonged outages during long-lasting disasters, as some data centers in New York were during Hurricane Sandy.

A related and rather innovative way to reduce costs is a "follow-the-moon" strategy, in which load is shifted to whichever data center currently has the lowest electricity rates. Those rates are typically lowest at night, when ambient temperatures are also at their lowest, so the strategy saves on cooling costs as well.
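
A minimal sketch of how such a scheduler might pick a target site (the site names, rates, and peak windows below are invented for illustration; a production system would pull live tariff, capacity, and weather data):

    from datetime import datetime, timezone, timedelta

    # Candidate data centers: UTC offset, current $/kWh, and local peak-demand window (hours)
    SITES = {
        "us-east": {"utc_offset": -5, "rate": 0.14, "peak": (14, 19)},
        "us-west": {"utc_offset": -8, "rate": 0.11, "peak": (16, 20)},
        "eu-west": {"utc_offset":  1, "rate": 0.16, "peak": (17, 21)},
    }

    def best_site(now_utc):
        """Prefer sites outside their local peak window, then the cheapest rate."""
        def score(item):
            name, s = item
            local_hour = (now_utc + timedelta(hours=s["utc_offset"])).hour
            in_peak = s["peak"][0] <= local_hour < s["peak"][1]
            return (in_peak, s["rate"])   # False sorts before True, then lowest rate wins
        return min(SITES.items(), key=score)[0]

    print(best_site(datetime.now(timezone.utc)))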

A Look Ahead: Three Trends to Expect in 2013

In 2013, the migration to virtualized and cloud environments and the push for greater energy efficiency will continue, joined by three other related trends worth tracking.

2013 Trend #1: Power consumption will be a major factor in capacity planning

A CIO's worst fear is outgrowing a data center, and one of the biggest causes is stranded power: power that is allocated but can never actually be used. Most IT departments configure racks based on the nameplate power ratings of servers minus an assumed percentage, most often 20 percent. A rack is considered "full" when the assumed total power consumption of the servers it contains matches the power distributed to it. However, nameplate ratings and other vendor specifications are notoriously conservative, so available power and space go underutilized.

In 2012, we witnessed the advent of the UL2640 standard based on the PAR4 Efficiency Rating, which, as noted above, can be used to measure a server's transactions/second/watt energy efficiency. However, because UL2640 also accurately measures power consumption under peak application load, it can be used to minimize or eliminate stranded power by more fully populating racks of servers. The company I co-founded, Power Assure, estimates that most data centers should be able to increase overall server capacity by 40-50 percent using PAR4 measurements.
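
To see why measured peak power matters, consider a simple sizing exercise (the wattages and rack power budget below are assumptions for the sketch, not PAR4 results):

    # How many servers fit in a rack: nameplate-derated vs. measured peak power
    rack_power_budget_w = 10_000   # power distributed to the rack
    nameplate_w = 750              # vendor nameplate rating per server
    derate = 0.80                  # common practice: assume 80% of nameplate
    measured_peak_w = 420          # peak draw under real application load (PAR4-style test)

    servers_by_nameplate = int(rack_power_budget_w // (nameplate_w * derate))
    servers_by_measurement = int(rack_power_budget_w // measured_peak_w)

    print(f"Nameplate sizing: {servers_by_nameplate} servers per rack")
    print(f"Measured sizing:  {servers_by_measurement} servers per rack")
    # 10,000 / 600 = 16 servers vs. 10,000 / 420 = 23 servers;
    # the difference is power that would otherwise be stranded.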

2013 Trend #2: IT and facilities departments will begin consolidating management systems

There is considerable overlap between the data center infrastructure management (DCIM) systems used by the IT department and the building management system (BMS) used by the facilities department. This overlap leads to costly duplication of monitoring and management effort, and running two separate systems makes it difficult for the data center to participate in an organization's demand response and energy conservation initiatives. It also produces duplicate measurements and missed cross-functional optimization opportunities. A typical example is deploying wireless temperature sensors across racks when many newer servers already report inlet temperature directly, providing more measurement points than any wireless setup could.
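
For example, on servers with a baseboard management controller, inlet temperature can often be read over IPMI rather than from an add-on wireless sensor. A rough sketch using ipmitool (the host, credentials, and sensor naming vary by vendor and are assumptions here):

    import subprocess

    def inlet_temperature(host, user, password):
        """Read temperature sensor data from a server BMC via ipmitool (lanplus)."""
        result = subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
             "sdr", "type", "Temperature"],
            capture_output=True, text=True, check=True,
        )
        # Output format and sensor names ("Inlet Temp", "Ambient Temp", ...) differ by
        # vendor; filter for the inlet sensor and parse the reading from that line.
        for line in result.stdout.splitlines():
            if "Inlet" in line or "Ambient" in line:
                return line
        return None

    print(inlet_temperature("10.0.0.21", "admin", "secret"))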

DCIM systems have advanced and matured to the point that they can now serve as both the building management system and the IT management system for the data center, integrating with whatever is already in place and complementing it with integration, automation, and optimization functionality across data centers, as in the case of ABB Decathlon. A single system will also enable IT and facilities managers to cooperate more effectively to minimize energy consumption, optimize space and power use, and maximize capacity and reliability in the data center.

2013 Trend #3: DCIM-as-a-Service will grow in popularity

The same technologies that now make it possible for a DCIM system to monitor and manage a data center's environmental conditions and power consumption without special instrumentation also make it easier to locate some or all of the DCIM functionality in the cloud.

As with other cloud-based applications, DCIM-as-a-Service will likely begin with hybrid arrangements in which basic monitoring and management capabilities run on site while more advanced analytics and forecasting run in the cloud. This makes it possible to tap into vast knowledge bases that no single organization could practically maintain on its own, an implementation model used by Power Assure's EM/4.
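
In such a hybrid arrangement, the on-site piece can be as simple as a collector that periodically forwards readings to the provider's cloud analytics tier. A minimal sketch (the endpoint URL, API key, and payload fields are hypothetical and not tied to any particular product):

    import json, time, urllib.request

    CLOUD_ENDPOINT = "https://dcim.example.com/api/v1/metrics"   # hypothetical service URL
    API_KEY = "replace-with-site-key"

    def push_reading(rack_id, power_kw, inlet_temp_c):
        """Forward one on-site reading to the cloud analytics service."""
        payload = json.dumps({
            "rack": rack_id,
            "power_kw": power_kw,
            "inlet_temp_c": inlet_temp_c,
            "timestamp": time.time(),
        }).encode()
        req = urllib.request.Request(
            CLOUD_ENDPOINT, data=payload,
            headers={"Content-Type": "application/json",
                     "Authorization": f"Bearer {API_KEY}"},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

    # On site, a scheduler would call push_reading() every few minutes per rack.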

With the increased availability of cloud-based assessment and consulting services, a few organizations will take the plunge more fully, outsourcing some or even all DCIM functions, such as asset management, Power Usage Effectiveness (PUE) monitoring, stranded power analysis, or hardware refresh analysis.
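
PUE itself is a simple ratio (total facility energy divided by the energy delivered to IT equipment), which is part of what makes it a natural candidate for an outsourced monitoring service. A quick sketch with assumed meter readings:

    # PUE = total facility energy / IT equipment energy (values below are assumed readings)
    total_facility_kwh = 180_000.0   # utility meter for the whole data center, one month
    it_equipment_kwh = 100_000.0     # PDU/UPS output delivered to IT equipment, same month

    pue = total_facility_kwh / it_equipment_kwh
    print(f"PUE = {pue:.2f}")        # 1.80 here; closer to 1.0 means less facility overhead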

With all of these changes, 2013 may well be remembered as the year organizations became empowered to fully control their power consumption.

Clemens is a 25-year software industry veteran; he has held leadership roles in process modeling and automation, software architecture, and data center management and optimization technologies. Before co-founding Power Assure, he served as founder, president, and CEO at International SoftDevices Corporation, focused on designing and implementing an enterprise process automation platform used by some of the largest data centers in the world. Prior to SoftDevices, Clemens also served as chief software architect for Hewlett Packard. You can contact the author at Clemens@powerassure.com.
