In-Depth

Can Going Green Slow Data Center Expansion?

Proponents say “going green” can help companies save money, but a secondary benefit may be just as important: delaying data center expansion.

Big server manufacturers like to pitch Green IT as a win-win for everybody: enterprise buyers get energy-efficient gear that helps them reduce their power and cooling costs and lets them trumpet their ecological progressiveness to the world. Big server OEMs, on the other hand, get to sell lots of new Green Gear -- licking their chops as they envision companies doing large-scale rip-and-replacements of their existing infrastructures -- while also touting their own ecological bona fides.

What's less clear is just how much companies will actually save by deploying new Green Gear. After all, microprocessor, storage, and network manufacturers tend to exploit gains in efficiency -- e.g., advanced chip fabrication processes (which produce smaller, cooler processors), improved areal densities (which boost hard drive capacities), high-throughput ASICs, and so on -- to pack in additional processors, significantly expand storage capacity, or boost network bandwidth. The upshot, some skeptics have pointed out, is gear that can actually consume more power than its predecessors (see http://www.esj.com/enterprise/article.aspx?EditorialsID=2951).
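A rough, back-of-the-envelope sketch illustrates the skeptics' point (all figures below are hypothetical, chosen only for illustration): even when power per core drops, packing more cores into each server and more servers into each rack can leave the rack drawing far more power than its predecessor.

# Back-of-the-envelope sketch of the skeptics' point: per-core efficiency
# improves, but added density raises total power draw. All figures are
# hypothetical, for illustration only.

old = {"watts_per_core": 30, "cores_per_server": 4, "servers_per_rack": 20}
new = {"watts_per_core": 20, "cores_per_server": 16, "servers_per_rack": 32}

def rack_watts(cfg):
    """Total compute power draw for one fully populated rack."""
    return cfg["watts_per_core"] * cfg["cores_per_server"] * cfg["servers_per_rack"]

print("old rack draw:", rack_watts(old), "W")   # 30 * 4 * 20  =  2,400 W
print("new rack draw:", rack_watts(new), "W")   # 20 * 16 * 32 = 10,240 W
# Power per core fell by a third, yet the "greener" rack draws roughly 4x more.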

Although "going green" can bring financial benefits such as subsidies, rebates, tax credits, and (as now appears likely) compliance with an expected rash of "Green" policy proposals, proponents argue there's another, frequently overlooked way in which Green IT can be an unequivocal money-saver in the here-and-now. Along with the push to make gear greener is a push to make it denser, and it's this density angle that is attracting a growing number of Green advocates. The cost-saving virtues of Green Gear aren't always straightforward, they concede, but the thinking is that Green-driven density can help companies -- or at least companies in certain locales -- save money.

It's all about the cost of data center real estate, says Karl Freund, vice-president of global strategy and marketing for System z at IBM Corp. Big Blue has long pushed integrated power and cooling as one of System z's strongest value propositions (see http://esj.com/enterprise/article.aspx?EditorialsID=2415). Officials have also expounded on System z's all-in-one compactness as a good solution for the ever-expanding enterprise data center. With IBM's System z10 mainframe launch last week, officials explicitly pushed this argument.

"For IT, the real pain of all of this [growth] is that they have to build new data centers, and that's a pain that I think can vary depending on where you are," Freund argues. "If you're in Manhattan, if you're in Tokyo, then these things become critical, so building a new data center is a cost you want to avoid."

The irony, he says, is that many IT departments don't end up footing the bill for the energy their data centers consume. That money comes from elsewhere. "Often the IT manager doesn't pay the electricity. It's provided by the facility. So talking about how much money they can save on energy -- yes, that gets their attention. They do want to be good citizens. In terms of their [IT] budgets, it doesn't matter … [because] they're not paying their energy bills," he says. "What does matter is that they have to significantly modify their data centers, and 70 percent of the Global 1000 will have to significantly modify their data centers over the next five years, according to Gartner."

Data center modification -- or the outright construction of entirely new data centers -- is a decidedly non-trivial cost, Freund argues: "Yes, it would be nice to spend less on energy. A lot of [companies] will take whatever savings they can get, but what they're really interested in is avoiding having to build another $200 million data center. That's where the rubber meets the road."
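A quick, purely illustrative calculation makes Freund's point concrete; the figures below are assumptions made for the sake of the sketch, not numbers from IBM or Gartner:

# Rough comparison of annual energy savings versus the capital cost of a new
# data center. All inputs are assumed values, used only to show the scale gap.

facility_kw_saved = 200                  # assumed reduction in continuous draw (kW)
hours_per_year = 24 * 365
price_per_kwh = 0.10                     # assumed utility rate ($/kWh)

annual_energy_savings = facility_kw_saved * hours_per_year * price_per_kwh
new_data_center_cost = 200_000_000       # the "$200 million data center" Freund cites

print(f"annual energy savings: ${annual_energy_savings:,.0f}")   # ~$175,200
print(f"avoided build cost:    ${new_data_center_cost:,.0f}")
print(f"build cost / annual savings: {new_data_center_cost / annual_energy_savings:,.0f}x")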

The Server Side

IBM isn't the only vendor taking this line. Most large server vendors are also talking up the rising cost of data center real estate -- typically as a means to push their densest, most scalable computing solutions.

Hewlett-Packard Co. (HP), for example, is tops in blades, according to both IDC and Gartner Inc. That's no accident, officials say: HP engineers dense, highly scalable, highly manageable blade offerings -- with a proprietary power and cooling feature set -- but it developed these products largely in response to market demand. Back in June, HP delivered a new über-dense data-warehouse-on-a-blade, its HP BladeSystem for Oracle Optimized Warehouse. That product, says Rich Palmer, director of technology strategy and planning with HP, addresses several different requirements.

At the outset, he agreed, there's the obvious: integrated -- i.e., preconfigured and optimized -- power and cooling. "If you think about blades, they are self-contained inside of one of our own physical enclosures. We can put management parameters around the power and cooling inside that box. We can spin down fans [and] we can offload power when it's not being used. We have sensors all throughout this c7000 [HP's BladeSystem enclosure] that do thermal management on the fly, so we can consistently deliver to the customer the most efficient operation possible," said Palmer at the time.

However, there's a density argument to be made, too, according to Palmer. "Our customers tell us that they can't just keep building new data centers [to house new gear] to meet their capacity requirements. This need [for capacity] is growing, and they [customers] say they can't possibly build enough new data centers -- based on the old scale-out model -- to meet it. What they're asking for are denser configurations -- blades -- which let them stack more [compute resources] into the same data center floor space."
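Palmer's density argument boils down to simple floor-space arithmetic. A minimal sketch, again using hypothetical densities and rack footprints, shows how packing more servers per rack shrinks the space needed for the same server count:

import math

# Sketch of the density argument: same number of servers, fewer racks, less
# floor space. Per-rack densities and footprint are hypothetical.

servers_needed = 1024
rack_servers_per_rack = 24     # assumed 2U rack servers in a standard rack
blade_servers_per_rack = 64    # assumed four enclosures of 16 blades per rack
sq_ft_per_rack = 25            # assumed footprint including aisle clearance

def floor_space(servers, per_rack):
    """Racks needed (rounded up) multiplied by per-rack footprint."""
    return math.ceil(servers / per_rack) * sq_ft_per_rack

print("rack servers:", floor_space(servers_needed, rack_servers_per_rack), "sq ft")  # 1,075
print("blades:      ", floor_space(servers_needed, blade_servers_per_rack), "sq ft")  # 400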

HP doesn't have an all-in-one platform with the same cachet as System z, but it does have estimable mission-critical computing assets in both its HP-UX-based and Tandem NonStop-based lines.

Earlier this summer, HP outlined an ambitious strategy to bring NonStop to blades -- consistent, officials said, with a strategy of addressing customer demand for denser, more integrated computing resources. HP plans to market its new NonStop-based blades as part of an "Adaptive Infrastructure in a Box" product push. Industry watchers were duly impressed -- enchanted as much by the future potential as by the present practicality of HP's coup.

"[F]uture HP plans call for 'Adaptive Infrastructure in a Box' blade solutions in which NonStop works alongside Linux, Windows, or HP-UX blades, [but] for now we're talking about dedicated racks of NonStop gear that just -- by no means incidentally -- take standard gear, add ServerNet, and sprinkle a liberal dose of NonStop software pixie dust. Think of these as NonStop systems that happen to leverage the BladeSystem design, rather than standalone blade servers," says Gordon Haff, a senior IT advisor with consultancy Illuminata.

Even non-traditional players have ventured into the blade stakes. Unisys Corp., for example, announced its first blade systems earlier this year. Last month, Unisys fleshed out its blade line with several new offerings, boasting support for the latest Intel processors. The company’s installed base -- which is strong in government, utilities, financials, and other important verticals -- was clamoring for blade solutions, officials say.

"It is a sort of non-traditional [entry] for us, but it's something that our customers were demanding, and -- since we delivered our first [blade] systems [earlier this year] -- it's something we're seeing a lot of demand for. If you think about it, with blades, you get some of the same advantages that you'd get with our ClearPath [mainframe] systems: you get that built-in power and cooling advantage, you get superior virtualization and scalability. You get that one box -- that stack of blades -- to replace all of those individual servers," says Colin Lacey, vice-president of systems and storage with Unisys. Lacey, too, talks up the importance of shrinking -- or, at least, bringing to heel -- the data center waistline.

"What a lot of [customers] have done is pretty much just kept adding servers, whenever they had a need [for more capacity], just adding servers without really any planning," he continues. "There's a cost to that that goes beyond just the cost of managing and powering and cooling all of those [distributed] systems. There's a cost involved with those systems taking up all of that space.

Contra Data Center Density

Blade sales are soaring; in both Gartner's and IDC's surveys, blades are posting stronger-than-ever growth numbers. Ditto for surging mainframe sales, which IBM's Freund claims are likewise proof of the mainframe's everything-old-is-new-again appeal. "Revenues for System z increased 25 percent [in Q3 of 2008] compared with the year-ago period, where -- admittedly -- we had a bit of a downturn, mostly because of pent-up demand for z10 [which IBM introduced in January of 2008]," he says, "but we posted double-digit growth in all geographies. System z has seen quite a renaissance as customers realize the benefit of larger-scale consolidations and as they realize the benefits of putting applications and data closer together -- putting them on the same systems. These two things are driving demand."

Blades -- and, to a lesser but (IBM officials maintain) proportional degree, mainframes -- are flying off the shelves, driven, advocates like to argue, as much by a hunger for density as by virtualization, best-in-class power and cooling, and other features. Another ultra-dense data center idea -- the so-called data-center-in-a-box -- has been slower to take off, however.

Sun Microsystems Inc. kicked things off two years ago, announcing Project Blackbox, its datacenter-in-a-standard-sized-shipping-container initiative (see http://esj.com/enterprise/article.aspx?EditorialsID=2259).

Its promise, Sun officials maintained, was a pre-configured, highly dense, highly scalable, highly modular data center environment. Moreover, Sun claimed, customers could add data center capacity just by forklifting in new boxes. (Similarly, customers could scale down data centers by unplugging -- or lifting out -- boxes.) Since then, IBM and HP have introduced data-centers-in-a-box of their own -- although HP's offering might as well be called a data-center-in-a-double-wide, because it's about twice as big as a standard-sized shipping container (see http://esj.com/Enterprise/article.aspx?EditorialsID=3231).

The value-add, advocates argue, is obvious. After all, the data-center-in-a-box doesn't simply address the power and cooling issues that still bedevil many organizations; it deals an effective death blow to the problem of the ever-expanding data center, too. Who cares about skyrocketing demand (much less about floor space, building leases, or highly specialized data center blueprints) when you can just forklift in additional capacity? Even so, datacenters-in-a-box aren't exactly flying off the shelves.

"As with most things related to business, habits are hard to break. Organizations used to building, managing, and maintaining monolithic data-center facilities tend to stick with what they know, and stubborn executives tend to like best what confuses them least," said Charles King, a principal with consultancy Pund-IT, apropos of HP's Performance Optimized Datacenter (POD) announcement earlier this year. "That said, container-based IT solutions have been finding their way into corners of the market where the combination of quick deployment, easy installation, and robust performance trumps conventional thinking."

Even as IBM, HP, Sun, Unisys, and other vendors tout blades, mainframes, or mainframes-on-a-blade (i.e., HP's NonStop Integrity offerings) as a low-carb diet of sorts for still-expanding data center waistlines, some industry thought-leaders are taking a different tack. Implicit, after all, in Freund's argument is the notion that System z's density and scalability enable customers to make better use of their existing data center floor space -- instead of spending millions of dollars to build new data centers.

"The grid cannot provide enough energy to feed those data centers, to feed that capacity as we throw more and more multi-core servers at them. As a result, you have to turn someplace else. You can either build a new data center or you can get a new, efficient consolidated platform," he maintains, citing System z10 -- in both its Enterprise Class (EC) and Business Class (BC) versions -- as a space-conscious platform.

Biting the Bullet

Some industry leaders say enterprise shops are probably going to have to bite the bullet and build new data centers anyway.

So noted a recent research bulletin from Gartner, which made the case for a more dynamic, more resilient data center environment. The salient upshot, Gartner analysts argue, is that the data centers of today are "functionally obsolete"; going Green, then, involves transforming today's static, reactive, and largely inert facilities into something quite different. "If 'greening' the data center is the goal, power efficiency is the starting point but not sufficient on its own," said Rakesh Kumar, vice president at Gartner, in the study. "'Green' requires an end-to-end, integrated view of the data center, including the building, energy efficiency, waste management, asset management, capacity management, technology architecture, support services, energy sources and operations."

In other words, Kumar argues, going Green involves a "conceptual" shift from the static status quo into a model that's patterned after a dynamic, resilient, inescapably "living" entity. He says legacy data centers -- including many of those constructed over the last half-decade -- are ill-suited to service a new class of ultra-dense, power-hungry gear (for example, aggressively multi-core processors and ever-denser storage arrays) that is being increasingly deployed.

"Data center managers need to think differently about their data centers. Tomorrow's data center is moving from being static to becoming a living organism, where modeling and measuring tools will become one of the major elements of its management," Kumar indicates. "It will be dynamic and address a variety of technical, financial and environmental demands, and be modular to respond quickly to demands for floor space. In addition, it will need to have some degree of flexibility, to run workloads where energy is cheapest, and, above all, be highly-available, with 99.999 per cent availability."
