
Utility Computing: Implementation Approaches

Instead of making big-bang investments, customers can reap the benefits of utility computing in more affordable ways.

Utility computing promises a host of benefits. In a utility-computing infrastructure, compute resources are highly virtualized, additional compute resources can be provisioned on an as-needed basis, workloads are highly distributed, and application instances are allocated and de-allocated on demand.

Many vendors say these goals are realizable today using off-the-shelf hardware and software, along with professional services. The catch: you may have to standardize on hardware and software from a single vendor (or a handful of supported vendors), and you may need to buy new or replacement hardware for your existing infrastructure.

“A lot of capabilities are present, at least within certain scopes. For example, there’s some level of dynamic provisioning and load-balancing available within data centers, and in general, this base-level provisioning is quite widely used,” says Gordon Haff, a senior analyst with consultancy Illuminata.

Automation and Orchestration

In essence, utility computing describes an infrastructure in which compute resources are utilized more efficiently, and in which utilization rates as a whole are maximized. To a large extent, this involves two things: the dynamic provisioning of application workloads and the allocation or de-allocation of compute capacity on demand.

If a bank experiences an unexpected usage spike on one of its customer-facing applications, a utility-computing infrastructure should be able to respond automatically—and efficiently—by re-provisioning this workload to other (dormant or underused) compute resources. If all of the bank’s compute resources are being maximally utilized, a utility computing infrastructure should also be able to automatically bring new resources online. Once demand dies down, however, workloads should automatically be de-provisioned and additional compute capacity de-allocated.
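
To make that policy concrete, the sketch below shows, in Python, the kind of threshold-based logic such an infrastructure automates. It is only an illustration under assumed names: the ResourcePool class, its provision and release calls, and the utilization figures are hypothetical stand-ins for whatever provisioning interfaces and monitoring feeds a real utility-computing platform would expose.

# A minimal, hypothetical sketch of threshold-based provisioning logic.
# ResourcePool and its methods stand in for a real platform's provisioning
# API; utilization is simulated so the example runs on its own.

import random
import time

SCALE_UP_THRESHOLD = 0.80    # add capacity when utilization exceeds 80 percent
SCALE_DOWN_THRESHOLD = 0.30  # release capacity when utilization falls below 30 percent
MIN_SERVERS = 2              # always keep a baseline of capacity online


class ResourcePool:
    """Toy stand-in for a pool of virtualized compute resources."""

    def __init__(self, servers: int = MIN_SERVERS) -> None:
        self.servers = servers

    def get_utilization(self) -> float:
        # A real platform would report this from monitoring or metering agents.
        return random.uniform(0.0, 1.0)

    def provision_server(self) -> None:
        self.servers += 1
        print(f"provisioned: pool now has {self.servers} servers")

    def release_server(self) -> None:
        self.servers -= 1
        print(f"released: pool now has {self.servers} servers")


def autoscale(pool: ResourcePool, cycles: int = 10, poll_seconds: float = 1.0) -> None:
    """Re-evaluate capacity on a fixed interval and adjust the pool."""
    for _ in range(cycles):
        utilization = pool.get_utilization()
        if utilization > SCALE_UP_THRESHOLD:
            pool.provision_server()      # demand spike: bring new capacity online
        elif utilization < SCALE_DOWN_THRESHOLD and pool.servers > MIN_SERVERS:
            pool.release_server()        # demand has died down: de-allocate capacity
        time.sleep(poll_seconds)


if __name__ == "__main__":
    autoscale(ResourcePool())

Real products layer policy engines, workload priorities, and approval workflows on top of this basic loop; the point is simply that the provisioning and de-provisioning decisions are made by software rather than by an administrator.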

Utility Computing in Mixed Environments

Utility computing's benefits may be easy to achieve in a single-vendor environment, but when you factor the complexity of the average enterprise into the mix, you may face greater challenges. “For the people who are doing it, they’ve probably standardized on as few [platforms] as possible,” Haff says. “And for something like true capacity on demand, this vision where you can sort of allocate this extra or unused capacity when you need it, this isn’t really even applicable for most [customers]."

Utility computing proponents disagree. “There has never been anything but a strong assumption that we’re going to work in these environments that are very diverse,” says John Lutz, vice-president of On Demand sales for IBM. Lutz points to IBM’s own Global Services division, which—in any given year—generates about half of IBM’s revenue. IGS operates in thousands of accounts, Lutz says, some of which aren’t even consumers of IBM’s hardware or software offerings.

As a result of this experience, Big Blue has “developed an enormous expertise in managing other people’s technology,” Lutz argues: “Everything we build runs on everybody who [has] any share, all of the prominent platforms. We are as likely to take on and optimize a data center that’s full of other people’s stuff as we are our own. We’re very broad-minded about that.”

Does this mean that the On Demand vision IBM has outlined—which includes dynamic provisioning of resources and pay-for-use capacity on demand—is attainable even in richly heterogeneous environments with investments in commodity x86 servers and legacy platforms?

“Yes,” says Lutz—with certain important caveats, of course. “There’s a sliding scale at work here. First of all, some platforms that get lumped into legacy are still getting a lot of investment, like our mainframe platform. It is harder with platforms that are stabilized, that aren’t getting any new support,” he concedes. “One of the target markets of Tivoli Orchestrator”—Big Blue’s recently acquired policy-based provisioning and management tool—“is going to be the most active sets of platforms, and I don’t know specifically how into the stack of relatively stable Unixes, for example, that will operate.”

Bill Cheng, director of provisioning and orchestration products with IBM Tivoli, does know, however. The Tivoli Orchestrator product that Cheng oversees incorporates technology IBM acquired from the former ThinkDynamics, a software vendor and integrator that developed policy-based provisioning technology for a variety of applications and platforms. As such, he says, it’s well-suited to managing compute resources in heterogeneous environments.

“That’s one of the reasons we bought ThinkDynamics: they weren’t an IBM provider, so they had to support other vendors. When we bought ThinkDynamics, they were able to do provisioning of a networking device such as Cisco, they were able to do provisioning of servers, they could do HP servers, they could do IBM servers, or whatever,” he asserts.

Tips for Implementation

Adapting your business as an on-demand or adaptive enterprise isn’t all, or even mostly, about making changes to your IT infrastructure. People and, just as important, business processes will almost certainly need to change as well.

That’s why turning a utility-computing vision into reality is often aided by business consulting and technology services, too. “You really can’t do this by thinking only about the technology. We go to a bank or a retailer and talk with them about how to make their whole enterprise more adaptive, more on demand, and the first thing they’ll want to know is about the process dimension,” says Lutz. “If you don’t understand the process flows, it’s hard to really impact their agenda.”

No one disputes this claim, though some observers say that IT needs to beware of big promises of short-term benefits. From a technology perspective (and irrespective of people or business process changes), some approaches require you to engineer a utility infrastructure by first ripping and replacing your existing infrastructure or, in many cases, by custom-tailoring management software to suit your organization’s highly idiosyncratic requirements.

“Some of the flavors of utility computing that are out there are simply a new way to market professional services or outsourcing, and you really have to beware of those kinds of risks,” argues Louis Blatt, senior vice-president of Unicenter strategy for Computer Associates International Inc. (CA).

Dr. Tim Howe, chief technical officer and founder of Opsware Inc., a company that has been developing data-center automation software since the late 1990s, agrees. “Utility computing is really about lowering costs, increasing efficiency, and making their infrastructure more responsive to the business,” he says. “It is, on the other hand, seen by many vendors as a great vehicle for selling more hardware, software, and, especially, services.”

Instead, you may want to move forward at your own pace. CA’s Blatt argues that such an approach can mean taking baby steps—putting together an on demand or adaptive enterprise in a piecemeal fashion—or being more aggressive. “I believe that our customers ought to view utility computing as a destination, and if you look at it this way, they have to create a journey from where they’re at today to get there,” he says.

At Your Own Pace

Blatt argues that the first and most important step customers can take as they journey toward the on-demand enterprise is to standardize on a unified platform that lets them provision and manage their compute resources.

IBM’s Lutz stresses that Big Blue is committed to helping its customers transition to an On Demand infrastructure at their own pace. For aggressive customers, he says, this may involve significant investments in provisioning and workload management software, along with new servers and the expertise of consultants. For other customers, it can involve a gradual immersion, kickstarted—for example—by an investment in IBM’s Tivoli automated management solutions.

The essential take-away, says Opsware’s Howe, is that utility computing is a protean category that means different things to different people. For some customers, he allows, the “sexy” vision of utility computing—e.g., dynamic provisioning, capacity on demand, pay-for-use, and business process fusion—may well be worth the cost. But for most customers, it’s possible to realize the most salient benefits of utility computing—increased efficiency and reduced costs as a result of automation—in more affordable ways.

“Customers get focused on the sexier aspects of utility computing, like scaling up capacity to meet demand,” he says. “These are important, but not nearly of as immediate concern to most of the customers we talk to as the automation aspects of everyday tasks that they spend the majority of their time doing.”

That’s a perspective shared by Bruce Caldwell, a principal analyst for IT outsourcing with market research powerhouse Gartner Inc. Although today’s utility computing solutions are still immature, Caldwell says, they’re already delivering value. More to the point, customers don’t have to make big-bang investments in utility computing to realize this value: incremental implementations, he concludes, can be just as productive.

Outsourcing may be another approach worth considering. Few vendors expect customers to outsource their data centers lock, stock, and smoldering S/360 mainframe, of course. “The sort of all-or-nothing approach is definitely on the way out,” says IBM’s Lutz. “There will always be customers that want to buy hardware and software and implement it themselves. There will always be customers that want to outsource broadly, but we’re seeing people want to handle different parts of their businesses in different ways, whether by outsourcing or [by] doing it internally.”

Recognizing and Overcoming Problems

Outsourcing isn't without its political problems, however, especially as platform or application groups inside an IT organization scramble to protect their turf. “Somewhere, as a prime cause for the underutilization of all of these assets, is that they’ve been Balkanized, and often the reason they’ve been divided up that way, by function, by business, by geography, is because people are protecting their turf,” Lutz points out. “And so almost always, as you start to bring them together to fit the technical vision of getting it right, you almost always run up against some of these challenges at the human and cultural levels.”

There’s another challenge, says Opsware’s Howe. In the mainframe’s heyday, organizations typically spent considerably more on hardware resources than they did on human resources. The advent of commodity servers neatly inverted this trend, however, and chances are that most IT organizations today spend more on labor than they do on hardware.

“Hardware costs have plummeted, but as you get the effect of all these servers stacking up, you need more people to manage them, unless you have some kind of automation system,” he notes. By ratcheting up efficiencies, automating many once-manual processes, and encouraging autonomic—or self-healing—management, utility computing promises to mitigate this trend.

This can lead to a different kind of turf war—in which IT professionals believe they’re fighting for their jobs, Howe concedes. So to the extent that it’s possible, organizations should view the automation and efficiency improvements made possible by utility computing as an opportunity to free up IT workers to tackle more business-critical problems. “For example, if you look at the rate of application roll-outs in most organizations, it’s just so huge. That’s what you’d like your people doing. You’d like them developing and rolling out new applications, not restarting a service on a server,” he concludes.
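
As a rough illustration of the everyday automation Howe describes, the Python sketch below watches a single service and restarts it if it stops. The service name is hypothetical, systemctl is used only as a convenient restart mechanism, and a real data-center automation product would drive this through its own agents and policies rather than a standalone script.

# Hypothetical sketch: automate the "restart a service on a server" chore.
# SERVICE is an invented name; systemctl is used purely for illustration.

import subprocess
import time

SERVICE = "example-app"        # hypothetical service name
CHECK_INTERVAL_SECONDS = 300   # re-check every five minutes


def service_is_running(name: str) -> bool:
    """Return True if the service reports an active status."""
    result = subprocess.run(["systemctl", "is-active", "--quiet", name])
    return result.returncode == 0


def restart_service(name: str) -> None:
    """Restart the service and note the action."""
    subprocess.run(["systemctl", "restart", name], check=True)
    print(f"restarted {name}")


def watchdog(checks: int = 12) -> None:
    """Poll the service and restart it automatically if it has stopped."""
    for _ in range(checks):
        if not service_is_running(SERVICE):
            restart_service(SERVICE)
        time.sleep(CHECK_INTERVAL_SECONDS)


if __name__ == "__main__":
    watchdog()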

With utility computing, it could be a win/win for IT and users alike.


Reconciling Differences

If you're a customer that wants to pick and choose from among the products of different utility-computing vendors, you'll find your task complicated by the lack of interoperable utility-computing standards.

Fortunately, there’s hope on the horizon.

Prominent open standards body OASIS recently assumed ownership of the Data Center Markup Language (DCML), an industry-driven effort (with support from 65 vendors) to develop nothing less than a lingua franca for information exchange in utility-computing environments. When it’s finalized, DCML will provide a standard way for data-center automation, utility computing, and systems management solutions to exchange information in the data center.

Under OASIS’ stewardship, DCML—like SOAP, UDDI, and several other prominent Web services standards—should eventually live up to its promise. The rub with DCML—as with all standards efforts—is that it’s going to take time. In the meantime, the DCML framework specification is already available, and vendors can incorporate it into their utility-computing management tools if they choose.
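
As a loose sketch of what consuming such a specification might look like, the Python snippet below reads a DCML-style XML fragment and builds a simple inventory from it. The element and attribute names are invented for illustration and do not follow the actual DCML schema.

# Hypothetical sketch of a management tool consuming a DCML-style document.
# The XML vocabulary below is invented and is not the real DCML schema.

import xml.etree.ElementTree as ET

SAMPLE = """
<datacenter name="example-dc">
  <server id="web-01" platform="x86" role="web" state="active"/>
  <server id="db-01" platform="unix" role="database" state="active"/>
  <server id="spare-01" platform="x86" role="unassigned" state="standby"/>
</datacenter>
"""


def load_inventory(document: str) -> list[dict]:
    """Parse the document into a list of per-server attribute records."""
    root = ET.fromstring(document)
    return [dict(server.attrib) for server in root.findall("server")]


def standby_servers(inventory: list[dict]) -> list[str]:
    """Return the servers a provisioning tool could bring online."""
    return [s["id"] for s in inventory if s.get("state") == "standby"]


if __name__ == "__main__":
    inventory = load_inventory(SAMPLE)
    print(f"{len(inventory)} servers in inventory")
    print("standby capacity:", standby_servers(inventory))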

“Standards do take a long time, and I don’t think customers should wait for the standards to evolve in order to move forward with utility computing, or they’ll be waiting a long time,” says CA’s Blatt, who’s also president of the OASIS DCML working group. “One thing [customers] can rest assured of is that if they really push the different standards organizations to interface with each other—they can really help [DCML] along.” He cites work that the Distributed Management Task Force (DMTF) is doing in the area of data-center automation.

The way things are going, there could soon be plenty of pressure from customers. By 2006, says Gartner’s Caldwell, about one in four companies will have hopped on board the utility computing bandwagon. And with multi-year, enterprise-wide implementations expected to average around $37 million each, these adopters will have made substantial commitments to the technology.
