In-Depth
The Grid/Utility Computing Connection
If someone told you about a technology that promises to help you better leverage your under-utilized compute resources by dynamically allocating workloads to idle or underused computers, you might think they were talking about utility computing. And perhaps they are. But they could just as easily be talking about grid computing, which—before utility computing exploded on the scene a couple of years ago—was at the cusp of its own hype curve.
These days, utility computing is ascendant, and IBM Corp., Hewlett-Packard Co., Sun Microsystems Inc. and other vendors have positioned grid computing as an important subset of their respective utility computing visions.
In practice, of course, the “grids” these vendors champion differ somewhat from the technology vision on which computational grids were first premised—at least in enterprise environments. Nevertheless, grid computing is often among the first steps customers take toward utility-enabling their enterprises. Used intelligently, observers say, grids can help customers make good on the promise of more effective resource utilization and application scalability, which is an important part of the utility computing vision.
“We’re definitely seeing interest from customers [in grid computing], especially in organizations that are huge consumers of CPU capacity—and could consume more if they had it,” says Louis Blatt, senior vice-president of Unicenter strategy for Computer Associates International Inc. (CA).
Tim Howes, CTO and founder of data-center automation specialist Opsware Inc., agrees. “I think [grid computing is] very, very complementary to utility computing,” he says. “But it’s important to remember that there’s this all-out vision of grid computing, and then there’s what people are actually doing today to get better utilization out of their hardware, which is deploying virtualization technologies. We’re seeing very little interest in the former and quite a bit in the latter.”
Grid History 101
Grid computing is an offshoot of distributed or parallel processing software originally developed for academic or high-performance computing (HPC) research environments. In its most general sense, it describes a topology in which pieces of an application workload are parceled out to (often highly) distributed client computers, which are linked to one another in the context of a computational grid. Each of the client machines uses its own compute resources—CPU, memory, and, sometimes, storage—to process a workload, the results of which are then sent back to a master aggregator.
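The scatter-gather flow described above can be sketched in a few lines of Python. This is an illustrative toy, not any grid middleware's actual API: a thread pool stands in for the distributed client machines, and the final sum plays the role of the master aggregator.

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    """Stand-in for one client machine: compute a partial result locally."""
    return sum(x * x for x in chunk)

def run_grid_job(data, workers=4):
    """Parcel the workload into chunks, scatter them to workers (standing in
    for remote grid clients), and gather the partial results at the master."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(process_chunk, chunks))  # chunks run independently
    return sum(partials)  # the master aggregator combines the partial results
```

The essential property is that each chunk is self-contained, so the same answer comes back no matter which machine processes which piece, or in what order.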
Depending on their size, grids can achieve supercomputer-like performance. They’re typically tapped to crunch numbers for computationally intensive applications, such as interest-calculation programs (for the financial services industry) or product design and modeling software (for the automotive or aerospace markets). Grid technology has also had its proving ground in a variety of highly successful public computing projects, such as the venerable DES Challenge—a distributed computing effort that succeeded in cracking first the 40- and then the 56-bit RSA export encryption standards—and the University of California, Berkeley’s SETI@Home distributed computing project.
When grids first burst on the scene a few years ago, they were billed as a great way for companies to tap their unutilized (or underutilized) capacity of a particularly wasteful compute resource: the hot-rodded client computers that populated their corporate desktops. In fact, a number of vendors—TurboLinux, Entropia, United Devices, and Platform Computing among them—developed software designed to help companies do just that.
As it stands today, however, many of the companies implementing grids aren’t trying to harness the power of underused desktop computers—even though, with certain exceptions, desktop machines are more powerful than ever.
Instead, enterprise grids are typically deployed in a configuration that most IT professionals should instantly recognize. “One type of grid—the most common type, actually—is essentially a high-performance compute cluster,” explains Gordon Haff, a senior analyst with consultancy Illuminata. “And in that sense, there’s a lot of grids out there. But in terms of really widely using idle resources in an enterprise, grid computing for this purpose is not that widely used, and it’s really unclear how widely used it ever will be.”
According to Haff, when IBM, HP, and other vendors talk about grid solutions for enterprise customers, they’re almost always talking about compute clusters. “They’re talking about clusters and management, based on certain well-defined policies, and that is something that is happening to some degree, but it’s still relatively immature,” he says. “But I think this vision of harvesting unused compute cycles from PCs is always going to be a peripheral type of use.”
Promise and Practicality
Today, grid computing is not an ideal, or even an applicable, technology for all applications. Workloads that involve large data sets and can tolerate high latencies are the easiest to grid-enable. Most transaction-oriented applications, on the other hand, are all but unusable when run across a grid. For this reason, the technology has been a relatively tough sell in the enterprise, which is rife with transaction-dependent applications and middleware.
Even its proponents acknowledge as much. “The [grid] solutions for transaction-oriented applications, these aren’t as mature as the kind of high-performance computing implementations. That’s very true,” says Ken King, vice-president of grid computing for IBM. Nevertheless, King says, there are plenty of common enterprise workloads that can benefit from being grid-enabled. “Look at your batch applications. This [batch processing] is something everybody has that can really benefit from grids. You look at the kinds of things you have in public sectors, with government, and financial markets with financial risk analysis kinds of applications, you look at industrial sectors. These are all sweet spots today for grid computing in commercial enterprises.”
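As a rough illustration of why batch and risk-analysis jobs grid-enable so readily, consider a toy Monte Carlo risk run sketched below. Every function name and figure here is hypothetical, not drawn from any IBM offering; the point is that the work splits into independent, latency-tolerant chunks that any idle node could process in any order.

```python
import random

def risk_chunk(n_paths, seed, loss_threshold=2.0):
    """One grid node's share of a Monte Carlo risk run: count simulated
    losses beyond a threshold across n_paths random scenarios."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n_paths) if abs(rng.gauss(0, 1)) > loss_threshold)

def batch_risk_run(total_paths=100_000, nodes=10):
    """Split a batch job into independent chunks; each chunk could be
    shipped to any idle node, and no chunk waits on another's result."""
    per_node = total_paths // nodes
    partials = [risk_chunk(per_node, seed=k) for k in range(nodes)]  # one per node
    return sum(partials) / total_paths  # aggregate tail probability
```

Contrast this with a transactional workload, where each step depends on shared state and must complete within milliseconds—exactly the coupling that makes such applications a poor fit for grids.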
In spite of the still-limited applicability of grids, IBM and other vendors continue to push them as important, even constitutive, elements of their utility computing visions. “We don’t see grid computing as a separate initiative from On Demand,” King asserts. “If you think about what On Demand is all about, it’s an approach to solving different kinds of problems, and for many of our customers, grid computing helps them do just that.”
Grid Standards
Over the last few years, many vendors have attempted to recast grid computing as a technology solution for enterprise customers. In the case of IBM and HP, both companies have done so in part by conflating grid computing with their own individual visions of utility computing, On Demand and Adaptive Enterprise, respectively. Grids continue to play an important role in Sun’s own utility-computing strategy, too, even though the Unix giant has drastically scaled back the scope of its N1 utility-computing vision.
About two years ago, IBM announced the first in a series of industry-specific grid solutions, a range of hardware, software, and service offerings designed for customers in the financial services, petroleum, aerospace, life sciences, and government markets. In one now-famous example, Big Blue worked with financial services giant Charles Schwab to implement a grid-computing solution that reduced the query response time on one of Schwab’s premier customer-service applications from four minutes down to 15 seconds. “Basically, this application was unusable the way it was, so we worked with Schwab, leveraging a grid-computing technology, and got that run time down to 15 seconds, which dramatically improved customer service,” King says.
The Schwab case was notable because it involved a dedicated compute cluster—powered by IBM xSeries servers running Red Hat Linux—instead of dozens or hundreds of distributed compute resources. At the time, grid computing was still yoked (in the popular imagination, at least) to the technology vision of exploiting under-utilized compute resources, especially desktop computers. But the idea of implementing grids in the context of the compute cluster, which provides a more manageable configuration of (usually) homogeneous SMP servers, has proven to be even more attractive—particularly for utility computing.
Indeed, to the extent that these so-called grid clusters can also dynamically allocate workloads and—using management software developed by IBM, HP, Opsware, and others—provision new compute instances, they essentially describe a form of On Demand or Adaptive—that is, utility—computing. “Sort of this idea of having this dedicated cluster [of SMP servers] and using standard protocols to distribute applications to load balance, and using these sort of rich tools to manage across these systems, that’s happening today,” says Illuminata’s Haff. “So if you’re asking [how feasible utility computing is today], these sort of grid clusters are probably the most common instance [of that].”
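The dynamic-allocation behavior described above boils down to a provisioning policy. The sketch below is a hypothetical illustration, not IBM's, HP's, or Opsware's actual software; the jobs-per-node target and node cap are assumed figures.

```python
def provision(queue_depth, active_nodes, target_per_node=10, max_nodes=32):
    """Toy utility-computing policy: given the current backlog of queued
    work, return how many cluster nodes to add (positive) or release
    (negative) relative to the nodes already active."""
    needed = max(1, -(-queue_depth // target_per_node))  # ceiling division
    desired = min(max_nodes, needed)
    return desired - active_nodes  # >0 provision, <0 de-allocate, 0 hold steady
```

A management layer would run a check like this on each monitoring interval, translating the returned delta into server boots or shutdowns.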
Over time, several industry organizations have formed to spur the adoption of grid computing solutions, including the Global Grid Forum (GGF) and the Globus Alliance. Last April, however, more than a dozen vendors—including industry heavyweights EMC, Fujitsu, HP, Intel, Oracle, and Sun—banded together to form the Enterprise Grid Alliance (EGA), one goal of which is to produce grid solutions for the enterprise.
Recently, EGA members EMC, Dell, Intel, and Oracle announced the first fruit of this effort, an initiative called Project MegaGrid that aims to develop a standard approach for building and deploying enterprise grids. According to Charles King, a principal with consultancy Pund-IT Research, Project MegaGrid is premised on the compute cluster vision of grid computing. As such, he says, it could prove uniquely compelling for some enterprise customers.
“Most [grid computing] efforts have utilized grid as the core technology for provisioning complex IT workloads across large, heterogeneous IT infrastructures. Project MegaGrid, on the other hand, has taken a far more targeted approach in developing grid-based replacements for traditional SMP solutions,” King writes. “By optimizing Project MegaGrid across their own technologies, the four partners are creating a methodology that could be deployed across existing infrastructures or purchased and provisioned as ‘off the shelf’ commercial grid solutions.”
Virtualization’s the Thing
Another case in which grid computing and utility computing are highly complementary is in the area of virtualization. The processors that power mainframe, RISC, and even commodity Intel servers are inordinately powerful for many common applications. The result is that, for all but the most computationally intensive workloads, much of that capacity sits idle.
In the mainframe world, where capacity is a pricey commodity, system operators wring more performance—and extra value—out of their Big Iron hardware by carefully husbanding capacity. This is done by virtualizing compute resources in the form of logical partitions (LPARs), each an operating system instance that runs on a fraction of the system’s overall compute capacity. Today’s mainframes support dozens of LPARs, and hypervisors such as z/VM can host thousands of virtual machine instances; thanks to technology trickle-down, some degree of virtualization has also been realized in RISC systems and, with comparatively less success, in Intel-based systems as well.
As a result, some observers see a future in which data centers are powered by compute clusters of Intel-based servers running some kind of virtualization technology. “Increasing the utilization of your hardware resources is a big part of [utility computing], and being able to virtualize capacity makes provisioning much easier. This is where [grid computing and utility computing] actually go together very well,” says Opsware’s Howes. “Again, there’s that all-out vision of grid computing, and then there’s what customers are doing today with virtualization products like VMware.” In this way, Howes says, grids of virtualized compute resources are “absolutely a great way” to realize some of the benefits of utility computing, such as on-demand allocation and de-allocation of additional capacity—new virtual instances in a grid—along with dynamic provisioning of resources.
CA’s Blatt agrees. “We’ve actually worked with one customer to simplify grid computing to a point where it’s actually doable and practical. They’ve created an internal grid of compute capacity, and by understanding the utilization rate of their low cost hardware, they’ve taken their job management systems and can shunt jobs to CPUs that have low utilization rates,” he explains.
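The job-shunting approach Blatt describes amounts to a least-loaded dispatcher. The sketch below is a hypothetical illustration, not CA's job-management software; the per-job load increment is an assumed figure.

```python
def dispatch(jobs, utilization):
    """Toy job-management logic: send each queued job to the node with the
    lowest current CPU utilization, then charge that node an assumed
    per-job load of 10% (hypothetical figure) before placing the next job."""
    load = dict(utilization)            # node name -> fraction of CPU in use
    placement = {}
    for job in jobs:
        node = min(load, key=load.get)  # the least-utilized node wins
        placement[job] = node
        load[node] += 0.10              # assumed cost of hosting one job
    return placement
```

In a real deployment the utilization figures would come from monitoring agents rather than a static table, but the shunting decision itself is this simple.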
Another CA customer has tapped grids to both leverage its underused capacity and maximize—via virtualization technology—the utilization of that capacity. This has enabled it, in concert with CA’s On Demand management tools, to realize many of the benefits of utility computing. “They’ve been using a very nice product for managing virtual machines, so in combination with dynamic resource management, we can now deploy virtual machines to servers that have low utilization rates as the customer needs more services,” Blatt concludes.