Gateway Offers Grid Computing Power

Gateway’s new Processing on Demand service highlights the increasing visibility of grid computing technology.

Gateway Computer announced last week a plan to exploit the idle processing power of the thousands of PCs that populate its Gateway Country Store showrooms. Gateway’s move highlighted the increasing visibility of so-called grid computing, a technology that purports to exploit the resources of dozens, hundreds, or even thousands of PCs loosely clustered over a network.

Gateway plans to unveil a new service, Gateway Processing on Demand, that will consolidate 8,000 of its PCs in a high-performance computing grid. The combined power of Gateway’s grid—14 teraFLOPS, or 14 trillion FLoating-point OPerations per Second—is said to approximate that of the ASCI White supercomputer that IBM delivered in 2001, which can perform approximately 12.3 teraFLOPS. Gateway’s Processing on Demand service is enabled by Alliance MetaProcessor, distributed computing software provided by grid computing specialist United Devices.
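A rough back-of-envelope check, sketched in Python, shows what those figures imply on a per-machine basis. The numbers are simply the ones cited above; real per-PC throughput will vary with processor model and workload.

    # Back-of-envelope check of the figures cited above (illustrative only).
    grid_total_tflops = 14.0        # Gateway's claimed aggregate throughput
    num_pcs = 8_000                 # PCs in the Country Store showrooms
    asci_white_tflops = 12.3        # IBM's 2001 supercomputer, for comparison

    per_pc_gflops = grid_total_tflops * 1_000 / num_pcs
    print(f"Implied average throughput: {per_pc_gflops:.2f} GFLOPS per PC")
    print(f"Grid vs. ASCI White: {grid_total_tflops / asci_white_tflops:.2f}x")

That works out to roughly 1.75 GFLOPS per showroom PC, with the aggregate edging out ASCI White by a modest margin.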

“We have a lot of idle processing power that we’re now able to harness,” confirms Premal Kazi, senior manager for Gateway’s Processing on Demand service, who stresses that Gateway is able to offer its grid computing service without incurring the staggering costs associated with many grid computing start-ups. “The beauty of this is because the capital assets we are using are largely accounted for in our PC business, we are able to deliver this at a much lower cost than start-up companies that have to aggregate the hardware.”

The grid computing technology that enables Gateway’s Processing on Demand service had its proving ground in a variety of highly successful public computing projects, such as the venerable DES Challenge—a distributed computing effort that succeeded in cracking first the 40- and then the 56-bit RSA export encryption standards—and the University of California, Berkeley’s SETI@Home distributed computing project.

As technologies go, then, grid computing is for all intents and purposes a mature one. Most of the big grid computing players—Avaki Corp. (formerly Applied Metacomputing), Entropia Inc., Parabon Computation Inc., Porivo Technologies, Sun Microsystems Inc., TurboLinux and United Devices, among others—first began marketing their solutions two or more years ago. Since that time they’ve been joined by established vendors such as IBM Corp. and Hewlett-Packard Co., among others, which have also introduced grid computing initiatives.

During a late October event in which he announced IBM’s e-business on demand computing initiative, for example, CEO Sam Palmisano highlighted Big Blue’s work in the grid computing space. Palmisano touted IBM’s work with the University of Pennsylvania in the construction of the National Scalable Cluster Lab, a grid computing effort designed to advance the methods with which doctors screen mammograms and diagnose breast cancer. Over the last year, Big Blue has touted grid computing projects such as the United Kingdom National Grid, the Netherlands Computing Grid, TeraGrid, and a nationwide computing grid in conjunction with the United States Department of Energy.

Most of IBM’s work with grids thus far has focused on clusters of its Power processor-based pSeries servers. If Gateway’s gambit is any indication, however, some of the most exciting developments in the grid computing space are happening at the low end, with enormous grids of clustered PCs.

The typical desktop PC has changed a lot in 15 years: Where once the term “PC” described a machine powered by an 8-MHz 8088 or 80286 microprocessor and outfitted with scanty memory resources, today’s “PC” is more properly an entry-level workstation. That’s because it often sports a 1-, 2- or even 3-GHz processor, hundreds of gigabytes of hard disk storage and, occasionally, a gigabyte or more of memory under its hood.

As a result, suggests Bill Philbin, senior vice-president of product development and chief operating officer with grid computing specialist Entropia, today’s PCs have the potential to be computational workhorses. Unfortunately, he points out, most of the power of the average desktop PC is wasted on productivity applications that typically require only a small fraction of its resources. “We’ve done studies where five to nine percent of the actual CPU is being used at any one point, and that, of course, is when the user is actually around. It’s zero percent for the other 16 hours during the day.”
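Taken at face value, Philbin’s figures suggest just how much capacity goes unused. A minimal sketch of the arithmetic, assuming an eight-hour workday and the midpoint of his five-to-nine-percent range:

    # Rough illustration of Philbin's point. The 8-hour workday and the
    # 7 percent midpoint are assumptions made for this example.
    hours_in_day = 24
    working_hours = 8                 # "the other 16 hours" are idle
    cpu_use_while_present = 0.07      # midpoint of the 5-9 percent range

    daily_utilization = cpu_use_while_present * working_hours / hours_in_day
    print(f"Effective daily CPU utilization: {daily_utilization:.1%}")
    # Roughly 2 percent of the day's capacity -- the rest is what a grid could reclaim.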

Not surprisingly, speculates Peter Jeffcock, Group Marketing Manager for Grid Engine software with Sun, many compute-intensive organizations have turned to grids as a means to augment the overall capacity of their environments. “In a grid, you can easily get 90-percent-plus utilization. That’s potentially a large increase in utilization. If you’re an organization that needs the extra capacity, and if you have the right applications for it, it’s a very sensible proposition.”

“If you have the right applications for it”—there’s the rub. For while the performance of grids, PC and otherwise, can in some cases rival that of some of today’s most formidable supercomputers, grid computing isn’t an appropriate solution for every compute-intensive application.

Because of the settings in which PC grids are often deployed (i.e., in office environments in which the idle processing power of PCs is exploited when it becomes available), most purveyors of PC grid computing solutions are loath to tout the use of their products for communications-intensive applications. Acknowledges Sri Mandyam, director of product marketing with United Devices: “We’re focused on basically taking problems that traditionally consume a lot of compute time on single computers, whether it’s a single computer or a group of PCs, and focus[ing] on breaking those up into smaller chunks and distributing them in a grid.”
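In other words, the model Mandyam describes is a scatter-gather pattern: carve a large job into independent chunks, farm the chunks out to whatever machines are available, and combine the partial results. The sketch below illustrates the pattern with a local process pool standing in for grid nodes; the workload and chunk size are invented for illustration.

    from multiprocessing import Pool

    def process_chunk(chunk):
        # Stand-in for a compute-heavy kernel (e.g., scoring one slice of data).
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        work = list(range(1_000_000))          # the "large problem"
        chunk_size = 100_000                   # invented chunk size
        chunks = [work[i:i + chunk_size] for i in range(0, len(work), chunk_size)]

        # A grid scheduler would ship each chunk to an idle PC; here a local
        # process pool plays that role.
        with Pool() as pool:
            partial_results = pool.map(process_chunk, chunks)

        print("Combined result:", sum(partial_results))

Because each chunk is independent, the nodes never need to talk to one another—which is precisely why this style of workload suits loosely coupled office PCs, and why communications-heavy applications do not.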

As a result, says Sun’s Jeffcock, most of the organizations that exploit grid computing typically hail from a variety of vertical industries. “There’s life sciences, bio-sciences, electronic design, general manufacturing work, mechanical design, along with some digital content creation and digital media manipulation.”

Sun first entered the grid computing fray when it purchased Gridware in 2000. Since that time it has marketed both a grid computing solution (Sun Grid Engine) as well as a pay-for-use version of the same product (Grid Engine Enterprise Edition), which includes support for resource allocation policies. Sun currently supports a grid of more than 7,000 systems in-house, Jeffcock says, and has sold grids to Ford Motor Company and Motorola, among other Fortune 500 companies.

Jeffcock says that Sun typically sells about 70 new grids a week, with a typical complement of 42 processors. Surprisingly, he acknowledges, Sun is moving both Solaris- and Linux-based grids on mixed SPARC and Intel hardware. “About 50 percent are Solaris-only, about 25 percent are Linux-only, and about 25 percent are mixed Solaris and Linux.”

Unlike many pure PC grid players, Sun hasn’t strongly pushed the idea of so-called “Global Grids,” or grids that connect machines over the public Internet. According to Jeffcock, several issues—security and bandwidth foremost among them—need to be resolved before Global Grids can become viable computing models. “Security has two issues. One is the transmission, and that is essentially understood. It’s just a matter of addressing it. (There’s also the issue of securing the data once it gets wherever it’s going, which is the harder thing.) The other thing is bandwidth. Today, some companies will send CDs [that contain their data] overnight via couriers because it’s a more reliable method of doing it than sending it over the network.”

Many PC grid players have backed away from the positioning they adopted as little as two years ago, when they pushed their products as enabling solutions for global grids. At the time, most grid computing vendors claimed that enterprise IT organizations shouldn’t be concerned because their solutions provided integrated security—including both local (client-side) and remote (over-the-wire) encryption—as well as quality-of-service and availability features. In 2002, however, purveyors of PC grid computing solutions have changed their tune. “In early 2001, we started talking to enterprise customers about taking the technology and putting it behind a firewall so that they could take the grids and manage the grids themselves,” confirms United Devices’ Mandyam, who notes that in addition to Gateway, his company has signed deals with healthcare giant Kaiser Permanente, as well as pharmaceutical firm GlaxoSmithKline.

Entropia’s Philbin explains that his company has similarly rearchitected its flagship DCGrid product. “Talking with customers, they’ve essentially told us that while they’re interested in grid computing, they’re not excited about the process of actually sharing their proprietary data over the Internet.” The result, Philbin explains, is the company’s DCGrid 5.1, “an enterprise product that sits within the firewall and allows…[customers] to take advantage of the processing power that’s already in their environments.”

The task of marketing grid computing as an exploitable service—similar to the Processing on Demand capacity that Gateway offers—is beset by other problems as well. First and foremost, explains Sun’s Jeffcock, no one has yet agreed upon an acceptable way of charging for access to a grid. “Our customers have told us what doesn’t work, but nobody’s been successful in figuring out how to price this. There would just be experience that says that a whole bunch of people haven’t been successful with it. Gateway is kind of unique because they’ve got an existing infrastructure that’s there and costing nothing.”

For the record, Gateway charges 15 cents per processor hour on the grid, says Kazi, who explains that when it came time to determine how to price Processing on Demand, his company copied an arrangement that many supercomputer centers have traditionally employed. “It’s aggregated for a customer on a monthly basis, and then at the end of the month we send the customer an invoice. There’s actually no history in the area of PC grids for solving large computational problems, but there has been some history of charging on the processor-per-hour basis in the supercomputer centers.”
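A minimal sketch of the metering arrangement Kazi describes, using the 15-cents-per-processor-hour rate he cites; the job sizes below are invented for illustration.

    # Illustration of the pricing model described above: 15 cents per
    # processor-hour, aggregated into a monthly invoice. Usage numbers are invented.
    RATE_PER_PROCESSOR_HOUR = 0.15

    # (processors used, hours run) for each job a customer submitted this month
    jobs = [(500, 12), (2_000, 3), (250, 48)]

    processor_hours = sum(procs * hours for procs, hours in jobs)
    invoice_total = processor_hours * RATE_PER_PROCESSOR_HOUR
    print(f"{processor_hours:,} processor-hours -> ${invoice_total:,.2f} invoice")

In this made-up example, 24,000 processor-hours over a month would produce a $3,600 invoice—the same metered model supercomputer centers have long used, applied to a PC grid.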

The future of grid computing looks bright. First, both Sun and IBM have made it a key component of their respective N1 and e-business on demand initiatives. Speaking with analysts and reporters during a recent teleconference, for example, Clark Masters, Sun’s executive vice president for enterprise systems products, stressed the importance of grids in terms of delivering on one of N1’s core promises: maximizing resource efficiency across distributed systems. In addition, Masters indicated, Sun planned to exploit the power of grids in non-traditional applications such as data warehousing. Said Masters: “A lot of the commercial requirements and the high-end, high-performance technical computing [HPTC] requirements have merged. If data warehousing isn’t an HPTC application, I don’t know what one is.”

For its part, IBM has partnered with Entropia to exploit the power of PC grids at its customers’ sites. Thanks to an IBM initiative called OGSA, says Entropia’s Philbin, Big Blue’s customers may be able to automatically expose PC grids to applications that require additional computing capacity. “There is an IBM initiative around Web services, which is called OGSA, so that eventually a customer who wants to run any kind of compute-intensive application will be able to say: ‘Here’s the application, here’s the data, here are my requirements, I don’t care where it runs, just get it back to me in the timeframe I specify.’”
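Philbin’s quote sketches a submit-and-forget contract: the customer states the application, the data, and the deadline, and leaves placement entirely to the grid. The snippet below is a hypothetical illustration of such a request, not OGSA’s actual interface; all field names and values are invented.

    from dataclasses import dataclass

    # Hypothetical job-submission record illustrating the request Philbin
    # describes. This is NOT OGSA's real interface; fields are invented.
    @dataclass
    class GridJobRequest:
        application: str       # "here's the application"
        data_location: str     # "here's the data"
        deadline_hours: int    # "get it back to me in the timeframe I specify"
        min_processors: int    # a resource requirement; placement is the grid's problem

    job = GridJobRequest(
        application="risk-simulation",
        data_location="ftp://example.com/datasets/q3",
        deadline_hours=24,
        min_processors=500,
    )
    print(job)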

United Devices' Mandyam believes the power of PC grids will only grow as companies refresh the desktop machines in their office environments. “From a teraFLOPS perspective, we’re going to have more power, [because] we have 3 GHz now and we’re going to go to 4 GHz in a year or two.”
