Enterprise Grid Computing: Industry Verticals (Part 6 of 7)
We explore industry verticals that are pioneers in grid adoption or that illustrate significant, untapped benefits from adopting it.
In last week's article we noted that not all applications have performance requirements that would merit running them in a grid environment. This week we single out a number of industry verticals, either because they are pioneering examples of grid adoption or because they illustrate significant untapped benefits from it.
Financial Services

The financial services industry has long been a pioneer in applying high-performance computing technology, and today it is a leader in the application of grid technology. Ten years ago, an application deployed on a Paragon® supercomputer produced real-time quotes on many mortgage-backed securities. Computations that took several hours on a mainframe ran in seconds on the parallel supercomputer, allowing customer representatives to deliver quotes immediately.
Today, grids show promise in securities trading for tasks such as risk and derivative calculations, trading decision support, “what if” analyses (to assist in building optimization strategies), and data mining. Grids can be equally useful in banking, asset management, and insurance, speeding up tasks such as risk analysis, fraud detection, and actuarial analysis.
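Risk and derivative calculations suit grids because they are embarrassingly parallel: each simulated scenario is independent. The sketch below prices a European call option by Monte Carlo simulation, with worker processes standing in for grid nodes; the contract parameters, node count, and function names are illustrative assumptions, not any particular grid vendor's API.

```python
# Illustrative sketch: Monte Carlo option pricing split across workers,
# each worker standing in for one grid node. Not a real grid API.
import math
import random
from multiprocessing import Pool

def price_paths(args):
    """Sum discounted-payoff contributions for one node's batch of paths."""
    seed, n_paths, s0, strike, rate, vol, t = args
    rng = random.Random(seed)          # per-node seed for reproducibility
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        s_t = s0 * math.exp((rate - 0.5 * vol**2) * t + vol * math.sqrt(t) * z)
        total += max(s_t - strike, 0.0)
    return total

def grid_price(n_nodes=4, paths_per_node=50_000):
    """Fan the batches out to 'nodes', then combine partial results."""
    jobs = [(seed, paths_per_node, 100.0, 105.0, 0.05, 0.2, 1.0)
            for seed in range(n_nodes)]
    with Pool(n_nodes) as pool:
        payoffs = pool.map(price_paths, jobs)
    discount = math.exp(-0.05 * 1.0)   # matches rate and maturity above
    return discount * sum(payoffs) / (n_nodes * paths_per_node)

if __name__ == "__main__":
    print(f"Estimated option value: {grid_price():.2f}")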
Benefits of grid computing include fault tolerance through virtualization and geographical distribution through multiple service providers. Grids also allow adjustments to meet a service-level agreement. For instance, when a computation must finish within a predetermined time, the application can be designed to take advantage of parallelism. Once this capability is architected into the application, it is a matter of scheduling enough processors to ensure that the run time does not exceed a predetermined interval. Furthermore, there is no need to wait for the next procurement period; the extra processors can be summoned from a service provider just for the duration of a run.
The interoperability among grid components is also applicable to legacy integration. Where it makes sense, pre-existing applications (including those running in mainframes) can be integrated into the new grid infrastructure, perhaps through the use of a Web services API. The grid middleware can track resource usage, allowing highly deterministic cost accounting.
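A legacy routine can be fronted with a thin service facade that also meters usage for deterministic chargeback. The sketch below assumes a hypothetical `legacy_quote` routine and a flat per-second rate; a production system would expose the facade through a Web services API and draw rates from the grid middleware.

```python
# Sketch: a facade over a legacy routine, metering usage per client.
# legacy_quote and the billing rate are hypothetical stand-ins.
import time
from collections import defaultdict

def legacy_quote(principal):
    """Stand-in for a pre-existing (e.g. mainframe) calculation."""
    return round(principal * 0.0525, 2)

class GridServiceFacade:
    """Exposes a legacy routine and tracks per-client resource usage."""

    def __init__(self, rate_per_second=0.10):
        self.rate = rate_per_second
        self.usage = defaultdict(float)  # client id -> accumulated seconds

    def call(self, client_id, principal):
        start = time.perf_counter()
        result = legacy_quote(principal)
        self.usage[client_id] += time.perf_counter() - start
        return result

    def invoice(self, client_id):
        """Deterministic cost accounting from metered usage."""
        return self.usage[client_id] * self.rate
```

Because every call flows through the facade, the middleware can account for each client's consumption without modifying the legacy code itself.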
Healthcare

Some of the thorniest challenges in the healthcare industry concern supply-chain and enterprise resource-planning issues, and private-sector solution providers have thrived by serving this segment. Some of these applications could be implemented as grid sensor networks, in which patients in a hospital are issued active RFID tags to keep track of vitals, treatments, and prescription schedules, reducing errors that can jeopardize the quality of patient care. These tags would also make it easier to implement regulatory mandates and to manage insurance claims, while minimizing the opportunities for fraud.
Tagging the most expensive drugs simplifies inventory control, batch management, and expiration tracking. It becomes easier to track a batch from production to consumption and to manage recalls and safety advisories. RFID tagging, combined with a grid infrastructure, can reduce counterfeiting, tampering, and shrinkage. In a grid system, the data processing is distributed: data bound for the manufacturer is aggregated and processed locally, protecting patient privacy.
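The local-aggregation idea can be sketched as follows: the hospital node collapses per-patient tag readings into de-identified per-batch counts before anything leaves the site. The record fields and the aggregation policy are illustrative assumptions, not a real hospital system's schema.

```python
# Sketch: local aggregation at the hospital node. Patient identifiers
# are dropped before data is forwarded to the manufacturer.
from collections import Counter

def aggregate_for_manufacturer(readings):
    """Collapse per-patient readings into per-drug-batch counts,
    stripping patient identifiers in the process."""
    counts = Counter((r["drug"], r["batch"]) for r in readings)
    return [{"drug": d, "batch": b, "doses_dispensed": n}
            for (d, b), n in sorted(counts.items())]

readings = [
    {"patient_id": "p-101", "drug": "drugA", "batch": "B7"},
    {"patient_id": "p-102", "drug": "drugA", "batch": "B7"},
    {"patient_id": "p-101", "drug": "drugB", "batch": "C2"},
]
summary = aggregate_for_manufacturer(readings)
# No patient_id field survives aggregation.
```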
Where health systems are consolidated (whether by mergers and acquisitions or by process reengineering), a grid-inspired distributed database for storing patient records may make more sense than a traditional consolidated, massive database, as is already happening under the auspices of the United States Department of Health and Human Services. The Office of the National Coordinator for Health IT (ONCHIT) was created in 2004 by executive order to facilitate the exchange of health information on a national level. This information exchange will be supported by Regional Health Information Organizations (RHIOs), which will aggregate data stored in hospitals and physicians’ offices by providing an infrastructure for standards-based discovery and retrieval. When implemented, this project would be the largest data grid in the nation.
Government and Academic Research
For government labs and universities, grids make it easier for large communities to access clusters hosted in national laboratories or in regional computing centers. Standardized front ends and access APIs facilitate resource marshalling as needed, including combining the resources of several clusters.
The agility possible from this capability can support time-constrained calculations that can have immediate public benefits. For instance, the results of a predictive weather-modeling simulation can help the planning of emergency preparedness during a severe storm. As another example, running a real-time electric power system contingency analysis could help system operators take defensive measures to minimize transients that can bring the system down, preventing a blackout.
Film Production

The increasing use of computer-generated effects in feature films requires massive computational resources. Tasks include physics modeling and particle simulations, ray tracing, animation and character tools, and compositing (combining scenes shot against a blue-screen background with actual backgrounds).
Until very recently, the largest rendering jobs were done on server farms based on RISC processors. The compelling cost advantage of nodes based on commodity processors has triggered a migration to these platforms, but a new server-farm deployment can still cost several million dollars.
If widespread deployment of the grid-computing model successfully decouples data and programs from execution vehicles, providing the ability to command thousands of nodes on a per-job basis, such massive investment would become unnecessary in many cases. It will be possible to securely ship a job representing years of processor time and run it on massively parallel systems in a dramatically shorter period. The job could be done by a grid services provider that does not even own the grid, but acts instead as an aggregator for lower-level grid services in a rich ecosystem.
Small studios with big needs could pay only for the time they need, thereby converting an otherwise untenably large capital expenditure into a manageable operating cost. This ability could enable independent studios to undertake projects that would otherwise be impossible. Grid infrastructure, as opposed to individual investment, makes sharing a cluster across multiple organizations possible.
Electronic Gaming Industry
Game development bears some resemblance to film production in that it often involves very large numbers of pre-rendered images. Grids can be equally useful in game operations, especially in support of massively multiplayer games that may involve tens of thousands of simultaneous players. The traditional architecture of a centralized server infrastructure does not scale with the demands of the game or with the number of players who sign on, so game response times can deteriorate during high-usage periods.
Under a traditional architecture, relief does not come until additional servers are purchased. Under a grid infrastructure, the gaming application can be designed to dynamically allocate additional servers, tracking the usage demand and ensuring that performance does not degrade.
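The dynamic-allocation logic amounts to tracking player load and computing how many servers to acquire from, or release back to, the grid. A minimal sketch, assuming a fixed per-server player capacity (the threshold values are illustrative):

```python
# Sketch of demand-driven server allocation for a multiplayer game.
# The per-server capacity of 500 players is an illustrative assumption.

def servers_needed(players, players_per_server=500):
    """Servers required so that no server exceeds its capacity."""
    return max(1, -(-players // players_per_server))  # ceiling division

def rescale(current_servers, players, players_per_server=500):
    """Adjustment to request (positive = acquire, negative = release)."""
    return servers_needed(players, players_per_server) - current_servers

# Evening peak: 12,300 players need 25 servers; with 20 allocated,
# the middleware is asked for 5 more.
delta = rescale(20, 12_300)
```

The same calculation run during off-peak hours yields a negative adjustment, releasing servers and their cost back to the provider.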
The game data can be distributed throughout the grid to optimize behavior by locales. Likewise, the game can be designed to optimize the balance between local computations done at the customer’s client and computations performed at a service provider’s server. A customer with a powerful PC might have more computations done locally, yielding better game responsiveness. With a customer on the go using a PDA, the application may be designed to rely more on the servers.
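The local-versus-server balance can be expressed as a simple policy keyed on device class. The device profiles and the work split below are illustrative assumptions, not a real game engine's tuning.

```python
# Sketch: partitioning per-frame work between the client and the
# provider's servers based on device capability. Values are assumptions.
DEVICE_PROFILES = {
    # device class: fraction of frame computation done locally
    "gaming_pc": 0.9,   # powerful client: better responsiveness
    "laptop":    0.6,
    "pda":       0.1,   # thin client: lean on the provider's servers
}

def partition_work(device, frame_ops):
    """Split per-frame operations between client and grid servers."""
    local_share = DEVICE_PROFILES.get(device, 0.5)  # default: even split
    local = int(frame_ops * local_share)
    return {"local_ops": local, "server_ops": frame_ops - local}

split = partition_work("pda", 10_000)
```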
The application designers will likely use the authentication, authorization, and billing services built into the grid middleware, with a corresponding reduction in development cost.
Enrique Castro-Leon is an enterprise architect and technology strategist at Intel Solution Services, where he is responsible for incorporating emerging technologies into deployable enterprise IT business solutions.