News

NSF Awards $53 Million to Build Supercomputing Grid

The National Science Foundation (NSF) announced that it will distribute $53 million to build and operate the first multi-site supercomputing system, called the Distributed Terascale Facility (DTF). The DTF will perform 11.6 trillion calculations per second and store more than 450 trillion bytes of data, with a comprehensive infrastructure called the "TeraGrid" linking computers, visualization systems and data at four sites through a 40-gigabit-per-second optical network.

The National Science Board (NSB) approved a three-year NSF award, pending negotiations between NSF and a consortium led by the National Center for Supercomputing Applications (NCSA) and the San Diego Supercomputer Center (SDSC), the two leading-edge sites of NSF's Partnerships for Advanced Computational Infrastructure (PACI). NCSA and SDSC will be joined in the DTF project by Argonne National Laboratory (ANL) and the California Institute of Technology (Caltech).

The DTF will begin operation in mid-2002 and reach its peak performance of 11.6 teraflops by April 2003. The facility will support research such as storm, climate and earthquake prediction; more efficient combustion engines; chemical and molecular factors in biology; and the physical, chemical and electrical properties of materials. The DTF will join a previous terascale facility commissioned by NSF in 2000. That system, located at the Pittsburgh Supercomputing Center, came online ahead of schedule in early 2001 and is expected to reach its peak performance of 6 teraflops in October.

The partnership will work primarily with IBM, Intel Corporation and Qwest Communications to build the facility, along with Myricom, Oracle Corporation and Sun Microsystems.

Each of the four DTF sites will play a unique role in the project (a quick tally of the combined figures follows the list):

  • NCSA will lead the project's computational aspects with an IBM Linux cluster powered by Intel's second-generation 64-bit Itanium family processor, code-named "McKinley." The cluster will have a peak performance of 6.1 teraflops and, working in tandem with existing hardware, will reach 8 teraflops, with 240 terabytes of secondary storage.
  • SDSC will lead the project's data- and knowledge-management effort with a 4-teraflop IBM Linux cluster based on Intel's McKinley processor, 225 terabytes of storage and a next-generation Sun high-end server for managing access to Grid-distributed data.
  • Argonne will have a 1-teraflop IBM Linux cluster to host advanced software for high-resolution rendering, remote visualization and advanced Grid software.
  • Caltech will focus on scientific data, with a 0.4-teraflop McKinley cluster and a 32-node IA-32 cluster that will manage 86 terabytes of online storage.
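
Summing the per-site figures shows how the headline numbers break down. The following is a minimal sketch in Python, purely illustrative; the data layout and variable names are assumptions, while the numbers themselves come from the list above:

    # Minimal sketch tallying the per-site figures quoted in the list above.
    # The numbers come from the article; the dictionary layout and derived
    # commentary are illustrative assumptions, not official project data.

    sites = {
        #           new-cluster peak (TF), peak incl. existing hardware (TF), storage (TB)
        "NCSA":    {"new_tflops": 6.1, "combined_tflops": 8.0, "storage_tb": 240},
        "SDSC":    {"new_tflops": 4.0, "combined_tflops": 4.0, "storage_tb": 225},
        "Argonne": {"new_tflops": 1.0, "combined_tflops": 1.0, "storage_tb": 0},  # storage not stated
        "Caltech": {"new_tflops": 0.4, "combined_tflops": 0.4, "storage_tb": 86},
    }

    new_total = sum(s["new_tflops"] for s in sites.values())
    combined_total = sum(s["combined_tflops"] for s in sites.values())
    storage_total = sum(s["storage_tb"] for s in sites.values())

    # New clusters alone: 6.1 + 4 + 1 + 0.4 = 11.5 TF, which matches the
    # headline 11.6 TF to within rounding of the per-site figures.
    print(f"New clusters:          {new_total:.1f} teraflops")
    # Including NCSA's existing hardware: 8 + 4 + 1 + 0.4 = 13.4 TF.
    print(f"With existing systems: {combined_total:.1f} teraflops")
    # 240 + 225 + 86 = 551 TB, consistent with "more than 450 trillion bytes".
    print(f"Listed storage:        {storage_total} terabytes")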

The DTF project director will be Rick Stevens, a computer science faculty member at the University of Chicago and director of the mathematics and computer science division at ANL, a U.S. Department of Energy laboratory.

For more information, visit www.nsf.gov.
