World's Highest Performance Interconnect Now Shipping in Volume

PathScale InfiniPath achieves industry's lowest latency

San Francisco, CA - August 10, 2005 - PathScale, developer of innovative software and hardware solutions that accelerate the performance and efficiency of Linux clusters, is now shipping its InfiniPath(tm) HTX(tm) InfiniBand(tm) Adapter, the industry's lowest-latency Linux cluster interconnect for message-passing (MPI) and TCP/IP applications. The University of California, Davis is an early InfiniPath customer and the site of the first dual-core AMD Opteron cluster deployed with PathScale InfiniPath.

The ultra-low latency and unprecedented messaging rate of the PathScale InfiniPath HTX Adapter improves MPI application performance and Linux cluster utilization.

The highly pipelined, cut-through design of InfiniPath is optimized for applications sensitive to communication latency, the most difficult problem to overcome when migrating from large SMP systems to clusters. InfiniPath delivers superior interconnect performance at commodity price levels by implementing a high-performance software stack and connecting directly to the AMD Opteron(tm) processor via a standard HyperTransport HTX slot. When combined with low-latency InfiniBand switching from Cisco (TopSpin), SilverStorm (InfiniCon) or Voltaire, InfiniPath enables applications to scale reliably to hundreds or thousands of nodes.

PathScale has published new performance results, including the Pallas Benchmark Suite and the HPC Challenge Benchmarks. These latest results validate InfiniPath as the highest-performance cluster interconnect for Linux-based HPC applications. These results can be viewed at

Among the first customers to adopt the PathScale InfiniPath interconnect is the Center for Computational Science and Engineering (CSE) at the University of California, Davis. CSE is implementing a 144-CPU AMD Opteron processor-based Linux cluster that leverages InfiniPath to run computational models and simulations related to physics, mathematics, engineering, biomedical diagnostics, and other processor-intensive HPC applications. This deployment consists of 36 server nodes from TeamHPC, a division of M&A Technology. Each server is equipped with two dual-core AMD Opteron processors and an InfiniPath HTX InfiniBand Adapter. They are interconnected with a Cisco TopSpin 270 InfiniBand switch. "We support scientists and academic researchers working to analyze highly complex physical and biological processes," said Bill Broadley, an Information Architect at UC Davis. "We require our compute resources to facilitate the best possible performance for our many communications-intensive applications. The PathScale InfiniPath Adapter is performing exceptionally thus far."

PathScale InfiniPath outperforms competing interconnect solutions by achieving the lowest latency across a broad spectrum of tests that indicate how real applications will perform in HPC environments. InfiniPath has achieved an MPI latency of 1.32 microseconds (as measured by the standard MPI "ping-pong" benchmark), an n1/2 message size of 385 bytes (the message size at which half of peak bandwidth is achieved) and a TCP/IP latency of 6.7 microseconds. On the Random Ring Latency test from the HPC Challenge Benchmarks, run on 32-processor systems, PathScale InfiniPath produced latency results 3X to 10X better than alternative high-speed interconnects.
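The ping-pong methodology cited above can be sketched in a few lines. The toy version below bounces a small message between two local processes over an OS pipe, so the numbers it reports reflect operating-system overhead rather than any interconnect hardware; only the measurement structure (timed round trips, latency reported as half the round-trip time) mirrors the benchmark. All names here are illustrative, not part of any benchmark suite.

```python
# Minimal sketch of the "ping-pong" latency measurement methodology.
# Uses a local pipe in place of a cluster interconnect, so results
# reflect OS overhead only; the timing structure is the point.
import time
from multiprocessing import Process, Pipe

def echo(conn, iterations):
    """Peer process: bounce every message straight back to the sender."""
    for _ in range(iterations):
        conn.send_bytes(conn.recv_bytes())

def ping_pong_latency(msg_size=8, iterations=1000):
    """Return estimated one-way latency in microseconds."""
    parent, child = Pipe()
    peer = Process(target=echo, args=(child, iterations))
    peer.start()
    payload = b"x" * msg_size
    start = time.perf_counter()
    for _ in range(iterations):
        parent.send_bytes(payload)   # ping
        parent.recv_bytes()          # pong
    elapsed = time.perf_counter() - start
    peer.join()
    # Latency is conventionally reported as half the mean round-trip time.
    return (elapsed / iterations) / 2 * 1e6

if __name__ == "__main__":
    print(f"one-way latency: {ping_pong_latency():.2f} us")
```

Real MPI benchmarks follow the same pattern with MPI_Send/MPI_Recv pairs between two ranks, sweeping the message size to derive metrics such as the n1/2 message size.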

"PathScale's mission is to enable users to reliably and efficiently solve their most challenging computational problems. The performance results achieved on real world HPC applications and with key application benchmarks run on installations such as the new Linux cluster at UC Davis prove that PathScale InfiniPath is, without question, the world's highest performance cluster interconnect," said Scott Metcalf, CEO of PathScale.

"PathScale's innovative approach to high-speed InfiniBand interconnect reduces the workload required to process messages, thereby increasing the effective message rate to unprecedented levels. PathScale's InfiniPath hardware and software constitute the industry's first commercial grade InfiniBand solution, and establishes new standards for InfiniBand performance."

The new cluster at UC Davis CSE was designed and integrated by TeamHPC, a division of M&A Technology, Inc. based in Eudora, Kansas. "TeamHPC and PathScale have worked closely to test and implement a highly efficient, cost-effective, high-performance research platform at UC Davis that enables scientists, academics, and graduate students to overcome the performance bottlenecks of computing systems of the past," said Bret Stouder, Vice President of TeamHPC. "The combined performance of AMD Opteron processors and the low-latency PathScale InfiniPath interconnect along with complete integration solutions from TeamHPC opens a new chapter in high performance computing, where an economically priced system does not mean compromised performance."

About UC Davis CSE

The Center for Computational Science and Engineering (CSE) at the University of California, Davis is concerned with the development of computational models and simulations as an alternative means of understanding complex physical and biological processes, and to model and visualize entirely abstract processes encountered in physics, mathematics, engineering and computer science. Read more at

About TeamHPC

TeamHPC, a division of M&A Technology, specializes in High Performance Computing, and assembles and integrates all of its products in an ISO-9000:2000 certified manufacturing plant. TeamHPC gives researchers access to its clusters for benchmark and application testing before products are shipped. TeamHPC also provides a 24-hour data center environment that allows researchers to host their computational machines at M&A Technology's headquarters in Dallas, TX. More information about TeamHPC is available at

About PathScale

Based in Mountain View, California, PathScale develops innovative software and hardware technologies that increase the performance and efficiency of Linux clusters, the next significant wave in high-end computing. For more details, visit, send e-mail to, or telephone 1-650-934-8100.
