Get Off the Bus

I can remember, long ago, all roads being general-purpose, two-lane thoroughfares. I was growing up when the Interstate system emerged to accommodate the ever-increasing volume of motor traffic between major destinations. While I don’t remember much about that time, I specifically recall my parents calling the change “revolutionary” -- a word that did not often cross the lips of elders in the 1950s and 1960s.

For those concerned with conventional desktop storage, another “revolution” is on the way: InfiniBand.

The elderly PCI bus has served as storage’s crowded two-lane highway since 1991. Disks in standard desktops and workstations have competed with other peripherals for a slice of the action on this bus for nearly a decade. While PCI’s maximum potential of 533 MBps of shared bandwidth seems like plenty, today’s applications are stretching that limit to the breaking point.
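
As a rough illustration (my own back-of-the-envelope sketch, not a benchmark), consider how a shared bus divides its peak bandwidth among contending devices; the figures ignore arbitration overhead and assume devices contend equally:

    # Hypothetical sketch: a shared bus splits one fixed pool of bandwidth.
    # 533 MBps is PCI's best case (64-bit slots at 66 MHz); the common
    # 32-bit/33 MHz desktop implementation tops out at 133 MBps.
    PCI_PEAK_MB_PER_S = 533

    for devices in (1, 2, 4, 8):
        share = PCI_PEAK_MB_PER_S / devices
        print(f"{devices} active device(s): ~{share:.0f} MBps each, best case")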

In fact, today’s enterprise applications are often starved for data: I/O bottlenecks and latencies often make otherwise well-designed applications appear sluggish. With huge databases -- sometimes in the tens of terabytes -- vendors have been forced to build enormous proprietary clusters to cope with today’s demand for distributed storage.

Whether the underlying platform is IBM, Oracle, Compaq’s Tandem, or NCR, the new breed of enormous application needs high-bandwidth, low-latency access to disk. Because of the weaknesses in legacy interconnection strategies, we’ve seen vendors resort to proprietary solutions, such as Compaq’s ServerNet and IBM’s SP switch.

It’s not because interconnection vendors haven’t tried! Legacy peripheral buses have been an alphabet soup of established and then discarded ideas. Remember AT/ISA, EISA, MCA, NuBus, or Turbo Channel? PCI came along to replace them all and bring order to the world of attached peripherals. PCI not only brought us a common standard, but one with excellent bandwidth.

But time has exposed PCI’s weaknesses. PCI was architected around an old idea: the parallel bus. This simple, elegant strategy has been a workhorse in all areas of computing, but as disk systems grew and file sizes exploded, PCI began to show its frailty.

The parallel bus, a design of simple strength, is pretty weak at coping with multiple, simultaneous requests for service. The disorderly nature of a very busy parallel architecture naturally leads to inefficiencies and poor performance. The PCI bus was also, in most workstations, a single point of failure: inserting a single failing PCI card could bring down an entire machine.

InfiniBand will bring relief to storage system designers looking for a reprieve from PCI’s bandwidth and performance limitations.

The new strategy starts with a dedicated host interface, called the host channel adapter, or simply HCA. Each workstation gets its own HCA, and the system’s disks and other peripherals are outfitted with target channel adapters, or TCAs. Hosts and targets can be connected directly or through switches. A switch can isolate a subnetwork of attached devices, and the switches themselves can be connected together by routers.
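
To make that topology concrete, here is a toy model in Python. Every name in it is illustrative -- it sketches the relationships just described and is not drawn from any vendor’s actual InfiniBand API:

    from dataclasses import dataclass, field

    @dataclass
    class Endpoint:
        name: str
        adapter: str                 # "HCA" on hosts, "TCA" on targets

    @dataclass
    class Switch:
        name: str
        ports: list = field(default_factory=list)    # endpoints on this subnet

        def attach(self, endpoint: Endpoint) -> None:
            self.ports.append(endpoint)

    @dataclass
    class Router:
        name: str
        subnets: list = field(default_factory=list)  # switches this router joins

    # One subnet: a workstation reaching a disk array through a switch.
    edge = Switch("edge-switch-1")
    edge.attach(Endpoint("workstation-1", "HCA"))
    edge.attach(Endpoint("disk-array-1", "TCA"))

    # Routers tie whole subnets of switches together.
    core = Router("core-router-1")
    core.subnets.append(edge)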

One of the innovative things about this design is that adding new devices to the fabric of switches and routers adds bandwidth to the system, rather than carving up a fixed pool. Each link runs at 2.5 Gbps, and it’s even possible to aggregate multiple links to a single device. For example, two 2.5 Gbps links collectively provide 5 Gbps of potential throughput.
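
The arithmetic is simple enough to sketch (a hypothetical helper of my own; these are raw signaling rates, before any protocol overhead):

    LINK_GBPS = 2.5    # raw signaling rate of a single InfiniBand link

    def aggregate_gbps(links: int) -> float:
        """Potential throughput of `links` aggregated 2.5 Gbps links."""
        return links * LINK_GBPS

    print(aggregate_gbps(2))    # 5.0 -- the two-link example above
    print(aggregate_gbps(4))    # 10.0 -- a wider, four-link connection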

Still, as good as all this sounds, is InfiniBand more hype than help?

One concern is interoperability. Suppose I buy a workstation with one vendor’s HCA. Will it work with disk system TCAs from any other vendor? It better. Anyone who has struggled with incompatibilities among vendors of SAN products knows how critical compatibility is.

Just as important will be the ability to deliver a replacement for PCI at a price point that makes it an attractive substitute for the older bus. Obviously, very high-performance capabilities will have price points to match, but many of us will be looking to replace PCI in workstations with fairly traditional requirements. If there is an enormous premium for InfiniBand, PCI may still have some life in its old bones.

Despite these nagging reservations, it’s clear that InfiniBand is on the way. As you read this, the InfiniBand Trade Association (IBTA) is delivering the first version of the final InfiniBand specification. The IBTA has also announced a compatibility/interoperability work group to help ensure compliance.

IBM announced that it plans to ship HCAs for servers, TCAs for storage and network devices, and eight-port switches for InfiniBand in 2001.

Once the initial products roll out, we’ll see if InfiniBand lives up to its obviously huge potential. Can the new interconnection strategy be the revolution that takes storage interconnection from PCI’s two-lane road to an I/O expressway? It’s not clear yet, but I can’t wait to take a test drive.

--Mark McFadden is a consultant and communications director for the Commercial Internet eXchange (Washington). Contact him at mcfadden@cix.org.