A Need for Speed

When speed is a concern, don't take the bus.

Essentially, that's the philosophy behind what is widely anticipated as the next I/O industry standard -- the switched fabric InfiniBand. InfiniBand's primary purpose is to address the bandwidth limitations of the shared bus systems in current Intel servers, limitations that become more pressing as server clusters continue to multiply.

InfiniBand is like a Ferrari compared to the bus. PCI-X, the top-of-the-line bus architecture scheduled for release this fall, maxes out at 1 GBps. Intel servers using the InfiniBand backplane, expected to be available by the middle of next year, will be able to handle up to 6 GBps per link, built on a base signaling rate of 2.5 Gbps. And that's just for starters.
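Those figures are easier to reconcile with a little arithmetic. Here is a back-of-the-envelope sketch, assuming the 1.0 spec's 2.5 Gbps per-lane signaling rate, 8b/10b encoding, and a full-width 12x bidirectional link (parameters stated here as assumptions, not details from the article):

    # Back-of-the-envelope InfiniBand link arithmetic (assumed 1.0-spec parameters).
    SIGNAL_GBPS = 2.5   # raw signaling rate per lane, in gigabits per second
    ENCODING = 8 / 10   # 8b/10b encoding: 8 payload bits per 10 signal bits
    LANES = 12          # widest (12x) link width
    DIRECTIONS = 2      # links are full duplex

    payload_gbps = SIGNAL_GBPS * ENCODING * LANES * DIRECTIONS  # 48 gigabits/second
    payload_gbytes = payload_gbps / 8                           # 6.0 GBps, the per-link figure above
    print(f"12x bidirectional link: {payload_gbytes:.1f} GBps")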

As InfiniBand continues to evolve, external storage and communications devices may eventually be supplanted by smaller components within a server's I/O structure. Scaling up would become much more seamless and much less risky.

"It's really architected to handled multiple protocols, multiple traffic types all at once," says Bill Lynn, marketing manager for advanced strategy at Adaptec Inc. (www.adaptec.com), a storage solutions firm. "This inner processing holds a phenomenal amount of promise. A lot of people underestimate the complexity involved here. This is a revolutionary change, not evolutionary. This reaches all the way into the guts of the chips."

This is not just a pipe dream. Virtually all the major players in the industry -- IBM Corp. (www.ibm.com), Intel Corp. (www.intel.com), Compaq Computer Corp. (www.compaq.com), Hewlett-Packard Co. (www.hp.com), Dell Computer Corp. (www.dell.com), Microsoft Corp. (www.microsoft.com), and Sun Microsystems Inc. (www.sun.com) -- form the steering committee for the InfiniBand Trade Association (www.infinibandta.org). More than 170 companies, including Adaptec, have joined the ITA. This indicates the intense level of commitment to the future of InfiniBand. Adaptec was one of several companies that took part in the first formal InfiniBand presentation at the Intel Developer Forum Conference in August, just before the expected release of the InfiniBand 1.0 specification.

"With high-density rack mounts becoming phenomenon, the need for this sort of product was clear," says Tom Bradicich, co-chairman of ITA and the director of Netfinity architecture and technology at IBM. "Certainly the PCI bus system is going to be around for while. But the bus system does not scale well. It does not have good failure characteristics given the amount of I/O in today's world. InfiniBand has 64,000 end nodes, far more than the 12 to 15 you see in bus systems. The enemy was a fragmented I/O structure, and InfiniBand addresses that."

Does this mean InfiniBand is destined to be in every I/O system in a few years? Not necessarily, says Jonathan Eunice, an analyst at Illuminata Inc. (www.illuminata.com).

"The first two years of an industry standard's life is always a speculative period because there are no actual product applications to depend on," Eunice says. "There are always hopes and expectations, but there are no promises or guarantees. I think this is a no-brainer, but I can list 10 or 20 no-brainers that died or never went anywhere."

Eunice offers two examples. FDDI (Fiber Distributed Data Interface), the ANSI protocol for sending digital data over fiber optic cable, was too expensive and got derailed by more practical alternatives. Intel's I2O, another effort designed to eliminate I/O bottlenecks by using special processors to handle the details of interrupt handling, buffering, and data transfer, was also touted by top industry players. It was brought down by alternatives at the chip and board level, and it suffered further when some industry leaders backed off from their support.

"The main way InfiniBand can fail is if the Ethernet folks develop a high-volume product before the InfiniBand technology flourishes," Eunice says. "Ethernet is already understood so well by the high-volume player, it has a natural advantage over any other multiple-gigabit alternative. If an Ethernet alternative to InfiniBand could be developed quickly, it would reduce the need for InfiniBand on the high end."

Eunice doesn't see this scenario happening because of the cost factor. The key ITA members have a strong history of embedding switches and mechanisms in silicon. The technical difficulty and cost of doing the same with Ethernet intelligence have proved too massive a hurdle to overcome for several years now.

"Let's say the high performance of InfiniBand versions -- while technically compelling -- take three, four, or five years to reach the market," Eunice says. "If more practical incremental extensions are developed to TCP/IP in that time frame, that's the kind of strategic threat that could impact InfiniBand."

While InfiniBand appears to be an answer for high-end users, some analysts wonder where everyone else fits into the picture. Simpler systems may not have the protocols necessary to accommodate what InfiniBand requires. Bradicich says that to date, IBM has developed chip sets for the high end, but has yet to complete sets for the lower range. So what about someone with a lower-range system who is looking to scale up in the future? When can they get off the bus? What sort of system configuration costs will be involved?

"Certainly, we believe InfiniBand will be very popular with high-end systems. It offers capabilities that, to date, have only been found on mainframes as channel-based I/O, which are nowhere near as fast as InfiniBand," says Jim Pappas, director of initiative marketing for the Intel enterprise platform group. "But we think it will be very useful for other classes of servers as well in that it really addresses server density as needed."

Pappas says switched fabric architecture has actually been around for the better part of a decade, but has now evolved to the point where it can be used relatively inexpensively. He believes the numerous players involved will find solutions to address all levels of scalability in a cost-effective manner. That raises an interesting problem for products like Windows NT/2000 databases and middleware, which aren't known for readily embracing clustering.

While it's difficult to gauge the effectiveness of migrating to the first incarnation of InfiniBand at different scaling levels, most people in the industry agree that this technology -- or something close to it -- is necessary.

"We're faced with a whole new set of problems in terms of reliability and scalability that we just can't get around," Lynn says. "A lot of the concepts within InfiniBand have been in the mainframe realm for awhile. This is definitely a way to get to a mainframe class of machine using standard components."

[Sidebar] Is Microsoft's Tie to InfiniBand Dependent on .NET?

Microsoft Corp. is one of the seven major industry players that make up the steering committee for the InfiniBand Trade Association, which is dedicated to the development of a new common switch fabric I/O technology to replace the speed-limited shared bus structure. InfiniBand is due to be incorporated into product releases around the middle of 2001.

InfiniBand has wide support: There are more than 170 companies in the InfiniBand Trade Association (ITA, www.infinibandta.org). Most analysts believe it will become the industry standard. Considering these factors, it would stand to reason that Microsoft (www.microsoft.com) is strongly committed to this high-speed technology that has been called the most revolutionary industry innovation in the last 10 years.

Well, maybe not.

InfiniBand is a technology whose greatest benefits will go to highly scalable systems that embrace clustering. Microsoft Windows NT/2000 databases and middleware don't embrace clustering, and Microsoft's system designs don't necessarily allow for that kind of multinode development. In effect, InfiniBand flies in the face of what has worked very well for Microsoft.

"I believe at this point Microsoft has embraced InfiniBand as more of a marketing strategy," says Jonathan Eunice, analyst at Illuminata Inc. (www.illuminata.com). "If the InfiniBand model is based on more and more connectivity, if InfiniBand is based on clustering being on the march toward pervasive deployment, then Microsoft has a problem as constituted."

Eunice points out that Exchange and BackOffice are obvious examples of tools that aren't cluster savvy. For the most part, the benefits of InfiniBand will be lost on existing Microsoft products right up to Windows 2000.

"The entire Microsoft software stack isn't cluster stupid, but neither is it cluster intelligent," Eunice says. "Microsoft has talked about moving toward highly clustered operations for five year now. But they haven't made much progress because they haven't needed to focus on it. They have a shelf full of successful products compared with their competitors. They've never been pushed toward clustering to any degree."

Michael Stephenson, lead project manager for the enterprise server group at Microsoft, says Microsoft products do embrace aspects of clustering.

"Microsoft today provides a number of clustering

technologies in Windows 2000 and other products that help customers increase the scalability of their systems and provide higher availability," Stephenson says.

"Windows 2000 Advanced Server and Datacenter alone provide Network Load Balancing," he explains. "This enables customers to cluster up to 32 systems to handle network requests. For instance, this could be used to cluster together a number of Web servers to handle higher capacities and provide higher availability. [Then there's] Cluster Services. This is failover clustering for applications that have state associated with them. This would typically be used on servers running databases. Two-node fail over is available in Advance Server and four-node in Datacenter Server."

Regardless of how much clustering current Microsoft products can do, it's easy to understand why the temptation in Redmond would be to produce more of the same.

"For a company the size of Microsoft, no matter how good its intentions with an initiative like InfiniBand, inertia is a tremendous barrier to overcome," said Eunice. "Microsoft has had so much success with its current deployment, it wouldn't be surprising to see it be reluctant to move on to something else. Let's remember, Microsoft was on board for Intel I2O [another alternative to the shared bus that was touted as the next industry standard] and many people believe their slowness of support had a negative effect on I2O."

I2O's lost momentum allowed alternatives to gain the edge. But there's a difference in this case. Microsoft announced a move toward its .NET approach, which happens to be friendly toward clustering. If .NET takes off as Microsoft hopes, InfiniBand will go along for the ride.

"We're talking about an approach with .NET where the reasoning lends to more components -- service providers and ASPs both favor smaller components, which are better suited to clusters," Eunice says. ".NET would be executing code and using services as components. This is the thinking of future models at Microsoft under this approach, but it's really just speculative at this point."

If .NET doesn't take off, will Microsoft pull back from InfiniBand? Stephenson insists the company is committed to the new I/O technology.

".NET is Microsoft's vision for building a new generation of Internet services, while InfiniBand is a new specification for doing high-bandwidth I/O," Stephenson says. "Microsoft is excited about the scalability and reliability gains that InfiniBand will bring to the .NET platform in the future."
