In-Depth

Proliferating Protocols

Business applications must guide storage architecture, not vice versa

In the world of networked storage, Avamar Technologies' CEO Dr. Kevin Daly is known as one of the good guys. Somehow he has managed to befriend everyone in the industry with his knowledge, diplomacy, and charm.

At Network Storage Strategies Day, a summit of sorts held against the backdrop of Networld+Interop in Las Vegas, NV, Daly provided an informative presentation on the “Tower of Babel” world of networked storage protocols. As evidence of the trend toward networked storage, he took both the academic stance (citing Bob Metcalfe’s “law” that a network’s value grows with the square of its nodes while its costs grow only linearly) and the vendor stance (citing IDC projections on the increasing prevalence of networked storage topologies), but he noted the increasingly problematic proliferation of protocols.

Daly waded pleasantly through the familiar arguments that IP storage would become predominant, probably displacing Fibre Channel SANs in all but a few hold-out environments. This was not a self-serving observation, by the way, since Avamar’s own products are currently optimized for use with Fibre Channel environments.

The interesting part of the discussion actually came in one of the final slides, when Daly began addressing the future of IP storage protocols. First, he discussed the potential for an IP protocol for SAN zoning: something akin to Cisco’s proprietary virtual SAN or VSAN protocol for its own SAN switches. He observed that additional network intelligence would need to be added to the existing IP SAN protocols to help bring them into full flower.
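To make the idea of protocol-level traffic segregation concrete, here is a minimal sketch in Python of how VSAN-style tagging works in principle. It is a toy model only, with invented class and port names, and is not a representation of Cisco’s actual VSAN implementation: each port is assigned to a logical SAN, frames are tagged at ingress, and the switch refuses to deliver frames across logical SAN boundaries.

```python
# Toy model of VSAN-style traffic segregation (illustrative only, not
# Cisco's actual VSAN implementation). Each switch port is assigned to
# one logical SAN; frames are tagged at ingress and forwarded only to
# ports in the same logical SAN.

from dataclasses import dataclass

@dataclass
class Frame:
    src_port: str
    dst_port: str
    vsan_id: int = 0          # tag applied at the ingress port

class Switch:
    def __init__(self) -> None:
        self.port_vsan: dict[str, int] = {}   # port name -> VSAN ID

    def assign(self, port: str, vsan_id: int) -> None:
        self.port_vsan[port] = vsan_id

    def forward(self, frame: Frame) -> bool:
        # Tag the frame with its ingress port's VSAN, then deliver it
        # only if the egress port belongs to the same logical SAN.
        frame.vsan_id = self.port_vsan.get(frame.src_port, 0)
        return self.port_vsan.get(frame.dst_port) == frame.vsan_id

switch = Switch()
switch.assign("fc1/1", 10)    # production host
switch.assign("fc1/2", 10)    # production array
switch.assign("fc1/3", 20)    # test/dev array

print(switch.forward(Frame("fc1/1", "fc1/2")))   # True: same logical SAN
print(switch.forward(Frame("fc1/1", "fc1/3")))   # False: traffic segregated
```

The point of the sketch is simply that segregation lives in the fabric itself rather than in the end nodes, which is the kind of added network intelligence Daly had in mind.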

Truer words were never spoken. Unbeknownst to Daly, the folks operating the Storage Interoperability Lab (iLAB) at N+I, which demonstrated the connectivity of various Fibre Channel and IP SAN components, had run into some problems when passing IP storage traffic through the metropolitan area network set up for the show. When the IP storage traffic passed through the gateways into the MAN environment (which also supported traffic from several thousand e-mail stations and a host of other applications), it simply disappeared. Gone. Like Columbus’ fourth ship, the data had fallen off the edge of the world.

According to an iLAB spokesperson, the understaffed crew suspected that the fault was probably its own. Bottom line: the team had no luck troubleshooting the issue with the resources at hand, so no finger could be pointed definitively at a root cause. It may be nothing, perhaps a missed configuration step, or it may be a harbinger of what can happen when IP storage and other traffic are collapsed onto the same network without the benefit of a higher-level protocol for traffic segregation.

The other concluding point in Daly’s presentation was perhaps even more important. He noted that the increasing use of networks to conduct storage I/O would inevitably lead to a bottleneck. Simply put, at present, network traffic must be translated once it arrives at a server bus or storage array backplane so it can be mapped onto memory within those systems. This network-to-system data copy is a potential stumbling block as traffic volumes increase.

Daly pointed to protocols such as Remote Direct Memory Access over IP (RDMA over IP) and the Direct Access File System (DAFS) as the progenitors of a new species of “Zero Copy” protocols that would be needed to surmount this problem. Ideally, data would be mapped directly between the memories of storage controllers and servers, changing the way application and storage I/O works today, to eliminate the chokepoint and deliver greater throughput at lower latency.
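To make the extra-copy problem concrete, the sketch below contrasts a receive path that allocates a fresh buffer with one that lands data directly in memory the application already owns. It uses ordinary Python socket calls (recv versus recv_into), so it is only an analogy for the principle behind RDMA and DAFS, which push copy avoidance down to the network adapter and file system; it is not an implementation of either protocol, and the buffer sizes and names are invented for illustration.

```python
# Illustration of the copy-avoidance principle behind "zero copy"
# protocols. This is plain socket I/O, not RDMA or DAFS: it only shows
# the difference between copying received data into a newly allocated
# buffer and depositing it directly into preallocated application memory.

import socket

def recv_with_copy(sock: socket.socket, length: int) -> bytes:
    # recv() returns a brand-new bytes object; the data makes an extra
    # stop in a temporary buffer before the application can use it.
    return sock.recv(length)

def recv_in_place(sock: socket.socket, buf: bytearray) -> int:
    # recv_into() writes directly into a buffer the application already
    # owns, skipping the intermediate allocation and copy. RDMA applies
    # the same principle all the way down to the network adapter.
    return sock.recv_into(memoryview(buf), len(buf))

if __name__ == "__main__":
    a, b = socket.socketpair()
    a.sendall(b"a block of storage I/O")

    staging = bytearray(64)            # application-owned buffer
    n = recv_in_place(b, staging)
    print(staging[:n].decode())        # data landed without an extra allocation
```

The design point is the same one Daly raised: every intermediate buffer costs memory bandwidth and CPU cycles, and those costs compound as storage traffic moves onto the network.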

He raised a warning flag in connection with these statements. From his perspective, such direct memory-to-memory mapping opened a potential Pandora’s box of data security issues. These would eventually need to be addressed with even more protocol alphabet soup.

Sanguine as always, Daly closed by observing that, while a real convergence was occurring around IP networks and storage, he did not endorse the view of a universal, one-size-fits-all architecture for IP storage. Moreover, he wisely quoted the French dictum that, while a thing (such as IP storage networking) may work in practice, it may never work in theory.

Based in part on Daly’s commentary, questions arise about the future of networked storage topologies. After a panel discussion on networked storage architecture (in which Daly participated alongside Hitachi Data Systems CTO Hu Yoshida and Crossroads Systems Technical Marketing Manager S.W. Worth), it remained unclear whether SAN, NAS, or some hybrid technology would become the predominant modality. Worth was all smiles at the thought that frequent changes in topologies and protocols would necessitate the continued use of bridge/router products to ease the transition from older topologies and protocols to newer ones. No one in the crowd seemed to share his enthusiasm.

All of the panelists came to a similar conclusion: in the final analysis, the business application needed to guide the storage architecture, and not the other way around. This was a recurring theme in later talks, including that of Steve Sicola, formerly a Digital Equipment Corporation think-tanker (the company that dreamed up the SAN in the first place) and now Vice President of Advanced Storage Architecture for Seagate Technology. Sicola noted that a bifurcation was emerging in the disk industry between large-capacity drives with slower performance and faster drives with less capacity, both of which were already showing up in storage array offerings.

Another split was occurring, according to Sicola, between low-cost ATA/Serial ATA disk and “enterprise-class” SCSI/Fibre Channel disk. He noted that architects would need to weigh their options carefully, letting application requirements and data life cycle considerations guide their choices.
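As a purely hypothetical sketch of the kind of decision Sicola describes, the choice between disk classes can be framed as a simple policy driven by application and data life cycle attributes. The thresholds, categories, and field names below are invented for illustration and are not drawn from his talk.

```python
# Hypothetical tiering policy: choose a disk class from application and
# data life cycle attributes. The thresholds and categories here are
# invented for illustration; real decisions weigh many more factors
# (cost per gigabyte, duty cycle, availability targets, and so on).

from dataclasses import dataclass

@dataclass
class Workload:
    iops_required: int            # sustained I/O operations per second
    days_since_last_access: int   # rough proxy for data life cycle stage

def choose_disk_class(w: Workload) -> str:
    if w.iops_required > 5000:
        return "enterprise-class SCSI/Fibre Channel (fast, lower capacity)"
    if w.days_since_last_access > 90:
        return "low-cost ATA/Serial ATA (high capacity, slower)"
    return "mixed tier: active data on FC/SCSI, reference data on ATA"

# A transaction-heavy database lands on fast enterprise disk...
print(choose_disk_class(Workload(iops_required=8000, days_since_last_access=1)))
# ...while a stale archive lands on cheap, capacious ATA.
print(choose_disk_class(Workload(iops_required=50, days_since_last_access=365)))
```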

From the lowliest disk and data protocol to the loftiest conception of strategic architecture, networked storage in 2003 remains in a state of flux. For now, the experts agree that making the right choices comes down to knowing business requirements and applications first, all of which is another way of saying, “Forget the hype and use your common sense.”

About the Author

Jon William Toigo is chairman of The Data Management Institute, the CEO of data management consulting and research firm Toigo Partners International, as well as a contributing editor to Enterprise Systems and its Storage Strategies columnist. Mr. Toigo is the author of 14 books, including Disaster Recovery Planning, 3rd Edition, and The Holy Grail of Network Storage Management, both from Prentice Hall.