In-Depth

Fibre Channel Over Ethernet: The Promised SAN?

Will FCoE, which uses the same cabling as the next-generation LAN, rapidly become the storage protocol of choice?

According to Mike Smith, the market for products that support Fibre Channel over Ethernet (FCoE) will be heating up this year. His theory is that, as companies with FC storage fabrics seek to consolidate I/O traffic onto the same 10 GbE pipes they deploy to modernize LAN infrastructure, FCoE, which runs over the same cabling as the next-generation LAN, will become the storage protocol of choice.

Smith, who heads worldwide marketing for Emulex, says that his company will be ready and able to support the transition. It is preparing to launch a product line combining 10 GbE network interface card (NIC) and Fibre Channel host bus adapter (HBA) technologies to support the mixing and management of FCoE and LAN signaling over the same Ethernet cable. The same chipset and software stack used in Emulex's "converged network adapter," he offers, will be made available shortly thereafter to blade server manufacturers and server backplane makers who want to implement it directly on motherboards.

Smith is among the first to tout FCoE as a replacement for server-to-storage attachment topologies that leverage a Fibre-Channel-only cabling plant. Most of the other vendors in the Fibre Channel camp have been more circumspect about declaring FCoE a replacement technology, characterizing the protocol in a manner reminiscent of their earlier statements around Internet Fibre Channel Protocol (iFCP) or Fibre Channel over IP (FCIP, which tunnels FC through IP) -- or even iSCSI: FCoE is an inexpensive way to bridge traffic between multiple FC "SANs." The main difference between these other protocols and FCoE is that the latter doesn't use TCP/IP. Signaling occurs at the link layer of the network -- within Ethernet frames themselves -- similar in concept to Coraid's ATA over Ethernet protocol.
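To make that layering difference concrete, here is a minimal Python sketch -- not a working initiator and not any vendor's actual code -- that contrasts the FCoE encapsulation path with the iSCSI path. The FCoE EtherType (0x8906) and iSCSI's default TCP port (3260) are real registered values; the function names and stack descriptions are purely illustrative.

    # A minimal sketch (not a working initiator, and not any vendor's API)
    # contrasting how FCoE and iSCSI carry a SCSI command down the stack.

    FCOE_ETHERTYPE = 0x8906   # FC frame carried directly inside an Ethernet frame
    IPV4_ETHERTYPE = 0x0800   # iSCSI (and FCIP) ride on TCP/IP instead

    def fcoe_layers() -> list[str]:
        """FCoE: SCSI -> FC frame -> Ethernet frame. No TCP, no IP, no routing."""
        return ["SCSI command", "FC frame (FCP)",
                f"Ethernet frame (EtherType {FCOE_ETHERTYPE:#06x})"]

    def iscsi_layers() -> list[str]:
        """iSCSI: SCSI -> iSCSI PDU -> TCP -> IP -> Ethernet. Routable end to end."""
        return ["SCSI command", "iSCSI PDU", "TCP segment (port 3260)", "IP packet",
                f"Ethernet frame (EtherType {IPV4_ETHERTYPE:#06x})"]

    print("FCoE :", " -> ".join(fcoe_layers()))
    print("iSCSI:", " -> ".join(iscsi_layers()))

The absence of TCP/IP in the FCoE path is also what keeps it, like native FC, confined to a single layer-2 domain -- which brings us to the politics.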

FCoE, like everything in the storage interconnect world, has become politicized. If you think about it, the protocol lends credence to what some FC detractors have been saying for a long time. The argument can be made that, since FCoE (like its precursor FCP) cannot be routed, it cannot be used to create a true network by any definition of the word. Therefore, as with straight Fibre Channel, you can't use FCoE to create a storage area network (SAN), only an aggregation of direct attached storage (DAS) configurations operating across a simple switch that makes and breaks connections at high speed. Such a topology is better described as a "fabric" than as a "network" (which explains why I have always referred to the FC "SAN" with quotes around SAN).

This debate over the meaning of "storage network" is mostly ignored by data center storage people, who have been using fabric protocols like ESCON and FICON for years to attach big-iron mainframes to big-iron storage. DASD was always connected via variations of DAS-like bus-and-tag cabling. To paraphrase one big-iron adherent: "Who cares if Fibre Channel routes? Nobody routes storage in a data center environment."

Fair enough. If all you are looking for in your storage plumbing is a fast, dependable protocol for getting data from point A to point B, FC has been one of the fastest games in town for a long time. Its management story is not great, however, requiring (for the most part) a secondary IP connection to every device for the purpose of collecting status and configuration data (in-band management is not an FC forte, strictly speaking). That said, the Emulex approach -- and FCoE's deployment methodology generally -- could make this a moot point. While using Ethernet-level signaling for I/O at 10 Gb per second, you could run IP-based management higher up the network protocol stack and effectively combine management and storage on the same wire.
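As a rough illustration of that convergence, the sketch below -- which assumes untagged Ethernet frames and purely hypothetical steering logic, not how any actual converged adapter or driver is implemented -- shows how storage and management traffic arriving on the same 10 GbE port could be separated by EtherType alone.

    # A minimal sketch, assuming untagged Ethernet frames, of steering traffic
    # on a shared 10 GbE port by EtherType. The EtherType values are the real
    # registered ones; the steering logic itself is hypothetical.
    import struct

    FCOE_ETHERTYPE = 0x8906  # FCoE-encapsulated FC frames (storage I/O)
    FIP_ETHERTYPE  = 0x8914  # FCoE Initialization Protocol (fabric discovery/login)
    IPV4_ETHERTYPE = 0x0800  # ordinary LAN traffic, including IP-based management

    def steer_frame(frame: bytes) -> str:
        """Read the EtherType at offset 12 of an untagged frame and pick a path."""
        (ethertype,) = struct.unpack_from("!H", frame, 12)
        if ethertype in (FCOE_ETHERTYPE, FIP_ETHERTYPE):
            return "storage path (existing FC driver stack)"
        if ethertype == IPV4_ETHERTYPE:
            return "LAN path (TCP/IP stack, including management traffic)"
        return "other"

    # A fake untagged frame with the FCoE EtherType lands on the storage path.
    fake_frame = bytes(12) + struct.pack("!H", FCOE_ETHERTYPE) + bytes(46)
    print(steer_frame(fake_frame))

In other words, the storage frames and the IP management packets never need separate cabling; they are simply demultiplexed at the adapter.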

Smith has a few other things of note to say. First, he claims that 85 percent of large companies now use FC fabrics for their storage infrastructure. Many of these are embracing server virtualization, but mainly to consolidate file servers and Web servers, not servers hosting mission-critical databases. Still, Smith says, the whole server virtualization phenomenon is "dragging FC fabrics in a direction that they weren't intended to go." Virtualized server environments on x86 architectures undercut the speeds-and-feeds value of FC; they reduce the effective performance of the interconnect.

Server virtualization is increasing the visibility of iSCSI, Smith notes, but iSCSI will not replace FC fabrics in the wholesale manner that some iSCSI advocates claim. "iSCSI is different. You use different techniques to provision, configure, and manage storage connected using the protocol than the techniques that most data center storage managers now use with FC."

By contrast, Smith argues, "FCoE works with the existing storage driver stack, with multipathing, worldwide port naming schemes, and so forth: it fits with common practices."

That said, analysts at IDC project fairly flat growth for Fibre Channel fabrics through 2010, whether based on FC cabling or FCoE. By contrast, they predict more than a 60 percent increase in iSCSI adoption within the same timeframe. I note this not because I believe IDC's numbers, which have proven erroneous on many occasions, but because in this case they seem to jibe with the economy. Cost containment is likely to become the order of the day for everything IT, and iSCSI, with its less-expensive supporting infrastructure, free initiator software, and multiplicity of target storage arrays, is more likely to become a darling of storage purchasers across businesses of all sizes than expensive FC HBA-based connectivity options.

Of course, for those with an investment in "legacy SANs," FCoE might find a friendly reception. At least you can ditch the expensive FC SAN switch for a less-costly Ethernet/IP switch, if you wish, and get a speed improvement over conventional FC fabrics to boot. More speed over a converged cabling infrastructure sounds like a win-win to those who, when times were good, spent money on the most expensive way ever conceived to host 1s and 0s.

A significant and still-unanswered question is whether the core data center world understands network switching well enough to support FCoE adoption. If the value-added case for FCoE is presented with an emphasis on networks, it will probably turn off the big-iron users -- especially those hardliners to whom the current crop of FC vendors turn for negative perspectives on iSCSI (e.g., "iSCSI is an inferior solution preferred by those who don't need big iron storage"). Clearly, concerns over the relevance and resonance of networked storage to their core customer set are why many FC vendors are not making the case for FCoE as a replacement technology, but merely as a gateway technology.

Emulex is different, and by breaking with the "FCoE-as-inter-SAN-gateway" messaging of its peers and positing FCoE as a replacement protocol, it is moving into uncharted territory that many other FC vendors have gone to pains to avoid. The marketers are struggling once again with the question that first presented itself when "networked storage" became the rage in the late 1990s: Who will buy a network storage protocol for enterprise storage: the data center crowd (who arguably know more about storage than about networks) or the distributed computing crowd (who arguably know more about networks than about storage interconnects)?

Emulex is rolling the dice here to see whether FCoE might resonate with both groups. It will be interesting to watch and see what happens. Your views are welcome: jtoigo@toigopartners.com.

About the Author

Jon William Toigo is chairman of The Data Management Institute, the CEO of data management consulting and research firm Toigo Partners International, as well as a contributing editor to Enterprise Systems and its Storage Strategies columnist. Mr. Toigo is the author of 14 books, including Disaster Recovery Planning, 3rd Edition, and The Holy Grail of Network Storage Management, both from Prentice Hall.
