In-Depth

Alacritech: iSCSI’s Quiet Champion

The company's president and CEO helped create SCSI earlier in his career, and Alacritech has since overcome some interesting technical challenges.

A while back, I wrote an article for a print publication that solicited the “party line” from the vendor community regarding the “fit” for iSCSI in the world of storage today. A lot of big vendors (and a couple of smaller ones) completed questionnaires, but I have to say their responses became tedious and repetitive in short order.

Most stuck to the “safe answers” that one would expect from vendors seeking to nudge ahead into iSCSI while protecting their current Fibre Channel install base. iSCSI, they argued, was mainly suitable for connecting outlying servers to a FC fabric. In the case of a few NAS vendors, the protocol was deemed useful for enabling their products as NAS/SAN gateways (an idea first promoted in this column years ago). Basically, iSCSI could be added to NFS and CIFS support so that NAS guys could plug into (and later manage?) FC “SANs.”

In the preponderance of the responses, iSCSI was treated as the weaker cousin in the SAN world. Nobody, including early iSCSI backer Cisco Systems, was willing to suggest that the technology would ever become the dominant interconnect for SANs (though Cisco conceded that it had not abandoned iSCSI and was developing functions in its Fibre Channel switch products that could eventually be carried over to the iSCSI world).

One response that was never received, evaluated, or printed in connection with that story came from San Jose, CA-based Alacritech. Blame the vicissitudes of e-mail: I either deleted the message by accident, filtered it out as spam, or it was simply sucked up by one of those black holes we all know live in the hyperspace of the Web. As the company had been such a champion of iSCSI from the very beginning of protocol development, its absence from the discussion was disappointing.

Part of this had to do with pedigree. Alacritech is helmed by president and CEO Larry Boucher, formerly CEO of NAS pioneer Auspex, and before that of Adaptec during its heyday as the king of all things SCSI. Boucher was one of the creators of SCSI during his early career at Shugart Associates (now Seagate), so I had always given more credence to his company’s position on iSCSI and its vision of eventual iSCSI SAN dominance than I would have given to less-seasoned sources.

In fact, Alacritech spearheaded technology for off-loading TCP/IP stack processing (and eventually the entire iSCSI packaging and unpackaging workload) from server processors: two technical challenges that were originally thought to be barriers to mainstream adoption of the protocol. A number of prominent FC vendors had long argued that TCP/IP could never be used as a storage interconnect because of the burden that processing the protocol in connection with chatty SCSI operations would impose on servers. Everyone agreed that some sort of TCP Offload Engine (TOE) would be required.

However, Boucher and company argued that you couldn’t surmount this problem with network processors, PowerPC processors, or other technologies such as Intel’s XScale architecture, in part because of the state-processing requirements imposed by the protocol itself. You needed a dedicated architecture, which they created and patented as “SLIC,” then implemented in the custom ASICs that are the core of their products today. By contrast, many prominent HBA and controller manufacturers ignored Alacritech and dumped boatloads of money into trying to build a TOE on their preferred processor platform, and failed.

The last laugh may well have been Alacritech’s, but in my story, they didn’t get a chance to laugh at all. Until now. Here are just a few insights provided by Alacritech regarding the present and future status of iSCSI and SANs built on the technology.

Toigo: Fibre Channel fabrics do not seem to obey Metcalfe’s Law of networks, which holds that the value of a network should increase, and the cost per node decrease, as more nodes are deployed. Fibre Channel fabrics seem, in fact, to become more difficult to manage as they scale (in many cases erasing the value gains promised by vendors) and, in general, remain the most expensive platform for data storage. FC fabric per-port costs have been extremely slow to decline.

By contrast, per-port costs of GigE switches and GigE NICs have fallen dramatically in only a two-to-three-year time frame; 10GbE is expected to follow this pattern as well.

From a cost standpoint, does iSCSI have a better story to tell than Fibre Channel to price-sensitive consumers?

Alacritech: There are a number of areas where iSCSI offers lower entry costs than Fibre Channel. With Fibre Channel, there is no commodity NIC, only HBAs, so customers don’t have the option of connecting servers with lower bandwidth requirements to the fabric “for free.” The cost of an Ethernet attachment with an iSCSI software initiator is measured in tens of dollars; Fibre Channel, even with the newer SMB offerings, is still in the hundreds of dollars per system.

The other side of it is the array cost. Customers can roll their own array using Windows and software from FalconStor or String Bean Software, and have something serviceable, reliable, and cheap. A number of companies, SuperMicro among them, make chassis that accept a standard motherboard and 15 to 24 SATA drives, which makes them ideal for a “white box” array.

Solutions using enterprise-class SATA drives like the 74GB WD Raptor can be built for under $10 per usable GB with RAID 5 protection, and solutions using enterprise-class capacity-oriented SATA drives like the Maxtor MaXLine series, with the same RAID 5, can be built for under $3 per usable GB. Both prices include an HBA in the box. Compare that to a product like the Dell/EMC AX100, which comes in at about $5 per raw GB fully populated with 250GB drives.
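Alacritech didn’t show its math, but the arithmetic is easy enough to check. The rough Python sketch below works out cost per usable GB for a RAID 5 “white box” array; the drive prices, drive count, and chassis overhead I plug in are my own illustrative assumptions, not the company’s figures.

```python
# Back-of-the-envelope cost per usable GB for a "white box" iSCSI array.
# Drive prices, drive count, and chassis/controller overhead below are
# illustrative assumptions, not figures supplied by Alacritech.

def usable_capacity_raid5(drive_count: int, drive_gb: float) -> float:
    """RAID 5 yields (n - 1) drives' worth of capacity; one drive's worth goes to parity."""
    return (drive_count - 1) * drive_gb

def cost_per_usable_gb(drive_count: int, drive_gb: float,
                       drive_cost: float, overhead_cost: float) -> float:
    """Total system cost (drives plus chassis, motherboard, HBA, software) per usable GB."""
    total_cost = drive_count * drive_cost + overhead_cost
    return total_cost / usable_capacity_raid5(drive_count, drive_gb)

if __name__ == "__main__":
    overhead = 3000.0  # assumed chassis + motherboard + iSCSI HBA + target software

    # Performance build: sixteen 74GB WD Raptors at an assumed $180 apiece.
    print("Raptor build:  $%.2f per usable GB"
          % cost_per_usable_gb(16, 74, 180.0, overhead))

    # Capacity build: sixteen 250GB Maxtor MaXLine drives at an assumed $130 apiece.
    print("MaXLine build: $%.2f per usable GB"
          % cost_per_usable_gb(16, 250, 130.0, overhead))
```

With those assumed prices, the Raptor build lands around $5.30 per usable GB and the MaXLine build around $1.35, both comfortably inside the ceilings Alacritech quotes.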

Toigo: The industry has given mixed messages about the fit for iSCSI: Is it a data center technology because that is where the big switches are located, or is it an “edge technology” because workgroups and departments do not require the speeds and feeds of data centers? What is your take?

Alacritech: Yes. iSCSI is not targeted at the high-end FC SAN, but for applications like near-line storage, MS Exchange, and disk-to-disk-to-tape (D2D2T) backup, it is an enterprise solution. For workgroups and departments, easy-to-use Ethernet-based products also make it appropriate as a departmental SAN.

Toigo: iSCSI standards do not seem to have been “held hostage” to proprietary vendor interests the way that FCP standards have been at ANSI. (It is an established fact that vendors can develop FC switches that fully comply with ANSI standards, yet fail to be compatible with one another.) From the consumer’s perspective, do you feel it’s smarter to go with iSCSI-based technologies because of product interoperability?

Alacritech: iSCSI is much further along with interoperability than Fibre Channel was at this stage of its lifecycle. Interoperability is important, but we’re not seeing products like the HDS Lightning or the EMC Symmetrix offering native iSCSI with enterprise SATA back ends. There’s still a market transition that needs to take place at the high end before iSCSI can be said to be ready to replace Fibre Channel.

Toigo: Some vendors seem to be suggesting that Fibre Channel is superior to iSCSI because of its end-to-end support of “native Fibre Channel drives.”

Alacritech: This is a big “So what?” The front end and the back end are both Fibre Channel, but there’s a crossbar or cache dividing the two worlds. Fibre Channel SAN transactions hit the cache; the cache controller then decides how and when to talk to the back-end drives.
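The point is easy to see in miniature. The toy Python sketch below (the class and method names are my own invention, not anyone’s shipping product) models a write-back cached controller: the host’s write is acknowledged out of cache, and the controller destages to whatever drives sit behind it on its own schedule, so the back-end drive interface is invisible to the host.

```python
# Toy model of a cached array controller. Class and method names are
# hypothetical; this is a sketch of the behavior described above, not
# any vendor's implementation.
from collections import deque

class CachedArrayController:
    def __init__(self, backend_drive_type: str):
        self.backend_drive_type = backend_drive_type  # "FC", "SATA", ...
        self.cache = {}       # block number -> data held in controller cache
        self.dirty = deque()  # blocks waiting to be destaged to the back end

    def host_write(self, block: int, data: bytes) -> str:
        """Front-end (FC or iSCSI) write: lands in cache and is acknowledged immediately."""
        self.cache[block] = data
        self.dirty.append(block)
        return "ACK"  # the host never touches the back-end drives

    def destage(self) -> None:
        """Back-end work the cache controller schedules on its own terms."""
        while self.dirty:
            block = self.dirty.popleft()
            # A real controller would write self.cache[block] to a drive here.
            print(f"destaging block {block} to {self.backend_drive_type} drive")

# Identical host-visible behavior no matter what sits behind the cache.
array_fc = CachedArrayController("FC")
array_sata = CachedArrayController("SATA")
print(array_fc.host_write(7, b"payload"), array_sata.host_write(7, b"payload"))
array_fc.destage()
array_sata.destage()
```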

Toigo: As an IP-based protocol, iSCSI is limited in speed to the available bandwidth less overhead, which is generally interpreted to mean that the technology can deliver roughly 75 percent of the rated speed of the TCP/IP network pipe in Mb/s or Gb/s. FC advocates have leveraged this as a major differentiator between FCP and iSCSI solutions. How meaningful is this speed difference today? How meaningful will it be next year with the introduction of 10Gb/s IP networks?

Alacritech: iSCSI products today can drive 90 percent of the rated speed of the pipe (110-112MB/s out of a theoretical 125MB/s); 75 percent seems like an extremely conservative number.

Back to the point, a D2D device running at a sustained 50MB/s is a great enhancement to a small tape library, and most Exchange servers need tens of MB/s rather than hundreds, so the bandwidth is adequate for today. With multi-port gigabit solutions and 10-gigabit solutions, the arrays will have the bandwidth to scale to support more servers per array.
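The wire-speed arithmetic behind those figures is simple enough to lay out. The short Python sketch below applies the 75 percent efficiency figure favored by FC advocates and the 90 percent figure Alacritech cites to 1Gb/s and 10Gb/s links; the 10GbE numbers are my extrapolation, not a vendor claim.

```python
# Effective payload throughput for Gigabit and 10-Gigabit Ethernet links at the
# efficiency figures discussed above (75% is the FC advocates' number, 90% is
# Alacritech's). The 10GbE rows are an extrapolation, not a measured result.

def effective_throughput_mb_per_s(link_gbps: float, efficiency: float) -> float:
    """Payload throughput in MB/s for a link rated at link_gbps gigabits per second."""
    raw_mb_per_s = link_gbps * 1000 / 8  # 1Gb/s of line rate is 125MB/s
    return raw_mb_per_s * efficiency

for link_gbps in (1, 10):
    for efficiency in (0.75, 0.90):
        mbps = effective_throughput_mb_per_s(link_gbps, efficiency)
        print(f"{link_gbps}GbE at {efficiency:.0%}: {mbps:.0f} MB/s")
```

At 90 percent efficiency, a Gigabit link delivers roughly 112MB/s, which is exactly the range Alacritech cites; the same math puts a 10GbE link north of a gigabyte per second.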

Toigo: Related to the above, how important is interconnect speed to applications? Haven’t we made do with much slower storage interconnects in the recent past?

Alacritech: Fast disk drives today run at 50-75MB/s, so iSCSI provides storage networking at faster-than-local-disk speeds. As mentioned earlier, the bandwidth is adequate for a number of very common applications like MS Exchange.

Toigo: Some vendors are “dumbing down” their Fibre Channel products to facilitate their deployment in SMBs. Is this your strategy, and what do you see as the benefits and drawbacks of such an effort?

Alacritech: Such a strategy gives a vendor something to sell while it puts together either an iSCSI strategy or a going-out-of-business strategy.

We want to thank Alacritech for responding to our questions. As always, reader feedback is welcomed: [email protected]
