Standards-Based Storage, Part II

Only when the pain and cost of the status quo become too great to bear will users start to make some noise.

In my last column, I raised questions about the relevance of standards to IT decision-makers building storage infrastructure for their companies. The column was framed as the first of three, each intended to deliver a different perspective on storage standards: that of the storage industry observer (me), industry vendors (this column), and the storage consumer (next week).

The timing for this series was dictated in part by our editorial calendar. When I told ESJ that we were going to delve into the wacky world of storage standards early this year, the first vendor to respond to this opportunity was Hewlett-Packard, a storage heavyweight with wares targeted at just about every segment of the storage market.

HP requested to be included in our story, presumably to shed light on its work in the standards realm, both as an individual company and as a part of collective initiatives within the industry. Interestingly, when we circled back with the company to do the interview and shared some of our initial critiques of storage standards (reported in last week’s column), HP rescinded its offer to participate, explaining that “questions about storage standards should be directed to the Storage Networking Industry Association (SNIA).”

That struck me as odd because SNIA is a trade association, not a standards body. Moreover, while the group occasionally develops specifications, notably the Storage Management Initiative Specification (SMI-S), which it then submits to real standards organizations for approval, SNIA’s interest in standards development seems limited to certain aspects of storage technology. For example, the organization doesn’t concern itself with developing data transport standards, data layout or file system standards, clustering technology standards, virtualization standards, or many of the other technologies involved with I/O with which a company like HP is intimately involved.

As a rule, SNIA’s standards work is a reflection of the objectives of its key members, the big-iron storage vendors. To some extent, that limits what technologies the organization is willing to consider.

This point is elucidated by Howard Goldstein, president of Howard Goldstein Associates, a technology and training company in Superior, CO, and a member of SNIA. He recounts, “Once I approached the IP Storage Forum of the SNIA and asked if they might consider embracing Storage over IP (SoIP) approaches under their umbrella. [Block storage transported over IP] is, after all, an approach that uses IP. One could even make a case for ATA over Ethernet (AoE), where there is no need to use IP but [only] the biggest daddy of them all, Ethernet standards.”

Goldstein says that their answer “of course” was no. “Not in their charter,” he recalls. “[They said] that it would require a charter rewrite that no one was interested in pursuing at the time. In fairness, they have been all about the iSCSI standard [SCSI over Internet Protocol, an IETF standard] and moving the marketplace to embrace that as a first move towards leveraging TCP/IP. Perhaps, in some time, they will have a change in heart. Their message is a marketing message by vendors who have rallied around their chosen standard.”

Standards Trail Products

Despite HP’s reluctance to comment for this column, there was no shortage of other storage vendors willing to step up to the mike. Tom Treadway, Adaptec’s CTO and distinguished engineer, provided a candid view of the standards process from his Winter Springs, FL office. While important, standards are less useful from a product manufacturing point of view, he said.

According to Treadway, the creation of new storage products, such as disk drives or RAID controllers, is subject to a “rotting cabbage” phenomenon. “Storage vendors want to build a product and sell it within a week because products have very short shelf life. The Serial Attached SCSI (SAS) standard is an example. It was written in a loose way. Timings weren’t nailed down and a lot of bugs were left to the manufacturers to fix.”

That explains, in Treadway’s view, why standards often trail behind products: “Everyone is rushing to get to the market with their products. It would take years to release products that were fully standards-compliant, in many cases.”

Adds Bill Franks, CTO of Zetera in Irvine, CA, “[Standards] aren’t always implemented on products in a consistent way.” Case in point, according to Franks, is IP multicasting support on network switches and routers.

Multicasting is used by Zetera to aggregate IP network-connected disk drives into manageable arrays. Each drive is identified by an IP address, and multicasting allows a set of drives to join a subscriber group, which is another way of saying that they become a virtual storage array. Zetera Z-SAN technology, in effect, makes storage virtualization a function of the network, rather than a function of an array controller or software add-in, and enables building massively scalable storage at rock-bottom prices.

Forming arrays on the fly in an IP network is a neat trick, and it could be accomplished simply by using the User Datagram Protocol (UDP), which supports multicasting where the Transmission Control Protocol (TCP) does not, if only every IP switch fully supported UDP multicast. Not all do.

“User Datagram Protocol (UDP) is part of the IP protocol suite,” Franks notes, “and it makes possible the multicasting of data to multiple port addresses. Zetera uses this protocol with IP to build low-latency, highly scalable storage infrastructure. That said, the engineering problem we initially encountered was that UDP multicasting wasn’t implemented on all routers and switches by all equipment vendors. This was especially the case with lower cost switch products where vendors claimed that their products were UDP compliant, but they didn’t really support multicasting because there wasn’t a lot of demand for it before Zetera appeared in the market.”

When confronted with a deficient switch or router, Franks explains, “users had a problem when trying to configure storage for use over UDP and IP. They would have to configure the ports to be included in a subscriber pool by hand, or they would have to learn about communicating ports using a protocol like Internet Group Management Protocol (IGMP), which snoops on communicators to identify port addresses for inclusion in a multicast session. Any way you look at it, setting up a Z-SAN would be a lot of work.”
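The idea behind a multicast subscriber group can be sketched in a few lines. The model below is purely conceptual and mine, not Zetera’s actual protocol: each drive is identified by an IP address, a group of drives subscribing to one multicast address behaves as a single virtual array, and a simple round-robin mapping decides which member keeps which stripe. All names (`Drive`, `SubscriberGroup`, the addresses) are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Drive:
    """A network-attached drive, identified by its own IP address (hypothetical model)."""
    ip: str
    blocks: dict = field(default_factory=dict)

class SubscriberGroup:
    """Drives subscribed to one multicast address act as one virtual array.
    This is a conceptual sketch, not Zetera's Z-SAN wire protocol."""
    def __init__(self, group_addr, drives):
        self.group_addr = group_addr   # e.g., a multicast IP such as 239.1.1.1
        self.drives = list(drives)

    def write(self, lba, data):
        # In the real scheme, a datagram sent to the group address reaches
        # every member; here, a round-robin rule picks the stripe's owner.
        owner = self.drives[lba % len(self.drives)]
        owner.blocks[lba] = data

    def read(self, lba):
        return self.drives[lba % len(self.drives)].blocks.get(lba)

drives = [Drive(f"10.0.0.{i}") for i in range(1, 4)]
array = SubscriberGroup("239.1.1.1", drives)
for lba in range(6):
    array.write(lba, f"block-{lba}")
print(array.read(4))  # -> block-4 (stored on drive 10.0.0.2)
```

The point of the sketch is that group membership, not an array controller, defines the virtual device, which is exactly what breaks when a switch silently drops multicast traffic.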

Working Around the Gaps

This problem required Zetera to work around the gaps in standards implementation. Franks created a protocol that “circumvents the issue by adding intelligence to servers so they know how to forward data to clients.” He noted that his workaround actually improved the transport efficiency of his protocol, an unexpected plus, but full implementation of standards on the switch products would have saved a lot of development effort.

Franks contends that Zetera’s Z-SAN is the first true storage networking protocol, despite the marketing hype around iSCSI and Fibre Channel. These standards-based protocols produce networked storage in name only, he says: “In reality, they are delivering direct attached storage over distance, not networked storage.” Ironically, the Storage Networking Industry Association views Z-SAN as a protocol that falls outside its charter.

“Everyone agrees that there should be open, adhered-to standards,” offers Diamond Lauffin, founder of The Lauffin Group, in Westlake Village, CA. “However, where vendors are pursuing a standard, it is usually for their own gain and not for users.”

Lauffin, who consults with vendors seeking to build distribution and reseller channels, says that real standards would be a blessing to the reseller integrator. Standards conformance would enable integrators to build storage that matches customer needs more exactly rather than having to identify the best fit from among an assortment of inexactly matched alternatives. “As a rule,” he says, “the channels lack the power to dictate to their suppliers the terms of standards conformance.”

Lack of Standards and Virtualization

The absence of uniform standards is part of what gives storage virtualization product vendors, such as FalconStor, a market for their products, according to Camberly Bates, chief marketing officer with the Melville, NY company. Bates says, “The primary purpose of standards is to provide interoperability and thus freedom of choice. That is the ability to select whatever is the best technology for the job at hand without painful migration or slow adoption. What we see is that in the absence of standards, customers will turn to an abstraction layer (i.e., virtualization) to gain the benefits of choice, easy migration, and management between disparate systems and technology.”

Bates adds, “Abstraction, or in our case virtualization, gives someone a single set of operational procedures no matter what is under the covers. FalconStor would like to see more widespread adoption of standards, as it allows us to quickly bring functionality to the market. For right now, I believe we will see a faster path to solving the interoperability issue with virtualization.”
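Bates’ point about “a single set of operational procedures no matter what is under the covers” can be illustrated with a toy abstraction layer. Everything below is my sketch, not FalconStor’s product: two vendor backends honor one interface, and a virtual volume routes I/O across them so the host never needs to know which array holds which block.

```python
class BlockDevice:
    """Minimal contract any backend array must honor (hypothetical interface)."""
    def read(self, lba): raise NotImplementedError
    def write(self, lba, data): raise NotImplementedError

class VendorA(BlockDevice):
    """One vendor's array; internals are irrelevant to the host."""
    def __init__(self): self._store = {}
    def read(self, lba): return self._store.get(lba)
    def write(self, lba, data): self._store[lba] = data

class VendorB(BlockDevice):
    """A different vendor's array, presenting the same contract."""
    def __init__(self): self._store = {}
    def read(self, lba): return self._store.get(lba)
    def write(self, lba, data): self._store[lba] = data

class VirtualVolume:
    """One logical volume spanning disparate arrays: the abstraction layer."""
    def __init__(self, backends):
        self.backends = backends

    def _route(self, lba):
        # Simple static mapping for illustration: alternate LBAs between arrays.
        return self.backends[lba % len(self.backends)]

    def write(self, lba, data): self._route(lba).write(lba, data)
    def read(self, lba): return self._route(lba).read(lba)

vol = VirtualVolume([VendorA(), VendorB()])
vol.write(0, b"boot")
vol.write(1, b"data")
print(vol.read(1))  # -> b'data', regardless of which backend holds it
```

Swapping a backend for another vendor’s box changes nothing above the `VirtualVolume` line, which is the “freedom of choice” Bates describes, delivered by abstraction rather than by a standard.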

While virtualization might provide an end run around a key deficit of storage standards, the failure to assure product interoperability means virtualization itself can also be characterized correctly as a “standards-free zone” in storage. Bates concedes this point, stating that her company has been working so hard to solve customer problems, there hasn’t been a lot of time for thinking about how to create a virtualization standard that might enable the products of multiple virtualization vendors to interoperate.

A Deepening Problem

The standards problem only worsens as focus shifts from the hardware layer (and the abstraction layer) to the storage management layer. According to Ken Barth, CEO of Tek-Tools in Dallas, TX, storage management standards, such as SNIA’s SMI-S, remain a work in progress from both the development and the implementation standpoints.

“Standards should provide interoperability, at least at some basic level,” Barth notes, citing present-day networks to make his point. “With Ethernet and IP networks, interoperability is a given. Boxes talk to each other and customers expect them to do so. I think that SMI-S offers the same sort of base functionality to enable better storage management, but the implementation is the problem. Not everyone who is implementing it is doing so in the same way.”

Barth, who previously worked for many years for network management software provider MicroMuse, correlates the current state of storage management with the early days of network management, “when you had to pay for MIBs to manage your network equipment. I see it as a natural progression to a standard, and storage is still very early in the process.”

Further complicating the standardization of storage management using SMI-S, Barth says, is the failure of SNIA to differentiate between implementations. “There are, as SNIA says, 450 products with SMI-S providers, but there are significant differences in how fully those products implement SMI-S. SNIA should be tracking which products have done a good job with implementation because customers have a right to know which have done a full implementation and which vendors have gone part of the way but plan to do more in a future release.”
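Barth’s complaint about uneven implementations has a concrete consequence for management tools: they must probe each provider’s actual capabilities. The sketch below is illustrative only; the property names echo SMI-S/CIM conventions but the dictionaries stand in for real provider responses.

```python
# Two hypothetical SMI-S providers reporting on their arrays. The first
# implements the spec fully; the second omits a property it never coded.
provider_full = {
    "ElementName": "Array-01",
    "TotalManagedSpace": 10_000_000_000,
    "RemainingManagedSpace": 4_000_000_000,
}
provider_partial = {
    "ElementName": "Array-02",
    "TotalManagedSpace": 8_000_000_000,
    # RemainingManagedSpace not implemented by this vendor
}

def free_space(instance):
    """A management tool must code around uneven implementations:
    check for the property instead of assuming the standard guarantees it."""
    if "RemainingManagedSpace" in instance:
        return instance["RemainingManagedSpace"]
    return None  # this provider cannot answer the question

print(free_space(provider_full))     # -> 4000000000
print(free_space(provider_partial))  # -> None
```

Multiply this by every optional property and every one of the 450 products Barth mentions, and the cost of “part of the way” implementations becomes clear.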

The Final Word

The final word of this installment goes to Barth: “Should a consumer care about standards? Yes. They bring about the ultimate freedom of choice in how storage infrastructure is built. Without them, you must buy into a proprietary system architecture that locks you in.”

At the end of the day, that is the key point: standards happen only when consumers demand them of their vendors. That generally happens only when the pain and cost of the status quo become too great to bear—or does it? We’ll ask storage consumers their views in the next installment. In the meantime, your comments are welcome.
