Storage News and Abuse

Equal parts architecture and marketecture continue to dominate the releases coming out of the storage industry.

I was catching up with some recent press announcements after the first Disaster Recovery and Data Protection Summit held in Tampa last week (which was a great success, as some readers who were at the event will attest).

The first was IBM’s announcement of version four of its virtualization play: SAN Volume Controller (SVC) software. Big Blue boasted that it was adding four gigabits per second (Gbps) FC fabric support, plus increased interoperability with up to 80 different disk system models supported (“twice [that] offered by Hitachi”), and improved upgrade capabilities that let existing users replace their SVCs with new units without disruption.

At the press event, IBM hung its hat on IDC reports that storage “replication”—“in which data is copied in real time as it moves across a storage area network”—grew nearly 25 percent over the past year. The company pulled out all the stops, citing fear of the 2006 severe weather season, and the outages that future Katrinas may cause, as a key driver of growth in copy-on-write via virtualization. With virtualization, it is possible to designate remote disks as targets for split write operations, thereby building some mirrored resiliency into the enterprise storage architecture.
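
To make the split-write idea concrete, here is a minimal sketch of what such a virtualization layer does when a remote disk is designated as a mirror target. Everything in it (the VirtualVolume class, file-backed devices) is my own illustrative shorthand, not IBM's SVC internals:

    import os

    class VirtualVolume:
        """Toy virtualization layer: one logical volume backed by
        a primary device and zero or more mirror targets."""

        def __init__(self, primary_path, mirror_paths=()):
            # In a real fabric these would be LUNs; ordinary files stand in here.
            self.devices = [open(p, "r+b") for p in (primary_path, *mirror_paths)]

        def write(self, offset, data):
            # The "split write": the layer duplicates every write to all
            # targets, so the remote copy stays byte-identical to the primary.
            for dev in self.devices:
                dev.seek(offset)
                dev.write(data)
                dev.flush()
                os.fsync(dev.fileno())  # don't acknowledge until it's on media

        def close(self):
            for dev in self.devices:
                dev.close()

A production implementation also worries about write ordering, in-flight failures, and resynchronizing a mirror that falls behind; the point here is only that the duplication happens in the virtualization layer, invisibly to the application.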

The week before, EMC acquired Kashya, which makes a data replication appliance for FC fabrics. This move, as suggested in the IBM announcement, is widely regarded as CYA for EMC, since Invista (the company’s virtualization software) was the only virtualization play that could not do copy-on-write. The way I see it, IBM’s virtualization approach has a lot of merit, and is already used to manage over 15 petabytes of data. However, IBM’s ability to maintain this lead hinges on more than an uptick in disaster concerns, which have conspicuously failed to drive much in the way of data-protection product sales. IDC market projections have become notorious for failing to materialize in the real world, as the firm’s release, then retraction, of optimistic spending projections at the beginning of this year will attest.

I’m all for building recovery into the infrastructure, and virtualization is certainly the way to go. The question is whether this will be done by a layer of software or through a network service such as IP multicasting, the direction the MIT Media Lab might be pursuing as it deploys a couple of petabytes of arrays from Bell Micro, enabled with Zetera’s storage-over-IP technology, in the next few months. With the steady growth of 10 Gigabit Ethernet implementations, I would think that people would be paying attention to what the network can provide by way of storage interconnect services and slowly distance themselves from Fibre Channel, with its “yesterday’s bandwidth tomorrow” value proposition. Failure to see the future dominance of IP-based storage makes crowing about improvements in FC fabric-based virtualization schemes sound a lot like rearranging deck chairs on the Titanic.
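
The appeal of multicast for this job is that the network itself does the duplication: a write sent once to a multicast group arrives at every storage target that has joined it. A bare-bones sketch using ordinary UDP multicast (illustrative only; Zetera’s actual protocol is its own, and the group address here is hypothetical):

    import socket
    import struct

    GROUP, PORT = "239.1.2.3", 5007  # hypothetical multicast group for storage targets

    # Initiator: send one write; the network delivers it to every target.
    def send_block(block_num, data):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
        sock.sendto(struct.pack("!Q", block_num) + data, (GROUP, PORT))

    # Target: every storage node that joins the group receives the same write.
    def listen():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", PORT))
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        while True:
            packet, _ = sock.recvfrom(65535)
            block_num = struct.unpack("!Q", packet[:8])[0]
            data = packet[8:]
            # ...write `data` at block `block_num` on the local disk

UDP offers no delivery guarantee, so a real storage protocol layers sequencing and acknowledgment on top; but the one-send, many-targets property is the same mirrored resiliency discussed above, delivered by the network rather than by a software shim.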

One e-mail that caught my eye was from Dell, touting the “Dell Effect in Storage.” This missive cited IDC numbers to suggest that Dell had grown faster than its competitors in storage sales as of Q1 of this year, with storage revenues up 30 percent year over year. It went on to cite three “strategic milestones” to support the claim of being the up-and-coming leader in storage.

The first achievement listed: “Introduced the industry’s first low-cost, customer-installable storage area network (SAN) under $5,000 in collaboration with EMC.” I found myself again asking the question: What applications do small to medium businesses, which usually operate Windows environments, have that would necessitate or justify the deployment of FC fabric infrastructure?

The answer, of course, is none. Selling FC SANs to unwary SMEs sounds more like an EMC trick than the “Dell Effect” in action. The Dell Effect once meant something: it was a positive force, driving down the cost of servers, commoditizing them, and bringing them within reach of smaller firms. Selling SANs to SMEs is just a way to get those firms hooked.

Their second milestone: “Reduced the starting cost of networked storage by 91 percent in four years.” Without debating what is meant by “networked storage,” I had to wonder again how this was an example of the Dell Effect. At the disk-drive level, storage cost has declined by roughly 50 percent per GB per year since the early 1990s. Compounded over four years, that trend alone yields a reduction of about 94 percent, so Dell’s 91 percent merely tracks, and slightly lags, the commodity curve. That hardly seems worth bragging about. The vendor has not captured the falling cost of commodity disk, probably because it resells mostly EMC AX100 arrays, which cost twice as much (and in our lab tests perform half as well) as competitors such as Promise Technologies, SNAP Server, and Bell Micro’s Hammer Array line.
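
The arithmetic, for the skeptical (the 50-percent-a-year figure is the commonly cited commodity trend; the rest is just compounding):

    # Dell: 91 percent reduction over four years, annualized (compound):
    dell_annual = 1 - (1 - 0.91) ** (1 / 4)  # ~0.45, i.e. ~45% per year

    # Commodity disk at ~50% per GB per year, compounded over four years:
    commodity_4yr = 1 - 0.5 ** 4             # ~0.9375, i.e. ~94% reduction

    print(f"Dell annualized: {dell_annual:.0%}, commodity over 4 years: {commodity_4yr:.0%}")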

The third milestone: “Driving industry technology advancements with leadership in key storage standardization working groups including Storage Bridge Bay (SBB), Disk Drive Format (DDF) and Storage Management Initiative (SMI-S).” Support for standards is all well and good. However, Dell’s embrace of these particular standards raises questions about the company’s concept of storage and its target market.

SBB, if I am reading correctly what little has been written about it, is an EMC/Intel/LSI Logic initiative to “standardize” the controllers on entry-level storage platforms. (A number of other vendors have since joined in.) To quote one release, “The new working group will define mechanical and electrical interface requirements between storage arrays and the controller card that give the array its identity—identities such as JBOD (just a bunch of disks), RAID (redundant array of independent disks), iSCSI, Fibre Channel SAN, and NAS (networked-attached storage). As a result, a storage controller card based on the SBB specification will be able to fit, connect, and electrically operate within an SBB-compliant storage array.” In a nutshell: to change from direct-attached JBOD to RAIDed NAS, just swap the array controller.
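
To make the swap-the-controller idea concrete: think of the enclosure as a fixed chassis and the controller as a pluggable personality. A toy sketch (the class names are mine, not anything from the SBB spec):

    class Enclosure:
        """The SBB-style bargain: the chassis (drives, power, backplane)
        stays put; the controller slot defines what the box is."""

        def __init__(self, drives):
            self.drives = drives
            self.controller = None

        def install(self, controller):
            # Mechanically and electrically compatible by spec, so any
            # SBB-compliant card drops into the same slot.
            self.controller = controller

        def identity(self):
            return self.controller.personality if self.controller else "bare enclosure"

    class JBODController:
        personality = "direct-attached JBOD"

    class RAIDNASController:
        personality = "RAIDed NAS"

    box = Enclosure(drives=12)
    box.install(JBODController())
    print(box.identity())   # direct-attached JBOD
    box.install(RAIDNASController())
    print(box.identity())   # RAIDed NAS: same tin, new personality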

As described, the idea makes sense—maybe. There are certainly many challenges in cobbling together the right pieces to make any array work. However, the spec, as I currently understand it, does nothing to guarantee that disk drives will work with controllers. That is a problem today with many SAS arrays: controllers and disk electronics won’t work and play well together.

Seagate’s involvement with the initiative might hold out hope in this area, but any fix would run afoul of a lot of deeply rooted relationships between specific disk manufacturers and their preferred partners in the backplane and controller space. Moreover, with Seagate’s soon-to-be-released disk brick offering, which will have a commoditizing effect on all storage controllers, it remains to be seen how meaningful the standard will ultimately be.

Seagate seems to understand that storage controllers must be deconstructed, diminished to their minimal form, with their extra features being moved into network-based software services. Seagate’s “skunkworks” VP Steve Sicola seems determined to bring about the “peer storage node” envisioned in the original definition of a SAN, the one posited by Digital Equipment Corporation (his former employer) before the Fibre Channel Industry Association folks corrupted it. With that in mind, SBB is unlikely to make a meaningful difference.

DDF is another initiative—this time created by Dell itself, then nested into the bureaucracy of the Storage Networking Industry Association—that makes considerable sense on its face. Basically, the engineers are trying to solve the problem of migrating disks from one array vendor’s cabinet to another without having to change a lot of formatting on the drives. It is a good idea, and one too long in coming.

Every array vendor has its own proprietary disk format, sometimes for technically valid reasons (like making their proprietary RAID scheme work), but usually to ensure that they can mark up disk costs by 300 percent and prevent end users from sourcing their disk drives elsewhere. The DDF spec might help normalize formats, but I’m not holding my breath waiting for this to happen.
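
Mechanically, what a common disk format amounts to is a metadata block in a well-known location that any compliant controller can find and parse, so a drive carries its RAID configuration with it from cabinet to cabinet. A simplified, entirely hypothetical sketch (the real DDF structures defined by SNIA are far more elaborate than this):

    import struct

    # Hypothetical simplified on-disk metadata record, stored by convention
    # in the last 512-byte sector of the drive.
    FORMAT = "!8s B B 16s Q"  # signature, RAID level, disk position, array GUID, array size

    def write_metadata(dev, raid_level, position, guid, size):
        # dev is a block device or image file opened "r+b"
        record = struct.pack(FORMAT, b"COMMONFW", raid_level, position, guid, size)
        dev.seek(-512, 2)
        dev.write(record.ljust(512, b"\x00"))

    def read_metadata(dev):
        dev.seek(-512, 2)
        sig, level, pos, guid, size = struct.unpack(FORMAT, dev.read(512)[:34])
        if sig != b"COMMONFW":
            raise ValueError("no common metadata: vendor-proprietary disk")
        return level, pos, guid, size

Any controller that honors the common format can read the record and reassemble the array; a proprietary controller just sees an unintelligible last sector, which is exactly the lock-in the spec is meant to break.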

Doing things the old way offers a lot of mark-up value for vendors such as Network Appliance and EMC, and I am not sure they really, deep down, want to change. That Dell would like to see this happen is easy to understand: it helps them sell cabinets. I’m not sure that the Dr. Evils out there are as interested in a unified disk format as is their mini-me super VAR.

SMI-S, if it were ever to become real, would be aimed squarely at large organizations with complex infrastructures built from a multiplicity of name-brand hardware products. Is this Dell’s typical customer, or is the company now trying to fish upstream? How support for SMI-S constitutes a milestone escapes me in any case.

So, as we move into the Mean Season, equal parts architecture and marketecture continue to dominate the releases coming out of the industry. I’m beginning to wonder whether the flak traffic I am receiving is really news or just abuse.

Your views are welcomed. jtoigo@toigopartners.com
