In-Depth

Building Storage, Part II

The entry-level price of "solutions-in-a-box" is rarely for a unit configured in a manner that most smart consumers are likely to deploy.

Last week we talked about horizontal and vertical storage scaling issues and the rise of the clustered storage paradigm. This week we look at two additional storage architecture trends that are currently appearing in new products from brand-name and lesser-known vendors. I refer to them as “multi-tiered storage in a box” and “disaster recovery in a can.”

Considerable virtual ink has been dedicated in this column to the topic of multi-tiered storage, which I regard as more nostalgia than real architecture. Multi-tiering certainly made sense when it was introduced into the mainframe data center back in the 1970s. In those days, there were significant price/performance differences between different storage modalities that suggested a tiering of storage based on cost. System memory (very expensive) was tier one, disk arrays (DASD) were tier two, and various flavors and uses of tape comprised tiers three, four, and beyond. One could argue (and we have) that applying this sort of scheme to open systems storage—where the price/performance distinctions between high-end (Fibre Channel, SAS, and parallel SCSI) arrays on the one hand, and low-end (SATA) arrays on the other are hardly compelling—is metaphorical at best.

We would contend that, in the distributed world, there are only two classes of storage, defined by their role rather than by their cost or performance: call them “capture storage” and “retention storage.” Put another way, there is one class of devices distinguished by its use in capturing data as rapidly as the application or end user creates it, and a second class distinguished by its use in storing data in perpetuity once its “volatility,” or update frequency, has declined to near zero. Of course, in many shops, tape is still around, though largely relegated to the role of backup.
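To make the distinction concrete, here is a minimal sketch in Python of how such a classification might work. The 30-day idle threshold is an arbitrary assumption of my own; in practice, the data owner would set it.

```python
import os
import time

# Assumed policy: data untouched for 30 days has "volatility" near zero
# and belongs on retention storage. The threshold is hypothetical.
RETENTION_AGE_DAYS = 30

def storage_class(path: str) -> str:
    """Classify a file as 'capture' or 'retention' by update recency."""
    age_days = (time.time() - os.path.getmtime(path)) / 86400
    return "retention" if age_days >= RETENTION_AGE_DAYS else "capture"
```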

Tiered storage is simply not meaningful in the distributed systems world. Anyone who tells you otherwise is usually trying to sell you something, and there is no shortage of those who want to sell you something. EMC came into our crosshairs late last year for claims surrounding its DMX3 platform. The company argued that its controller technology, its “value-add software” functionality, and its ability to add shelves of both high-end Fibre Channel drives and something called “low-cost Fibre Channel” (LCFC) drives gave customers multi-tiered storage in a box. These claims struck us as silly then, and they still do today.

Truth be told, EMC did not originate this concept. I’m not sure who did, except that it was not one of the name-brand vendors.

Start-ups such as 3Par were talking “utility storage” (very much akin to multi-tier in a box) back in 1999. Xiotech began chanting the same mantra at just about the same time. Compellent joined the chorus about two years ago, at roughly the same time as newcomer Pillar Data Systems, which launched following an infusion of cash from Oracle boss Larry Ellison.

What all of these solutions have in common is a one-stop-shop value proposition: a single source of hardware, a single software license, and a single warranty or maintenance contract. The vendors all argue that theirs is the one true multi-tier storage in a box solution—modular in the extreme and fitted with customized (or customizable) software smarts to migrate data to different tiers based on policies created by the owner. To a one, the fine print reads that only the vendor’s gear can be used in the solution. If there is any latitude to use third-party gear, it is usually only in the realm of tape.

Whatever the so-called benefits of data or information lifecycle management via a single management interface, they seem to be offset by the mark-up the vendors place on the underlying componentry, both hardware and software, of their platform. It would be entirely possible to cobble together the same software functionality across a set of low-cost arrays and accomplish the same result at a lower cost and without the vendor lock-in. Software suites are increasingly available from companies such as BridgeHead Software and CA that deliver most, if not all, of the benefits vendors tout in their homogeneous hardware/software platforms, and they afford the storage manager enormous flexibility in choosing the type, brand, and connectivity of the storage components their applications really require. Storage administrators should not be afraid to consider the “roll your own” option before sinking a lot of cash into these solutions.
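As a rough illustration of what “roll your own” can look like at its simplest, here is a sketch that migrates aging files from a fast capture array to a low-cost retention array. The mount points (/mnt/capture, /mnt/retention) and the 30-day idle policy are my hypotheticals, not features of any vendor’s product:

```python
import os
import shutil
import time

CAPTURE_ROOT = "/mnt/capture"      # hypothetical mount: fast "capture" array
RETENTION_ROOT = "/mnt/retention"  # hypothetical mount: low-cost "retention" array
AGE_LIMIT_SECONDS = 30 * 86400     # owner-defined policy: migrate after 30 idle days

def migrate_cold_files() -> None:
    """Move files whose update frequency has fallen to near zero."""
    cutoff = time.time() - AGE_LIMIT_SECONDS
    for dirpath, _subdirs, filenames in os.walk(CAPTURE_ROOT):
        for name in filenames:
            src = os.path.join(dirpath, name)
            if os.path.getmtime(src) < cutoff:
                rel = os.path.relpath(src, CAPTURE_ROOT)
                dst = os.path.join(RETENTION_ROOT, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)  # copies across filesystems, then deletes

if __name__ == "__main__":
    migrate_cold_files()
```

Scheduled nightly from cron, a script of this sort does the unglamorous half of what the tiering vendors charge for; the commercial suites add the cataloging, reporting, and error handling that make the idea safe at scale.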

Disaster Recovery in a Can

The second trend in storage today is similar to the first. It seems like everyone and his kid sister has seized on the “disaster recovery in a can” appliance model to try to sweet-talk consumers into making the move from tape backup to disk-to-disk backup.

Last week I mentioned that the storage appliance model was pioneered by NetApp; not surprisingly, the company is leading the charge in “disaster recovery in a can” products. If you read the trade publications, you might have noticed the number of stories over the past few weeks featuring the latest offering from Sunnyvale: the StoreVault S500. Ostensibly targeted at SMBs with its $5K starting price, the StoreVault could be described as NetApp snapshot protection for the everyman.

Looking more closely at the value proposition for the box, however, things begin to get dicey. For one thing, the entry-level price is NOT for a unit configured in a manner that most smart consumers (especially those in the SMB market) are likely to deploy.

In its bare-bones setup, the 2U box ships with four 250 GB SATA drives and some software, including a kind of “ONTAP Lite” operating system. The manufacturer’s specs say that the unit supports RAID 4 (which NetApp CTO Dave Hitz does not recommend for high-capacity SATA drives because of long rebuild times) and RAID-DP (NetApp’s proprietary dual-parity RAID scheme).

Now, using either form of RAID quickly reduces the capacity available for actual data and snapshots to a narrow sliver of the array’s total raw capacity. Add to that the much-touted support for Global Hot Sparing (that is, the ability to hold a drive in reserve as a hot spare should any of the others fail), and you subtract further from the initial aggregate capacity represented by the four 250 GB SATA drives in the box. With RAID-DP dedicating two drives to parity and a third held as the hot spare, only one drive remains for data; so, if my math is correct, you are actually paying a premium price for a 250 GB drive.
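For readers who want to check the arithmetic themselves, here it is spelled out, assuming the configuration just described (RAID-DP consuming two of the four drives for parity, plus one global hot spare) and ignoring any additional snapshot reserve or filesystem overhead:

```python
DRIVES = 4
DRIVE_GB = 250
PARITY_DRIVES = 2   # RAID-DP dedicates two drives to parity
HOT_SPARES = 1      # Global Hot Sparing holds one drive in reserve

raw_gb = DRIVES * DRIVE_GB
usable_gb = (DRIVES - PARITY_DRIVES - HOT_SPARES) * DRIVE_GB
print(f"raw: {raw_gb} GB, left for data and snapshots: {usable_gb} GB")
# raw: 1000 GB, left for data and snapshots: 250 GB
```

And that remaining 250 GB still has to accommodate snapshots before a byte of production data lands on it.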

NetApp’s product manager for StoreVault, Drew Meyer, acknowledges that the $5K bare-bones product is probably not what folks will end up buying; they will opt for more. He insists that the product, despite its limited capacity, limited auditability, limited manageability, limited support, and apparent total lack of investment protection, is what many focus groups have told him they want.

He makes these additional observations about what small- to medium-sized consumers are telling him: “I find myself continually resetting my assumptions about the problems these customers are facing and the degree of sophistication that they have time for. This customer is still figuring out what to do about his DAS, debating the merits of expanding a Windows server with internal disks or more SCSI attached storage, and learning what a snapshot is. He’s far from simple—in fact, he’s a wiz at making stuff continue to work, but is very interested in technology that makes his life easier and wants thoughtful solutions that save dollars over the life of his investment.

“Bottom line: Snapshots, FlexVols, RAID-DP, NDMP and multiple simultaneous protocol support all add up to two things: (1) It just works. The data is there, no questions asked, due to an immense range of proven datacenter technologies; and (2) I can buy what I need today and get more tomorrow, while putting off tomorrow’s arrival and holding onto cash for today.”

I am not sure that I agree with Meyer’s assessment, since I typically find the SMB to be a far savvier and more price-conscious consumer than his cousins in Global 2000 firms, but that is the theory NetApp is following. In my humble opinion, given how quickly a realistic configuration can ratchet the cost well above the baseline $5K price point, storage admins might want to consider an alternative like STORServer. I recently chatted with the company’s VP of sales and marketing, Ellen Rome, about her product line, which comprises a build-to-order set of backup, archive, and disaster recovery devices, all with IBM Tivoli Storage Manager (TSM) inside.

Says Rome, “STORServer is built to order in Colorado Springs and ships with a set of policies that fit the majority of SMEs.” Included with every box: two to three days of installation support and skills transfer. Prices start at $15K for a five-client license, a TSM server, a 1 TB disk complement (expandable to petabytes), and a tape autoloader.

Another option will be to combine generic disk with a soon-to-be-released offering from Crossroads Systems based on its recent acquisition of Tape Laboratories (TapeLabs), whose terrific virtual-tape product I reviewed in a previous column. Crossroads had its moment of fame doing Fibre-Channel-to-parallel-SCSI bridging back at the beginning of the Fibre Channel fabric craze. The new union with TapeLabs has the potential to produce some hardware-agnostic disaster recovery and data protection solutions that will be worth watching.

We will be watching them here. Your opinions are welcome: jtoigo@toigopartners.com.
