
Beware the Ides of March … and Storage Panaceas

None of the popular storage strategies—FC fabric, Big Iron, multi-tiered storage, or storage consolidation—has any intrinsic value, regardless of what a vendor may say.

I have been covering storage for ESJ for several years now, at least since the beginning of the so-called network storage revolution. In retrospect, it occurs to me that users have been deluged with a seemingly inexhaustible supply of marketecture from the storage industry's purveyors.

It began in the late 1990s with SANs, or what the industry came to call SANs after brutalizing the original concept defined in an Enterprise Network Storage Architecture (ENSA) white paper written by a group of Digital Equipment architects who went to work for Compaq after it acquired the company.

SANs were, and continue to be, a great idea. I would cut off the ponytail I have been growing since the days of ENSA if a real SAN appeared in the marketplace. What we have instead are Fibre Channel fabrics, which industry parlance has labeled “SANs” even though the underlying protocol is not a network protocol but a channel protocol.

Several weeks ago, I challenged a fellow from Cisco on this exact point. His response was that I needed to be flexible in how I define the term “network.” I said that I had learned my definition of network at the knee of Cisco years ago: networks were interconnections of peer nodes, not dumb peripherals or targets or “slaves” talking to intelligent servers or initiators or “masters.” Cisco taught me that.

His rejoinder was a metaphor for his company’s product positioning: though a long-time advocate of networks, Cisco is now a leading vendor of FC switches. Sure, he said, Fibre Channel didn’t constitute a network in the old-school definition of the term, but, like Cisco, I had to move beyond purity of technical terminology and embrace a broader definition of networking. He reminded me that Cisco itself had seen the error of its ways and changed from a technology-focused company into a “customer-focused” one. In other words, FC had become a network protocol because Cisco is a network company. Leave the semantic hairsplitting to academicians and CERN; to survive, a company must sell whatever is in vogue, even if that means calling Fibre Channel fabrics “SANs.”

My fundamental problem, however, has less to do with the degradation of the terminology than with the marketing for FC fabrics that declares them to be the one-size-fits-all infrastructure choice, which they clearly are not. Applications determine the correct infrastructure choices, not the marketing departments of a handful of gear peddlers. FC SANs became the panacea for open systems storage in most Fortune 500 companies, providing still more proof of the old dictum, often attributed to P. T. Barnum, about the ability to fool all of the people some of the time.

Much inappropriate infrastructure has since been deployed, and consumers have struggled to fit their applications to ill-suited storage plumbing. According to one survey, SANs are the number three cause of downtime today. Management of this infrastructure remains elusive in many cases. Costs for storage have accelerated rather than decelerated in the manner promised by the glossy brochures.

Some vendors have tried to improve the SAN story by offering bigger, smarter arrays. The current war of words between Hitachi Data Systems and EMC, much discussed in this column, is one example. Many vendors seem determined to continue to push the Big-Iron mantra—itself another panacea.

Big-Iron thinking rests on a couple of fundamental precepts. The most important is that it is easier to manage a few things than it is to manage many things.

This is true. In fact, to the extent that “SANs” have delivered any value at all, it is usually a function of the homogeneity of the infrastructure. To avoid interoperability problems that plague so-called FC SANs (and that continue to require semi-annual interoperability plug-fests at the University of New Hampshire and elsewhere a full 10 years after the introduction of Fibre Channel), all of the arrays connected in a fabric tend to come from one vendor (and its cadre of partners).

What this suggests to me is that any benefits attributed to SANs built on Fibre Channel (a brain-dead protocol with no in-band management to speak of) are more properly attributed to hardware homogeneity of SAN nodes. With fewer devices from fewer vendors, management actually improves as a function of the tools the vendor provides with its array. The same benefits could have been realized just as readily, and with less pain, by daisy-chaining a bunch of monolithic big iron arrays bearing the same corporate moniker as by putting them in a fabric in the first place.

From this perspective, FC SANs have merely provided the means to increase the price of FC disk from about $89/GB to $189/GB, courtesy of the plumbing. HDS’ TagmaStore deployment model, which entails connecting servers to ports on the front of the box and ancillary arrays to ports on the back of the box, accomplishes basically the same thing, but without the SAN plumbing or price tag. TagmaStore has a crossbar switch in its head that makes it act like a front-end processor (think mainframes) for back-end storage peripherals. Many of my larger clients are revisiting the idea of using IBM z/OS mainframes to front-end their storage: a very similar model.
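A back-of-the-envelope calculation shows how the plumbing does the damage. The sketch below is purely illustrative: the $89/GB and $189/GB figures come from this column, while the capacity and the switch, HBA, and software costs are assumptions chosen only to show how quickly fabric overhead can double the effective per-GB price.

```python
# Back-of-the-envelope sketch of how fabric "plumbing" inflates per-GB cost.
# The $89/GB and $189/GB figures come from the column; the capacity and the
# switch, HBA, and software numbers below are illustrative assumptions only.

raw_disk_cost_per_gb = 89.0      # FC disk, as cited in the column
usable_capacity_gb = 10_000      # assumed capacity behind the fabric

# Assumed (hypothetical) fabric costs, amortized across that capacity
fc_switches = 250_000
hbas_and_cabling = 150_000
fabric_mgmt_software = 400_000
annual_support = 200_000

plumbing = fc_switches + hbas_and_cabling + fabric_mgmt_software + annual_support
effective_cost_per_gb = raw_disk_cost_per_gb + plumbing / usable_capacity_gb

print(f"Raw disk:      ${raw_disk_cost_per_gb:.0f}/GB")
print(f"With plumbing: ${effective_cost_per_gb:.0f}/GB")   # lands near $189/GB
```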

My problem with Big Iron is the lock-in that it usually entails. Big Iron is an appropriate platform for hosting the data from some applications, but not from all. It is ridiculous to use expensive disk for hosting rarely accessed or infrequently updated data.

That little piece of common sense has not escaped corporate bean counters. The concerns they have expressed to their vendors have not fallen on deaf ears. In the past 18 months, the vendors have introduced two additional panaceas to make the Big Iron SAN thing more palatable. One is “multi-tiered storage” and the other is “consolidation.”

The multi-tiered storage doctrine holds that data should be migrated from one tier of storage to a less expensive tier as it ages and its access frequency declines. On its face, this seems pretty smart. The problem is that the vendor usually prescribes multiple “trays” of storage inside the same Big Iron frame as the implementation model. It also wants to sell you additional wrap-around software, often wedded to its own proprietary array controller, to facilitate moving the data from tray to tray or cabinet to cabinet within its product.
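Stripped of the packaging, the doctrine boils down to a simple policy decision. The sketch below is a minimal illustration of such a policy, not any vendor’s implementation; the tier names, age thresholds, and access-frequency cutoffs are assumptions picked for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Minimal illustration of an age/access-frequency tiering policy.
# Tier names and thresholds are assumptions, not any vendor's defaults.

@dataclass
class DataSet:
    name: str
    last_accessed: datetime
    accesses_last_30_days: int
    tier: str = "fast_disk"

def target_tier(ds: DataSet, now: datetime) -> str:
    """Pick a tier from the data set's age and recent access frequency."""
    idle = now - ds.last_accessed
    if idle > timedelta(days=365) and ds.accesses_last_30_days == 0:
        return "tape_archive"
    if idle > timedelta(days=90) and ds.accesses_last_30_days < 5:
        return "cheap_disk"
    return "fast_disk"

def migrate(datasets, now):
    """Move each data set to its target tier, logging every migration."""
    for ds in datasets:
        tier = target_tier(ds, now)
        if tier != ds.tier:
            print(f"migrating {ds.name}: {ds.tier} -> {tier}")
            ds.tier = tier

now = datetime(2005, 3, 15)
migrate([DataSet("payroll_q1", now - timedelta(days=400), 0),
         DataSet("crm_db", now - timedelta(days=2), 120)], now)
```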

Multi-tiered storage meant something in the mainframe world. You had two or three tiers of system memory, then big disk storage arrays (DASD), then a couple of flavors of tape: one active (used like disk for live data), the other for backups and archives. You also had highly granular hierarchical storage-management tools delivered with the mainframe operating system itself to facilitate data migration across storage classes.

Storage in the distributed world is a very different creature. The metaphor doesn’t really apply.

We are told, however, that multi-tiered storage is the way to go. That might be true if the frame vendor weren’t jacking up the price of the disk drives in its arrays by 300 percent or more above the disk vendor’s MSRP. That price hike, combined with the cost of the wrap-around software that may or may not provide a granular method for selecting data for migration, makes the cost model for multi-tiered storage decidedly unappealing in most cases. (Granular data migration helps reduce the likelihood that migrated data will need to be de-migrated later. The efficacy of current hierarchical storage management software varies significantly.)

If I were doing multi-tiered storage, I would want to use platform-agnostic software riding across the cheapest second- and third-tier storage I could buy. I would want to capture the declining cost of the storage disks themselves and pick and choose the best data migration tools I could find, which today do not come from the three-letter acronyms of storage.
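A quick spreadsheet exercise makes the point. In the sketch below, only the 300 percent markup figure comes from this column; the capacity, the MSRP, and the software prices are placeholder assumptions you would swap for real quotes before drawing any conclusion.

```python
# Rough cost comparison of two ways to build the same second-tier capacity.
# Only the 300 percent markup comes from the column; the capacity, MSRP, and
# software prices are placeholder assumptions.

capacity_gb = 50_000
commodity_msrp_per_gb = 1.50            # assumed street price of cheap SATA-class disk

# Option 1: second-tier "trays" bought inside the frame vendor's array
vendor_markup = 3.0                     # 300 percent above MSRP, per the column
vendor_disk = capacity_gb * commodity_msrp_per_gb * (1 + vendor_markup)
vendor_wraparound_software = 150_000    # assumed migration software license
option_vendor = vendor_disk + vendor_wraparound_software

# Option 2: platform-agnostic migration software over the cheapest disk available
commodity_disk = capacity_gb * commodity_msrp_per_gb
third_party_software = 60_000           # assumed best-of-breed migration tool
option_agnostic = commodity_disk + third_party_software

print(f"Vendor trays:      ${option_vendor:,.0f}")
print(f"Platform-agnostic: ${option_agnostic:,.0f}")
```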

The other panacea that has been proffered by the industry in the last few months is consolidation. Up front, this also makes a certain kind of common sense. Recently, I chatted with someone who explained that his company was well served by consolidating frames: first, just to know where all of the spindles were located; second, because the cost of supporting frames in branch offices declined significantly. In short, he disagreed vehemently with my assertion that, as a rule, the economic gains that apply in server consolidation do not apply in storage consolidation. Not true, he said. “We consolidated all of our storage onto some XYZ frames and we are seeing a lot of cost savings.”

I didn’t want to mix it up with the fellow, but as he explained his new infrastructure, he added that data-caching appliances from Tacit Networks had been deployed to all of the branch offices where storage arrays had sat prior to consolidation. This was done to offset complaints from end users about the significant delays they encountered when accessing files that had been consolidated onto spindles at the home office.

Nothing against Tacit Networks, whose products we have described in previous columns, but the “solution” should not have been needed in the first place. If you really believe that networked storage is the future, as I do, then you must believe the 80/20 rule of networking, which holds that most accesses to data (80 percent or more) are made by the folks who create the data (the local cadre of users). Smart network design leaves local assets where they belong: in the proximity of the workgroup or on the subnet where they are accessed.

Consolidating spindles does little more than create choke points that must be circumvented by ingenious tricks such as deploying appliances to perform local file caching and synchronization. Tacit makes good money selling the fix to a problem that should never have been created in the first place. I would think that a lot of math needs to be done to determine which strategy truly delivers the best price/performance mix: decentralizing and networking storage, with all of the pain that entails from a local support standpoint, versus centralizing storage and deploying workarounds to handle the problems created by this strategy. The fellow may have done the math and chosen the best solution for his environment, but I kind of doubt it.
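For what it’s worth, the math looks something like the sketch below. It is a back-of-the-envelope comparison under stated assumptions: the 80/20 local-access rule comes from this column, but every price, latency, and cache-hit-rate figure is an illustrative placeholder, not measured data.

```python
# Back-of-the-envelope comparison of the two strategies the column contrasts.
# The 80/20 local-access rule comes from the text; every price, latency, and
# hit-rate figure below is an illustrative assumption, not measured data.

branches = 20

# Strategy A: leave arrays in the branches, near the users who create the data
local_array_cost = 25_000
local_support_per_branch = 8_000        # assumed annual remote-hands support
cost_a = branches * (local_array_cost + local_support_per_branch)

# Strategy B: consolidate spindles at HQ, then buy caching appliances and WAN
# upgrades to work around the choke point that consolidation creates
consolidated_frame = 400_000
caching_appliance = 12_000
wan_upgrade_per_branch = 6_000          # assumed bandwidth uplift per branch
cost_b = consolidated_frame + branches * (caching_appliance + wan_upgrade_per_branch)

# Effective latency for the 80 percent of accesses made by local users:
# strategy A serves them from local spindles; strategy B serves them from the
# branch appliance's cache when it can and from HQ over the WAN when it can't.
local_ms, wan_ms, cache_hit_rate = 2.0, 60.0, 0.7
latency_a = local_ms
latency_b = cache_hit_rate * local_ms + (1 - cache_hit_rate) * wan_ms

print(f"A (distributed):            ${cost_a:,.0f}, ~{latency_a:.0f} ms per local access")
print(f"B (consolidated + caching): ${cost_b:,.0f}, ~{latency_b:.0f} ms per local access")
```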

Bottom line: Julius Caesar was warned to beware the ides of March. My advice this week is in a similar vein. Beware storage vendor panaceas. None of these strategies—FC fabric, Big Iron, multi-tiered storage, or storage consolidation—has any intrinsic value, regardless of what the vendor may say. They all help generate income for the vendor, but they tend to disappear as fashions every six months as new metaphors are introduced by the industry.

Do the math for yourself to decide which, if any, of these panaceas make sense for your environment. Consider the problem strategically, rather than tactically. Maybe what you need to centralize isn’t your storage but your storage management.

Responses are welcome at [email protected]
