
Storage Year in Review and a Preview of 2012

From islands of storage to the consumerization of storage, IT storage administrators had their hands full this year. What challenges will they face in 2012?

Each year, we try to offer a unique take on storage trends past and present -- one that varies a bit from the year-end retrospectives you may be reading elsewhere that mostly echo and re-echo analyst and vendor sound bites to form a coherent, if inaccurate, narrative. This column will not cover the familiar turf.

We will not try to assess the long-term impact of disasters in Japan on ASIC availability for HP 3PAR arrays or ruminate about how flooding in Thailand is being used to excuse a 3x price increase in disk drives. Nor will we cover the impact of industry consolidation that has simply changed the logo on the bezel plates of most storage rigs to HP, Dell, IBM, NetApp, or EMC. We won't discuss the wisdom or timing of flash SSD manufacturers seeking to push their high-cost/high-speed storage modality into new tiering arrangements targeted, quizzically, at budget-constrained data centers around the world. Nor will there be much discussion of that not-so-killer app that everyone is talking about -- big data -- or that not-so-inevitable next step in computing -- clouds.

In short, we eschew storage marketecture and prefer to dwell on storage architecture. So, here's our take for 2011 and our guesswork about developments in 2012.

2011 Trend #1: Islands of storage persisted

First, looking at the buying behavior of storage consumers, two facts emerged. For one, buyers have not changed their ways much from pre-recession times. Despite the opportunity the economy offered to right the errant path that found companies deploying overpriced islands of storage laden with value-added software for the last decade, the market for monolithic storage persists.

The early part of the year seemed filled with promise that companies would return to purpose-building infrastructure with an emphasis on coherent management. However, once the second quarter showed a hint of improvement in the general economy, many companies seemed to revert to buying the industry's equivalent of the gas-guzzling SUV: complex arrays that automate many storage management functions at the level of the rig but that increase the OPEX costs of heterogeneous infrastructure by making it more difficult to manage overall.

The axiom of "going with the brand you know" seemed to prevail in larger shops, though the level of comfort with decisions seemed a bit diminished. Industry consolidation was accompanied by increased consternation as users of Compellent, 3PAR, Isilon, and other acquired companies troubled over their fates, which were now controlled by the very three-letter brands whose products and services they had originally eschewed in favor of products from smaller firms with big ideas.

The objectives of large vendors for acquiring the wares of the "knee nippers and ankle biters" in the market varied from one buyout to the next, but the marketing around the purchases typically held that big vendor XYZ now offered "a deeper bench of products" that would all be managed (eventually) by the vendor's proprietary management software. Truth be told, the vendors haven't delivered the goods for common management across their existing products, so we frankly doubt that a common management meme will emerge in the near term to cover all of the increasingly diverse product lines.

This is a big disappointment, especially in light of the amazing work that had been done by Xiotech (now XIO) with RESTful management, an approach built on open Web standards. By the start of 2011, every vendor was on record with public statements embracing RESTful management of their rigs and promising it in future versions. Not one has made any meaningful progress. The reason seems simple: consumers have largely gone back to their earlier pattern of not demanding a coherent management meme as part of their product selection criteria, so the pressure is off the vendors to prioritize such an effort.

To its credit, the latest Hitachi Data Systems storage management tools are quite good and are finding increasing use in both HDS and heterogeneous storage infrastructures we have examined this year. IBM Tivoli also made significant strides in terms of interface and function. However, both products perpetuate a closed approach that has long limited the efficacy of storage resource management (SRM), while a RESTful approach would open up and simplify cross-platform management.

Open standards-based RESTful management is important for any company seeking to bend the cost curve of storage administration. Better management translates into the ability to manage more kit with fewer administrators, and usually results in better metrics for first-time fixes when problems arise, better prevention of gear-related failure events, and a lower likelihood of a career-limiting infrastructure meltdown.
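To make the idea concrete, here is a minimal sketch -- in Python, with hypothetical hostnames and a hypothetical /api/v1/pools endpoint, since no vendor has actually shipped such an interface -- of what vendor-neutral RESTful management could look like: one small script polling capacity figures from heterogeneous rigs over plain HTTP and JSON, no proprietary agent required.

```python
import json
import urllib.request

# Hypothetical rigs; the hostnames, endpoint path, and JSON field names are
# stand-ins. The point is that plain HTTP and JSON make dissimilar arrays
# scriptable with the same dozen lines of code.
ARRAYS = [
    "https://array-hds.example.com",
    "https://array-xio.example.com",
]

def pool_utilization(base_url):
    """Fetch capacity figures from a (hypothetical) RESTful endpoint."""
    with urllib.request.urlopen(f"{base_url}/api/v1/pools", timeout=10) as resp:
        pools = json.load(resp)
    return [(p["name"], p["used_gb"] / p["capacity_gb"]) for p in pools]

if __name__ == "__main__":
    for url in ARRAYS:
        for name, util in pool_utilization(url):
            print(f"{url} pool {name}: {util:.0%} used")
```

The value isn't in the dozen lines themselves; it's that the same dozen lines would work against any array that exposed an open interface, which is precisely what closed SRM tooling can't offer.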

Bottom line: although it seemed sensible to expect the promise of delivering better service levels at a lower cost of ownership to be a key to success in these times, EMC proved smart in its decision to spend more money building brand loyalty than improving its products and their management. Perhaps the company understood a second trend in the storage market -- what some pundits called the "consumerization" of technology.

2011 Trend #2: Technology "consumerization" a slippery slope

This consumerization trend has been building since the hype around server virtualization reached fever pitch a couple of years ago. In many firms, server hypervisors have been viewed as a means to consolidate hardware/software stacks so that fewer administrators can manage everything. You now find application or database administrators provisioning their own hardware resources (storage included) using the tools provided by the hypervisor vendor -- and often without any real understanding of the infrastructure itself.

The situation is analogous in many ways to the Mercedes-Benz TV ad touting the advantages of a vehicle safety system that alerts drivers when they are veering into an adjacent lane, falling asleep at the wheel, or failing to brake for slowing traffic ahead: in short, providing a safer driving experience for people who are too busy texting, eating, or applying eyeliner to actually drive the car. Similarly, the net impact of server virtualization in many firms has been to shift responsibility for storage administration from those who actually understand storage technology and engineering to those who barely know enough about an app to be dangerous. Storage consumerization -- the idea that any user can be a storage administrator -- is a very scary one indeed.

Yet, at this year's VMworld, VMware engineers were "boldly asserting" that they would deliver a "storage hypervisor" in the coming year -- another microkernel added to a mess of microkernels already held together by spit and baling wire -- that will finish the job of "simplifying" storage infrastructure so that anyone can manage it. Gone would be the days of arrays that perform even the simplest tasks of RAID or LUN presentation: like the Mercedes-Benz ad, the new storage hypervisor will make storage administration easy even for those too busy to actually learn anything about storage.

The statement was a validation of sorts for the multi-decade development efforts of firms such as DataCore Software, which released its long-awaited SANsymphony-V storage hypervisor this year to much acclaim. DataCore and the vendors of a few other "storage virtualization" offerings in the market have been steadily improving their technology for presenting complex storage infrastructure as simple, reliable resource pools whose capacity can be allocated and de-allocated dynamically in response to application needs. Products such as SANsymphony-V demonstrate what is possible with state-of-the-art storage hypervisor technology, including coherent meta-management of capacity, performance, and data protection processes.
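For readers who haven't worked with a storage hypervisor, the resource-pool concept can be sketched in a few lines of Python. This is a conceptual illustration only -- not DataCore's code or API -- but it captures the essential behavior: capacity from dissimilar back-end devices is aggregated into a single pool, and virtual volumes draw from it and hand capacity back when they are deleted.

```python
# Conceptual sketch of a capacity pool; device names and sizes are invented.
class StoragePool:
    def __init__(self, devices):
        # devices: mapping of back-end device name -> usable capacity in GB
        self.capacity_gb = sum(devices.values())
        self.allocated_gb = 0
        self.volumes = {}

    def create_volume(self, name, size_gb):
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise RuntimeError("pool exhausted")
        self.allocated_gb += size_gb
        self.volumes[name] = size_gb

    def delete_volume(self, name):
        # capacity returns to the pool for reallocation
        self.allocated_gb -= self.volumes.pop(name)

pool = StoragePool({"emc_lun0": 2000, "dell_md_raid5": 1500, "jbod_sata": 4000})
pool.create_volume("exchange_db", 500)
pool.delete_volume("exchange_db")
```

The real products layer caching, mirroring, and data protection on top of this abstraction, but the pooling idea itself is no more mysterious than the sketch above.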

However, DataCore also underscores the prerequisites for a truly well-managed storage resource: a solid foundation of underlying plumbing and hardware kit that is well managed and monitored at the gear level. Like server virtualization, storage virtualization doesn't fix the genetics of the underlying infrastructure; it only masks them from view. To its credit, DataCore stresses the need for hardware-layer management even in the presence of its storage hypervisor, while server virtualization peddlers tend to minimize the importance of physical-layer management, perhaps because it distracts from the benefits they promote for their server hypervisors. It should come as no surprise that many firms have discovered the vicissitudes of faulty hardware management the hard way, or that the unanticipated impact of server virtualization on infrastructure saw a wave of server virtualization initiatives abandoned in 2011.

Bottom line: despite the "consumer-friendly" feel of virtualization technologies, they don't eliminate the need for knowledgeable server or storage personnel who know as much about the cabling, the switch port assignments, the controllers, and the RAID sets as they do about the OS and applications. Fortunately, the storage hypervisor vendors aren't following their server hypervisor peers in suggesting that their wares provide a path to a leaner data center workforce, capable of operating mission-critical applications with a minimum of expensive and knowledgeable personnel. The truth is that problems have always existed in infrastructure and virtualization usually exacerbates them.

2011: A Yawn; 2012: The Mayan Apocalypse?

Taken together, the continuing preference for monolithic technologies, the lack of attention to storage management, and the dumbing down of the operational requirements for delivering an efficient storage service have made storage something of a yawn in 2011. There may yet be an opportunity for positive change in 2012, however, given the continuing (and, in most cases, worsening) economic trends in general and the introduction of several innovative ideas.

Starting with what we know: December 2012 won't see the Mayan apocalypse as described by Maud Worcester Makemson in 1957. A correct reading of the inscriptions on the stele at the ancient city of Coba provides a better interpretation of the Mayan Long Count calendar and places the end of the current Great Cycle of the universe ("20 units above the b'ak'tun") at 41 octillion years in the future (4.1 x 10^28). That is quite a few hardware refresh cycles from now, to be sure, and it is just one more reason why businesses can't afford to build storage infrastructure without concern for future costs.
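For the arithmetically curious, one common reading that reproduces the cited figure treats the Coba date as 20^20 b'ak'tuns of 144,000 days each. That is an interpretive assumption, not settled epigraphy, but the math is easy to check:

```python
# One reading of the Coba inscription: 20^20 b'ak'tuns, each of 144,000 days.
# This interpretation is an assumption, but it yields the 4.1 x 10^28 figure.
days = 20**20 * 144_000
years = days / 365.25
print(f"{years:.2e} years")   # ~4.13e+28
```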

Data will continue to grow in the coming year: another certainty. Most of it will be stored in the form of files: again, a certainty. This growth will drive the need in companies large and small for more capacity -- IDC says double, and Gartner says triple, the capacity deployed today in companies running server hypervisors. Given the state of fiscal affairs, the cost of this increased capacity may be viewed as unsustainable, so once again we are going to be optimistic that organizations will be forced to become smarter about storage.

It may be too much of a reach to believe that data management will finally make its way into the discussion. As reported here, up to 70 percent of the capacity of every disk drive in use today is occupied by data that doesn't need to be on disk: 40 percent belongs in an archive, and 30 percent needs to be reviewed and, in most cases, deleted. Putting in place the right archive technology and the cadre of skilled data management professionals to cull the gems from the junk may be too much to ask of 2012, but there are a few things that should help deal with the deluge.
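To put those percentages in concrete terms, consider a hypothetical 100 TB disk estate; the few lines below simply apply the 40/30 split cited above.

```python
# Applying the 40/30 split to a hypothetical 100 TB disk estate.
total_tb = 100
archive_tb = total_tb * 0.40   # belongs in an archive
review_tb = total_tb * 0.30    # review and, in most cases, delete
active_tb = total_tb - archive_tb - review_tb
print(f"archive: {archive_tb:.0f} TB, review/delete: {review_tb:.0f} TB, "
      f"stays on primary disk: {active_tb:.0f} TB")
```

In other words, only about 30 TB of that estate actually needs to sit on spinning rust, which is why the archive play matters.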

One is what I call NAS on steroids. This involves taking a small disk cache/server such as Crossroads Systems' StrongBox appliance and using it to stand up a file system such as the Linear Tape File System (LTFS) in front of a tape library. The configuration provides an extraordinarily capacious repository for petabytes of files that occupies far less floor space than a disk array and consumes very little utility power. Access to the repository can be made as an NFS or CIFS/SMB share -- hence, the NAS reference. Although response time to file requests may approximate the time required to download a seldom-accessed PDF file from the World Wide Web, the repository is essentially a storehouse for rarely re-referenced data, making performance less important.
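Because the repository presents itself as an ordinary NFS or CIFS/SMB share, the culling step needs no special software at all. The sketch below -- with hypothetical source and mount-point paths -- simply walks primary storage and moves anything untouched for roughly six months onto the share, where LTFS and the tape library take over behind the scenes.

```python
import os
import shutil
import time

SOURCE = "/data/projects"           # hypothetical primary storage path
ARCHIVE = "/mnt/ltfs_archive"       # hypothetical mount of the appliance's NFS share
CUTOFF = time.time() - 180 * 86400  # not accessed for roughly 180 days

for root, _dirs, files in os.walk(SOURCE):
    for name in files:
        src = os.path.join(root, name)
        if os.path.getatime(src) < CUTOFF:
            # preserve the directory layout under the archive mount point
            dest = os.path.join(ARCHIVE, os.path.relpath(src, SOURCE))
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.move(src, dest)  # the file lands on tape via LTFS behind the share
```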

For capture storage -- that is, storage rated to handle reads and writes of active data -- a mix of SAS and SATA (like today's infrastructure, but smaller in terms of spindle count) may well fill the bill. Pooling the two resource tiers -- faster, lower-capacity disk and slower, higher-capacity disk -- under a storage hypervisor would drive down cost (elimination of vendor lock-in, delivery of value-added software services across all rigs, etc.) and drive up IOPS by leveraging less-expensive server DRAM as a cache.
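How does server DRAM drive up IOPS? Conceptually -- and this is an illustration, not any vendor's code -- the cache is nothing more exotic than a least-recently-used store sitting in front of the slower SAS and SATA tiers: repeat reads are served from memory, and only misses pay the disk latency.

```python
from collections import OrderedDict

class DramReadCache:
    """Illustrative LRU read cache standing in for server DRAM."""

    def __init__(self, backend_read, max_blocks=65536):
        self.backend_read = backend_read  # callable that reads a block from disk
        self.cache = OrderedDict()
        self.max_blocks = max_blocks

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)      # hit: served at DRAM speed
            return self.cache[block_id]
        data = self.backend_read(block_id)        # miss: pay the SAS/SATA latency
        self.cache[block_id] = data
        if len(self.cache) > self.max_blocks:
            self.cache.popitem(last=False)        # evict the least recently used block
        return data
```

The better the hit rate on hot blocks, the more of the workload runs at memory speed, which is where the IOPS claim comes from.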

Provided that the underlying hardware is well managed, deploying this infrastructure in 2012 could make a meaningful dent in the expense of storage operations while improving their efficiency going forward. A plus: you don't need to sacrifice what you have already deployed.

We will write more about this architecture in the coming year. For now, thanks for reading and, as always, we welcome your comments: jtoigo@toigopartners.com.
