Storage 2010: What’s Past is Prologue
The 2010 history books will show that many of the developments in storage have been driven by economics rather than pure innovation. Our storage analyst, Jon Toigo, takes a look back at the significant storage news of 2010.
Usually near the end of the editorial calendar year, I get an e-mail requesting me to tie up the threads of the columns written over the previous 11 months in order to provide a historical narrative about the past, and to do a bit of prophesying about the future. Often, this is more difficult than it sounds -- but not this time.
The 2010 history books will show that many of the developments in storage have been driven by economics rather than innovation. Brand-name vendors have been buying up the small fry -- often at obscene overvaluations -- in part to avoid the expense of internal R&D to reverse-engineer features that seem to resonate with hamstrung consumers. They also want to extend the reach of their product lines into small and midsize businesses, where they have not previously plied their wares with success.
We saw Adaptec go to PMC-Sierra, enabling that company to compete more effectively with LSI. Emulex bought ServerEngines to secure a source of supply of critical 10GbE ASIC technology. IBM bought Storwize to gain compression technology. Dell bought 3PAR and Ocarina to become a player in the crowded field of one-stop-shop thin provisioning and de-duplication arrays -- apparently taking no notice of EMC's failure to realize any significant return on investment from its massive overspend on Data Domain the previous year.
EMC did buy Greenplum in order to deliver a custom appliance (including a storage platform) for data warehousing. IBM followed suit with the acquisition of Netezza, a leader in that space.
Big Blue's other acquisitions played into its goal of making a success of SONAS, a storage solution intended to facilitate the hosting of data from distributed computing workloads that are increasingly being consolidated onto mainframes in those shops that have them. EMC acquired Isilon Systems to deliver a similar solution, though it contextualized the deal as a "cloud" play -- a sideways confirmation of this columnist's view that clouds are mainframes.
In addition to the persistent noise about server virtualization and cloud computing, the din around all things Flash approached a deafening pitch. Consumers were told not to worry about wear issues, read disturb, performance degradation, and other problems endemic to Flash-based solid-state disks (SSDs): just start deploying the technology as a really fast (and really expensive) alternative to magnetic disk. Flash was the latest bright, shiny thing in storage, and a lot of product was sold to early adopters, both as individual components to retrofit existing gear and as new features of branded arrays.
The real efficacy of Flash SSD, many are discovering, is not as a magnetic disk replacement but as additional cache that can be used judiciously for discrete functions: speeding up the scan of file trees to locate specific data within a scalable NAS platform (as in the case of Isilon), for example, or staging "hot" data (data that is frequently accessed for a period of time) off magnetic disks until it cools off. Compellent and Xiotech are among an increasing number of vendors pursuing this path.
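For readers who want a concrete picture of the "hot data" approach, the sketch below shows a minimal promotion/demotion policy of the general kind such hybrid designs rely on; the thresholds, class name, and tier bookkeeping are purely illustrative assumptions, not any vendor's actual implementation.

# Minimal sketch of a hot/cold tiering policy: promote frequently accessed
# blocks to a small Flash tier, demote them when access frequency cools off.
# All names and thresholds here are illustrative, not any vendor's design.

import time
from collections import defaultdict

PROMOTE_THRESHOLD = 10      # accesses within the window to count as "hot"
WINDOW_SECONDS = 300        # sliding window used to measure access frequency

class TieringPolicy:
    def __init__(self):
        self.access_log = defaultdict(list)   # block_id -> list of access times
        self.flash_resident = set()           # block_ids currently kept on Flash

    def record_access(self, block_id):
        now = time.time()
        history = self.access_log[block_id]
        history.append(now)
        # Keep only accesses that fall inside the sliding window.
        self.access_log[block_id] = [t for t in history if now - t <= WINDOW_SECONDS]

    def rebalance(self):
        """Decide which blocks belong on Flash versus magnetic disk."""
        now = time.time()
        for block_id, history in self.access_log.items():
            recent = [t for t in history if now - t <= WINDOW_SECONDS]
            if len(recent) >= PROMOTE_THRESHOLD and block_id not in self.flash_resident:
                self.flash_resident.add(block_id)      # promote: copy block to Flash
            elif len(recent) < PROMOTE_THRESHOLD and block_id in self.flash_resident:
                self.flash_resident.discard(block_id)  # demote: block has cooled off

Real arrays make the decision per block or per sub-LUN extent and consult far richer statistics, but the basic promote-when-hot, demote-when-cool loop is the same idea.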
From a storage media standpoint, the important news came at the beginning of the year. Toshiba demonstrated a manufacturing-ready technology for producing bit-patterned media that, when wedded to existing Type 1 perpendicular magnetic recording (PMR) and standard giant magnetoresistive (GMR) read/write heads, could yield a 2.5-inch SATA drive with 40 TB of capacity within 24 months. That was good news, given that the rate of capacity improvement in standard PMR drives was likely to max out within the next year or two.
Of course, rumors ensued almost immediately of a Hitachi-Toshiba-Fujitsu alliance intended to dominate the storage market. Although disk technology developers have traditionally licensed their breakthrough technologies across the industry, there was speculation that Toshiba would try to keep its bit-patterned media technology within a small circle of Japanese manufacturers -- denying access to market leaders such as Seagate and Western Digital. These voices died down until this past month, when several stock analysts began suggesting that Seagate might become a takeover target for Toshiba, which had just finished its acquisition of Fujitsu's disk business. Whether a leveraged buyout of Seagate happens remains an open question, but all the talk is likely to delay the introduction of very-high-capacity PMR drives into the market in 2011.
That buys time for Fujifilm and IBM to bring high-capacity tape to market, based on Fujifilm's new barium ferrite (BaFe) coating. Like bit-patterned media for disk, BaFe dramatically increases the capacity of tape. It provides Type 2 perpendicular recording capabilities, automatically aligning bits perpendicularly on the tape substrate so that more bits can be squeezed together without losing data integrity. In January, the two companies were envisioning up to 30 TB of capacity on a standard LTO cartridge, leapfrogging disk drive capacities and pushing the storage density of tape libraries several orders of magnitude beyond that of disk arrays in the same footprint.
Of course, the market success of media improvements in tape is inversely proportional to the marketing hoopla of disk array vendors. It should be a no-brainer that the forthcoming improvements in tape capacity, combined with a more than 700 percent improvement in tape media and subsystem reliability over the past decade, will drive up the adoption of tape -- for backup as well as for archive -- in the coming year. Yet these improvements mean nothing if companies continue to abandon tape in favor of de-duplicating Virtual Tape Subsystems (VTS, actually disk) with WAN-based replication: the offering du jour of most disk subsystem vendors in 2010.
A concerted campaign of disinformation is being waged at present by at least one major disk vendor to nail the tape coffin shut. I have seen a few fact-based presentations from consumers on the vicissitudes of the de-duplicating VTS, describing its failure to deliver anything close to the promised data-reduction ratios, yet videos of those presentations have been "disappeared" from the Internet with the same alacrity as victorious candidates disappearing their political opponents after a Third-World election. Without the promised 70:1 reduction delivered by de-duplication engines, which are usually joined at the hip with a proprietary array product from a storage vendor, consumers are paying dearly for disk drives when they purchase a de-duplicating VTS. More consumers are realizing that the "Aladdin argument" used to sell them these products ("Phenomenal cosmic power; itty bitty living space!") is not borne out in reality.
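A bit of back-of-the-envelope arithmetic shows why the shortfall stings. The sketch below computes effective cost per logical terabyte at the promised ratio versus more modest outcomes; the purchase price, raw capacity, and ratios are hypothetical placeholders, not figures from any vendor.

# Back-of-the-envelope math: the effective cost per usable TB of a de-duplicating
# VTS depends heavily on the reduction ratio actually achieved. The price and raw
# capacity below are hypothetical placeholders, not real vendor figures.

def effective_cost_per_tb(system_price, raw_capacity_tb, dedupe_ratio):
    """Price divided by the logical capacity implied by the reduction ratio."""
    logical_capacity_tb = raw_capacity_tb * dedupe_ratio
    return system_price / logical_capacity_tb

PRICE = 250_000.0        # hypothetical purchase price in dollars
RAW_TB = 50.0            # hypothetical raw disk capacity in TB

for ratio in (70, 20, 5):  # promised ratio versus more modest real-world outcomes
    print(f"{ratio}:1 reduction -> ${effective_cost_per_tb(PRICE, RAW_TB, ratio):,.0f} per logical TB")

At 70:1 the cost per logical terabyte looks trivial; at 5:1 it is fourteen times higher for the same hardware, and that gap is what consumers end up paying for.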
The same sort of censorship applies to real-world critiques of features such as on-array thin provisioning and on-array storage tiering, which have been used to push overpriced gear into consumer hands over the past year. The storage array industry didn't require a ruling like Citizens United to propagandize its wares: it leverages codicils in warranty and maintenance agreements to bar consumers from saying anything disparaging about products once they have bought them.
There are green shoots of hope, however. This past year, Xiotech showed the world what can be done when you remove "value-add" software from proprietary controllers and open the platform to third-party, independent software vendors to provide functions such as thin provisioning or de-duplication off box. The company also demonstrated the worth of open management via REST and Web Services, managing more than a petabyte of storage from just an iPad or iPhone. To its credit, Xiotech decided to share the technology -- for free -- with the world at cortexdeveloper.com.
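To give a rough sense of what "open management via REST and Web Services" looks like in practice, here is a minimal sketch that polls a storage management endpoint over HTTP and prints capacity figures from the JSON it returns; the host name, URL path, and field names are hypothetical examples for illustration, not the actual Cortex API.

# Rough illustration of REST-style storage management: poll an HTTP endpoint
# and read capacity figures back as JSON. The host, path, and JSON fields are
# hypothetical examples, not the actual Cortex (or any other vendor's) API.

import json
import urllib.request

MANAGEMENT_URL = "http://storage.example.com/api/volumes"  # hypothetical endpoint

def list_volume_capacity(url=MANAGEMENT_URL):
    with urllib.request.urlopen(url) as response:
        volumes = json.loads(response.read().decode("utf-8"))
    for vol in volumes:
        used = vol.get("used_gb", 0)
        total = vol.get("size_gb", 0)
        print(f"{vol.get('name', 'unknown')}: {used} GB used of {total} GB")

if __name__ == "__main__":
    list_volume_capacity()

The point is less the specific calls than the openness: anything that can issue an HTTP request, an iPad or iPhone included, can manage the storage.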
Additionally, Spectra Logic and several other vendors formed the Active Archive Alliance and began touting the need to archive data in order to address the root cause of storage capacity demand and cost. Many firms have begun to realize that data management must be combined with storage management to bend the storage cost curve. Managing data better translates not only to reduced capacity requirements, but also improved regulatory compliance, better disaster recovery capabilities, and greener IT operations.
We have seen new archive software companies such as Dataglobal join the cadre of existing players, including FileTek, QStar Technologies, BridgeHead Software, and a few others, in spelling out the value of their wares in a more business-savvy context. "Do more with less" ought to translate into "deliver better service at lower cost."
A recession, it is said, is a terrible thing to waste. The Great Recession provided IT a unique opportunity to realign itself with business needs, but much of that opportunity has been squandered by companies that continue to buy what the vendor wants to sell rather than what their business processes really require.
The only sure thing for 2011 is that, for as long as ESJ.com continues to publish, we will continue to call out the marketecture while promoting smarter storage ideas. Thanks for reading in 2010. Your comments are welcome at jtoigo@toigopartners.com.