In-Depth
Starting 2011: Hot Air and Hotter Storage
Are the recent announcements around hot storage innovations anything more than hot air? Our storage analyst, Jon Toigo, finds one bright spot.
To be honest, I tend to view the beginning of a new year as an arbitrary thing. What makes January 1 any better than April 1 as the date to change the last digit of the year? All things considered, the middle of winter seems like the worst time to force me out of my warm home office and into the frigid weather just to buy a new desk calendar. As for archiving last year's information and doing other year-end maintenance on infrastructure and data, couldn't that wait until spring, when my energy levels are higher than they are during the short days of this season?
Add to this that every storage vendor on the planet seems to want to use the annual date change as justification to clog my inbox with press releases and ring my phone off the proverbial hook with invitations to important briefings -- typically occasions to brag about the previous year's successes or to spin tales of rosy growth and important product innovations planned for the year ahead. Most of this is just noise.
Case in point: What does it mean when EMC claims to shatter speed records with the latest iteration of its de-duplication rig? The "breakthrough" announcement was preceded, a few weeks ago, by a ridiculous FedEx mailing to analysts, pundits, bloggers, and trade press writers containing a cardboard mock-up of a broken record album that, when assembled, advised us to stand by for word of a "record shattering" (get it?) speed improvement. The actual announcement, timed for the new year, proved to be just ho-hum.
Fact checkers in the blogosphere and industry competitors quickly countered that EMC's de-dupe speed claims were (1) not supported by independent tests and (2) not so much breakthroughs as catch-ups with products already on the market from other vendors. Gideon Senderov, product management and technical marketing, Advanced Storage Products Division, NEC Corporation of America, observed that his company's scalable data de-duplication storage solution, HYDRAstor, had already achieved and demonstrated live throughput speeds much greater than EMC's supposed "record shattering" benchmark. He added (this was his piece of new year news) that a new iteration of HYDRAstor would reach general availability in February that was 50 percent faster than the previous rig and 40 percent less power hungry. With the third-generation HYDRAstor HS8-3000, NEC's top throughput number goes from an already leading 99 TB/hr to 148.5 TB/hr, extending its lead to almost 6x EMC's latest "record" number of 26.3 TB/hr. HYDRAstor also does so with a modular scale-out approach in increments of 2.7 TB/hr, versus EMC's monolithic, dual-controller scale-up approach, which is less granular and cannot scale any further.
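For readers who want to sanity-check those claims, the arithmetic is easy enough to run yourself. The short sketch below uses only the figures quoted above; the increment count at the end is my own inference from the stated 2.7 TB/hr module size, not an NEC specification.

```python
# Back-of-the-envelope check of the throughput figures quoted above.
# All numbers come from the vendor claims cited in this column.

HS8_3000_TB_PER_HR = 148.5     # NEC HYDRAstor HS8-3000 claimed top throughput
PREVIOUS_GEN_TB_PER_HR = 99.0  # prior HYDRAstor generation
EMC_CLAIM_TB_PER_HR = 26.3     # EMC's "record shattering" figure
INCREMENT_TB_PER_HR = 2.7      # HYDRAstor modular scale-out increment

gen_over_gen = HS8_3000_TB_PER_HR / PREVIOUS_GEN_TB_PER_HR - 1
lead_over_emc = HS8_3000_TB_PER_HR / EMC_CLAIM_TB_PER_HR
implied_increments = HS8_3000_TB_PER_HR / INCREMENT_TB_PER_HR

print(f"Generation-over-generation gain: {gen_over_gen:.0%}")        # ~50%
print(f"Lead over EMC's claimed figure: {lead_over_emc:.1f}x")        # ~5.6x, i.e. "almost 6x"
print(f"Implied scale-out increments at peak: {implied_increments:.0f}")  # ~55
```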
The other bit of "breakthrough" news from Hopkinton was the announcement of EMC's decision to "deconstruct" its Clariion and Celerra arrays, essentially by moving value-add functions from those boxes' array controllers into a standalone server head with "unified" management. Targeted at SMBs, the new VNX and VNXe products were also touted as a technological breakthrough: "3 times simpler and 3 times faster" than the previous products.
My response to the news was an involuntary "Well, duh!" Hadn't Xiotech already externalized all controller "value add" functionality except what needed to be done at the disk layer? Even NetApp was slowly moving storage apps off the array. Storage virtualization software purveyors such as DataCore Software have been standing up array controller functionality outside the array itself for the past 12 years.
The difference in the EMC announcement was, of course, that EMC was doing it. Competitors were quick to point out that deconstructing the controller didn't do much to deconstruct the business model: the new head unit will only work if you buy (what I consider overpriced) EMC-branded shelves of commodity disk drives with big-dollar maintenance contracts from Hopkinton. This was, in fact, a step backwards in time to when Clariion was essentially a proprietary PC, with its controller functionality staged on the PC motherboard and its drives mounted inside a locked cabinet.
Also, there was no mention of how "unified management" would be accomplished. I was dubious that EMC would take the high road already embraced by Xiotech: a truly open, standards-based management paradigm built on Web Services and RESTful APIs. These are marvelous innovations to which Xiotech will be announcing many additions next week, and I will cover them here.
Real Breakthrough
The really important news that seemed to pass largely unnoticed was DataCore Software's benchmark test of virtual desktop hosting. Perhaps this press release fell prey to understandable cynicism spawned by the flood of other virtual desktop infrastructure (VDI) benchmarks released by storage vendors in late 2010 -- most of which consisted of self-serving hype about the number of virtual desktops that could be hosted on this or that array, at some cost that was, of course, significantly below the acquisition price of a physical desktop system: say, $50 rather than $400.
Consider the huge potential market for desktop virtualization (more than 400 million desktops are deployed in businesses worldwide), the likelihood that businesses would realize significantly more savings from virtualizing desktops than from virtualizing servers (by many orders of magnitude), and the additional storage capacity -- and performance -- that desktop virtualization would demand. It is understandable that storage vendors would seek a piece of this action, but their benchmarks were rather silly on the whole.
To a one, the storage hardware vendors expressed VDI's merits in terms of the large number of virtual desktops that could be accommodated on their rigs. This led to an interesting -- and largely irrelevant -- costing model: the price of the rig was divided by the number of virtual desktops it could host to arrive at a cost per virtual desktop. By the end of each benchmark report, it was clear to anyone with a brain that the storage vendors were simply using the number of virtual desktops that could be stacked on their platform to justify the cost of that platform -- rather than to show any real value derived from VDI itself.
That approach raised the obvious question: so what if I can host 5,000 desktops at a sub-$100 price per desktop using XYZ gear? Aside from insurance companies and research labs, most companies don't have 5,000 desktops -- and wouldn't virtualize all of them in one fell swoop even if they did!
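To see why the metric flatters the hardware rather than the strategy, consider the arithmetic behind it. The sketch below is a minimal illustration; the $450,000 array price and the desktop counts are hypothetical figures chosen for the example, not numbers from any vendor's benchmark.

```python
# Illustrative sketch of the vendors' cost-per-desktop math.
# The array price and desktop counts are hypothetical, chosen only to
# show how the metric flatters a big array at full rated load and
# ignores what a typical shop actually deploys.

array_price = 450_000.0    # hypothetical acquisition price of the storage rig
rated_desktops = 5_000     # vendor-claimed maximum hosted virtual desktops

cost_at_rated_load = array_price / rated_desktops
print(f"At full rated load: ${cost_at_rated_load:,.0f} per desktop")   # $90

# A more typical initial rollout looks very different:
actual_desktops = 250
cost_at_actual_load = array_price / actual_desktops
print(f"At a 250-desktop rollout: ${cost_at_actual_load:,.0f} per desktop")  # $1,800
```

The sub-$100 figure only materializes if you buy the whole rig and fill it to capacity on day one.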
Ziya Aral, CTO and chairman of DataCore Software, sought to learn different things from his benchmark exercise. Since DataCore sells storage virtualization software rather than hardware, he wasn't trying to learn how many virtual desktops one would need to host to amortize the cost of storage hardware. Instead, he sought to understand the memory and disk storage requirements of a virtual desktop, and to discover the real-world cost of hosting a desktop in a virtualized storage environment built from raw disk drives rather than expensive brand-name arrays.
Aral also set parameters for the test that were more germane to real-world IT planners, pegging the number of desktops being virtualized at 250. He wanted to "find computer science" in the process -- to establish the foundations for a reasonable approach to growing VDI support infrastructure that would deliver predictable outcomes in terms of both cost and performance. That way, a business could roll out its VDI strategy in an incremental and predictable way.
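The spirit of that exercise can be reduced to a simple sizing model: if you know what one virtual desktop actually consumes, the infrastructure required for any rollout size becomes a predictable calculation. The per-desktop figures below are illustrative placeholders of my own, not results from the DataCore paper.

```python
# A minimal sizing sketch in the spirit of the DataCore exercise:
# once per-desktop resource requirements are measured, scaling a VDI
# rollout becomes a predictable, incremental calculation.
# The per-desktop figures here are illustrative placeholders, NOT
# numbers taken from the benchmark paper.

PER_DESKTOP = {
    "memory_gb": 2.0,   # assumed RAM per virtual desktop
    "disk_gb": 20.0,    # assumed storage capacity per virtual desktop
    "iops": 15.0,       # assumed steady-state IOPS per virtual desktop
}

def size_infrastructure(desktop_count: int) -> dict:
    """Scale the measured per-desktop requirements to a given rollout size."""
    return {resource: amount * desktop_count
            for resource, amount in PER_DESKTOP.items()}

# Size the 250-desktop baseline used in the benchmark, then a later
# increment, to show the predictable growth path.
for count in (250, 500):
    needs = size_infrastructure(count)
    print(f"{count} desktops -> {needs['memory_gb']:.0f} GB RAM, "
          f"{needs['disk_gb'] / 1024:.1f} TB disk, {needs['iops']:.0f} IOPS")
```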
The resulting paper -- Benchmarking a Scalable and High-Availability Architecture for Virtual Desktops -- is worth a read.
Storage is currently a significant impediment to server virtualization strategies, with survey after survey suggesting that server virtualization initiatives stall out when they are only about 20 percent complete. Perhaps with desktop virtualization we can learn from the problems encountered in the server hypervisor world and make smarter storage infrastructure decisions before we start looking for desktops to virtualize. DataCore Software has provided some important metrics and measurements for making smarter choices up front.
That's the first thing in this new year that I have found worthwhile. Most of the other announcements around hot storage innovations are hot air.
As always, I welcome your comments and feedback: [email protected]