
2013 Storage Trends

Our storage analyst, Jon Toigo, looks back at the major events and trends in storage over the past year and asks what they mean for 2013.

Recently, in a newspaper (yes, they still exist), I saw a single-panel cartoon that provided the idea for this column. The cartoon depicted a sheet of paper stapled to a utility pole, its lower third shredded vertically into a row of detachable strips, each emblazoned with a phone number. The document was instantly recognizable as a pre-Craigslist solicitation of the sort you might still find in grocery stores, laundromats, college student unions, and the like.

In this case, one of the strips had been detached, leaving a gap. The main body of the posted document read, "REWARD OFFERED FOR MISSING STRIP CONTAINING PHONE NUMBER!"

What did this bit of humor have to do with a traditional end-of-year "what just happened and what's ahead" column? Perhaps it was the absurdity embodied in the cartoon's punch line. I, for one, am tired of reading prognostications that treat the happenings of the past 12 months as having any predictive value whatsoever for what might happen over the next 12. In truth, 2012 seemed anomalous, even absurd, in many ways.

As 2013 rolls out, I am still waiting to hear what really happened in terms of storage capacity growth. Readers may recall that 2011 ended with IDC telling us that external storage capacity deployed worldwide totaled about 21 exabytes (EB) and was slated to grow at about 30 to 40 percent per year for the foreseeable future. That would put us at about 40 EB entering 2014.
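
As a back-of-envelope check, a quick compound-growth calculation (a sketch in Python, using only the base capacity and growth rates quoted above) shows where a figure like 40 EB comes from:

```python
# Back-of-envelope projection of worldwide external storage capacity,
# using the figures quoted above: ~21 EB at the end of 2011, growing
# at 30 to 40 percent per year.

def project(base_eb: float, annual_growth: float, years: int) -> float:
    """Compound a base capacity (in exabytes) over a number of years."""
    return base_eb * (1 + annual_growth) ** years

BASE_EB = 21.0  # IDC's end-of-2011 estimate
for growth in (0.30, 0.40):
    capacity = project(BASE_EB, growth, years=2)  # end of 2013, entering 2014
    print(f"At {growth:.0%} per year: {capacity:.1f} EB")

# At 30% per year: 35.5 EB
# At 40% per year: 41.2 EB -- the upper bound lands near the ~40 EB figure.
```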

By late spring, IDC had revised its estimate to account for the impact of server virtualization on storage demand. The firm said that storage would henceforth grow at a much more alarming rate of 300 percent per year -- a significant departure from the original estimate.

Meanwhile, Gartner, not to be out-predicted by its rival, offered an estimate of storage capacity growth in virtual server environments that more than doubled IDC's number: 650 percent by 2014.

Both firms ignored their own findings from January 2012, which showed that server virtualization deployments had stalled at only about 17 to 20 percent of servers worldwide. It was therefore unclear whether the 300 percent and 650 percent growth estimates covered virtualized and non-virtualized environments collectively, or applied only to those shops that had drunk the VMware, Microsoft, Citrix, or Oracle brand of virtualization Kool-Aid and were proceeding with server virtualization initiatives.

Truthfully, I was less interested in the accuracy of the global storage growth predictions than in the fundamental assumption behind them about the impact of server virtualization on storage capacity demand. Apparently, if you intend to vMotion or "cut and paste" virtual machines from one physical host to another, you need another copy of the data on the disks serving each alternative hosting environment. Server virtualization, in other words, has a multiplier effect on storage capacity requirements.
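
To make the multiplier concrete, here is a minimal sketch; the VM size and host count are hypothetical numbers chosen purely for illustration, not anyone's sizing guidance:

```python
# Minimal sketch of the multiplier effect described above: if each
# physical host that might receive a migrated VM needs its own copy of
# that VM's data, raw capacity demand scales with the host count.
# The numbers below are hypothetical, chosen only for illustration.

vm_data_tb = 2.0       # data behind one virtual machine, in TB
candidate_hosts = 4    # hosts the VM could be moved to

raw_demand_tb = vm_data_tb * candidate_hosts  # one copy per hosting environment
print(f"Logical data: {vm_data_tb} TB -> raw capacity demand: {raw_demand_tb} TB")
# Logical data: 2.0 TB -> raw capacity demand: 8.0 TB
```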

Despite this self-evident truth, no one seems to know what the new storage capacity demand actually is or how much additional capacity companies will need to field. No one seems to be measuring real rates of data growth in order to build a fact-based provisioning model. Instead, we continue the practice of buying more capacity when problems arise, when we refresh our infrastructure, or when storage kit is cheap.

That last reason leads me to another absurd element of 2012: the presumed disk drive shortage. In 2011, prolonged monsoon flooding struck Thailand, where many disk drive components are manufactured. Almost instantly, word came down from the few remaining disk drive manufacturers (circa September/October 2011) that manufacturing would likely be delayed and that shortfalls in projected disk shipments should be expected.

However, by late October and early November, the same vendors were reassuring the market that all problems had been resolved and that manufacturing processes were being tweaked to produce even more units than before. Yet the rest of the world didn't seem to get the news.

If you turned from the financial page of your newspaper to the advertisements for your favorite computer store, you would see that the prices of disk arrays and computers had jumped as their internal disk drives doubled or trebled in price. The idea of an impending drive shortage had clearly been seized upon by every participant in the IT finished-goods supply chain as an excuse to mark up products.

I'm not the only person who noticed. Disk prices got so ridiculous by January 2012 that a reputable financial analyst, during an earnings call, asked a drive manufacturer's spokesman exactly how long the company would be able to sell its disk drives at hugely inflated rates before customers rebelled. In fact, the analyst used the words "price gouging." The spokesperson declined to answer the question.

At the end of 2011, the disk makers had delivered at least 20 million more disk units than the 51 million that industry watchers had predicted for the year before the floods. Yet the specter of drive supply shortages, and the natural human greed response, kept prices inflated through at least mid-summer 2012. Earnings data from disk vendors are thus largely irrelevant to discussions of the future.

That episode also calls into question most of the forward-looking pronouncements made by analysts and their vendor clients regarding the future of Flash or solid state disk (SSD) storage on the one hand, and storage clouds on the other. For a time, the high cost of disk drives was portrayed as making disk alternatives such as Flash memory-based SSDs more affordable -- and therefore more reasonable.

It wasn't that any real fixes had been delivered for the inherent problems of such devices: memory wear (after 250K writes to a cell, the cell and its group are marked bad), read disturb (reading a cell disturbs the adjacent cells), and non-linear performance (once written, cells must be erased before being rewritten, resulting in a 50 percent drop in IOPS). The industry simply brute-forced these issues: shipping far more capacity on a device than advertised, then using the spare capacity to substitute for failed, damaged, or already-written cells -- addressing, but not solving, the problems.
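
A toy model makes the approach plain. The sketch below is my own illustration; the pool sizes are invented, and the 250K wear limit is simply the figure quoted above. It shows a controller remapping logical blocks onto a larger-than-advertised physical pool as cells wear out:

```python
# Toy model of the "brute force" fix described above: the device
# advertises less capacity than it physically carries, and the
# controller retires worn physical blocks by remapping logical blocks
# onto spares. Pool sizes are invented; the wear limit is the figure
# quoted above.

WEAR_LIMIT = 250_000  # writes before a cell group is marked bad

class OverProvisionedSSD:
    def __init__(self, advertised_blocks: int, physical_blocks: int):
        assert physical_blocks > advertised_blocks, "needs spare capacity"
        self.wear = [0] * physical_blocks  # writes per physical block
        self.block_map = {lb: lb for lb in range(advertised_blocks)}
        self.spares = list(range(advertised_blocks, physical_blocks))

    def write(self, logical_block: int) -> None:
        pb = self.block_map[logical_block]
        self.wear[pb] += 1
        if self.wear[pb] >= WEAR_LIMIT:  # cell group goes bad...
            if not self.spares:
                raise IOError("spare pool exhausted: device worn out")
            self.block_map[logical_block] = self.spares.pop()  # ...remap to a spare

ssd = OverProvisionedSSD(advertised_blocks=100, physical_blocks=128)
ssd.write(0)  # the remapping is invisible to the host
```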

Although the extra memory on each device kept prices high, the inflation in hard disk prices (premised as it was on an imaginary supply shortage) kept the Flash SSD dream alive. It gave the appearance of a more level competition between the two technologies.

In somewhat the same way, the high cost of disk and the perceived risk of supply chain interruptions benefited the "cloud storage" peddlers as well as the Flash SSD crowd. Amazon drove its S3 pay-per-GB storage price down to 12 cents per gigabyte per month in 2012, which, by comparison to 20 cents per GB for a SATA hard disk (or 120 times that for a SATA or SAS disk in an array), made it seem a bargain. The more that inflated disk prices were baked into the equation, the better outsourced storage sounded.
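
It is worth noting that the comparison mixes a recurring fee with a one-time purchase: S3's price is per gigabyte per month, while the 20 cents buys the SATA gigabyte outright. A simple break-even sketch (ignoring power, administration, and replacement costs on the disk side, and transfer fees on the cloud side) shows how quickly the "bargain" erodes:

```python
# The figures quoted above, with the pricing models made explicit:
# S3's 12 cents is per gigabyte per month; 20 cents buys the SATA
# gigabyte once. Power, administration, drive replacement, and cloud
# transfer fees are all ignored in this sketch.

cloud_per_gb_month = 0.12  # USD, S3 circa 2012 (recurring)
disk_per_gb_once = 0.20    # USD, bare SATA drive at inflated 2012 prices

months = 1
while months * cloud_per_gb_month < disk_per_gb_once:
    months += 1
print(f"Cloud spend passes the disk's purchase price in month {months}.")
# Cloud spend passes the disk's purchase price in month 2.
```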

Add in some distressing information that entered the marketplace of ideas in early-to-mid 2012 regarding the vulnerability of disk drives and common RAID schemes -- analyses of bit error rates (one in 90 disks exhibits unrecoverable errors) and of drive failure rates (1,500 times more frequent than previously thought) -- and storage clouds looked better and better.
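
To see why such findings rattled confidence in RAID, consider a standard back-of-envelope calculation. This is my own illustration rather than the cited studies' method, and the one-error-per-10^14-bits spec is a commonly published consumer SATA figure of the era:

```python
# Illustrative (not from the cited studies): why unrecoverable read
# errors (UREs) make large RAID rebuilds risky. A common consumer SATA
# spec of the era was one URE per 1e14 bits read; rebuilding a RAID 5
# group means reading every surviving disk end to end.

ure_per_bit = 1e-14    # assumed consumer SATA spec
disk_tb = 2            # capacity per disk, in TB
surviving_disks = 5    # disks read in full during a rebuild

bits_read = surviving_disks * disk_tb * 1e12 * 8
p_clean_rebuild = (1 - ure_per_bit) ** bits_read
print(f"Chance of a rebuild with no URE: {p_clean_rebuild:.1%}")
# Chance of a rebuild with no URE: 44.9% -- roughly a coin flip.
```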

This "disk and RAID are vulnerable" narrative got a full head of steam by the early summer of 2012, becoming a fixture at conferences and trade shows in the latter half of the year. Storage cloud peddlers were quick to jump into the gap. Popular (non-technical) business publications ran articles sporting the headline that storage clouds were poised to jump 346 percent in revenues owing to all of the adoptions.

That trend actually reflected the opinions of all of 19 survey respondents, and the revenue expectation relied on a flimsy calculation that showed total revenues of $100 million climbing to a little more than $1 billion by 2015. Considering that the disk drive market alone was worth close to $31 billion last year, this cloud storage growth curve didn't much impress me. Moreover, the referenced study showed that over 80 percent of cloud storage was actually capacity deployed by other cloud service providers for their own consumption, not for sale as a service.

More marketecture than architecture, most "public" storage clouds delivered little that could not be provided more cost-effectively, and in a better-managed way, using virtualized, RESTfully managed storage infrastructure overseen by a few bright administrators in one's own facility. If you listened closely, this truth was at the heart of the marketing around "private" storage clouds: vendors described them as managed storage infrastructure, providing both coherent storage resource management and the delivery of storage assets and data management functions as services to application workloads. By mid-year, most "private" storage cloud vendors were quietly scrubbing their literature and Web sites of any mention of the term "cloud." One might hope that trend continues in 2013.

In addition to Flash SSD and storage clouds, one other technology got a boost from artificially inflated disk prices and from the disk vulnerability reports out of IEEE, Carnegie Mellon, Google, and elsewhere. A comparatively minor innovation in tape technology -- the creation of a new (but not unprecedented) file system for tape, the Linear Tape File System (LTFS) -- made overnight rock stars of a few folks at IBM who had labored unnoticed for years in what might be thought of as the "God's Waiting Room" of the storage world: magnetic tape.

Although there are still many analysts whose Tarot cards are telling them that tape is dead, life in the fact-based world tells a different tale. Using LTFS and specialized head-end technologies (re-tooled content management systems, archiving systems, extensible file systems, and the like), tape is proving to be a sufficiently robust medium for hosting less-frequently accessed files via a familiar network-attached storage (NAS) model.

Bolstering its appeal, the tape NAS solution offers the highest-density storage possible with current technology, occupying the least raised-floor space and generating the smallest power and heat dissipation profiles. For retrieving and streaming long-block files, nothing is faster, and from a bit error perspective, tape is significantly more robust than disk.
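
The bit error claim is easy to put in rough numbers. The sketch below uses commonly published unrecoverable bit error rate specs of the era, cited from memory, so treat the exact exponents as assumptions:

```python
# Rough comparison of expected unrecoverable errors per petabyte read,
# using commonly published bit error rate specs of the era (cited from
# memory; treat the exponents as assumptions).

specs = {
    "consumer SATA disk": 1e-14,  # one unrecoverable error per 1e14 bits
    "enterprise disk": 1e-15,
    "LTO-5 tape": 1e-17,
}

BITS_PER_PB = 1e15 * 8
for medium, ber in specs.items():
    print(f"{medium}: ~{BITS_PER_PB * ber:g} unrecoverable errors per PB read")

# consumer SATA disk: ~80 unrecoverable errors per PB read
# enterprise disk: ~8 unrecoverable errors per PB read
# LTO-5 tape: ~0.08 unrecoverable errors per PB read
```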

One more thing: tape media are significantly more durable than either disk or Flash SSD. I take delight in noting this, given the truly enjoyable spectacle we witnessed in 2012 as leading analyst houses reversed their previous positions and discovered a runway for tape technology that they had never seen before. Gartner, for one, has been twisting itself in knots as it endeavors to disavow its 1990 "finding" that one in 10 tape backups fails on restore. It was as absurd then as it is now.

Does any of this mean anything for 2013?

Probably not.

What we do know is that 2013 is likely to see a sorting out -- whether by economic reckoning or by mergers-and-acquisitions activity -- of the balance of the storage market. The past year saw an overall contraction in the number of storage players, with some companies acquired and others shuttering their operations. We are also beginning to see new start-ups emerge, and some are finding funding.

Big ideas include: 10GbE-based storage to replace both Fibre Channel and SAS fabric protocols; the implementation of erasure coding and similar techniques as a means of writing data to storage grids in a more resilient way; the delivery of storage value-add services as a function of a holistic engine (a virtualization layer, a file system, etc.) rather than embedding these functions on an array controller; and continued efforts to blend storage technologies -- Flash and hard disk, tape and disk -- to optimize the capabilities of each.
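
Of those big ideas, erasure coding is the easiest to demystify in code. The sketch below uses single XOR parity, the simplest possible instance; real storage grids use Reed-Solomon or similar codes that tolerate multiple losses, where this toy tolerates exactly one:

```python
# Single-parity erasure coding in miniature: spread data plus
# redundancy across "nodes" so that losing one node does not lose
# the data. Real grids use codes that survive multiple losses.

from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Spread data across three "nodes" and store parity on a fourth.
data_blocks = [b"stor", b"age_", b"grid"]
parity = xor_blocks(data_blocks)

# Simulate losing node 1, then rebuild its block from survivors + parity.
survivors = [data_blocks[0], data_blocks[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data_blocks[1]
print("Recovered lost block:", rebuilt)  # b'age_'
```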

In 2013, the storage world may become a more interesting place, with speed records considered alongside power consumption metrics and with disaster recovery baked into infrastructure rather than bolted on. On the other hand, it may simply become more absurd, with the perpetuation of an "omnia et orbis" (disk for everything) meme that ultimately bankrupts organizations with its ever-growing CAPEX/OPEX demands.

That's why I feel that the analysts, like whoever posted that sheet of paper on the cartoon utility pole, should be offering a reward to anyone who can return their missing phone number. Your views are welcome: jtoigo@toigopartners.com
