Cloud Storage -- Here Today, Gone Tomorrow
Storage clouds are rapidly losing what little buzz they once enjoyed in the industry. But other cloud-based services are beginning to appear.
Not all clouds have a silver lining. Storage clouds, if you are paying attention to the blogosphere and industry trade press, seem to be on their way into the footnotes of tech history -- again.
I say "again" because we have seen this before. Contemporary cloud storage offerings are strikingly similar to the storage service providers (aka SSPs) before them -- a fringe subset of the application service provider (aka ASP) craze at the end of the dot-com era. Then, as now, the model for acquiring storage "elbow room" from external resource providers seems to be based on an unsustainable business model.
Pushback is beginning to appear everywhere. Although vendors may have wanted to characterize their offerings as a "Controlled Locus for Offloading Unmanaged Data" (CLOUD, get it?), other, less flattering, definitions of CLOUD have begun to take hold. To many of the storage administrators I talk to, commercial storage CLOUDs mean the "Career-Limiting Outsourcing of Underutilized Data storage assets," and represent a strategy advanced by "Con artists Leveraging Obfuscation to Undermine Datacenters."
The critique of cloud-averse storage professionals seems to be grounded in three things. First, cloud storage providers have not succeeded in convincing them that they can deliver better service levels than locally administered storage infrastructure -- and they have chalked up more than a few epic failures over the past 24 months. There have been widely reported cases of cloud infrastructure breaking down within service providers, sometimes with significant loss of customer data. In other cases, accessibility has been interrupted for hours or even days. This has occurred even in the most high-brow, brand-name shops, such as Amazon S3. The consensus among the cloud-averse seems to be that they could have managed such outages just as effectively, or more effectively, had they occurred in a homegrown infrastructure.
A second critique stems from security concerns. Although internal IT is arguably just as prone to hacking, information theft, and denial-of-service attacks as the infrastructure of external storage service providers, the fact remains that provider shops have an Internet-facing front door that practically dares ne'er-do-wells to test their mettle. If you were a hacker wannabe seeking to make your bones, would you go after Joe's Plumbing Supply or Amazon S3? Events such as the widely publicized Chinese hacking of fighter aircraft specifications stored in a DoD storage cloud do nothing to allay these concerns.
Finally, there are simple, practical matters of technology. In networking terms, transferring data to and from the cloud runs afoul of latency and jitter. Moving 10 terabytes of data across an OC-192 connection (a facility that none but the most well-heeled organizations can afford) takes a minimum of 2.25 hours; moving the same amount of data across a T-1 requires more than a year.
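The arithmetic behind those figures is easy to sanity-check. The sketch below uses nominal line rates (OC-192 at roughly 9.953 Gbps, T-1 at 1.544 Mbps) and ignores protocol overhead, so real-world times would only be worse:

```python
# Back-of-the-envelope transfer times for moving bulk data over a WAN link.
# Rates are nominal line rates; actual throughput on a shared pipe is lower.

def transfer_time_seconds(data_bytes: float, link_bits_per_sec: float) -> float:
    """Time to push data_bytes over a link, ignoring protocol overhead."""
    return (data_bytes * 8) / link_bits_per_sec

TEN_TB = 10 * 10**12      # 10 terabytes (decimal)
OC192 = 9.953e9           # OC-192 line rate, ~9.953 Gbps
T1 = 1.544e6              # T-1 line rate, 1.544 Mbps

hours = transfer_time_seconds(TEN_TB, OC192) / 3600
days = transfer_time_seconds(TEN_TB, T1) / 86400
print(f"OC-192: {hours:.2f} hours")   # roughly 2.2 hours
print(f"T-1:    {days:.0f} days")     # roughly 600 days -- well over a year
```

Even the best case here, a dedicated OC-192, is measured in hours, which frames everything that follows about latency and jitter.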
Those are nominal transfer rates that may be grossly inaccurate predictors of the actual transfer times for data traversing a shared WAN pipe. Plus, they assume a route "as the crow flies," despite the fact that data rarely moves through a WAN in a straight line. When you factor in Open Shortest Path First-type traffic routing, in which the path for data through the WAN is determined not by shortest distance to target but by the least number of router hops, your data in NY may be bouncing off the Azores en route to Chicago.
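The NY-to-Chicago-via-the-Azores point can be illustrated with a toy example. The topology below is entirely made up, and real OSPF weighs configured link costs rather than raw hop counts, but a plain breadth-first search over hops shows how a geographically absurd path can still win:

```python
from collections import deque

# Hypothetical topology: a 2-hop transatlantic detour beats a 4-hop
# overland route when only router hops are counted.
links = {
    "NY": ["Azores", "Philadelphia"],
    "Azores": ["NY", "Chicago"],
    "Philadelphia": ["NY", "Pittsburgh"],
    "Pittsburgh": ["Philadelphia", "Cleveland"],
    "Cleveland": ["Pittsburgh", "Chicago"],
    "Chicago": ["Azores", "Cleveland"],
}

def fewest_hops(src: str, dst: str) -> list[str]:
    """Breadth-first search: returns the path with the fewest router hops."""
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

print(fewest_hops("NY", "Chicago"))  # ['NY', 'Azores', 'Chicago']
```

Hop count says the Azores route is "shorter," even though every extra mile of fiber adds propagation delay your transfer-rate estimate never saw.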
Moreover, a shared WAN typically accrues latency as a function of router queuing, on-ramp processing, buffering, and other factors that throw transfer speed ratings out the window. Ask anyone in Sacramento who is trying to send data to San Jose -- a route that may entail nine different carriers: it practically makes the case for IPoAC (Internet Protocol over Avian Carrier -- delivery of data via carrier pigeon).
WAN jitter and latency are gating factors in the efficacy -- or lack thereof -- of cloud storage, unless the data you are parking there is stuff you never re-reference. That is, in fact, the pitch made by many cloud storage salespeople in closed-door meetings. They argue that you can use the external storage to host data that is clogging up your local storage -- data that hasn't been accessed or modified in the last 90 days, or the last five years, but that you are afraid to delete. (It is above the pay grade of most storage admins to try to classify any data, so everything is retained forever.) At 17 cents a gig (plus network bandwidth and retrieval fees), the storage cloud can be a cost-effective toxic waste dump for these bits.
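A quick bill estimate shows why the pitch works for dormant data. The 17 cents per GB comes from the text; the bandwidth and retrieval fees below are illustrative assumptions, not any provider's actual price list:

```python
# Hypothetical monthly bill for parking dormant data in a storage cloud.
# Only the $0.17/GB storage rate comes from the column; the transfer and
# retrieval fees are assumed figures for illustration.

def monthly_cost(gb_stored: float, gb_retrieved: float,
                 storage_per_gb: float = 0.17,
                 transfer_per_gb: float = 0.10,
                 retrieval_per_gb: float = 0.03) -> float:
    return (gb_stored * storage_per_gb
            + gb_retrieved * (transfer_per_gb + retrieval_per_gb))

# 10 TB of "toxic waste" data, almost never re-referenced:
print(f"${monthly_cost(10_000, 50):,.2f}")  # storage dominates the bill
```

Because retrieval is rare by definition for this class of data, the per-GB storage rate is nearly the whole cost -- which is also why the comparison to tape and cheap disk below matters.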
This isn't as cheap as tape, which storage professionals have been using for years to back up and offload sleeping data, and it certainly won't compete, cost-wise, with next-generation disk that will shortly feature 40 TB capacities in a 2.5-inch format by leveraging bit-patterned media processes demonstrated by Toshiba a year and a half ago. Both tape and disk are poised to deliver mass storage at 0.0005 cents per GB within a year or so.
Bottom line: storage clouds are rapidly losing what little buzz they once enjoyed in the industry echo chamber, but all is not lost for these purveyors.
As with SSPs before them, storage cloud companies are beginning to offer services other than a big disk drive in the sky. Cloud backup providers are popping out of the woodwork, as are cloud-based storage analytics services. These will soon be followed by cloud archiving services and data-warehousing-on-demand services. IT administrators will need to evaluate these services with the same critical eye they have trained on storage infrastructure on demand, to determine whether they deliver the same or better service levels, security, and cost/performance value as homegrown solutions -- but for some storage cloud providers, this could be a silver lining.
Your comments are welcome: firstname.lastname@example.org.
Jon Toigo is a 30-year veteran of IT and the Managing Partner of Toigo Partners International, an IT industry watchdog and consumer advocacy organization. He is also the chairman of the Data Management Institute, which focuses on the development of data management as a professional discipline. Toigo has written 15 books on business and IT and published more than 3,000 articles in the technology trade press. He is currently working on several book projects, including The Infrastruggle (for which this blog is named), which he is developing as a blook.