Report Card: Storage Clouds

Without an open standards-based management framework, the current flirtation with storage clouds will likely move to the footnotes of tech history much as storage service providers did a decade ago.

This month's columns will feature a series of report cards on storage-related technology initiatives announced in late 2008 and early 2009. This week, with all the announcements by IBM, Symantec, and others, it seems appropriate to begin the series by looking at storage clouds.

Storage as a cloud-based service is a multi-headed beast of marketecture and architecture. Last year, when the hype cycle around cloud services ramped up, little specific terminology was offered to describe what a cloud actually was. The problem persists even after a year of announcements and press coverage.

ASP Failure Breeds Skepticism

To establish context, let's look at "application clouds" -- the precursor to the current storage cloud discussion. Application clouds conjure up the notion of a network-based application service product reminiscent of application service providers (ASPs) promulgated without much success in the dotcom days. In 1999, I did a book-length critique of ASPs after reading analyst encomiums promoting the idea and interviewing hundreds of would-be ASPs about their business models. The conclusion I reached at the time was that ASPs were not going to succeed, despite all of the theoretical advantages in terms of cost savings over locally-hosted, shrink-wrapped applications. The bases for this conclusion were four-fold and persist today.

First, the security wasn't there to support ASPs. Concerns about the Internet were significant within most businesses -- a perspective that, if anything, has grown in recent years in the face of obscene levels of malware attacks and information leakage.

Second, as a practical business matter, the only way an ASP could make money was if it could leverage economies of scale by getting all of its customers to share infrastructure rather than being forced to build each customer a custom kit. In the late 1990s, this was something that companies seemed disinclined to do. Perhaps a down economy and the pressures it creates on corporate IT budgets have engendered a different consumer view this time around. Time will tell.

Third, and a compelling roadblock, was the matter of service-level agreements (SLAs). In the 1990s, consumers were rightfully concerned about SLA enforceability -- whether meaningful redress was available when an SLA was botched. This is a sticky matter even in cases of conventional outsourcing, and one that is repeatedly raised at cloud conferences today.

Finally, there was a nearly complete absence of standards that would provide the means for ready integration of services from multiple providers. Given that almost every cloud provider today has eschewed de jure standards in favor of de facto ones -- proprietary approaches being seen as the key to locking in consumers and locking out competitors as much as to guaranteeing the interoperability of software and hardware stack offerings in the cloud infrastructure -- little has changed from the previous flirtation with ASPs-qua-clouds.

Some who read the book have apparently forgotten what I wrote, or they have developed a bad case of cognitive dissonance as a function of the current do-more-with-less economy and the seductive marketing of the pro-cloud vendor community.

This bit of retrospective is only offered to provide context to explain why I have been skeptical of clouds from the get-go. With this in mind, I have remained quiet while watching the development of cloud rhetoric and architecture advanced by advocates -- until now.

From Application Clouds to Storage Clouds

Shortly after the introduction of application clouds in 2007-08, discussions expanded to include "storage clouds." This was similar to the way that the ASP discussion in the late 1990s engendered the SSP (storage service providers) discussion at the dawn of the Millennium. Another parallel to the past: almost immediately, storage cloud discussions fractured into discussions of two distinct variants -- "public storage clouds" and "private storage clouds" -- just as earlier SSPs had quickly formed two distinct camps focused on Internet-based and corporate network-based storage services, the latter being simply the addition of remote management services to a Fibre Channel fabric you already had deployed in your own shop.

In my conversations with storage cloud vendors, a public storage cloud is usually described as a large storage resource living somewhere in cyberspace and shared with consumers either directly or through an intermediary that provides a front-end gateway and customer service portal. Some of these services are "pure plays" -- storage specialists such as Nirvanix, which is all about the spindles, or perhaps Google, with its ranks of SATA drives velcro'ed to cookie sheets -- while others, including Amazon with its S3 offering, seem to have backed into storage clouds in an effort to share out the storage infrastructure behind an otherwise application cloud-focused play and thereby create another line of business revenue. A third subset of storage cloud vendors offers storage-related application services, such as archive or online backup, aimed less at providing a "big hard disk in the sky" than a specific data management or data protection solution.

Private Clouds: An Oxymoron?

Private storage clouds are a different consideration. In some respects, private cloud vendors are the stepchildren of the public storage cloud world who have done some homework and determined that a lot of businesses, not to mention small governmental organizations such as the U.S. Department of Defense, are turned off by the insecurity of public cloud storage but have geographically dispersed IT and user communities that could benefit from networked storage.

Start-ups such as Parascale, as well as established storage hardware and software vendors, have been quick to jump on this bandwagon. In some cases, this is just a recontextualization of traditional storage infrastructure models, with the addition, in a few instances, of "value-add" features such as support for server virtualization environments or SLA monitoring/customer relationship management front ends.

Parascale is a case in point for the latter. Spokespersons for the company have provided an overview of its software stack, which enables infrastructure scaling and management similar to the traditional storage infrastructure management software that companies have been buying for years from hardware vendors such as HDS and software players such as DataCore Software and FalconStor Software. Parascale argues that it is exceptional for two reasons: its software stack is "pre-integrated," and it is topped by an SLA function set that enables resources to be allocated to individual customers discretely and the associated SLAs to be monitored individually. The latter, the company argues, is what distinguishes Parascale from more established storage infrastructure virtualization players such as DataCore, FalconStor, and Symantec.

Symantec, by contrast, is simply re-casting its evolving storage virtualization and file system products -- Veritas Storage Foundation, a virtualization play and journaled file system, and Veritas Cluster File System -- as a new product, FileStore. FileStore can serve either as a public file repository within the company's software-as-a-service (SaaS) offering or as a private storage cloud set up and configured inside companies that prefer to operate their own internal cloud service. Symantec's claims of "diverse workload support, industry-leading performance, and near-linear performance scaling" are comparable to any implementation of spindle virtualization with a common file system and have little to do with clouds per se. The company also boasts data protection through integrated NetBackup software and security with Symantec anti-virus products, plus HSM and archiving with Dynamic Storage Tiering and Enterprise Vault, as additional ingredients of a state-of-the-art cloud.

By hosting its proof of concept on Xiotech's Intelligent Storage Element (ISE) backend hardware, which is already among the best-performing and most scalable storage rigs in the business, Symantec produced SPEC.org performance test numbers that were predictably off the charts. According to August 2009 numbers, they bettered the previous leader, NetApp, by 47 percent in throughput and 14 percent in overall response time.

What Defines a Real Storage Cloud?

Although a well-designed storage infrastructure, overlaid with virtual controller and value-add software services that scale independently of hardware, is smart design, the question remains: is the resulting resource by definition a storage cloud? It is shareable, of course, and accessible via standard network access protocols such as NFS, HTTP, FTP, and CIFS/SMB. Plus, Symantec is claiming that it is secure, based on its own end-point security products that can be integrated with the software stack. The company is also touting a homegrown methodology for "managed outcome delivery" that, although not driven by any software components in particular, is supposed to make SLAs more germane to business users. But do these characteristics translate to cloudiness?

When you read the literature, clouds are supposed to make resources available to users as services. Selection of services should be standards-based so that users can deliberately and dynamically mix and match them in a way that best fits business application requirements in a brand-transparent way.

Put another way, without an open standards-based mechanism for obtaining real-time information about the status (availability, current burdening, and perhaps cost/performance metrics) of services, including granular status data on the specific value-add functions delivered by the service, the very selection of cloud-based resources is problematic. Going further, how do you mix resources from different "clouds" to purpose-build virtual-aka-cloud-based storage infrastructure that best fits business needs when the various "clouds" are based on proprietary hardware and software components rather than participating in an open standards-based management framework?
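To make the argument concrete, here is a minimal sketch of what standards-based service selection could look like if every storage cloud exposed the same status record. Everything here is hypothetical -- the record fields, provider names, and selection logic are invented for illustration, since no such open framework exists today (which is precisely the column's complaint):

```python
# Hypothetical sketch only: assumes an open framework in which every storage
# cloud publishes a common, machine-readable status record. No such standard
# exists; all names and fields below are invented for illustration.
from dataclasses import dataclass


@dataclass
class ServiceStatus:
    provider: str
    available: bool            # availability
    utilization_pct: float     # current burdening
    cost_per_gb_month: float   # cost metric
    features: frozenset        # granular value-add functions, e.g. "archive"


def select_provider(statuses, need_feature, max_utilization=80.0):
    """Pick the cheapest available service offering the needed function."""
    candidates = [s for s in statuses
                  if s.available
                  and s.utilization_pct <= max_utilization
                  and need_feature in s.features]
    return min(candidates, key=lambda s: s.cost_per_gb_month, default=None)


# With a common status format, brand becomes irrelevant to selection:
statuses = [
    ServiceStatus("CloudA", True, 65.0, 0.12, frozenset({"backup"})),
    ServiceStatus("CloudB", True, 40.0, 0.15, frozenset({"backup", "archive"})),
    ServiceStatus("CloudC", False, 10.0, 0.05, frozenset({"archive"})),
]

best = select_provider(statuses, "archive")
print(best.provider if best else "no match")  # CloudB: only available archive service
```

The point of the sketch is that brand-transparent mixing and matching is trivial once the status data is standardized; without that common format, each "cloud" must be evaluated and integrated through its own proprietary interface.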

Some may argue that a common management framework -- the one that proved so impossible to build in non-cloud storage infrastructure, driving up cost of ownership and making clouds interesting in the first place -- becomes less relevant when you outsource your infrastructure (that is, rely on others to operate and manage it) to a cloud service provider. However, if the record of outsourcing has proven anything, it is this: you can't outsource a problem. Outsourcing works for tasks that have become routine and standardized, enabling firms to redirect their IT staff to other matters; payroll processing is a good example. Outsourcing a problem doesn't correct the problem; in fact, it usually makes it worse.

Without an open management framework for cloud storage, and regardless of the underlying goodness of the software/hardware stack presented by cloud vendors, companies may find that they are locked in to another proprietary kit just as assuredly as they would be if they bought EMC Centera or IBM XIV or any other currently touted stovepipe.

The Bottom Line

One amusing development in clouds at the beginning of this year came with the release of a document by IBM seeking to define an open standards-based cloud. The manifesto from Big Blue quickly evoked a hostile retort from, of all vendors, Microsoft -- sort of a "my cloud is more open standards-based than your cloud."

The irony was that the little snit underscored a key feature missing from all of the current storage clouds in the market: an open standards-based management framework. Without one, this current flirtation with storage clouds will likely move to the footnotes of tech history much as SSPs did a decade ago.

Current report card score on storage clouds: D-.

Your response is welcome: jtoigo@toigopartners.com