
Getting Smarter about Storage (Part 1 of 3)

Our storage analyst, Jon Toigo, explores a model that, if fully realized, would be light years ahead of what your favorite hardware vendors call "smart storage" today.

The storage industry seems quiet these days, with most vendors still making the same tired "all-in-one-kit" storage architecture pitch we've heard for the past few years. They will tell you that they are going with what works, and that the model continues to resonate with companies that have pared back dedicated storage staff and now rely on server administrators or even application managers to oversee storage resources as well. Often, the server or application manager doesn't know much about storage, so the vendor's promise of smarter storage that automates such tasks as capacity allocation and management sounds appealing.

Some vendors argue that this model is also a harbinger of the next evolution of storage: cloud-based storage services that will delegate the entire responsibility for management to a third-party provider in the ether somewhere. They are scrambling to equip their boxes for cloud-based service delivery in the hopes of carving out a footprint in the cloud before IT starts unplugging locally installed arrays altogether.

Interestingly, the two trends -- more stovepiping of storage functionality on an array controller and the growing appeal of storage-as-a-service models -- are actually diametrically opposed. To make storage services useful, arrays need to be deconstructed -- with functionality separated from commodity disk drives and their controllers. Otherwise, first-generation storage service providers will likely find themselves in the same situation as their predecessors in the late 1990s, who used "brand name" storage rigs to appease consumer concerns about their infrastructure performance and resiliency.

In other words, they will be driven toward an internal model that reflects local storage today: too much stovepipe functionality inhibiting coherent capacity management across an ever-growing infrastructure and requiring more staff to manage each customer's rigs. The combination of overpriced rigs and labor costs will make cloud storage services more expensive than locally controlled and managed storage infrastructure.

The alternative is to separate "value-add" functionality from the array controller, reducing boxes of disk drives themselves to a commodity building block of capacity. Staging these functions as services delivered on appliances or routers -- separately from the arrays themselves -- provides a much more cost-efficient model for managing growing capacity in a cloud service, just as it does in a local infrastructure. With de-duplication and thin provisioning (among other services) set up as independent and shareable value-add services that can be applied selectively to any data from any source and that can use any target disk set (or even tape), economies of scale can be realized by the service provider.

Combinations of independent services could offer different levels of, say, data protection, compliance, or archiving. These services could be presented as an à la carte menu of options, adding value at additional cost on top of the basic capacity service that the cloud provider delivers.
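
To make the idea concrete, here is a minimal sketch of what such composable, target-agnostic value-add services might look like in code. It is purely illustrative: the service names, the block-stream abstraction, and the chaining helper are my own assumptions, not any vendor's actual API.

# Illustrative sketch only: hypothetical names, not any vendor's shipping API.
# The idea: value-add functions (de-duplication, replication, etc.) are
# independent services that can be chained over any data stream and pointed
# at any commodity target (disk array, JBOD, or even tape).

import hashlib
from typing import Callable, Iterable, List

Block = bytes
Service = Callable[[Iterable[Block]], Iterable[Block]]

def deduplicate(blocks: Iterable[Block]) -> Iterable[Block]:
    """Drop blocks whose content hash has already been seen."""
    seen = set()
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield block

def replicate_to(target: List[Block]) -> Service:
    """Copy every block to a secondary target while passing it through."""
    def service(blocks: Iterable[Block]) -> Iterable[Block]:
        for block in blocks:
            target.append(block)
            yield block
    return service

def apply_services(blocks: Iterable[Block], services: List[Service]) -> Iterable[Block]:
    """Chain an a la carte selection of services over a data stream."""
    for service in services:
        blocks = service(blocks)
    return blocks

# Example: dedupe a stream and mirror it to a retention copy before it
# lands on whatever commodity capacity the provider chooses.
retention_copy: List[Block] = []
primary = list(apply_services(
    [b"alpha", b"beta", b"alpha"],
    [deduplicate, replicate_to(retention_copy)],
))
print(primary)         # [b'alpha', b'beta']
print(retention_copy)  # [b'alpha', b'beta']

The point of the sketch is that neither function cares which box the blocks ultimately land on; that is exactly the separation of value-add from the array controller described above.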

This model was not adopted by the storage service providers of the late 1990s. For the most part, it doesn't seem to be part of the game plan of their cloud-based cousins emerging today -- and it isn't a common practice in locally deployed storage environments, either.

It isn't that the tools aren't available. A basic solution can be found in storage virtualization technologies, such as those offered by DataCore Software or FalconStor Software. The latter has been developing a virtualization approach for years that provides functionality normally installed on value-add array controllers in a robust and independent software layer.

In its pure form, FalconStor could enable a firm to deploy much less expensive RAID rigs or JBODs instead of pricey storage stovepipes. However, recent announcements from the company seem to reflect a trend toward joining the software to an array to create a virtual tape library with de-duplication branded as a FalconStor product. We saw this coming when some of FalconStor's previous partners -- array makers themselves -- either fell by the wayside, were purchased, or decided to build their own value-add rigs.

DataCore remains, by contrast, a purer play -- in the sense of remaining a software-only complement to storage hardware rigs (about which it is completely vendor-agnostic). DataCore basically virtualizes any storage share that can be seen by a server, largely ignoring the operational details of the underlying rigs themselves. That abstraction of the physical world enables some cool things -- such as the ability to offer data protection as a service across any vendor's boxes rather than relying on each physical stand of disk to use its own on-controller functionality to replicate data to another box via a back channel of some sort.

Arguably, FalconStor has gone further in working with the array vendors to leverage their value-add components, which puts the company in the position of being beholden to the individual array vendors for ongoing access to their APIs. FalconStor's customers like this tight integration.

By contrast, DataCore seems to prefer to let the hardware people work with Microsoft to iron out server/storage compatibility issues. DataCore leverages all of that API-level work by simply taking the resulting server mounts and pooling them into a virtual storage service. That saves considerable expense in development and insulates the company from any vendor infighting in the storage world. Companies not invested in "name-brand" storage vendor gear with lots of value-add (and even many that are) seem to like the DataCore model, which has added to the company's coffers even during the current recessionary economy.

Virtualizing the back-end infrastructure makes sense because it deconstructs the storage stovepipe model. It moves the complexity of storage value-add services from array controllers into servers and routers, arguably making the storage less prone to failure (it is the array controller software that breaks most often, not the spinning rust), and hosting the functionality itself where it belongs (software running on a server where it can be shared by applications and/or human users -- like any other business app).

The future sustainability of storage virtualization, at least the way it is done by DataCore, will depend on two things. The first is the stability of the underlying infrastructure. Whether you want to add capacity to a share, refresh the hardware itself, or troubleshoot faults in the hardware infrastructure, you need good hardware management tools.

Virtualization doesn't make hardware issues go away, though it may reduce the complexity of hardware array controllers and the cost of boxes themselves. You still need to be able to manage the tin, to see the infrastructure in operation, so you can build, change, and fix it when necessary. Simply put, the hardware platform needs to be reasonably stable for virtualization to work, and management is needed to keep it that way.

The other gating factor in the future success of storage virtualization is its application-facing side. It used to wow people to see a storage virtualization server simply serving up a share that an app or end user could access and use just as readily as a physical hardware device cabled directly to the backplane of the application server. Although that was no small accomplishment, the ability of today's virtualization engines to deliver storage to apps is old news. The next level will be to add technology for delivering combinations of functions and resources to apps on a selective, granular, and policy-driven basis.

An example of a policy-driven storage service goes something like this: data from mission-critical application X needs high-speed primary storage with concurrent replication to a lower-speed retention repository, where it will be exposed to an archive process and also to a disaster recovery process that copies the data both to a de-duplicated, near-line repository and to an offsite DR storage target. Data stored on the primary disk will expire in seven working days, leaving the archival copy and the local and remote DR copies intact for a period of seven years. Along the way, audit trails need to be generated and indices created to facilitate e-discovery and retention/deletion policy reviews.
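
Expressed declaratively, that lifecycle might reduce to something like the sketch below. The schema and field names are hypothetical, offered only to make the moving parts of such a policy concrete; no shipping product's policy language is implied.

# Hypothetical policy schema -- a sketch of the lifecycle described above,
# not any vendor's actual policy language.
mission_critical_policy = {
    "application": "X",
    "primary": {
        "storage_class": "high_speed",
        "expire_after_working_days": 7,
    },
    "replication": [
        {"target": "retention_repository", "storage_class": "low_speed",
         "processes": ["archive"]},
        {"target": "nearline_dedup_repository", "processes": ["disaster_recovery"]},
        {"target": "offsite_dr_target", "processes": ["disaster_recovery"]},
    ],
    "retention_years": 7,
    "compliance": {
        "audit_trail": True,
        "index_for_ediscovery": True,
    },
}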

For the policy described above to work, we need up-to-date information about the availability and condition of storage hardware resources (queue depths, capacities, error rates, and so on) and about the availability and state of value-add services (wait times, jobs queued for processing, and so on). That information needs to feed a policy engine that associates resources and services with specific policy directives. It also needs to feed an analytics engine that examines trends in data from business apps so that future requirements can be anticipated and provisioned before they are needed. Of course, the whole thing needs to work in real time against data streaming from application and end-user sources.
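
One way to picture the plumbing is sketched below. This is a toy under assumed interfaces, not a reference design: the data classes, the naive trend forecast, and the placement rule are all my own inventions, standing in for real telemetry feeds, analytics, and policy engines.

# Sketch of the interfaces implied above; all names and rules are assumptions.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ResourceState:          # hardware-side telemetry
    name: str
    free_gb: float
    queue_depth: int
    error_rate: float

@dataclass
class ServiceState:           # value-add service telemetry
    name: str
    jobs_queued: int
    wait_seconds: float

def forecast_demand(history_gb: List[float]) -> float:
    """Toy analytics: naive linear trend to anticipate next-period demand."""
    if len(history_gb) < 2:
        return history_gb[-1] if history_gb else 0.0
    return history_gb[-1] + (history_gb[-1] - history_gb[-2])

def place(policy: Dict, resources: List[ResourceState],
          services: List[ServiceState], history_gb: List[float]) -> Dict:
    """Match a policy directive to the least-loaded healthy resource,
    reserving headroom for the forecast demand."""
    needed = forecast_demand(history_gb)
    eligible = [r for r in resources if r.free_gb >= needed and r.error_rate < 0.01]
    if not eligible:
        raise RuntimeError("no eligible capacity; provision ahead of need")
    target = min(eligible, key=lambda r: r.queue_depth)
    chain = [s.name for s in sorted(services, key=lambda s: s.wait_seconds)]
    return {"policy": policy.get("application"), "target": target.name,
            "service_chain": chain, "reserved_gb": needed}

# Example call with made-up telemetry.
resources = [ResourceState("array-a", 500.0, 4, 0.001),
             ResourceState("jbod-b", 2000.0, 1, 0.002)]
services = [ServiceState("dedup", 2, 1.5), ServiceState("replicate", 0, 0.2)]
print(place({"application": "X"}, resources, services, [100.0, 120.0]))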

Fully realized, the above model for storage would be light years ahead of what your favorite hardware vendors call "smart storage" today. Are we getting anywhere nearer the delivery of these capabilities to market? In next week's column, I'll look at some promising developments. Until then, your comments are welcome: jtoigo@toigopartners.com.
