In-Depth
From Clouds Comes Clarity in 2012
What can bridge the state of virtualization today and the unfettered reality we seek from the cloud? A true Storage Hypervisor may hold the key.
By George Teixeira, CEO and President, DataCore Software
Some people read palms or tea leaves to predict the future. I'm looking at clouds.
Clouds are what all sorts of people are talking about these days. I was at dinner the other evening and overheard a conversation at the next table between two senior citizens and what was likely a more tech-savvy 20-something grandchild and his companion. When one of them asked the grandmother where she kept the photographs she was showing them, she confidently answered, "They're in the cloud." Now, I'll bet she wouldn't have been comfortable entrusting her precious photos to me, to you, or to a company that might not be in business in the future. That unease would likely be exacerbated by the complexity of any assurances as to how those photos are stored, made available, or protected.
Yet, she is quite comfortable with "the cloud."
That's how powerful the cloud metaphor has become. It uses a level of abstraction (the picture of a cloud) to represent, in a simple, nonthreatening, "I don't need to think about that" way, the complex hardware and software components and internal network connections actually required to deliver the services. When people refer to cloud computing, what they are really talking about is the ability to simplify IT by abstracting away the complexity of the data center, turning a bunch of individually managed elements into a service offered as part of a holistic "cloud."
This simplification through abstraction is also the cornerstone of virtualization. In fact, the clamor for the cloud is both a compliment to the attributes of virtualization and a criticism of its progress to date. Virtualization is the key to cloud computing because it is the enabling technology allowing the creation of an intelligent abstraction layer that hides the complexity of underlying hardware or software. In the call for clouds, I hear an industry being challenged: "OK, we see what is possible through virtualization, so fill in the missing pieces and deliver on the promise already."
Software is the key to making clouds work because, in the cloud, resources (e.g., server computers, network connections, and desktops) must be dynamic. Simply put, only software can take a collection of static, inflexible hardware devices and create from them flexible resource pools that can be allocated dynamically. Hypervisor solutions, like those from VMware and Microsoft, demonstrated the benefits of treating devices as software abstractions at the server level (and, to a lesser degree, the desktop) and the importance of the interchangeable servers that have now become the norm. It really does not matter whether a Dell, HP, IBM, or Intel server is the resource involved; that is a secondary consideration subject to price or particular vendor preference. From this experience, the market has become familiar with what is possible with virtualization.
Virtualization gives us greater productivity and faster responses to changing needs because software-abstracted resources are not static and can be deployed flexibly and dynamically. It also gives us better economies of scale because these resources are pooled and can be easily changed and supplemented "behind the curtain" to keep up with growing and changing user demands. Yes, with a hypervisor we have freed our servers and desktops from their physical bonds.
Still, it's just a taste of freedom: a removal of the handcuffs, but not the ankle chains; merely a lengthening of the leash, because eventually you hit the end and get yanked back to reality by the confines of your storage. Even amidst the great, industry-wide liberation-through-virtualization movement of recent years, the answer when it comes to storage, unfortunately, has been to continue building traditional physical architectures that severely limit and, in fact, contradict the virtual infrastructures they are intended to support. These architectures propagate locked-in, vendor-specific hardware "silos" instead of decoupling storage resources from physical devices with a software abstraction layer.
In my view, that is the big hole in the ground that we keep falling into while desperately scanning the skies for clouds and a user experience free from physical architecture and hardware. Just what is it that can bridge the state of virtualization today and that unfettered reality we seek from clouds?
The answer: a Storage Hypervisor. In 2012, this critical piece of the fluffy puzzle will fill the gap in virtualization's march forward, clarifying how to bring that cloud future home to our storage today.
A Storage Hypervisor enables a new level of agility and storage hardware interchangeability. It creates an abstraction layer between the applications running on servers and the physical storage used for their data. Virtualizing storage and incorporating intelligence for provisioning and protection at the virtualization layer makes it possible to create a new, common level of storage management that works across the spectrum of assets, including server-attached disks, SSDs, disk systems, and storage in the cloud. Because it abstracts physical storage into virtual pools, any storage can work with any other storage, avoiding vendor hardware lock-in while ensuring maximum ROI on existing resources and greater purchasing power in the future.
What is a "true" Storage Hypervisor? It's a portable, centrally managed, software package that enhances the value of multiple and dissimilar disk storage systems. It supplements these systems' individual capabilities with extended provisioning, replication, and performance acceleration services. Its comprehensive set of storage control and monitoring functions operates as a transparent virtual layer across consolidated disk pools to improve their availability and optimize speed and utilization.
A true Storage Hypervisor also provides advanced storage management and intelligent automation. For example, a critical Storage Hypervisor feature is automated tiering, which migrates data to the hardware resources that most cost-effectively match each application workload's needs. Through this automated capability, less-critical and infrequently accessed data is automatically stored on lower-cost disks, while more mission-critical data is migrated to faster, higher-performance storage and solid-state disks, whether those disks are located on premises or in the cloud. This enables organizations to keep demanding workloads operating at peak speeds while taking advantage of low-priced local storage assets or pay-as-you-go cloud storage. The Storage Hypervisor management layer also makes it easy to incorporate new disk devices into existing data centers, giving enterprises a fast and easy on-ramp to cloud resources, among other benefits.
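To illustrate the idea, the following Python sketch captures the decision logic of automated tiering in its simplest form. The tier names, thresholds, and access-rate inputs are hypothetical; production implementations typically work from continuously measured access patterns at sub-volume granularity rather than the coarse per-block rates shown here.

```python
# Illustrative sketch of automated tiering: hot data migrates up to fast,
# expensive tiers; cold data drifts down to cheap ones. All thresholds
# and tier names are invented for this example.

TIERS = ["ssd", "fast-disk", "capacity-disk", "cloud-archive"]  # fastest first

def choose_tier(accesses_per_day: float, mission_critical: bool) -> str:
    """Map an access profile to the most cost-effective tier."""
    if mission_critical or accesses_per_day > 1000:
        return "ssd"                # keep demanding workloads at peak speed
    if accesses_per_day > 100:
        return "fast-disk"
    if accesses_per_day > 1:
        return "capacity-disk"      # low-priced local storage
    return "cloud-archive"          # pay-as-you-go cloud storage

def rebalance(blocks: dict[str, tuple[float, bool, str]]) -> list[tuple[str, str, str]]:
    """Return the (block, from_tier, to_tier) migrations the policy would run."""
    moves = []
    for block, (rate, critical, current_tier) in blocks.items():
        target = choose_tier(rate, critical)
        if target != current_tier:
            moves.append((block, current_tier, target))
    return moves

# Usage: hot, critical data is promoted; stale data is demoted.
blocks = {
    "orders-index": (5000.0, True,  "fast-disk"),   # promoted to ssd
    "q3-archive":   (0.2,    False, "fast-disk"),   # demoted to cloud-archive
}
for move in rebalance(blocks):
    print("migrate %s: %s -> %s" % move)
```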
It's clearly time for storage to acquire these cloud-like characteristics. Clouds are, after all, supposed to be pliant and nimble -- and that is what we need our storage to be. The whole point of cloud computing is delivering cost-effective services to users. This requires the highest degree of flexibility and openness, as opposed to being boxed in by specific hardware that cannot adapt to change over time. That's the goal, and it is what is driving such interest in clouds. Hypervisors for virtual servers and desktops have mapped the way -- illustrating how portable software solutions can virtualize away complexity, constraint, and hardware-vendor lock-in. Only a Storage Hypervisor can do likewise for storage.
That's why 2012 will be the year storage goes virtual and the market learns that Storage Hypervisors are the next step in flexible storage management. Already, they are widely deployed and enterprise-proven. A true Storage Hypervisor turns multiple, dissimilar, and static disk storage systems into a "what I want, where I need it, when I need it, without complexity" storage infrastructure. It gives our storage today what we've been looking for in the clouds of the future: a highly scalable, flexible infrastructure with real hardware interchangeability and the next level of virtual resource management. This is what is required to create virtual data centers, or so-called "private clouds," and to make practical the incorporation of external cloud services.
George Teixeira is CEO and president of DataCore Software. The company's software-as-infrastructure platform solves the big problem stalling virtualization initiatives by eliminating storage-related barriers that make virtualization too difficult and too expensive. You can contact the author at [email protected]