
Storage as a Virtualization Platform for Cloud Computing

A storage-centric approach provides better performance, higher availability, and lower cost than switch- or server-centric options.

by Lee Caswell

Following Cisco's announcement of "Project California," which puts the networking giant directly in the server market, architects across the IT world have heatedly debated compute-centric versus switch-centric virtualization approaches. The stakes are high: if Cisco is successful, it could cannibalize a large portion of the $58-billion server market, shifting billions of dollars away from current server manufacturers.

Meanwhile, a third, and potentially more lucrative, architectural control point has been overlooked as a means of delivering server virtualization: storage.

For application environments characterized by high capacity needs and I/O-intensive workloads, a storage-centric approach offers higher performance, lower cost, and higher availability than either switch or server alternatives.

A conventional storage array is a poor platform for integrating server virtualization technology because there is not enough compute horsepower available. Most SAN and NAS designs rely on at most two RAID controllers, which are normally proprietary CPUs with limited processing power. Higher-end systems swap in higher-performance x86 CPUs, but even these controllers are fully consumed by providing RAID protection and storage virtualization for what can be hundreds of drives. In addition, each controller must be prepared to take over the full array when its partner controller fails.

Newer Scale-Out Storage Systems Change the Game

By contrast, newer scale-out storage systems are brimming with x86 resources. In these scale-out systems, x86-powered storage appliances form the hardware backbone of a storage area network that appears, and is managed, as a single pool of capacity and performance. Specialized software running in parallel on each appliance aggregates the appliances' capacity and performance and protects data against hardware failures.

A scale-out appliance can carry two x86 CPUs, for an overall ratio of roughly one x86 CPU for every six drives. With that much processing power available, running server virtualization on these x86 CPUs lets standard server applications share common hardware with shared storage.

For the first time, users can consolidate server applications into a storage platform for the well-known benefits of server consolidation, namely reduced hardware acquisition costs, higher-availability virtual resources, and reduced management costs.

Lower Acquisition Costs

With a server virtualization layer running on each scale-out appliance, server applications can run on every appliance, and each application has access to the entire shared capacity and bandwidth of the underlying array. Server instances can be started, stopped, and managed just as any remote server would be.
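
To make this concrete, here is a minimal conceptual sketch in Python of an array whose appliances pool their drives while hosting virtual machines locally. The class names, appliance counts, and placement policy are invented for illustration and do not correspond to any vendor API.

    # Conceptual sketch only: names and numbers are illustrative, not a vendor API.

    class Appliance:
        def __init__(self, name, cpus=2, drives=12):
            self.name, self.cpus, self.drives = name, cpus, drives
            self.vms = []                       # VM instances hosted on this appliance

    class ScaleOutArray:
        """Appliances presented as a single pool of capacity and performance."""
        def __init__(self, appliances):
            self.appliances = appliances

        def total_drives(self):
            return sum(a.drives for a in self.appliances)

        def start_vm(self, vm_name):
            # Place the VM on the least-loaded appliance; its boot image lives
            # on the shared array, so any appliance could host it.
            host = min(self.appliances, key=lambda a: len(a.vms))
            host.vms.append(vm_name)
            return host.name

    array = ScaleOutArray([Appliance(f"appliance-{i}") for i in range(1, 5)])
    print(array.total_drives())                # 48 drives pooled across 4 appliances
    print(array.start_vm("exchange-server"))   # started like any remote server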

By eliminating external physical server hardware, users directly reduce physical server costs along with the drag-along costs of server power, cooling, maintenance, and rack space build-out. The "green" benefits of such an approach are compelling: power reductions of up to 48 percent can be realized compared with separate server and storage implementations.

Higher Availability

Scale-out storage architectures inherently protect data against appliance failures by distributing parity across appliances. An appliance failure is treated as a simple RAID event, and data reads and writes continue unaffected.
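
As a rough sketch of the underlying idea (single XOR parity, not necessarily the product's actual layout), the Python below stripes data blocks across appliances with one parity block and rebuilds the block held by a failed appliance from the survivors. Block contents and appliance counts are illustrative.

    # Illustrative single-parity striping across appliances; not the actual product layout.
    from functools import reduce

    def xor_blocks(blocks):
        """XOR equal-length byte blocks together (the parity calculation)."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    def write_stripe(data_blocks):
        """Return one block per appliance: the data blocks plus a parity block."""
        return data_blocks + [xor_blocks(data_blocks)]

    # A stripe spread across four appliances: three data blocks plus parity.
    stripe = write_stripe([b"AAAA", b"BBBB", b"CCCC"])

    # Appliance 2 fails; its block is rebuilt from the surviving appliances,
    # so reads continue unaffected.
    lost, survivors = stripe[1], stripe[:1] + stripe[2:]
    assert xor_blocks(survivors) == lost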

With integrated server virtualization, virtual machine images running on a failed appliance can be automatically restarted on another available appliance in the array.

Since virtual machine boot images are protected by the underlying array, the restart of the application and the re-establishment of virtual LAN connections and virtual MAC addresses happen without user intervention, as the sketch below illustrates. The process completes in minutes, without complex server clustering software, dedicated standby hardware, or specialized appliance interconnects.
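
A hypothetical sketch of the restart logic follows; the mapping of appliances to virtual machines and the placement policy are invented for illustration and are not taken from any vendor's software.

    # Hypothetical failover sketch: placement maps appliance name -> hosted VM images.
    # Because boot images live on the shared, parity-protected array, a VM can be
    # restarted on any surviving appliance without standby hardware.

    def fail_over(placement, failed):
        """Restart every VM from a failed appliance on the surviving appliances."""
        orphans = placement.pop(failed)        # the appliance is gone; the images are not
        for vm in orphans:
            target = min(placement, key=lambda name: len(placement[name]))
            placement[target].append(vm)       # virtual LAN and MAC settings follow the image
            print(f"{vm} restarted on {target}")

    placement = {
        "appliance-1": ["web-01"],
        "appliance-2": ["db-01", "mail-01"],
        "appliance-3": [],
    }
    fail_over(placement, "appliance-2")
    # db-01 restarted on appliance-3
    # mail-01 restarted on appliance-1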

Easier Management

Virtualization generally simplifies management by replacing physical devices with logical entities. Logical entities can be scaled more easily, workloads can be balanced automatically, and system attributes can be changed dynamically since virtualization manages the connection between a workload requirement and physical resources.

A common platform that provides both server and shared storage resources across physical appliances connected by standard Ethernet networks is the ultimate simplification and commoditization of hardware resources that has been predicted for years. The approach offers linear scaling of compute and storage resources, protection against physical failures, and an investment protection plan that is ideal for the future of the private cloud platform, where resources must be delivered at will and managed without concern for physical component limitations.

Conclusion

Virtualization offers dramatic benefits, and customers should look carefully at the predominant workloads in their environments and select the virtualization platforms that best meet their requirements. For capacity-rich and I/O-intensive workloads, the benefits of storage-centric platforms based on newer scale-out architectures can be dramatic, and such platforms should be central to planning for further data center consolidation and for building cloud computing infrastructure.

- - -

Lee Caswell is the founder and chief marketing officer at Pivot3. You can reach the author at leec@pivot3.com.