Next-Generation Storage: Think Virtual
As storage virtualization products improve in features and reliability, will the technology choices get any easier?
Need more bang for your storage buck? The answer may be "virtual."
Today, many organizations with massive storage needs are turning to storage virtualization. Others are also choosing it to help transition to a utility- or grid-computing model, which allocates processor, memory, network bandwidth, and storage resources on demand. The grid then recycles assets as they become free. The goal, says Rob Sadowski, the senior product marketing manager for EMC’s forthcoming Storage Router, “is to simplify a complex infrastructure for the resources that need to consume it.”
Enter virtual storage, where “regardless of the number of different physical systems you have, they appear as one logical system,” says Tony Asaro, a senior analyst at Enterprise Strategy Group, based in Milford, Mass. and Portland, Ore.
The benefits of this “single pool of storage,” he says, are “greater scalability, easier management, and [greater] capacity utilization.” Management gets easier because, behind the scenes, storage virtualization software and hardware maintain the appearance of a single logical system.
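The pooling idea Asaro describes can be sketched in a few lines of code. This is a hypothetical illustration only, not any vendor's API: several physical arrays are presented as one logical system, and allocations land wherever free capacity exists.

```python
# Hypothetical sketch of the "single pool of storage" concept:
# applications see one logical capacity figure, while the
# virtualization layer decides which physical array actually
# holds each volume.

class PhysicalArray:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    @property
    def free_gb(self):
        return self.capacity_gb - self.used_gb


class VirtualStoragePool:
    """One logical view over many physical arrays."""

    def __init__(self, arrays):
        self.arrays = list(arrays)

    @property
    def total_gb(self):
        return sum(a.capacity_gb for a in self.arrays)

    @property
    def free_gb(self):
        return sum(a.free_gb for a in self.arrays)

    def allocate(self, size_gb):
        # Place the volume on the array with the most free space;
        # the caller never learns which physical array was chosen.
        target = max(self.arrays, key=lambda a: a.free_gb)
        if target.free_gb < size_gb:
            raise RuntimeError("pool exhausted")
        target.used_gb += size_gb
        return target.name  # hidden behind the virtualization layer


pool = VirtualStoragePool([PhysicalArray("array-a", 500),
                           PhysicalArray("array-b", 1000)])
pool.allocate(300)
print(pool.total_gb, pool.free_gb)  # 1500 1200
```

The point of the sketch is the asymmetry: capacity and utilization are reported pool-wide, which is where the easier management and better capacity utilization come from.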
Yet adopting storage virtualization, a relatively new type of technology, also brings interoperability challenges. Different vendors’ products can often control each other, but they usually can’t pool their disparate data-management capabilities.
Adopters may also pay a premium. “Our research found that the biggest issue is cost,” says Asaro. That’s a particular concern since “there isn’t a guarantee of return” on the technology. Still, “the customers with the most pain are willing to make the investment, and our research has found that it has paid off.”
On average, adopters of storage virtualization reduced their spending on storage hardware by 24 percent, storage software by 16 percent, and SAN administration by 19 percent. Asaro predicts these savings will increase as companies better learn to use virtual storage. In fact, “ESG believes that storage virtualization, or what we classify as intelligent storage networks, will reinvent SANs, because they significantly reduce cost and complexity.”
Virtual storage can be easier to manage beyond just the software management interface, notes EMC’s Sadowski. “In today’s storage environments, a lot of things you do to manage the environment, when you are changing configurations,” or introducing a new array, “require you to take downtime.” Using EMC’s Storage Router, a storage virtualization product due out by mid-2005 that uses Brocade or Cisco switches, he notes, “we can bring a new array in without downtime.”
Storage Virtualization Options
The good news is that storage virtualization “is not an all or nothing proposition,” says Asaro. Organizations “can implement storage virtualization in different degrees, or use it for certain functions.” Already, users are running the gamut “from very limited uses of storage virtualization to re-architecting their entire SANs.”
One challenge, however, is deciding which company’s approach to adopt. ESG says the biggest player is IBM, which already has over 1,000 customers for its SAN Volume Controller product, followed by Cisco, whose MDS switch uses Veritas and IBM software. Many analyst firms rank Veritas (now set to merge with Symantec) third. Other big players include Computer Associates, EMC, Hitachi, HP, Sun, and Unisys.
When evaluating products, Asaro recommends organizations test reliability, then square that with scalability needs. For scalability-conscious organizations, he notes Hitachi’s Universal Storage Platform has “the most amount of ports, processors, cache memory, and capacity support of any storage virtualization platform” so far.
Most products tackle virtualization at the network level, using routers and switches to track data and its state. Hitachi, however, does things differently. The problem, as Hubert Yoshida, vice president and chief technology officer at Hitachi Data Systems, sees it, is applying efficient intelligence to incoming data. “Sitting in the network, you don’t know what the application intent is, or the storage layout.” So Hitachi uses a storage controller, sitting in front of its storage arrays, with enough cache “to mask some of the performance delays of the array.”
One benefit of this approach, he says, is if the controller fails, storage can be reconnected to applications in native mode, since the data’s state is written to storage, not saved on a switch or router.
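The failure-recovery property Yoshida describes hinges on where the virtualization state lives. The sketch below is a hypothetical illustration of that design choice (the class and function names are invented for this example): the logical-volume-to-array mapping is persisted to the storage itself on every change, so if the controller fails, the mapping can be read back and volumes reattached natively.

```python
# Hypothetical sketch of a controller-based design where the
# virtualization state (which logical volume lives on which array)
# is written to storage, not held only in a switch or controller.

import json

class Controller:
    def __init__(self, metadata_store):
        # metadata_store stands in for a reserved metadata
        # region on the physical arrays themselves.
        self.metadata_store = metadata_store
        self.mapping = {}

    def map_volume(self, logical_volume, array):
        self.mapping[logical_volume] = array
        # State goes to storage on every change, so it
        # survives the controller itself.
        self.metadata_store["mapping"] = json.dumps(self.mapping)


def recover_mapping(metadata_store):
    # After a controller failure, the mapping is read back from
    # storage and volumes can be reconnected in native mode.
    return json.loads(metadata_store.get("mapping", "{}"))


store = {}  # stand-in for persistent on-array metadata
ctrl = Controller(store)
ctrl.map_volume("vol1", "array-a")

del ctrl  # simulate a controller failure
print(recover_mapping(store))  # {'vol1': 'array-a'}
```

A switch- or router-based design that kept this mapping only in device memory would lose it on failure, which is the contrast the Hitachi approach is drawing.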
While many analyst firms say storage virtualization is still in an “early adopter” phase, overall interest is running high. According to a recent survey of over 300 IT professionals conducted by ESG, 96 percent of companies with 10 TB or more of storage are “interested in implementing intelligent storage networks within the next 24 months.”
In that time, storage virtualization products’ features and reliability will no doubt improve, but deciding which technology to use might not get any easier. Analyst firm The 451 Group says the market is hot, noting that over $4.2 billion of related mergers and acquisitions activity has occurred since December 2000, with more to come. The firm expects the bigger players to acquire companies, especially in the file system virtualization, NAS aggregator, and modular iSCSI storage arenas.
Today, storage virtualization is largely the province of the storage-challenged. Within two or three years, however, The 451 Group estimates the technology will mature and become mainstream.
New standards may help drive that, including the forthcoming Fabric Application Interface Standard (FAIS), which creates a standard API for storage applications in storage networks. Still, it’s “not baked yet,” notes Yoshida.
Mathew Schwartz is a Contributing Editor for Enterprise Systems and writes its Security Strategies column, as well as being a long-time contributor to the company's print publications. Mr. Schwartz is also a freelance writer covering security and technology.