In-Depth

Virtualization Platforms: Almost a Believer

DataCore does what vendors ought to be doing: they put up their customers as proof that, whatever your intuition tells you, their stuff delivers.

Like most people who watch the industry, I have always been a bit suspicious about sticking things in the data path between servers and storage devices. Especially things like virtualization platforms, which, by definition, interfere with I/O by introducing a virtual-to-physical mapping layer.

About a year ago, an IT manager for a large health insurance company in Michigan raised the question of whether such a virtualization scheme, offered in products from vendors such as DataCore Software in Ft. Lauderdale, FL, FalconStor Software in Melville, NY, and several others, really made sense. The fellow observed that passing all of the I/O requests from more than 800 servers through an in-the-wire virtualization server to a back-end collection of arrays with a combined 180 terabytes of storage capacity was just asking for trouble. “Queuing theory alone would suggest that you would have massive contention for the resources of the virtualization server and you would create a choke point in no time flat,” speculated the manager, who said he had laughed sales reps for the virtualization companies out of his office.
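
His queuing argument is easy to sketch. In a simple M/M/1 model, the mean time an I/O spends in a single in-band node is 1/(μ − λ), which blows up as the arrival rate approaches the node’s service rate. The numbers below are purely illustrative assumptions of mine, not measurements of any product, but they make the choke-point worry plain:

```python
# Illustrative M/M/1 queuing model of a single in-band virtualization node.
# All rates are assumptions chosen for the example, not vendor measurements.

def mm1_response_time_ms(arrival_rate, service_rate):
    """Mean time an I/O spends in the system (queue + service), in milliseconds."""
    if arrival_rate >= service_rate:
        return float("inf")          # past saturation the queue grows without bound
    return 1000.0 / (service_rate - arrival_rate)

service_rate = 50_000                # assumed IOPS the node can service
for servers in (200, 400, 600, 790, 800):
    arrival_rate = servers * 62.5    # assumed 62.5 IOPS issued by each server
    util = arrival_rate / service_rate
    print(f"{servers:>3} servers  utilization={util:4.0%}  "
          f"mean response={mm1_response_time_ms(arrival_rate, service_rate):8.2f} ms")
```

Response time stays flat until the node nears saturation, then climbs without limit, which is exactly the behavior the manager was predicting.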

I have to admit that, while I admired some of the brainiacs at DataCore and elsewhere who were working on this stuff, I saw a fundamental flaw in their thinking linked to the server hardware and the server bus. Steadily improving processor speeds might be able to keep up with I/O processing demands, but the bus of a run-of-the-mill server would soon become saturated with I/O traffic and choke throughput across the box, introducing that most hated of all end products: storage latency.
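
To make that worry concrete, here is a back-of-envelope calculation. Every figure is an assumption I picked for illustration, including the era-typical peak of a shared 64-bit/66 MHz PCI bus:

```python
# Back-of-envelope check of the bus-saturation worry. Every figure here is an
# assumption chosen for illustration, not a measurement of any vendor's box.

BUS_PEAK_MBPS = 533          # rough peak of a 64-bit/66 MHz PCI bus
SERVERS = 800
PER_SERVER_MBPS = 2          # assumed sustained I/O load contributed by each server

offered_mbps = SERVERS * PER_SERVER_MBPS
bus_mbps = offered_mbps * 2  # each block crosses the bus twice: in from the host
                             # side, back out toward the back-end arrays
print(f"offered load: {offered_mbps} MB/s")
print(f"bus traffic:  {bus_mbps} MB/s vs. {BUS_PEAK_MBPS} MB/s peak "
      f"-> saturated: {bus_mbps > BUS_PEAK_MBPS}")
```

Even modest per-server loads overwhelm a single commodity bus in this simple model, which is why the architecture of the box in the data path matters so much.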

From personal experience with host-based virtualization products that wreaked havoc with tape-to-disk restores, I had extrapolated that most virtualization products were still in their infancy and were more trouble than they were worth. Why go to all the trouble of configuring such a platform if it was just going to create another layer of complexity to manage? Why deploy it if you couldn’t really do what needed to be done with virtualization: cut and paste pieces of storage from one disk to another, irrespective of the proprietary obstacles placed in your way by array vendors, in order to achieve true capacity allocation optimization?

However, after reading a pretty impressive paper from DataCore on the I/O performance of in-band virtualization appliances (available for download at http://128.121.236.245/forms/form_request.asp?id=inband), I have to admit that I am almost a believer, at least when it comes to performance issues. Without giving away the whole story in the paper, let’s just say that DataCore makes a pretty persuasive case that it beats the I/O bottleneck issue by conforming its products to what I like to call Randy Chalfant’s three laws of performance.

Randy is a friend of mine at StorageTek, itself a virtualization player, and he says that performance is a function of three things:

  • High-speed server CPUs

  • Parallelization

  • Shorter data paths

Platforms optimized to leverage all three of these features tend to perform their desired function with alacrity.

DataCore makes a pretty good case that its offering is, architecturally speaking, in line with Chalfant’s Laws. Using clusters of specialized Storage Domain Servers (SDS) featuring high-speed processors and optimized, parallelized buses, DataCore passes I/O traffic through its boxes faster than my two-year-old can empty a can of Coca-Cola into my laptop keyboard, and without all the screaming and hair-pulling afterwards. Sophisticated caching approaches shorten the data paths that virtualization processes must traverse.
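
To make the mapping-plus-cache idea concrete, here is a toy sketch of how an in-band layer might translate virtual block addresses and serve repeat reads from memory instead of the back-end array. The class, extent size, and back-end callable are hypothetical illustrations of the general technique, not DataCore’s implementation:

```python
# Toy sketch of in-band virtualization: every I/O passes through a mapping
# layer that translates virtual block addresses into (array, physical LBA)
# pairs, and a read cache lets repeat requests skip the back-end trip.

EXTENT_BLOCKS = 2048  # assumed mapping granularity: 1 MB extents of 512-byte blocks

class InBandVolume:
    def __init__(self, extent_map, backend):
        self.extent_map = extent_map   # extent number -> (array_id, physical_extent)
        self.backend = backend         # callable: (array_id, physical_lba) -> data
        self.cache = {}                # simple read cache standing in for SDS RAM

    def read(self, virtual_lba):
        if virtual_lba in self.cache:                        # hit: shortest data path
            return self.cache[virtual_lba]
        extent, offset = divmod(virtual_lba, EXTENT_BLOCKS)
        array_id, physical_extent = self.extent_map[extent]  # virtual -> physical
        data = self.backend(array_id, physical_extent * EXTENT_BLOCKS + offset)
        self.cache[virtual_lba] = data
        return data

# Usage: two back-end arrays presented as a single virtual volume.
vol = InBandVolume({0: ("array_a", 10), 1: ("array_b", 0)},
                   backend=lambda a, lba: f"<block {lba} from {a}>")
print(vol.read(5))        # maps into array_a
print(vol.read(2050))     # maps into array_b
print(vol.read(5))        # second read served from the cache, no back-end hop
```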

But what I really like about the paper is that DataCore does what vendors ought to be doing: they put up their customers as proof that, whatever your intuition tells you, their stuff delivers. I would be interested in hearing feedback from users of virtualization products that is not solicited through DataCore’s press folks, to get an untainted viewpoint, but for now, I like what I see.

The issues that continue to plague virtualization are beyond the ability of a DataCore to resolve, however. They are linked to the narrow view in the industry that cooperation is a bad thing. As long as array vendors continue to differentiate their platforms from competitors by customizing things like SCSI inquiry strings for no reason but to prevent cross-platform resource integration and management, there can be no completely successful virtualization solution in a mixed storage environment.
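
For a sense of what those identity strings look like in practice, here is a small sketch that lists the INQUIRY vendor, model, and revision fields a Linux host exposes under sysfs for attached SCSI devices; these are the fields a cross-platform management or virtualization layer has to reconcile. It assumes a Linux host and the standard sysfs layout, nothing vendor-specific:

```python
# List the SCSI INQUIRY identity strings that attached storage presents,
# as exposed by Linux under /sys/class/scsi_device. Assumes a Linux host.

from pathlib import Path

def read_attr(device_dir, name):
    try:
        return (device_dir / name).read_text().strip()
    except OSError:
        return "?"

for dev in sorted(Path("/sys/class/scsi_device").glob("*")):
    device_dir = dev / "device"
    vendor = read_attr(device_dir, "vendor")   # 8-byte vendor identification
    model = read_attr(device_dir, "model")     # 16-byte product identification
    rev = read_attr(device_dir, "rev")         # 4-byte product revision
    print(f"{dev.name:>10}  vendor={vendor!r:12} model={model!r:20} rev={rev!r}")
```

Run against a mixed back end, the output makes the point: every array announces itself differently, and any tool that wants to manage them as one pool has to special-case each string.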

But, to the guys at DataCore, my hat’s off to you. I am almost a believer.

About the Author

Jon William Toigo is chairman of The Data Management Institute, the CEO of data management consulting and research firm Toigo Partners International, as well as a contributing editor to Enterprise Systems and its Storage Strategies columnist. Mr. Toigo is the author of 14 books, including Disaster Recovery Planning, 3rd Edition, and The Holy Grail of Network Storage Management, both from Prentice Hall.
