In-Depth

Rumors and Rumblings at SNW

Toigo sorts out fact from fiction

Well, Fall Storage Networking World is now over, the kids are back in school after a four-day hiatus at Disney World, and it is time to make sense of some of what I saw and heard at the show—a few rumors and one or two new releases that merit attention. Here’s the list.

As usual at this conference, there was a lot of buzz about rumored acquisitions. I rarely report such rumors because they’re mostly a reflection of anxiety or inebriation, but one in particular fascinated me.

“Have you heard that IBM is buying Veritas?” asked one wide-eyed vendor representative. I quickly calculated the odds on this one and dismissed it out of hand. What would IBM need Veritas for? Yes, Volume Manager is widely installed (though not without its headaches); and, yes, Big Blue could probably come up with the bucks if it wanted the install base represented by the acquisition. But it clearly doesn’t need VM when it has its own SAN Volume Controller, does it?

Let’s do the math. Veritas makes its money primarily from its backup products, which financial analysts regard as a mature market (i.e., one with limited growth opportunity). The company’s storage resource management products are not selling well, judging from the financial reports, and the corporate mantra, from what I gleaned at its Vision conference this summer, seems to be shifting away from storage and toward utility computing.

The jewel in the Veritas crown, in my humble view, is Application Performance Manager. This is a truly nifty piece of code from Israel (developed for the Israel Defense Forces) that provides tremendous visibility into application software performance by displaying all of the little calls spawned by any application process and flagging the ones that seem to be lagging. With some work (by no means accomplished yet), you might be able to make this information useful by keying it back to the underlying infrastructure.

Without this correlation function, however, the information from APM is just so much more data, not at all useful for determining the exact cause of a slow process or subprocess (an overburdened server, a network bandwidth constraint, poor data layout on storage, and so on). With that correlation in place, though, it could provide a nice troubleshooting tool or even a thermostat for allocating computing, networking, and storage resources to applications as they need them. I’d venture to guess that IBM already has some pretty good code of its own to do this.
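
To make that correlation idea concrete, here is a minimal sketch in Python of the kind of logic involved, keying a lagging call back to a likely infrastructure cause. The call-trace fields, metrics, and thresholds are hypothetical stand-ins of my own, not anything drawn from the Veritas product.

```python
# Illustrative sketch only: keying lagging application calls back to
# infrastructure metrics, as described above. Every name, metric, and
# threshold here is a hypothetical stand-in, not the Veritas product.

from dataclasses import dataclass


@dataclass
class CallSample:
    name: str          # the application call that was traced
    latency_ms: float  # observed latency for this call
    server: str        # host the call executed on
    lun: str           # storage volume the call touched


# Hypothetical infrastructure metrics gathered out-of-band.
SERVER_CPU_PCT = {"app01": 97.0, "app02": 40.0}
LUN_QUEUE_DEPTH = {"lun7": 2, "lun9": 48}


def diagnose(call: CallSample, slow_ms: float = 500.0) -> str:
    """Map a slow call to a likely infrastructure cause."""
    if call.latency_ms < slow_ms:
        return "ok"
    if SERVER_CPU_PCT.get(call.server, 0.0) > 90.0:
        return "overburdened server"
    if LUN_QUEUE_DEPTH.get(call.lun, 0) > 32:
        return "congested storage / poor data layout"
    return "unknown; check network bandwidth"


if __name__ == "__main__":
    samples = [
        CallSample("SELECT ... FROM orders", 820.0, "app01", "lun7"),
        CallSample("write_invoice_pdf", 950.0, "app02", "lun9"),
        CallSample("lookup_customer", 35.0, "app02", "lun7"),
    ]
    for s in samples:
        print(f"{s.name}: {diagnose(s)}")
```

The same signal that drives the diagnosis could, in principle, drive the thermostat, triggering a resource allocation rather than just a report.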

Bottom line: I wouldn’t put a lot of stock in this possibility, though my sources seemed pretty convinced that it was in play.

One thing I did note was the joint QLogic, Microsoft, and HP event that announced the new Fibre Channel SAN for the Masses. I was involved in a usability test at Microsoft on this QLogic/Microsoft effort a couple of months ago. I was under non-disclosure until now, but with the announcement the NDA gloves came off.

Here’s my take, for what it’s worth. During my visit to Microsoft, I saw something truly remarkable: the roll-out of a small Fibre Channel fabric in about 20 minutes (except for a slight interruption having to do with the recognition of an additional NIC installed for fabric management). In the end, a stable fabric had been established between a Microsoft server equipped with a QLogic HBA and three storage targets from different vendors.

A bit more detail: I traveled to Redmond and, after a briefing on Microsoft’s newly developed storage features, was escorted to a test lab and put in the driver’s seat. I installed some software from QLogic on a Microsoft server, wired some storage targets to a QLogic switch, and performed a few more steps (since incorporated into a wizard, per my recommendation) to set up the interface with the server, and, voilà, as Microsoft’s Claude Lorenson might say: instant FC SAN.

I was impressed by the simplicity of the deployment (except for that NIC issue, which I have experienced in my own test labs when installing multiple NICs into a Windows server). Clearly, Microsoft and QLogic had accomplished what they set out to do. Technically speaking, it was a huge improvement over past methodologies for FC deployments.

But, as a practical matter, I found myself asking the same question I was asking privately at the SNW announcement: Why do smaller businesses (the target audience for the technology) want or need a Fibre Channel fabric? What application could they possibly have that requires one, even if you take away some of the installation burden?

Just because you can do something technically doesn’t mean that it has any real value. We may be able to write data to a bubble of gas in a liquid suspension, and demonstrate that the data will remain viable for a half second or so, but what is the practical value?

Much as I would like to be on Microsoft’s Christmas card list, I have to say that if one of my small to medium-sized clients needed a scalable topology to support a business app in a Microsoft environment, I would probably point them to Microsoft’s iSCSI initiator (free), a generic NIC, a cheap GigE switch, and a purpose-built storage platform from somebody like Adaptec. SMEs, as a rule, simply do not field applications that need the horsepower of FC fabrics, or that justify the headaches that go along with scaling and zoning them over time.

The HP guy observed that FC fabrics were needed by SMEs for backup consolidation. That might be true, though I could find other backup consolidation techniques that wouldn’t require an FC fabric at all.

HP did put a user on the podium, presumably to provide a case in point, but its poster boy, the one-man IT department for a small, exclusive prep school, made no such case. In fact, he seemed less than enamored with FC fabric technology and unable to make a valid business case for the solution based on real application requirements. To hear him talk, he just needed a lot more storage space for the digital video footage and megapixel still images that overindulged students insist on storing for posterity, regardless of the expense, on his limited infrastructure. To me, he seemed pleased and relieved that he got all the hardware and software for free from the three vendors in exchange for being their product pitchman.

What would strike me as a real breakthrough would be Microsoft using its muscle to force all FC switch makers to work together on real interoperability standards in order to field truly open fabrics. Perhaps the work with QLogic is a small step in that direction.

Comments are welcome: jtoigo@intnet.net

About the Author

Jon William Toigo is chairman of The Data Management Institute, the CEO of data management consulting and research firm Toigo Partners International, as well as a contributing editor to Enterprise Systems and its Storage Strategies columnist. Mr. Toigo is the author of 14 books, including Disaster Recovery Planning, 3rd Edition, and The Holy Grail of Network Storage Management, both from Prentice Hall.
