
You Had Me At I/O: A Love Story

The time for application-centric storage performance monitoring is now

While I am not a Tom Cruise fan, per se, there is a line from his film, Jerry Maguire, that has become as identifiable as Bogart’s “Here’s looking at you, kid.” The memorable line is spoken by the attractive female lead, who concedes with affection to Cruise, “You had me at ‘Hello.’”

Today, rarely is a storage deal consummated between a sales person and a prospective customer immediately following an introduction. It seems that despite ceaseless courting by order-hungry storage vendors, most just aren’t “feeling the love” from customers that they enjoyed back in 2000.

The reasons are simple. Vendors have failed to deliver on nearly every aspect of the Enterprise Network Storage Architecture (ENSA) vision, substituting for “a dynamic and intelligent storage pool” a clumsy fabric of compatibility-challenged switches and HBAs. Their current approach, though filled with flowery words and sugar-coated value propositions, just isn’t resonating.

If you think I am exaggerating, read a SAN brochure sometime. Vendors have dedicated a lot of ink to articulating a value proposition for their technology, claiming that FC SANs will do everything from enabling server consolidation and reducing the propensity for storage downtime to realigning IT with business and safeguarding the CEO from a 20-year prison sentence.

The problem with this appealing (if fictional) SAN value proposition is that hardly anybody reads anything without the words “Harry Potter” or “Bill Clinton” in the title. Moreover, virtually everyone I know is following the diet du jour that has them abstaining from unnecessary carbohydrates, including sugary sweets. (If I’m lucky, they find time between all the unsolicited e-mail to travel to this site and read what I’ve written here—and I’m not selling anything.)

Having failed to execute on their promises, vendors seem to have succeeded only in alienating customers with half-baked technology, vendor infighting, and boundless hype. Maybe what is needed is less “relationship building” and more product performance. Just maybe the slow adoption of Fibre Channel fabrics is a consumer response. Customers are telling vendors (to paraphrase the pop song), “If you want to be with me, you have to do the J-O-B.”

An example of the right approach to cultivating customer interest comes from correspondence I received recently from Tom West of HyperI/O. What he wrote in his e-mail actually made me want to contact the guy to learn more about his technology. You might say, to paraphrase the movie quote, “He had me at ‘I/O.’”

West’s e-mail began in a familiar way, with complimentary remarks about this column and its stalwart stance in favor of the storage technology end user. However, it quickly evolved into a deeper discussion of points we had covered in our last piece on Information Lifecycle Management (ILM). West agreed that there is a huge gap in any ILM scheme that lacks granular data on access frequency: you need to know how often a file is accessed, and whether it is modified or merely referenced, in order to pick the appropriate platform on which to store the data itself. Said West, “Problem is that no one is tracking I/O speeds and feeds.”
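
To make the distinction concrete: most file systems already record separate “last accessed” and “last modified” timestamps, which is the raw material for the modified-versus-referenced question, though timestamps reveal only the last touch, not frequency. A minimal Python sketch, with thresholds and classifications that are mine for illustration (not anything West prescribes):

    import os
    import time

    def classify_file(path, stale_days=90):
        """Classify a file as recently modified, merely referenced,
        or untouched, based on file-system timestamps alone."""
        st = os.stat(path)
        age_limit = stale_days * 86400  # window size in seconds
        now = time.time()
        if now - st.st_mtime < age_limit:
            return "recently modified"
        if now - st.st_atime < age_limit:
            return "referenced but not modified"
        return "untouched: a candidate for a cheaper tier"

    for name in os.listdir("."):
        if os.path.isfile(name):
            print(name, "->", classify_file(name))

What timestamps cannot tell you is how many times a file was touched inside the window. Answering that requires continuous instrumentation of the I/O path, which is precisely West’s point about speeds and feeds.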

West, whose burgeoning company develops products in the I/O measurement space, noted that he sought to contribute his technology to SNIA’s Storage Management Initiative Specification (SMI-S), an effort with which he is in philosophical agreement. But, he noted, his efforts have met with utter silence from that body, a silence he has interpreted as disinterest. His question (and ours) is why the storage industry association doesn’t perceive the important connection between storage performance and storage management. He said he didn’t want to speculate about its reasons.

Instead, he told us about his product, hIOmon, which is essentially a performance-analysis software utility that measures and monitors disk I/O performance at the individual file level quickly and easily. The product can provide file I/O operation performance metrics on both a detailed ("somewhat akin to an MVS GTF trace in mainframe parlance") and a summary basis at the level of a discrete file. It can do its magic in either real-time or "replay" (historical) display modes. In effect, hIOmon helps answer the question: "How fast are my files?"
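
I have not seen inside hIOmon, which does its collection down in the I/O path, but the question it answers can be roughed out at the application level in a few lines of Python. The sketch below times individual read() calls against a test file ("sample.dat" is a stand-in of my own invention), with the caveat that application-level timing includes file-system cache effects a driver-level monitor would see through:

    import time

    def time_file_reads(path, block_size=64 * 1024):
        """Sequentially read a file, recording the latency of each
        read() call in milliseconds."""
        latencies = []
        with open(path, "rb") as f:
            while True:
                start = time.perf_counter()
                block = f.read(block_size)
                latencies.append((time.perf_counter() - start) * 1000.0)
                if not block:
                    break
        return latencies

    lat = time_file_reads("sample.dat")  # hypothetical test file
    print("reads: %d  avg: %.3f ms  max: %.3f ms"
          % (len(lat), sum(lat) / len(lat), max(lat)))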

In addition, the latest hIOmon release introduces support for "process-based" file I/O operation performance metrics. That is, discrete file I/O performance metrics can now be collected together and associated with a specific process. You can use the tool to discern, characterize, and highlight the behavior of the data generated and processed by specific applications, based upon actual file I/O performance. This new feature lets you tell which files are associated with a particular process, as well as which processes are associated with a specific file. Users can see at a glance how their applications are performing (from a file I/O performance perspective) using a top-down approach.
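
To illustrate the process-to-file mapping (not hIOmon’s actual interface), something analogous can be assembled from the open-source psutil library, which exposes each process’s open files and its cumulative I/O counters. This is far coarser than per-file, per-operation metrics, but it shows the shape of the association:

    import psutil

    # Walk every visible process, reporting its cumulative I/O
    # counters and the files it currently holds open.
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            files = proc.open_files()
            io = proc.io_counters()  # not available on every platform
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue
        if not files:
            continue
        print("%s (pid %d): %d bytes read, %d bytes written"
              % (proc.info["name"], proc.info["pid"],
                 io.read_bytes, io.write_bytes))
        for f in files:
            print("    " + f.path)

Inverting the view, from a file of interest to the processes touching it, is a matter of indexing the same data the other way around.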

I have no commercial involvement whatsoever in West’s products, which are discussed in greater detail on his Web site (http://www.hyperIO.com). I may buy a copy shortly so I can assess the usability of its Java GUI and command-line interface in my lab. On its face, it seems like a sensible tool to use in my own storage assessment information-collection effort.

Bottom line: I like the concept … a lot. I hope that HyperI/O’s offering is just the first of many such products to come to market, because utilities of this kind are vitally needed to address both strategic and tactical storage-planning requirements.

For one thing, reliable I/O performance data is hard to come by from vendors. While some publish SPEC.org or Storage Performance Council test results, these results are subject to considerable “performance engineering” and are only useful or relevant to the extent that the test conforms to the real-life situation (the actual applications and configuration parameters) in your shop. The best way to learn about I/O speeds and feeds is to test equipment yourself, under actual workloads.

However, organizations that perform such testing (and their number is falling fast, given how many IT departments have lost their testing resources to budget cuts) are often close-lipped about results. Part of the reason is that some vendors have written what may appropriately be termed “gag orders” into their product warranties: customers are barred from speaking publicly about the performance they see in their platforms on pain of losing warranty coverage. In other cases, customers keep quiet about their numbers because they fear that poor results will suggest some lack of skill or knowledge on their part.

Whatever the reason, the scarcity of objective performance data limits our ability to compare solutions on an apples-to-apples basis and to predict what benefits will accrue from selecting product X over product Y. Without performance data, there can be no intelligent strategic storage-product selection and no reasonable ROI or payback analysis with which to vet solutions.

There can be no Information Lifecycle Management, either. We need to know how many times a file is accessed (regardless of whether it is modified) if we want to decide whether it is a suitable candidate for migration to another storage tier or for archiving. None of the ILM schemes out there does this right now. Products such as HyperI/O’s offer a potential fix, moving us closer to true ILM and capacity-utilization efficiency.
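
As a sketch of the decision such data would enable: assume we had trustworthy per-file access counts over an observation window; the tiering choice then reduces to a policy rule. The thresholds below are invented for illustration and would be tuned to a shop’s own workloads and service levels:

    def migration_tier(accesses_per_month, modified):
        """Suggest a storage tier from observed access frequency.
        Thresholds are illustrative, not prescriptive."""
        if accesses_per_month == 0:
            return "archive"
        if accesses_per_month < 5 and not modified:
            return "secondary (cheaper, slower) tier"
        return "primary tier"

    # A report read twice a month but never changed:
    print(migration_tier(2, modified=False))  # -> secondary tier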

Tactically, I/O measurement, mapped to discrete business processes and their supporting applications, would be of enormous value in storage infrastructure optimization and management. This data would better enable us to tune storage to meet the special needs of applications on a one-off basis. Moreover, it would be of enormous assistance in localizing and troubleshooting chokepoints in networked storage.

The time for application-centric storage performance monitoring is now.

About the Author

Jon William Toigo is chairman of The Data Management Institute, the CEO of data management consulting and research firm Toigo Partners International, as well as a contributing editor to Enterprise Systems and its Storage Strategies columnist. Mr. Toigo is the author of 14 books, including Disaster Recovery Planning, 3rd Edition, and The Holy Grail of Network Storage Management, both from Prentice Hall.
