Managing Storage: What's Missing (Part 1 of 2)

Old-guard storage vendors believe storage management is the same as resource management. We explain why that concept is out of date.

Last summer, a Big Five (or is it now Big Three?) accounting firm issued a report on corporate tech spending stating that companies had failed to apply the tried-and-true principles of 20th-century industrial management to 21st-century tech infrastructure management. Simple things such as inventory management processes and controls had been largely overlooked, despite the fact that IT resources, which are increasingly seen as commodities, are potentially well suited to such management techniques.

That statement resurfaced in my mind last week in San Francisco, CA, where I was the emcee at a storage event hosted by Bear Data Systems, a large storage integrator in the Bay Area. I listened to vendors (including Network Appliance, Cisco Systems, and Symantec) talk about their wares in 15-minute stage presentations. As each company provided a different take on storage management from the perspective of its own products, it was apparent to me that no one got the point.

The hardware vendors emphasized solving storage management issues by hard-wiring management directly into the infrastructure. Network Appliance told us we could buy NAS appliances, iSCSI gateways to back-end FC fabrics, and nearline disk storage subsystems (with back-end connections to tape, if desired) that would enable us to create an orderly progression of data flow across a multi-tiered storage repository. This is the classic disk-to-disk-to-tape play. In essence, storage infrastructure should be built and managed to handle data movement.

From Cisco Systems, we heard how Cisco switching and routing equipment could move data between "the three networks in most enterprises": the corporate LAN-based workstation and server farm, the enterprise Fibre Channel storage fabric, and even the odd switched InfiniBand server cluster environment that Cisco is seeing in firms with a yen for high-performance computing. All data is manageable across these interconnects courtesy of Cisco's switching platforms and their embedded operating systems. Again, the view was that infrastructure should be built and managed to facilitate data movement.

Next to these vendors, Symantec stuck out like a sore thumb. With no hardware to sell, Symantec's speaker gave a brief presentation on the company's multiple backup strategies and products: NetBackup for enterprise platforms (aka UNIX), Backup Exec for Windows machines, and something called Enterprise Vault, an overarching archive product for files, e-mail, and structured content. Symantec's presentation moved the discussion of storage management out of the realm of plumbing and into the realm of software services. (It was also mercifully shorter than the other PowerPoint slide decks.) Still, here was a company whose storage management story focused on applications for moving data across infrastructure.

If these speakers can be considered representative of the industry old guard (and I believe they are), then capacity management and I/O routing pretty well define what they take "storage management" to mean: resource management. I am forced to wonder whether this definition is still adequate.

Classic storage resource management (SRM), which derives from the mainframe space, is above all a hardware-facing management paradigm. In a block diagram, SRM would likely include the discovery of disk spindles and the monitoring of their proper operation (especially heat), the soundness of interfaces and interconnects, and available space. When disk drives are gathered into arrays, SRM commonly includes the discovery of aggregates of spindles (LUNs) and the monitoring of the health and capacity allocation of the array as a whole. Place arrays in a fabric and you have still other devices (such as switches) to add to the monitoring workload.
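
To put that block diagram in concrete terms, here is a minimal sketch (in Python) of the kind of object model an SRM tool might maintain. Every class and field name is illustrative, not drawn from any actual SRM product:

```python
from dataclasses import dataclass, field
from typing import List

# An illustrative object model for the SRM "block diagram" described
# above. All names are hypothetical, not taken from any real product.

@dataclass
class Spindle:
    serial: str
    capacity_gb: float
    free_gb: float
    temperature_c: float        # heat is a primary health indicator
    interface_ok: bool          # soundness of interfaces/interconnects

@dataclass
class LUN:
    lun_id: int
    spindles: List[Spindle]     # a LUN aggregates physical spindles

    @property
    def free_gb(self) -> float:
        return sum(s.free_gb for s in self.spindles)

@dataclass
class Array:
    name: str
    luns: List[LUN] = field(default_factory=list)

    def overheated(self, limit_c: float = 55.0) -> List[Spindle]:
        """Return every spindle running above the temperature limit."""
        return [s for lun in self.luns for s in lun.spindles
                if s.temperature_c > limit_c]
```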

SRM block diagrams might include low-level service discovery and monitoring as well. Data movers; hierarchical storage management and archive routines keyed to simple criteria such as file age; RAID operation; and even virtualization, compression, and security services may all be part of the SRM monitoring stack. These services are usually embedded in the storage products you buy, so monitoring them (together with the hardware) involves establishing a connection between the SRM tool or console and the hardware platform(s) via an application programming interface (API) or a Simple Network Management Protocol (SNMP) Management Information Base (MIB).
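
To make the MIB path concrete, here is a minimal SNMP poll using Python's pysnmp library (the synchronous high-level API of pysnmp 4.x). The hostname is hypothetical, and the object queried is the standard MIB-II sysDescr; a real SRM console would walk the vendor's private MIB for spindle health and capacity details:

```python
# Poll one SNMP object from a (hypothetical) storage array.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData('public'),                         # SNMPv2c community
        UdpTransportTarget(('array.example.com', 161)),  # hypothetical host
        ContextData(),
        ObjectType(ObjectIdentity('1.3.6.1.2.1.1.1.0')), # MIB-II sysDescr.0
    )
)

if error_indication:
    print(f'Poll failed: {error_indication}')
elif error_status:
    print(f'SNMP error: {error_status.prettyPrint()}')
else:
    for oid, value in var_binds:
        print(f'{oid} = {value}')
```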

With approximately 11,000 new storage platforms being introduced to the market each year, just aggregating their API or MIB information in a common console for reporting and event monitoring is a technically non-trivial task. For one thing, vendors are not consistent on the details they provide about the internal operation of their wares. In many cases, the only information you can obtain about a particular array is the list of LUNs the array is advertising for use by applications and end users. In essence, you see what the server sees, which is not a complete picture of how capacity is being allocated within the array itself.
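
The aggregation problem can be illustrated with a hypothetical normalization step: each vendor feed must be mapped onto a common schema, and detail the vendor withholds stays empty rather than being guessed at. All vendor and field names below are invented:

```python
# A hypothetical normalization step for a common SRM console. Often all
# you can see is the list of LUNs the array advertises (the server's
# view, not the array's), so the common schema tolerates missing fields.
from typing import Any, Dict, Optional

def normalize(vendor: str, record: Dict[str, Any]) -> Dict[str, Optional[Any]]:
    """Map a vendor-specific record onto one common schema."""
    if vendor == 'vendor_a':        # a rich API exposing array internals
        return {
            'vendor': vendor,
            'array_id': record['id'],
            'advertised_luns': record['luns'],
            'raw_capacity_gb': record['raw_gb'],
            'allocated_gb': record['alloc_gb'],
        }
    # Default case: only what the server sees is available.
    return {
        'vendor': vendor,
        'array_id': record.get('name'),
        'advertised_luns': record.get('luns'),
        'raw_capacity_gb': None,    # internal allocation is opaque
        'allocated_gb': None,
    }
```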

Additionally, some hardware vendors don't like the idea of common (that is, platform-agnostic) management schemes that might facilitate the deployment of a competitor's gear on the same raised floor as theirs. As a result, they deliberately limit the information available to SRM consoles via common management interfaces, while in many cases bolstering the proprietary management story for homogeneous deployments of their own gear.

Finally, some hardware vendors have created annuity programs around their hardware APIs. To get any access to management connections, independent management software vendors must ask very nicely or, more often than not, pay big bucks to become a “gold partner” or to have their management console “certified” by the hardware vendor.

Bottom line: storage-facing capacity management confronts significant challenges. Indeed, we aren't managing capacity allocation very efficiently at all, and it shows up in two places: in the labor costs of storage, where management shortcomings increase the number of personnel needed to administer growing capacity, and in new hardware acquisitions, where the lack of management forces us to keep adding "junk drawers."

In place of effective capacity management, most vendors have jumped on the consolidation bandwagon, as though amassing all of your gear in a central location makes it any more manageable. It doesn't, though it does raise temperatures, both in the data center and among users, when consolidation results in costly delays in data access that spoil user and application productivity.

Treating spindles as inventory might provide the beginning of an answer. Suppose we exposed every spindle to active analysis of its ongoing usage and performance characteristics: capacity availability, hot spots (spindles holding data that is very frequently accessed or updated), physical drive performance, and error conditions. Factor in depreciated asset value and probable failure rates, and we could achieve a far greater level of control over storage costs. Managing disk as inventory would also help us think outside the box and understand value propositions, however counterintuitive they may seem, about the disposability of disk, the merits of RAID 1/0 or Copy-During-Write configurations, and other sensible strategies in infrastructure design.
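
As a sketch of what disk-as-inventory analysis might look like, consider the following Python fragment. The depreciation schedule, hot-spot threshold, and failure-rate arithmetic are all assumptions made for illustration, not industry standards:

```python
# Score each spindle on utilization, access heat, depreciated value,
# and failure risk. All thresholds and figures here are assumed.
from dataclasses import dataclass

@dataclass
class SpindleRecord:
    serial: str
    capacity_gb: float
    used_gb: float
    iops_daily_avg: float   # how hot the data on this spindle runs
    age_years: float
    purchase_cost: float
    afr: float              # annualized failure rate, e.g. 0.02

SERVICE_LIFE_YEARS = 5.0    # assumed straight-line depreciation period
HOT_IOPS = 500.0            # assumed hot-spot threshold

def depreciated_value(s: SpindleRecord) -> float:
    remaining = max(0.0, 1.0 - s.age_years / SERVICE_LIFE_YEARS)
    return s.purchase_cost * remaining

def report(s: SpindleRecord) -> str:
    flags = []
    if s.iops_daily_avg > HOT_IOPS:
        flags.append('HOT')
    if s.used_gb / s.capacity_gb > 0.90:
        flags.append('NEAR-FULL')
    if s.afr * s.age_years > 0.10:          # crude cumulative risk
        flags.append('FAILURE-RISK')
    return (f'{s.serial}: value=${depreciated_value(s):.0f}, '
            f'free={s.capacity_gb - s.used_gb:.0f} GB '
            f'{" ".join(flags)}')

print(report(SpindleRecord('WD-001', 300, 280, 900, 4, 250, 0.03)))
```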

What's missing from this storage management story, even with the improvements suggested above, is the application-facing side of a total storage management solution. More on this in next week's column. For now, your comments are welcomed at jtoigo@toigopartners.com.

About the Author

Jon William Toigo is chairman of The Data Management Institute, the CEO of data management consulting and research firm Toigo Partners International, as well as a contributing editor to Enterprise Systems and its Storage Strategies columnist. Mr. Toigo is the author of 14 books, including Disaster Recovery Planning, 3rd Edition, and The Holy Grail of Network Storage Management, both from Prentice Hall.