
Data Lifecycle Management and the Mainframe Mindset

Without storage manageability, we can't effectively address storage costs, where management consumes up to 60 cents of every storage hardware dollar. So why, in the distributed systems world, are we stuck without decent management tools?

I often hear mainframers disparaging the issue of data lifecycle management as a problem only for those companies hosting their applications on the technological tinker toys found in distributed environments. At a recent event, where I was arguing that a data naming scheme is a prerequisite for anything like Information Lifecycle Management in the distributed world, a woman was chuckling at her table at the front of the room. She wasn’t especially loud or disruptive, just happy about something that only she understood.

At the break, I asked her what was so amusing. Her response was that she had none of the problems I was describing in her environment, an IBM z/OS mainframe shop. She marveled at how little the distributed systems folk had learned from the mainframe world, where management (and a database-oriented file system) was built in and logical and physical class descriptions for data migration were de rigueur. She said that IBM’s mainframe system-managed storage and hierarchical storage management software had taken care of most of the issues covered in my talk. It struck her as ironic and, well, funny, to be hearing them re-discussed so many years after they had been “fixed” by Big Blue.

In the mainframe world, well-defined roles have long been established for system memory, direct attached storage devices (DASD), and tape and optical devices. With some effort, and using a robust set of tools, IT professionals can classify their data and migrate it across infrastructure based on access frequency and other criteria. It’s been that way since about 1980.
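For distributed systems readers, the underlying idea is simple enough to sketch in code. The fragment below is a minimal, purely illustrative approximation in Python, not IBM's DFSMS or any shipping HSM product: it assigns each dataset a hypothetical management class and plans migration off primary disk once the time since last access exceeds the class's threshold. All class names, thresholds, and tier labels here are assumptions made up for the example.

from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical management classes: the policy is attached to the data,
# not to a particular device, which is the essence of the mainframe approach.
MANAGEMENT_CLASSES = {
    "CRITICAL": {"migrate_after_days": 90, "target_tier": "tape"},
    "STANDARD": {"migrate_after_days": 30, "target_tier": "tape"},
    "SCRATCH":  {"migrate_after_days": 7,  "target_tier": "expire"},
}

@dataclass
class DataSet:
    name: str
    management_class: str
    last_accessed: date
    tier: str = "dasd"  # primary disk by default

def plan_migrations(datasets, today=None):
    """Return (dataset, target_tier) pairs for data whose access age exceeds policy."""
    today = today or date.today()
    plan = []
    for ds in datasets:
        policy = MANAGEMENT_CLASSES[ds.management_class]
        age_in_days = (today - ds.last_accessed).days
        if ds.tier == "dasd" and age_in_days >= policy["migrate_after_days"]:
            plan.append((ds, policy["target_tier"]))
    return plan

if __name__ == "__main__":
    demo = [
        DataSet("PAYROLL.MASTER", "CRITICAL", date.today() - timedelta(days=120)),
        DataSet("TEMP.WORKFILE", "SCRATCH", date.today() - timedelta(days=10)),
        DataSet("SALES.Q3", "STANDARD", date.today() - timedelta(days=5)),
    ]
    for ds, target in plan_migrations(demo):
        print(f"{ds.name}: migrate from {ds.tier} to {target}")

The point is not the code itself but the model it encodes: classification and movement criteria travel with the data, which is precisely what most distributed file systems still lack.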

A similar story could be recounted about an IT manager I recently met in Atlanta. He did not get all of the hoopla about replacing file systems with a database (a goal of Microsoft and a potential boon for data management and data naming). With all of his critical apps hosted on AS/400 platforms, his file system had always been a bunch of records in a database. He said he saw nothing new in this data lifecycle management and file system replacement stuff.

That the pain of contemporary distributed computing, and of data management across distributed computing infrastructure, has come back to haunt companies hosting their applications in this environment comes as no surprise to many tenured (that is, mainframe-savvy) IT professionals. To them, the movement of business applications onto distributed computing platforms is to blame for everything we now confront. They interpret all of the infighting among open systems storage vendors and the complaints about ineffective data management as indicators of the same thing: the absence of the sanity provided by mainframes.

Such a view seems to be catching on in the open systems world these days. Many companies prefer stovepipe hardware solutions from brand name big iron vendors as the modality for solving their storage woes. Going with a single storage vendor, in theory, delivers all of the “comfortable numbness” that went along with outsourcing the IT strategic planning process to Big Blue in the classic mainframe shop. There are a lot of CIOs who apparently like it that way.

But don’t count a certain cellular communications giant in that group. On a visit to the company recently, I learned about an ongoing competitive analysis of storage management software products that the company was performing. They had purchased a sizeable amount of EMC Symmetrix storage, hoping to leverage the value proposition advanced by Hopkinton’s “one-throat-to-choke” and common management across homogeneous hardware. The problem was, according to the CIO, that EMC’s native storage management software, ControlCenter (ECC), just didn’t deliver the goods.

“We were sold a bill of goods,” said the CIO about the ECC software, detailing many functional aspects of the software that were promised but were either non-functional or missing altogether. “We don’t even want to hear their ILM pitch, since they can’t even get their storage resource management part right to our satisfaction,” he added with a chuckle.

Still intent on a single-vendor solution, he said that HDS was considered as a replacement for EMC. “But we found AppIQ (HDS’ management software, which is OEM’ed from the software company of the same name) to have even more holes than ECC.”

Breaking with the stovepipe model, the company tested, and dismissed, Veritas Software as “too burdensome to manage on every host,” and is now evaluating Computer Associates’ BrightStor product family. The cellular CIO hopes that BrightStor will meet his carefully crafted assessment criteria, and he has promised to share his findings in this column, so we hope to have the chance to report the specifics of that evaluation.

Bottom line: you can never go home again. Returning to a “mainframe mindset” with respect to storage acquisitions doesn’t necessarily mean that homogeneous infrastructure will deliver better management. And without manageability, you can’t effectively address storage costs, where management consumes between 45 and 60 cents of every dollar spent on hardware today.

We welcome your views, and any reports of evaluations performed of storage technology by your company. On request, your identity (and that of your company) will be held in confidence. Write to jtoigo@intnet.net.

About the Author

Jon William Toigo is chairman of The Data Management Institute, the CEO of data management consulting and research firm Toigo Partners International, as well as a contributing editor to Enterprise Systems and its Storage Strategies columnist. Mr. Toigo is the author of 14 books, including Disaster Recovery Planning, 3rd Edition, and The Holy Grail of Network Storage Management, both from Prentice Hall.
