SNIA’s Information Lifecycle Management Paper Reads Like a Beach Novel

Like a summertime guilty-pleasure-but-pointless novel, SNIA’s latest document on information lifecycle management makes for nice beach reading.

In 1973, Richard L. Nolan, an associate professor at Harvard’s Graduate School of Business Administration, set forth a model describing the assimilation of technology by business organizations. He enumerated five stages through which most businesses would pass on their way to IT nirvana: a gentle S-curve depicting a predictable pattern of learning and adoption that would result in what might be called today a pure business-information nexus.

Based mostly on Nolan’s analysis of the budgetary expenditures of the companies he studied, the first four stages were defined by a business’s steady adoption and funding of data processing (DP) until, at Stage IV, integration became the order of the day and end users were placed increasingly in the driver’s seat.

Stage V was less precisely defined. Somehow, the organization would move from an IT-as-service-bureau model to an IT-as-data-manager model, though it was never clearly spelled out exactly how this would occur. In fact, how and when we would get to Stage V depended on whose hypothesis you believed.

One view held that the next evolution of IT would be determined by changes in technology itself, while others contended that evolution would be contingent upon the assimilation of specific technologies such as databases, word processing software, or personal computers. Still others held that Stage V was entirely dependent on a combination of situational factors, ranging from management style to economic conditions to a willingness to embrace technological innovation, that would vary wildly from one organization to another.

By the late 1980s, the value of the Nolan model as a predictor and guideline to strategic planners in business organizations had diminished substantially. Critics said that the model was more metaphorical than meaningful, especially given that “Nolan’s own operationalization of the model remains proprietary, [and] other researchers [are] forced to rely on their own judgment for many important measurement and interpretation issues.”

Bottom line: Nolan had not defined his terms or assumptions clearly (limiting the ability of others to collect empirical data to prove or disprove his S-curve) and he held the model itself as proprietary (limiting its accessibility for peer review).

Why this trip down memory lane? Simple. Nolan’s model used to provide IT managers considerable comfort by serving as a generally-agreed-upon theory for justifying the strategic objectives of IT (and their associated expenses) to senior management. For its part, management seemed willing to buy into justifications based on the Nolan model, if only because of its Harvard B-school pedigree.

Filling the Theoretical Vacuum

Since the demise of the Nolan model, there has been something of a theoretical vacuum that needed to be filled. What’s more, there is a chasm between front office (corporate management) and back office (IT) goals and objectives in many companies that seems only to be worsening with time.

I was reminded of this fact this week when a kind reader forwarded a copy of the latest white paper from the Storage Networking Industry Association covering its latest thinking about information lifecycle management. Unfortunately, much like Nolan’s work, the document is more metaphorical than actionable, and it seems to suffer from the same lack of empirical evidence to help it serve as any real data-management model.

The authors of the document (you must be a member of SNIA, or know someone who is, to access it) are a powerful trio: Jack Gelb from IBM, Edgar St. Pierre from EMC, and Alan Yoder from Network Appliance. Two of them hold advanced degrees and all are “Senior” operatives for their respective companies. Bottom line: the document carries the industry’s sanction, laying out the party line of the major players in storage today (excluding HDS, HP, and Sun).

The main thrust of the text is to describe mechanisms for managing data as a function of service-level objective groups (SLOGs) and composite storage services (CSS)—services built from aggregated but disparate infrastructure components. In true engineering spirit, there are lots of diagrams of abstraction layers and high-level workflows that purport to describe how data produced by the business and classified by yet-to-be-defined committees of business line managers and end users will be fitted with “offered data service levels” (ODSLs), then automated across the infrastructure throughout its useful life.
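To make the alphabet soup a bit more concrete, here is a minimal sketch, in Python, of how those abstractions might hang together. To be clear, the class names, attributes, and matching rule below are my own illustrative assumptions, not anything SNIA has specified:

```python
# A minimal sketch, not SNIA's specification: my own shorthand for how the
# paper's SLOG/CSS/ODSL abstractions might be modeled. Names are illustrative.
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class CompositeStorageService:
    """A CSS: aggregated, disparate gear advertised as one storage service."""
    name: str
    # Offered data service levels (ODSLs), e.g. {"retention_years": 7}
    offered: Dict[str, float] = field(default_factory=dict)


@dataclass
class ServiceLevelObjectiveGroup:
    """A SLOG: the service levels a given class of data is supposed to get."""
    data_class: str
    objectives: Dict[str, float] = field(default_factory=dict)


def find_matching_css(slog: ServiceLevelObjectiveGroup,
                      services: List[CompositeStorageService]
                      ) -> Optional[CompositeStorageService]:
    """Return the first CSS whose advertised ODSLs satisfy every objective.

    (A real matcher would need per-attribute semantics -- "at least" for
    retention, "at most" for recovery time -- which the paper leaves open.)
    """
    for css in services:
        if all(css.offered.get(name, 0.0) >= required
               for name, required in slog.objectives.items()):
            return css
    return None


# Example: route "regulated email" to whatever service meets its objectives.
tier1 = CompositeStorageService("fc_array", {"retention_years": 1})
archive = CompositeStorageService("tape_archive", {"retention_years": 7})
email = ServiceLevelObjectiveGroup("regulated_email", {"retention_years": 7})
print(find_matching_css(email, [tier1, archive]).name)  # -> tape_archive
```

Even in toy form, the hard part is obvious: someone has to supply honest numbers for both the objectives and the offered service levels, which is exactly where the classification committees and vendor cooperation come back in.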

Getting Past the Acronyms

If you can get past the acronyms, you see what the authors intend—and it is admirable. We could actually do lifecycle management if (1) data was classified so it could be ingested into the right workflow, (2) combinations of gear could be classified based on the services they offer (assuming they work together at all), and (3) technology remained stable enough for a sufficient period of time (or the industry continued to comply with a common model for advertising services and for ensuring interoperability between different products) that you didn’t have to reinvent the wheel every year or two.

As you might suspect, the central problem here is the fiction that underlies the entire scheme. Summertime is typically a high-sales season for guilty-pleasure-but-pointless novels, and SNIA has given us some nice beach reading.

First, for the scheme described in the paper to work at all, it would require an unprecedented amount of cooperation from an industry that doesn’t seem very inclined to cooperate on much. Even SNIA’s much-touted SMI-S management initiative has fractured, resulting in the formation of a rival effort within the industry—IBM’s Aperi group.

Second, the information consumer (i.e., the business) would need to correctly identify its data inputs and outputs and have the technical acumen to connect them to the service-level descriptions advertised by the infrastructure. The paper addresses the latter point by postulating the existence of at least two new IT positions: 1) the information technology architect (ITA), who interfaces with the business line managers (who own the data) and the records managers (who know the compliance requirements) to help “translate” their requirements into policies; and 2) a “data service resource manager” (it is unclear whether this is a human being), responsible for keeping straight the interfaces between SLOGs and the underlying infrastructure CSSs (or, in English, someone whose job it is to make sure that a continuously changing hardware set can still be cobbled together to offer something like the services required by continuously changing service-level objectives).

Ultimately, according to the authors, it would also be nice if everyone in the application software development community would build hooks into their products to recognize the SLOG/CSS connection, too, so they could automatically send data where it needs to go.

Where’s the Business Case?

As in any good summer blockbuster, we must suspend disbelief to give the story wheels. Missing from the document as a whole is a business case for doing anything at all. Yes, there is a lot of hat-tipping to the regulatory requirements for data protection, fast retrieval, retention, and deletion. But one gets the sense that this is the only reason (or at least the most important one) to do ILM—which it isn’t. A smart guy from CA once remarked that compliance is only a catalyst for data management; it isn’t a full-fledged business case. Truer words were never spoken.

ILM should be driven by the same metrics as any other IT initiative: measurable benefits to the company in terms of cost savings, risk reduction, and process improvement. In this paper, the focus appears to be mostly on reducing the risk of regulatory noncompliance. This, perhaps, is a reflection of the current proliferation of investigations that the SEC has launched against many storage players—and anticipation of some that are to come. But I digress…

Absent is any discussion of ferreting out the underlying cost data associated with specific platforms. In fact, no one is saying much about the cost savings that might accrue from this scheme at all. Theoretically, ILM should take into account the cost of parking data of value X on a platform with a cost-per-gigabyte of Y. The vendors don’t want us to go there, or we would all be investing a lot of our money in storage-over-IP solutions from Zetera, NetGear, and Bell Micro instead of their overpriced frames.
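For the record, the arithmetic the paper avoids is not hard. Here is a back-of-the-envelope sketch; the tier names and dollar figures are made-up assumptions for illustration, not vendor pricing:

```python
# Back-of-the-envelope sketch of the cost question ILM ought to start with:
# what does it cost to park data of value X on a platform of cost-per-GB Y?
# All figures below are illustrative assumptions, not vendor quotes.

def annual_storage_cost(gigabytes: float, cost_per_gb_year: float) -> float:
    """Annual cost of parking a given volume of data on a tier."""
    return gigabytes * cost_per_gb_year

tiers_cost_per_gb_year = {
    "enterprise_frame": 30.00,     # hypothetical $/GB/year
    "midrange_sata_array": 8.00,
    "ip_storage_or_tape": 1.50,
}

stale_data_gb = 10_000  # say, 10 TB nobody has touched in a year

for tier, cost in tiers_cost_per_gb_year.items():
    print(f"{tier:22s} ${annual_storage_cost(stale_data_gb, cost):>10,.2f}/year")
```

The interesting number is the delta between the rows; that delta is the cost-savings conversation the paper never starts.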

As for process improvement, little is mentioned. Does overlaying an already overpriced storage infrastructure with a data management scheme that requires, at the outset,

  1. the significant participation of company business managers (isn’t their time expensive?),

  2. the creation of new job descriptions and the hiring of people to fill them (where’s the job description, and how do you vet the candidates?), and

  3. the development of storage infrastructure classes in the absence of real performance data from the vendors themselves (how could you build CSS profiles without this?)

all with no practical assurance of a measurable improvement in corporate efficiency or productivity, make any sort of business sense?

If it does, I’m not seeing it. Like Nolan’s Stage V, we have a lot of discussion here that requires leaps of faith.

The simple truth is that the authors have not made a business case for what they are attempting to do. Instead, they have merely engaged in the engineer’s favorite pastime: drawing diagrams of interesting architectures on white boards.

Going further, the document provides no indication of broad (or even narrow) consultations with business people to see what they think about the architecture. It begins with a simple assertion that SNIA’s dictionary defines ILM as such-and-such, so we’ll work with that. In truth, SNIA surveyed both its own “End User Advisory Group” and many attendees of its SNW conference last year and discovered that there wasn’t even general agreement among business people on SNIA’s definition of ILM. That hasn’t stopped them from continuing to use the definition. They chalked up the results to their failure to properly educate business users.

As with Nolan’s work, there are no actionable test-case data here—no way to test the many assumptions the authors are making with respect to use cases and the like. There is also zero guidance on the “operationalization” of the scheme, which (for now) is only being shared with SNIA insiders. The entire initiative seems to be very proprietary, an interesting tack for a self-styled broker of open standards. But isn’t that SNIA in a nutshell?

In the final analysis, lacking a clearly articulated business case, I don’t see many senior managers willing to invest in this ILM architecture when and if it ever goes public. Without their investment, there is no ILM.

Today, more than 30 years after Nolan’s S-curve, we still find ourselves without a convenient theory to justify the expenditure of resources needed to transition to Stage V. If you were looking for SNIA to fill the void, look elsewhere.

On the other hand, if you wanted some enjoyable summer fiction to help you sleep at the beach, drop an e-mail to SNIA and ask for a copy of the July 19, 2006 paper entitled, "Managing Data and Storage Resources in Support of Information Lifecycle Management". Just for a laugh, tell them I sent you. Then copy me on the mail you get back: [email protected].
