Is 2003 Just 2002 Redux?
With the arrival of the New Year, we asked several storage luminaries for their perspective on the pace of storage technology change and what we can expect in 2003. No surprise: it will probably be more of the same.
With the New Year comes a new flock of marketing messages. Here are a few opinions expressed by storage luminaries in late December at CMG’s Annual Conference in Reno, NV.
Randy Kerns of the Evaluator Group says the pace of change in technology continues to be slow. Networked storage, he argues, is the strategic play going forward, but companies will need to make an “infrastructure commitment” of at least seven years to realize business value.
Kerns adds that storage management is the current big thing, and virtualization will be a key enabler. Ultimately, he adds, much more work will need to be done to manage data movement: to create a life cycle data management approach that covers data placement, establishes a “chain of custody,” and provides adequate security. He concedes that the transformation to a common management standard will again be a slow-moving process.
Randy has been quoted many times in this column because he is one of the truly smart guys in the storage industry analysis biz, and many of his December comments made sense.
However, asking companies to align themselves with a vendor for seven years is tantamount to telling consumers to commit to often half-baked networked storage technology and virtualization techniques before either is truly ready for business prime time. This advice is very questionable, especially given the pressure on IT managers in most companies to demonstrate a quick return on investment for new hardware acquisitions.
Kerns’ view of life cycle data management is a good one. We have preached the same view from this bully pulpit. The difference is that while Kerns would prefer to wait out the tedious process of vendor "coopetition" to arrive at a common approach, we believe that an open object-oriented data naming scheme could be formulated and launched very quickly as simply another layer of software running between the operating system and I/O processes.
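To make the idea concrete, here is a minimal sketch in Python of what such a data naming layer might look like. The names (`DataObject`, `record_custody`, `age_to`) and the placement tiers are hypothetical illustrations of the concept, not any vendor’s API or a shipping product: each named object carries its own placement class and chain-of-custody record as it moves between the operating system and the I/O path.

```python
import hashlib
import time
from dataclasses import dataclass, field

@dataclass
class DataObject:
    """A named data object carrying self-describing life cycle metadata."""
    name: str
    payload: bytes
    placement_class: str = "tier-1"  # where policy says the data should live now
    custody_chain: list = field(default_factory=list)  # who has handled it, and when

    def record_custody(self, handler: str) -> None:
        # Append a custody entry: handler, timestamp, and a digest of the
        # payload at hand-off time, so later tampering is detectable.
        digest = hashlib.sha256(self.payload).hexdigest()
        self.custody_chain.append((handler, time.time(), digest))

    def age_to(self, placement_class: str, handler: str) -> None:
        # A placement policy migrates data to cheaper storage as it ages;
        # every move is logged so the chain of custody stays auditable.
        self.placement_class = placement_class
        self.record_custody(handler)

# Usage: the layer sits between application writes and the raw I/O stack.
obj = DataObject(name="/claims/2003/q1/0001", payload=b"claim record")
obj.record_custody("app-server-01")   # initial write
obj.age_to("tier-2", "hsm-mover")     # later migration by policy engine
```

The point of the sketch is that none of this requires vendor consensus: it is metadata and policy expressed in ordinary software, layered above whatever hardware sits underneath.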
Brian Polowski of Network Appliance says that his company is already coming to market with a unified storage product that does both NAS and SAN. In his sensible view, support for NAS/SAN hybrids is driven by the fact that cash is king. He notes that the cost of people is up and the cost of time is up, but the cost of bits (of storage hardware) is down. Disk is denser, faster, and cheaper today than ever before. Still, people are reluctant to embrace SANs because they are difficult to deploy and manage and lack a provable ROI. NAS/SAN hybrids can address these concerns.
Whether NetApp will be the beneficiary of NAS/SAN hybrid platforms remains to be seen. However, Polowski is right on target when he notes that an inexpensive, manageable, and scalable storage platform that is IP-attached will certainly eclipse the Fibre Channel beasties that dominate the SAN world today.
Seagate Technology’s Rob Pegler suggests that the key things to remember about 2002 were not innovations by vendors but non-innovations. We already liked this guy, but we liked him more when he asked aloud the following questions: “Why is there no operating system-based Fibre Channel stack interface on motherboards? Why are there no blade servers with disk drives? Why do we continue to tightly couple transports and file systems with controllers and device drivers when clearly these need to be broken?”
Will any of these non-innovation issues be addressed this year? Probably not. But it is good to see someone in the industry questioning the fundamentals that have helped to make storage the hassle that it is today.
In contrast to Pegler, Bill Zahlavi of EMC seems to suggest that his company is still trying to make sense of the world in which it operates. While the evolutionary process may seem slow to outsiders, Zahlavi argues that, from where he’s standing, change has occurred at an extremely fast pace.
He recalls that EMC was a “one product company” when he joined it six years ago. In less than a decade, the Hopkinton giant has diversified product lines into NAS and SAN, introduced multi-platform support, and was “forced into software in 2002.” (“It was a pull, not a push,” he says.)
Going forward, says the self-described “visionary,” one goal of EMC is to develop policy-based storage management. He doesn’t give a timeframe for realizing this objective, stating that all of the complex interdependencies of components must be clearly understood first.
That, we agree, will take some time, especially since vendors have evidenced little willingness to cooperate on even the most fundamental of issues: determining which storage operations are properly conceived as control path tasks and which are data path tasks. Take a look at the latest common storage model at SNIA to see what a mess proprietary vendor interests have made of efforts to define how storage happens.
From these opinions and others, it would appear that 2003 will see the perpetuation of conflicts and infighting that predominated in 2002. Optimistically, we could argue that the time is ripe for new players without vested interests to come to market with their new and innovative wares—displacing the old guard. Pessimistically, however, this is unlikely. In the current market, the only way that new guys can compete is if Fortune 500 companies, which account for more than 60 percent of storage spending, are willing to give the small fry a shot. At least for now, that doesn’t seem likely to happen.
We’ll keep watching the market as the year develops.
Jon William Toigo is chairman of The Data Management Institute, the CEO of data management consulting and research firm Toigo Partners International, as well as a contributing editor to Enterprise Systems and its Storage Strategies columnist. Mr. Toigo is the author of 14 books, including Disaster Recovery Planning, 3rd Edition, and The Holy Grail of Network Storage Management, both from Prentice Hall.