
Readers Speak Up

Readers are on the right wavelength when it comes to their storage concerns.

Over the past couple of months, my inbox has been peppered with comments from readers, most of them positive, though often provocative. I thought it would be a good idea to discuss some of the issues in this week’s column.

A recent column on the (de)merits of storage consolidation provoked several responses. One was from an IT person at an unnamed company who wrote:

“First I wanted to tell you that I have read many of your articles and I think you do a great job of breaking down the storage industry. It is refreshing to read your candid remarks without the vendor bias.

“I am from the IP network world, and it seems to me that storage is now where networks were about 8-10 years ago. I was charged two years ago with owning the SAN, which had been largely unattended until that time. I took over an inefficient, disjointed mess: 180 McDATA island switches and about 120 EMC storage arrays spread across five data centers. Port utilization was at about 28 percent.

“By consolidating the island switches into a core/edge design and attaching the storage arrays to that core, we were able to bring port utilization up to its current 68 percent. Over the past two years, we have saved over $5 million in port purchases just by reusing what we had more efficiently. We also migrated some of the smaller CX600 arrays with 73GB drives to the larger DMX-3 platform.”
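
The arithmetic behind that claim is worth pausing over. The letter gives only the utilization figures and the rough total, so in the sketch below the switch size and per-port cost are illustrative assumptions of mine, chosen to show how quickly reclaimed ports add up:

    SWITCHES = 180
    PORTS_PER_SWITCH = 32        # assumption: 32-port McDATA island switches
    COST_PER_PORT = 2200         # assumption: fully loaded cost per FC port, USD

    capacity = SWITCHES * PORTS_PER_SWITCH            # 5,760 ports in place
    used_before = capacity * 0.28                     # ~1,613 ports active
    used_after = capacity * 0.68                      # ~3,917 ports active

    # Growth absorbed by reusing idle ports instead of buying new ones:
    ports_reused = used_after - used_before           # ~2,304 ports
    savings = ports_reused * COST_PER_PORT

    print(f"Ports reused: {ports_reused:,.0f}")       # Ports reused: 2,304
    print(f"Avoided purchases: ${savings:,.0f}")      # ~ $5.1 million

Under those assumptions, the reported $5 million figure is entirely plausible: more than 2,000 idle ports pressed back into service, none of them newly purchased.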

This consolidation tale surely made sense from the perspective of port optimization and cost savings for the reader’s company. My questioning of the consolidation panacea in the March 7 column was predicated not on the notion that all consolidation is bad, but on the view that consolidation does not make sense without careful consideration of its impact on network traffic and application performance.

Simply taking all the spindles, wherever they may be, and placing them in a common, centralized box or FC fabric is not a real strategy if it leaves remote users “stranded” with reduced access to their data and reduced productivity. This reader, coming “from the IP network world,” would clearly understand the 80/20 rule of networks: 80 percent of accesses to data are made by the workgroup that created it. Placing data assets nearer to that workgroup simply makes sense.

The gentleman went on to note that he is very interested in Zetera technology, a storage protocol that uses UDP and IP to facilitate storage node identification, segmentation, and access. Having read Zetera’s white papers, he remarked, “Coming from the IP side of things, I am encouraged by the Storage over IP solution. I just hope it can gain enough traction from the early adopters to make it to us big enterprise guys.” Truer words were never spoken.

Another man wrote, “I just read your piece on consolidation and wanted to tell you how dead-on it was. As the CTO of a storage-focused solution provider covering New England, I can tell you that we are doing well specifically because we have stopped emphasizing consolidation and moved our core messaging and solutions to align with management and protection of data.”

In response to this note, I can only say that the reader is himself “dead-on.” Proper data management and right-sized infrastructure must go hand in hand if IT is to win back some of its “right-hand man” status with senior management. IT must solve problems that the front office understands, not only in the category of cost savings, but also from the standpoint of risk reduction and process improvement.

Spring Gardening

Another reader responded to my Spring Gardening column, published on March 28. Here are his insights:

“I've seen you speak several times and always read your stuff. Don't take offense, but I think you're completely missing the point on the level of functionality required for proper e-mail archiving. You're only addressing a small subset of the functionality that should be there. Think about these levels of functionality:

  1. Straight archive stored off-site.
  2. Archive stored off-site, but compliant.
  3. Archive stored off-site, but compliant and searchable.
  4. Archive stored off-site, but compliant and searchable (with reasonable response time), and scalability into multiple terabytes (at least).
  5. Archive stored off-site, but compliant and searchable (with reasonable response time), and scalability into multiple terabytes (at least), and usable for litigation support in the event of a lawsuit or investigation.

“I would contend that #5 is, or will soon be, the only effective and cost-efficient solution. So, any system that isn't a hybrid—offering a compliant e-mail archive that can switch *directly* into litigation support mode (with provable chain of custody, lack of spoliation, etc. so the judge doesn't throw the evidence out)—won't be worth installing in the long run. If you throw in the recent move toward having eGovernance features, you can make the requirements even more stringent.”
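
His five levels are strictly cumulative: each adds exactly one capability to the level before it. For those who like to see such things spelled out, here is a minimal sketch in Python that models the levels that way. The capability names are my own shorthand, not drawn from any product or standard:

    from enum import Flag, auto

    class ArchiveCapability(Flag):
        OFFSITE = auto()
        COMPLIANT = auto()
        SEARCHABLE = auto()          # with reasonable response time
        SCALABLE = auto()            # into multiple terabytes, at least
        LITIGATION_READY = auto()    # provable chain of custody, no spoliation

    # Level N = level N-1 plus exactly one new capability.
    LEVELS = {1: ArchiveCapability.OFFSITE}
    LEVELS[2] = LEVELS[1] | ArchiveCapability.COMPLIANT
    LEVELS[3] = LEVELS[2] | ArchiveCapability.SEARCHABLE
    LEVELS[4] = LEVELS[3] | ArchiveCapability.SCALABLE
    LEVELS[5] = LEVELS[4] | ArchiveCapability.LITIGATION_READY

    def meets(level, required):
        """True if an archive at this level has every required capability."""
        return required in LEVELS[level]

    # The reader's contention: only level 5 survives a courtroom.
    assert meets(5, ArchiveCapability.LITIGATION_READY)
    assert not meets(4, ArchiveCapability.LITIGATION_READY)

The structure makes his contention plain: shopping for anything below the top of the stack means buying a product you will need to replace the first time counsel comes calling.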

I responded immediately that no offense was taken. I appreciated his insights into the nuances of the archive process, and in a future column I may just interview him to dig more deeply into his views (and his product or service) on this matter.

In my opinion, he has only begun to peel back the onion of data classification, which is a prerequisite for effective data management. My understanding is that the Storage Networking Industry Association is setting out to create a scheme for data classification that it may put forth as another “quasi-standard” (like SMI-S). My concern with such an effort is that it amounts to letting the fox guard the henhouse. What else would storage hardware companies dream about, if not convenient classes of data that they can map to their own proprietary products?

The only way to know what services and controls the data produced and used by an application requires is to understand the application itself, the business process it serves, and the milieu (regulatory and otherwise) that helps define those requirements. The idea that you can establish one-size-fits-most categories for data is a dangerous one. Individual companies will need to develop their own classification schemes to fit their own business realities.
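
To put a finer point on it: a workable classification scheme starts from the application and works outward to its requirements. The following toy sketch, with invented applications and invented requirements, shows the shape such a company-specific scheme might take:

    from dataclasses import dataclass

    @dataclass
    class DataClass:
        retention_years: int
        encrypt_at_rest: bool
        offsite_copy: bool
        max_restore_hours: int

    # Each application maps to requirements dictated by its business
    # process and regulatory milieu -- not by anyone's hardware tiers.
    CLASSES = {
        "order_entry":   DataClass(7, True, True, 4),    # financial records
        "email_archive": DataClass(7, True, True, 24),   # discovery-ready
        "dev_scratch":   DataClass(0, False, False, 72), # expendable
    }

    def services_for(app):
        """Look up the protection services an application's data requires."""
        return CLASSES[app]

    print(services_for("order_entry"))

The particulars will differ at every shop. The point is the direction of the mapping: from business context to service levels, not from a vendor’s product line to your data.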

Thanks for writing, and keep the messages coming: jtoigo@toigopartners.com

About the Author

Jon William Toigo is chairman of The Data Management Institute, CEO of the data management consulting and research firm Toigo Partners International, and a contributing editor to Enterprise Systems, where he writes the Storage Strategies column. He is the author of 14 books, including Disaster Recovery Planning, 3rd Edition, and The Holy Grail of Network Storage Management, both from Prentice Hall.