In-Depth
Cracking the Code of Data Protection
Storage consolidation into a SAN is often a code word for re-centralization, a strategy based more on nostalgia than necessity—or benefit.
Last week, I was a keynote speaker for a conference in Denver sponsored by integrator Advanced Systems Group and their key suppliers, Hitachi Data Systems and Juniper Networks. The event took place in a movie theater and was followed by a premiere of the new film, The Da Vinci Code.
I didn’t wait around for the movie, but I did enjoy chatting with the fifty or so people who came out for the talk and the popcorn. It seemed like everyone was seeking to “crack the code” of data protection.
I noted that the same analytical process you use to discern which data is mission-critical and required for post-disaster recovery should also be used to define security and compliance targets. Framing this in the broader context of risk management or data management is more likely to get the funding nod from senior management than simply calling it backup.
Sometimes getting the front office to supply funding for common-sense things such as data protection requires a bit of coded speak. Galileo learned this lesson the hard way, as I recall, when he was tried for advocating a heliocentric view of the universe: that the Earth revolves around the Sun, and not the other way around.
Cracking the code was a good metaphor for the disconnect that exists between the front and back offices in many companies. The former often sees the latter as an “overhead expense” and (if the bestseller status of Nicholas Carr’s “Does IT Matter?” is any indication) as a purveyor of work that does not contribute meaningfully to a corporation’s bottom line. Storage infrastructure investments are especially difficult to justify in terms and language that resonate with the front office.
Try explaining how an array will provide a measurable return to the company in terms of cost savings, risk reduction, or process improvement. With Fibre Channel fabrics now characterized as a new layer in the client-server model, it is a very difficult case to make.
There is little to connect underlying fabric infrastructure to specific business goals. Unless the SAN has been fielded behind a specific high-performance transaction processing application or database (which is about the only place where fabrics are actually needed), it is difficult to justify the investment at all. SANs do not save money; they usually cost a good deal more than they return, no matter what vendors say. This is because they perpetuate a modality of storage preferred by leading big iron providers: proprietary, difficult to manage, and filled with hidden costs including the need for specialized training for the IT personnel who must manage and maintain them.
SANs are definitely not one-size-fits-all solutions. Most applications in use today, including everything from Microsoft (whose software runs on 80 to 90 percent of the servers in the world), do not need an FC fabric and do not use a meaningful fraction of its capabilities. Redmond’s support for fabrics, it can be persuasively argued, is only a function of its desire to have Windows classified as an “enterprise-class” operating system. From what I have seen of the new Windows Storage Server R2, it is really designed as a NAS operating system and is only tangentially associated with FC fabrics (NAS is increasingly used as a gateway to back-end SANs).
Cost savings may accrue to fabrics if you buy into the idea of consolidation. Consolidating lots of spindles into a fabric is not a bad idea on its face, but it needs to be considered very carefully. Placing a lot of data into a fabric also funnels a lot of user and application requests into a narrow channel architecture. You need to tread carefully whenever you collapse traffic into a narrow set of ports, as any network person will tell you, to avoid creating the World Wide Wait for all of your customers.
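To put a rough number on that funneling effect, consider a back-of-the-envelope calculation; the server counts and link speeds below are hypothetical, chosen only to show how quickly consolidation drives up the oversubscription ratio on a shared handful of ports.

```python
# Illustrative arithmetic only: the port counts and link speeds are
# hypothetical, not measurements from any particular SAN.

def oversubscription_ratio(host_count, host_gbps, uplink_count, uplink_gbps):
    """Ratio of potential host demand to the bandwidth of the shared uplinks."""
    potential_demand = host_count * host_gbps
    uplink_capacity = uplink_count * uplink_gbps
    return potential_demand / uplink_capacity

# Example: 64 servers with 2 Gb/s HBAs consolidated behind 4 x 2 Gb/s inter-switch links.
ratio = oversubscription_ratio(host_count=64, host_gbps=2, uplink_count=4, uplink_gbps=2)
print(f"Oversubscription ratio: {ratio:.0f}:1")  # prints 16:1
```

Sixteen hosts’ worth of potential demand contending for every uplink’s worth of bandwidth is exactly how the World Wide Wait gets built.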
Storage consolidation into a SAN is often a code word for re-centralization, a strategy based more on nostalgia than necessity—or benefit. Taking all of the storage back from the anarchical world of what UC Berkeley researchers call the “data democracy” sounds like a good idea. Who wants unruly and undisciplined users to have their own local arrays? But, once done, most of these projects destroy their value premise by having to deploy caching appliances and half-baked Wide Area File Systems back out into the branch offices to calm the end users there.
FC fabric risk-reduction claims are also code speak. Yes, placing data into a centralized fabric may reduce its dependency on distributed schemes for data protection, which are often half-baked and unreliable. For undisciplined end users, performing backups is a lot like flossing: something they know they should do but often don’t. The theory that placing data in a SAN will give everyone whiter-than-white teeth and healthier gums is nonsense. The reason has to do with the lack of any meaningful understanding of the data itself.
Storage geeks don’t know what data is important and what isn’t. They see data as an anonymous set of ones and zeros. Even when archiving schemes for e-mail and databases are in place, these are only capacity-management techniques and do not consider the value of the underlying data. The further away from the end user (who presumably knows the importance of the data) that you place the data, the more you force yourself into the modality of backing up everything. This has the impact of making backups less useful, even if they are performed more frequently.
Until a workable data-naming and lifecycle management solution can be found, a better approach to risk reduction might be to leave data in place, near the users, who are better able to designate what needs to be replicated for meaningful risk mitigation. Then use a service provider such as Arsenal Digital (the company behind the scenes of many telephone-company backup services) to handle backups on a distributed basis.
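To illustrate the alternative, here is a minimal sketch of what owner-driven classification steering protection might look like; the class names, policy table, and file paths are my own hypothetical examples, not anyone’s shipping product.

```python
# A minimal sketch of owner-driven data classification steering protection.
# The class names, policy table, and paths are hypothetical illustrations.

PROTECTION_POLICY = {
    "mission-critical":   {"replicate": True,  "backup_interval_hours": 1},
    "business-important": {"replicate": True,  "backup_interval_hours": 24},
    "transient":          {"replicate": False, "backup_interval_hours": None},
}

def protection_plan(owner_tagged_files):
    """Map each owner-classified path to the protection policy it earns."""
    plan = {}
    for path, data_class in owner_tagged_files.items():
        # Unknown or missing classes default to the least protection.
        plan[path] = PROTECTION_POLICY.get(data_class, PROTECTION_POLICY["transient"])
    return plan

# The people closest to the data supply the classifications, not the storage admin.
tags = {
    "/branch01/orders.db": "mission-critical",
    "/branch01/newsletter_drafts": "transient",
}
for path, policy in protection_plan(tags).items():
    print(path, "->", policy)
```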
Process improvement is a nearly impossible claim to justify with reference to the current crop of FC fabrics—with or without code phrases. The more you abstract data from its users and applications, the more difficult it is to associate any changes at the infrastructure level with meaningful improvements to process efficiency. Claiming that SANs will reduce downtime flies in the face of surveys that demonstrate that SANs are causing more downtime than the direct-attached storage configurations they are replacing. Servers fail less frequently than fabrics.
Several years ago, Harvard professor Shoshana Zuboff wrote “In the Age of the Smart Machine.” It should be required reading for all IT and business professionals. It talks, without coded language, about the dissociation that has grown up between workers and the products of their work. The cobbler used to see the product of his labor each time he finished a shoe. Now we press buttons and, somewhere on an assembly line, a robot adds a grommet to a shoe. We no longer see the result of our work, a fact that has many practical consequences from a quality standpoint.
The same is true with our data. The more we abstract the connection between data and business information, the more difficult it becomes to demonstrate the efficacy of any storage infrastructure choice. This suggests a very practical course of action.
1. We need to get serious about data classification. This will require a new order of interaction between the front and back office and the abandonment of biases and prejudices by both sides. Technicians need respect for their skills and knowledge, but they can only earn it by learning to frame their objectives in business terms.
2. In addition to this cultural change, we need to begin drawing less abstract and much more direct lines between business value and infrastructure choices. If the infrastructure is generic or commoditized (increasingly the case in storage), we need to use technology that reduces all components to their commodity-off-the-shelf value and pricing. Start by managing disk as inventory: a nineteenth-century concept that can readily be applied to twenty-first-century storage technology.
3. We must start front-ending our disk inventory with generic controllers or network-based services that let us construct infrastructure in a building-block approach. One example is embodied in Zetera’s Storage over IP technology, which puts RAID controller functions into an IP switch, using multicasting to stripe data across a virtually limitless number of drives identified by their IP addresses (a rough sketch of the striping idea follows this list).
Last week, the MIT Media Lab announced that it was using this technology, which is embedded in the new Hammer array line from Bell Micro, as the building-block architecture for a massive, multi-petabyte storage infrastructure. Other businesses, from pharmaceutical companies to porn-meisters, are telling me that they are very interested in using a similar strategy to store their burgeoning masses of data.
4. We need to break with the concept that tells us that the universe revolves around storage from Brand X vendor. In fact, we need to accept that storage is increasingly commoditized and that business information is at the center of our universe. That done, we will have cracked the code on the front/back office schism.
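As promised above, here is a rough sketch of the striping idea behind Storage over IP. It is not Zetera’s actual protocol; the drive addresses, stripe size, and simple round-robin layout are assumptions made purely to illustrate what it means to address drives by IP rather than hide them behind a proprietary controller.

```python
# Not Zetera's actual protocol: an illustrative sketch of striping data
# across drives identified by IP addresses. The addresses, stripe size,
# and round-robin layout are assumptions made for clarity.

from typing import Dict, List

DRIVE_IPS: List[str] = ["10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14"]
STRIPE_SIZE = 64 * 1024  # 64 KB per stripe, an arbitrary choice

def stripe_across_ip_drives(data: bytes) -> Dict[str, List[bytes]]:
    """Assign successive stripes to drives in round-robin order by IP address."""
    layout: Dict[str, List[bytes]] = {ip: [] for ip in DRIVE_IPS}
    for offset in range(0, len(data), STRIPE_SIZE):
        stripe = data[offset:offset + STRIPE_SIZE]
        target = DRIVE_IPS[(offset // STRIPE_SIZE) % len(DRIVE_IPS)]
        layout[target].append(stripe)  # a real system would issue a network write here
    return layout

layout = stripe_across_ip_drives(b"\0" * (1024 * 1024))  # 1 MB of placeholder data
for ip, stripes in layout.items():
    print(ip, "holds", len(stripes), "stripes")
```

The point is not the particulars but the modality: generic drives become addressable building blocks on the network, and capacity grows by adding another address rather than another proprietary frame.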
Your views are welcomed. jtoigo@toigopartners.com.