Storms on the Horizon: Examining Your Continuity Plans

Continuity planning has significant value to offer besides risk reduction; sadly, the benefits are rarely talked about.

The outlook for the 2009 severe weather season is not good. Depending on which forecaster you read, we are looking at between 9 and 11 named hurricanes this summer, about half of which are expected to make landfall either as full-blown hurricanes or as lesser tropical weather events.

Despite the threat, the news services of most Gulf Coast and Atlantic Seaboard states, not to mention the 24-hour cable broadcasts, seem to be playing down the story. This may be due in part to shrinking ad revenues at many news outlets, but I suspect that it also reflects the fact that business viewers already have many other issues on their plates, such as shoring up flagging sales.

Nevertheless, here are the facts.

Downsizing and doing more with fewer people has increased dependency on automated systems while reducing the number of hands available for creating and maintaining continuity plans. The added dependency on IT, intended to enable fewer employees to individually shoulder responsibilities that were once spread out over many workers, has had the corresponding effect of increasing the brittleness of companies. Truth be told, those occasional service interruptions that might have been taken as momentary annoyances a few months ago might today translate into full-blown emergencies -- or even disasters -- given the heightened vulnerabilities created by increased reliance on computers, networks, and storage.

Last fall, at the official start of the recession, there were reassuring reports that spending on continuity planning by larger firms, pegged in surveys at about six percent of the IT budget, was likely to remain stable. By January, resumes from out-of-work continuity planners, laid off and seeking new jobs, had started to flood social networking sites.

The only certainty now is that the potential consequences of a major Katrina-like storm -- or even a much less powerful (but nonetheless devastating) tropical storm -- could be dire. Though it didn't make national news, last August's trek of tropical storm Fay through Florida (it made landfall eight times) brought wind and rain that left 93,000 homes and businesses without power for several weeks … and it wasn't even a hurricane.

You might think that companies would be doing everything possible to safeguard their vital human and data assets from loss, especially in states with track records of natural and man-made disasters. However, current survey data suggests that corporate bean counters view plans as "nice-to-haves" rather than "need-to-haves," especially since continuity plans are widely regarded as providing a capability that, in the best of circumstances, never needs to be used.

Management attitudes reflect, at least in part, the abysmal failure of planners to advocate their point of view using a full business-value case. No technology initiatives are being funded today that do not have a compelling story to tell in all three categories of business value: cost savings, risk reduction, and productivity improvement/top-line growth. The arguments I usually hear for business continuity address only the risk-reduction category, despite the fact that business process analysis and data classification according to criticality and regulatory requirements -- a precursor to any effective planning effort -- can actually deliver huge value to organizations from both a cost-containment and an improved-productivity standpoint, if the results are properly leveraged.

Developing a continuity plan -- something I've done over one hundred times -- requires first mapping data assets to business processes. Data inherits its criticality like DNA from the process that it serves, so you can't cost-effectively apply protection and recovery strategies to data without knowing its pedigree. Next, you map the data assets back to their hosting environment so you know the vulnerabilities they are exposed to, both in flight and at rest, in your infrastructure. These are the essential ingredients for building a sensible DR strategy.
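The mapping described above can be sketched in a few lines of code. This is a hypothetical illustration only -- the class names, criticality tiers, and sample assets are my own inventions -- but it shows the key idea: a data asset does not carry its own criticality; it inherits it, DNA-like, from the business process it serves, and it also records where it is hosted so vulnerabilities can be assessed.

```python
# Hypothetical sketch of data-to-process-to-infrastructure mapping.
# Tier names ("critical", "vital", "important") follow the article's
# classification; everything else here is illustrative.

from dataclasses import dataclass, field

@dataclass
class BusinessProcess:
    name: str
    criticality: str  # "critical", "vital", or "important"

@dataclass
class DataAsset:
    name: str
    process: BusinessProcess
    hosts: list = field(default_factory=list)  # hosting environment(s)

    @property
    def criticality(self) -> str:
        # Criticality is inherited from the parent business process.
        return self.process.criticality

billing = BusinessProcess("order billing", "critical")
invoices = DataAsset("invoice database", billing, hosts=["san-array-01"])

print(invoices.criticality)  # -> critical
```

With the assets mapped this way, protection strategies can be selected per tier rather than applied uniformly (and expensively) across all data.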

What you discover through this process has considerable ancillary value. From a cost-savings perspective, a properly performed analysis of data assets and their current hosting solutions yields a wealth of information that can be used to move retention data onto green archival media and to identify stale and contraband data for deletion. This reclaims storage capacity and defers the need to buy more disk much more effectively than does, say, thin provisioning, and can put you right with regulators and environmentalists alike. Understanding how data is currently hosted can also provide key input to the strategic planning of right-sized infrastructure going forward.

From a top-line growth/improved productivity perspective, the data analysis process delivers two big benefits. First, by enabling you to separate the wheat from the chaff in your data assets, you can cull the number of files in storage, placing them in archives or the waste basket as appropriate, thereby speeding up day-to-day operations: users can search for and find the files they need more efficiently, improving how much work actually gets done by the end of the day. Second, data analysis lets you model the costs of supporting business processes with IT services; such a model can be used to support front-office planning and decision making from an IT support cost standpoint.

The point is that continuity planning has significant value to offer besides risk reduction. That much of this value is rarely addressed in the popular literature or at seminars and conferences is usually the fault of the writer or speaker who wants to get quickly to this or that data protection technology and therefore treats upfront analysis as a "given."

Most companies don't do enough work to determine what data really requires protection or in what measure. Some data is needed on a continuous basis -- the business process that uses it cannot tolerate any interruption. This is where ongoing replication with high availability failover is the prescription. This is usually a very small fraction of the entire data set, however.

Other data serves vital applications or processes (as opposed to critical ones), and recovery of access to this data can be provided in minutes or hours following an interruption without doing irreparable harm to the organization. The difference in cost between critical and vital protection services can be huge: $1.5 million per TB versus $150,000 to $300,000 per TB.

For important data, serving an application or process you can do without for days or weeks following an interruption, the costs can be as low as $50,000 to $100,000 per TB. This is the domain of tape backup technologies that provide a media cost of less than $0.44 per GB.
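The budget impact of this tiering is easy to demonstrate with a little arithmetic. The sketch below uses the rough per-TB figures from the tiers above (taking midpoints for the ranges); the data volumes in the example are invented for illustration, and real costs will of course vary by vendor and environment.

```python
# Illustrative cost model using the article's rough protection tiers.
# Range tiers use midpoints; all figures are approximate per-TB costs.

COST_PER_TB = {
    "critical": 1_500_000,  # continuous replication with HA failover
    "vital": 225_000,       # midpoint of $150K-$300K per TB
    "important": 75_000,    # midpoint of $50K-$100K per TB (tape backup)
}

def protection_cost(tb_by_tier: dict) -> int:
    """Total protection spend given TB of data classified into each tier."""
    return sum(COST_PER_TB[tier] * tb for tier, tb in tb_by_tier.items())

# A hypothetical shop with 1 TB critical, 10 TB vital, 100 TB important:
total = protection_cost({"critical": 1, "vital": 10, "important": 100})
print(total)  # 1,500,000 + 2,250,000 + 7,500,000 = 11,250,000
```

The lesson of the model: treating all 111 TB as critical would cost roughly fifteen times as much, which is why classification must precede technology selection.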

Bottom line: The better the data analysis, the better your ability to be "budget savvy" in the selection and application of appropriate protection and recovery technologies. Interestingly, the same analytical work should be done to develop an information governance strategy, a security strategy, and even an application hosting strategy, so the continuity planning process could actually be contextualized as a data management exercise serving many masters and addressing many business problems.

Business-focused data analysis should guide design in disaster recovery planning, but of equal importance is "testability." Testing is the long-tail cost of a continuity program, yet few planners ask how a recovery strategy will be tested while they are defining the strategy. This increases the cost and complexity of testing and change management, and is usually where backsliding develops in disaster preparedness programs.

The old rule of thumb still applies: an untested plan quickly falls out of step with business realities and actual infrastructure footprint -- both of which have a tendency to change rapidly over time. The wrong time to learn that the proper data required to support the restoration of a key business process has not been backed up or mirrored is after a disaster has occurred.

That said, the current testing regimen used by companies that do test regularly (a fraction of those saying that they have a plan) is inefficient and resource intensive. There are three things that need to be tested in a continuity plan: data protection, infrastructure recovery, and logistical plans for communications, resupply of business goods, etc. With proper attention to testability during recovery strategy design, it is possible to automate the processes for verifying data protection and even failover techniques so that testing these tasks can be accomplished as a function of day-to-day operations. This drives cost and complexity out of formal plan testing, reducing formal tests to exercises focused on logistics.
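The kind of automated, day-to-day data-protection verification described above can be sketched simply. The example below is a minimal illustration, not a real product's method: it checks that each protected asset's backup copy is both fresh (taken within the last day) and bit-identical to the source, turning one leg of formal plan testing into a routine operational chore. The function name and thresholds are my own assumptions.

```python
# Hedged sketch: daily automated verification that a backup copy is
# fresh and intact, so data-protection testing happens continuously
# rather than only during formal DR exercises.

import hashlib
from datetime import datetime, timedelta

def verify_backup(source_bytes: bytes, backup_bytes: bytes,
                  backup_time: datetime, max_age_hours: int = 24) -> bool:
    """Return True if the backup is recent and bit-identical to the source."""
    fresh = datetime.now() - backup_time < timedelta(hours=max_age_hours)
    intact = (hashlib.sha256(source_bytes).hexdigest()
              == hashlib.sha256(backup_bytes).hexdigest())
    return fresh and intact

data = b"general ledger extract"
print(verify_backup(data, data, datetime.now()))        # fresh, intact -> True
print(verify_backup(data, b"corrupt", datetime.now()))  # checksum mismatch -> False
```

Run on a schedule against every protected asset, a check like this reduces formal tests to the logistics exercises the article describes.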

Looking at products such as CA's ARCserve and XOsoft, Neverfail Group's Neverfail, Double-Take, and the like, we now have tools that allow us to aggregate many data protection and failover schemes onto a single management console or dashboard so we can perform simulated or actual recoveries or failovers every day. RecoverGuard from Continuity Software doesn't provide the testing, but it does provide a monitoring console that can raise an alarm if the volume of data we are protecting exceeds the capability of the recovery technique being used to restore data within the amount of time we specify. These technologies, simply put, enable us to build in, rather than bolt on, a continuity capability. They also make testing a much more efficient and less complicated undertaking.
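The monitoring idea is worth spelling out, because the underlying check is simple. The sketch below is in the spirit of such a monitor, not an actual product's logic: estimate restore time from protected volume and measured restore throughput, and raise an alarm when that estimate exceeds the recovery time objective (RTO). All names and figures are illustrative.

```python
# Hypothetical RTO monitor: alarm when protected data volume can no
# longer be restored within the recovery time objective at the
# currently measured restore throughput.

def restore_hours(volume_gb: float, throughput_gb_per_hour: float) -> float:
    """Estimated hours to restore the protected volume."""
    return volume_gb / throughput_gb_per_hour

def rto_alarm(volume_gb: float, throughput_gb_per_hour: float,
              rto_hours: float) -> bool:
    """True if estimated restore time exceeds the RTO."""
    return restore_hours(volume_gb, throughput_gb_per_hour) > rto_hours

# 6 TB protected at a 500 GB/hr restore rate against an 8-hour RTO:
# estimated restore time is 12 hours, so the monitor fires.
print(rto_alarm(6000, 500, 8))  # True
```

The point of continuous monitoring like this is to catch RTO drift as data grows, rather than discovering it during a disaster.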

Given the increased vulnerability of both small and large organizations today to natural or man-made interruptions of operations, the time to review plans is now. For the third year running, I am helping Office Depot reach out to smaller firms, which are at once more vulnerable to disaster than their larger cousins and yet much simpler to recover from a technology standpoint; small businesses can make short work of their planning efforts by downloading a free document I helped write. In many cases, all mission-critical data in a smaller firm would fit comfortably onto a $1 CD or DVD. Unfortunately, proprietors of small firms often think that continuity planning is much more complicated than it is and are dissuaded from taking any action to protect their assets.

For larger firms, there is a wealth of information available on the Internet that can be of value in plan development. Commencing in July, you can download, free of charge, the next edition of my book on Disaster Recovery Planning, which is currently in its third edition. I am putting it up as a "blook" to save a rainforest. Download at will.

With the many resources available, the continuing evolution of great software products, and a more business-savvy approach to framing the value of continuity planning, we might all be able to weather any storm that the 2009 hurricane season sends our way.

Your feedback is important. Please feel free to send an e-mail message to me at