SHARE Conference Focuses on Disaster Recovery, Mainframe Resiliency
Thanks to its reputation for resiliency, the mainframe is widely viewed as the preeminent platform for disaster recovery and business continuity planning.
Mainframe enthusiasts and System z faithful gathered last week in San Diego for SHARE’s summer conference. While many recent SHARE events have focused on cutting-edge mainframe trends (including zLinux, WebSphere-on-z/OS, and other next-gen workloads), last week’s focus on disaster recovery (DR) and business continuity (BC) probably seemed like a throwback of sorts for many attendees. Both practices are, after all, old hat in mainframe circles.
Not exactly, SHARE officials maintain. They argue that disaster recovery in the post-9/11 and post-Sarbanes-Oxley world is a decidedly more complex proposition. Pam Taylor, a vice president and head of Strategic Development for SHARE, maintains that DR and BC involve a lot more than just backing data up to tape and moving it off site. As a result, Taylor—who by day works as a solutions architect with a Fortune 500 subsidiary—and other SHARE representatives talk up what they call a "holistic" approach to disaster recovery and business continuity planning.
"[Disaster recovery has] been a best practice in the mainframe space for a long time, and absolutely everybody has experience with doing disaster recovery: making backup tapes, and going to the backup site and doing the recovery. Should a disaster occur, most [shops] are even prepared to sort of rebuild their environments in a different location," she comments.
However, such thinking is a vestige of the pre-9/11 and pre-SOX worlds, Taylor argues. "[DR is] really [about] a much more holistic approach than just ‘Could I bring my data center back up somewhere else?’ It’s really about business continuity and about that ability to really be resilient in the face of anything that could occur.
"It could be about being able to recover your business operations should something unexpected happen, like a Hurricane Katrina or something of that magnitude. There’s another aspect to it, too, which is all about how you keep your business services up and running in this fully globalized environment we’re operating in."
That’s the 10,000-foot view. At the detail level, SHARE last week addressed a wide range of DR- and BC-related topics—notably, the importance of business-process recovery or continuity. In the past, Taylor concedes, disaster recovery planning tended to focus largely on IT issues such as bringing a remote data center online and restoring backup data to it as quickly as possible. In the holistic view that she and other experts champion, DR and BC planning must also take into account the people and processes that effectively constitute the raison d’être of a business.
"There was at least one session where we were getting sort of the business view and the planning view of what you need to think about from a broad business resilience perspective. There was a thought leader from IBM here who discussed this issue in particular," she explains.
Adding Security to the Mix
Elsewhere, Taylor says, several SHARE sessions focused on security—especially as it relates to DR and BC planning. There’s a lot more to take into account here than you might think, she points out.
"It’s obviously important to encrypt your data, … but when you start having your business fail over to a different environment, what about the management of all of the encryption keys? How does that work? How do you have that happen seamlessly or at least in a sufficiently transparent manner that your business can be back up and running quickly?" As Taylor, a Certified Information Systems Security Professional (CISSP), notes: "We had some very informative sessions that focused on issues like that."
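The key-management problem Taylor describes is usually solved with envelope encryption: each backup is encrypted under its own data key, and that data key travels with the backup in wrapped (encrypted) form, so a failover site holding only the pre-shared master key can recover everything. The toy sketch below illustrates the pattern; the HMAC-counter stream cipher and all names here are invented for illustration—real deployments would use AES-based key wrapping through an HSM or key-management system, not hand-rolled crypto.

```python
import hmac
import hashlib
import secrets

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy stream cipher: HMAC-SHA256 in counter mode as a keystream PRF.
    # Illustrative only -- production systems use AES-GCM via an HSM/KMS.
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        ks = hmac.new(key, nonce + block.to_bytes(4, "big"),
                      hashlib.sha256).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

# Master key is distributed to BOTH the primary and the DR site in advance.
master_key = secrets.token_bytes(32)

# Primary site: fresh data key per backup; only its wrapped form is stored.
data_key = secrets.token_bytes(32)
nonce = secrets.token_bytes(8)
ciphertext = keystream_xor(data_key, nonce, b"payroll batch")

wrap_nonce = secrets.token_bytes(8)
wrapped_key = keystream_xor(master_key, wrap_nonce, data_key)

# Failover: the DR site never saw data_key, only the wrapped copy,
# yet it can unwrap with the master key and decrypt the backup.
recovered_key = keystream_xor(master_key, wrap_nonce, wrapped_key)
plaintext = keystream_xor(recovered_key, nonce, ciphertext)
assert plaintext == b"payroll batch"
```

The point of the pattern is exactly the "seamless" property Taylor raises: because only the long-lived master key must be replicated ahead of time, per-backup keys need no separate synchronization step during a failover.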
In many cases, Taylor acknowledges, SHARE’s focus on DR and BC involved revisiting some very familiar paradigms—such as failover schemes for high-volume transaction processing—and bringing attendees up to speed on new or even revolutionary business resiliency or continuity improvements.
"We had a couple of vendors who actually were bringing their specific solutions for how you go about doing system failover. If you think about something we’re all familiar with—the credit card processing industry—we all know they have really peak activity periods around holidays and Mother’s Day, for example, so they have to have environments where their normal processing can run on their normal systems and the peak loads can have Capacity on Demand come in to play and [provide an] automatic but totally transparent failover to alternative environments. The important thing [from these companies’ perspectives] is that [this failover is] never visible to the consumers."
More than anything else, she continues, last week’s SHARE conference also highlighted the mainframe’s resilience, a platform feature that makes it ideal for DR and BC scenarios: "You certainly have the most business-resilient platform available—and it’s proven technology. What’s more, it isn’t the mainframe of old. It’s open. It’s flexible. We had a couple of IBM executives here speaking in various venues. Their common theme was openness, reliability, and resilience. This was not just marketing hype; they were really speaking from a commitment to their platform and what the platform can actually deliver to an organization."
In this and other respects, she suggests, it’s hard not to think of the mainframe as an ideal—indeed sexy—platform.
"When you look at the mainframe [versus other platforms] on that business-resilience level, and you’re thinking about how quickly you can deploy new servers for one of your business units, look at what it takes to get new distributed servers ordered and on the floor versus the ease with which you can bring up zLinux on the mainframe. We’re talking a matter of weeks for a distributed platform and just minutes on the mainframe," she points out.
Several SHARE sessions last week also sought to poke holes in another old shibboleth that won’t die: Big Iron as a disproportionately expensive cost center. That just isn’t the case, Taylor says—and several SHARE attendees offered their perspectives as to why that impression lingers on.
"We have consistently been trying to help the technologists who come here be able to communicate the business value [of the mainframe] to their organizations, as well as to really try to carry the message in all of our communications and in the sessions that we’re trying to put on here," she observes. "The business value is there, the ROI is there. Many of the old perceptions of ‘Oh, it’s a mainframe, it must be expensive’ are just myths.
"I’ve heard people talking about various costs in their organization traditionally being allocated to the mainframe because it’s traditionally been the data center. Well, the mainframe isn’t the data center anymore and businesses haven’t gone back and revisited how they’re doing their cost overhead assessments. When they actually go back and do those assessments, invariably it comes up that the mainframe is a platform to look at [from a cost-savings perspective]."