BMC, CA Tackle Mainframe Change in the Age of Compliance

Mainframe change management in an age of compliance is a very different beast

If it seems as if data center automation has been The Next Big Thing for a long time now, that’s probably because it has.

If you work in a mainframe environment, you know that data center automation simply isn’t much of an issue for Big Iron teams.

This isn’t because mainframe data centers aren’t highly automated, of course. In fact, many of the configuration management, asset management, and resource utilization issues that IT managers in distributed environments are just now coming to grips with were solved long ago in the mainframe data center.

Nor does this mean Big Iron is in the clear. At least one important issue—change management in the age of compliance—has reemerged to bedevil today’s mainframe data center.

Dave Wagner, director for capacity management and provisioning at BMC Software Inc., says that in most cases mainframe environments and their distributed Windows, Linux, and (to some extent) Unix kin experience very different pain points.

“Most customers in Windows and Unix environments are massively over-provisioned, for one thing. They don’t understand what the true capacity requirements might be, and in a lot of cases they’re only using 10 to 15 percent [of their existing capacity]. That just is not an issue for mainframe customers. So the salient issue is helping automate both the [capacity management] process and the activities that need to be invoked as part of the steps in that process to ensure that (a) it can be done more quickly, (b) it can be tracked more quickly for compliance, and (c) it can be done more accurately.”

Rob Stroud, director of brand strategy for Computer Associates International Inc. (CA), agrees. At the same time, he stresses, some problems—such as enterprise change management—have universal applicability.

“There’s a whole different set of issues [for mainframe environments] on the technology side, but it’s important to understand that [data center automation] isn’t just about technology. It goes back to … understanding this as a process. You don’t buy a single ‘solution’ or a series of ‘solutions’ and get instant change management, for example. You have to look to the process itself.”

CA, BMC, and other software vendors are eager to help companies do just that. Earlier this month, CA announced a new deliverable for its burgeoning Information Technology Infrastructure Library (ITIL) stack (see http://esj.com/enterprise/article.aspx?EditorialsID=1968).

BMC last week introduced new data center automation software as part of its business service management (BSM) push. BSM involves many different processes—from help desk management to service level management and service impact management—but BMC’s new data center automation component addresses asset, capacity, and change management, which company officials collectively call “Datacenter Optimization.” That’s also the name of its newest BSM deliverable. It’s not just a marketing gambit, however, Wagner emphasizes. There’s an important distinction between automation and optimization.

“Mainframe customers are fairly reluctant today to trust the complete process, end-to-end, from [change requests] all the way through [to implementation]. … They’re not yet ready to fly it by wire—they want a human being in there pulling the trigger on those changes,” Wagner says.

To a lesser extent, the same goes for distributed environments, Wagner argues. “If you look at change management, it’s platform-agnostic—it’s process-driven, not platform-driven. So what we’re doing is we’re automating the process, but we’re making sure we’ve got a human being in there through each of those steps to make sure we’re not dropping the ball on anything.”

While capacity management and asset management are time-tested practices in most mainframe shops, change management—especially in an age of compliance—is a different beast, he argues.

“The pain points for the mainframe aren’t quite so directly related to driving up average utilization rates, because let’s face it—they’ve already got them. The pain points there seem to be more on the change side, and change management, especially in the compliance sense. They’re hosting the most critical of the corporate data jewels. They’re of the most interest to internal and external auditors to ensure compliance,” Wagner notes. “They obviously would put high-value assets at risk if they go down or are broken, so [to protect them] you have to have a very strong and enforced change management process. So we see within our data center optimization lifecycle relatively less criticality on the early side of discovery things, with more importance on change management.”

Again, Wagner stresses, data center optimization—at least with respect to change management on the mainframe—does not mean taking human beings out of the loop. Instead, he says, it’s a question of optimizing the loop, so to speak. “Our change management solution is completely applicable to all platforms, whether it’s mainframe, desktop, [or] iSeries. Once the change is approved, the customer has the ability to go and automatically make the changes on platforms such as Windows desktops, along with Windows, Linux, and Unix servers,” he explains. “We don’t have the capability to go and automate the change of the configuration itself on mainframe systems, but that isn’t as much of an issue, because this isn’t something [mainframe customers] want automated. They like to be able to [oversee and implement] this stuff themselves.”

Instead, Wagner says, mainframe customers—and an organization’s entire IT ecosystem—benefit from increased visibility into the process itself, thanks in large part to Datacenter Optimization’s highly structured workflow. “If you take a request for change in your change management system, the workflow follows it all the way through from the request itself to the evaluation to once that request has been approved by a change approval board,” he explains. “For Windows or Unix, it automatically configures our system to automatically provision the appropriate software, but in both [the distributed and mainframe worlds] there’s also the verification that the changes have been made, too.”
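The workflow Wagner describes (request, evaluation, approval board, implementation, verification) amounts to a state machine with a human approval gate and an audit trail. The sketch below is purely illustrative: the stage names, actor labels, and `ChangeRequest` type are this article's assumptions, not BMC's actual product API. It captures the two points Wagner makes: no stage can be skipped, and on the mainframe the implementation step stays in human hands while distributed platforms can be provisioned automatically after approval.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Stage(Enum):
    REQUESTED = auto()
    EVALUATED = auto()
    APPROVED = auto()      # human change approval board, never automated
    IMPLEMENTED = auto()
    VERIFIED = auto()

@dataclass
class ChangeRequest:
    summary: str
    platform: str                              # e.g. "windows", "unix", "mainframe"
    stage: Stage = Stage.REQUESTED
    audit_log: list = field(default_factory=list)

    def advance(self, to: Stage, actor: str):
        """Move to the next stage, recording who acted, for compliance auditing."""
        order = list(Stage)
        if order.index(to) != order.index(self.stage) + 1:
            raise ValueError("stages cannot be skipped")
        self.audit_log.append((self.stage.name, to.name, actor))
        self.stage = to

def process(req: ChangeRequest, approver: str) -> ChangeRequest:
    req.advance(Stage.EVALUATED, "change-analyst")
    req.advance(Stage.APPROVED, approver)       # a human pulls the trigger
    if req.platform == "mainframe":
        # implementation stays manual on Big Iron, per Wagner
        req.advance(Stage.IMPLEMENTED, "mainframe-operator")
    else:
        # distributed platforms are provisioned automatically after approval
        req.advance(Stage.IMPLEMENTED, "auto-provisioner")
    req.advance(Stage.VERIFIED, "auditor")      # verify the change actually landed
    return req
```

Run against a hypothetical mainframe change, `process(ChangeRequest("apply patch", "mainframe"), "change-board")` leaves a four-entry audit log showing a human operator, not an automated provisioner, performing the implementation step.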

Technology itself isn’t a surefire prescription for change management, Wagner stresses, but the increased visibility enabled by technology solutions like Datacenter Optimization helps give IT the insight it needs to improve and optimize the change management process. That’s a point echoed by CA’s Stroud.

“It’s really about understanding enterprise change management as a process. It’s not really about going and buying a series of software components—it’s about understanding what the journey is. If an organization is having a business problem of unavailability, and you go through trying to figure out why you’re having unavailability, and you diagnose it as poorly planned or implemented change, the route you should go down is reviewing a change management process.”

About the Author

Stephen Swoyer is a Nashville, TN-based freelance journalist who writes about technology.