Case Study: A Fresh Perspective on Application Performance

Call-services specialist Rockwell tapped an outside QA specialist to nip potential application performance issues in the bud. It’s a growing trend.

CIOs have long known that application performance problems are an expensive drag on their IT budgets, but a study released earlier this year by Forrester Research helped to drive this point home in no-nonsense CIO-speak—that is, dollars and cents.

Forrester interviewed 430 senior IT executives at large companies in the U.S. and Europe and asked them to break down the costs associated with application performance problems and downtime in terms of wasted IT staff resources.

The ugly truth: Fully 38 percent of respondents said that 11 or more people typically spend time dealing with end-user application performance problems, and 57 percent of IT executives estimated that the cost of downtime exceeds $100,000 per hour.

In terms of human-power costs alone, that can add up to more than a few slices out of an IT budget’s pie. Of course, there’s a school of thought that says if application performance problems are nipped in the bud—that is, during the initial development process—they can be eliminated (or at least marginalized) as persistent and expensive issues in practice. Until recently, this has been the chorus of application performance management testing vendors such as Compuware Corp. and Mercury Interactive, but—as IT budgets have rebounded—third-party testing vendors have also joined the fray.

Their point: Companies already thoroughly test applications before they deploy them in production—and yet issues inevitably arise. What’s needed is a fresh perspective—another pair of eyes—in the form of a best-of-breed, third-party solutions provider.

This was the road taken by Rockwell FirstPoint Contact, a global provider of call- and contact-center solutions. To be sure, Rockwell completed the requisite internal testing of its new call-center application, but then tapped the services of Keylabs, a Linden, Utah-based provider of quality assurance assessment services.

In at least one respect, this was a given: Rockwell needed to certify its new application for use with CallManager—the call-processing component of Cisco’s Architecture for Voice, Video, and Integrated Data stack—and Keylabs has long provided certification testing services for Cisco and other vendor-branded technology standards.

But as Tim Sullivan, director of system verification for Rockwell tells it, enlisting Keylabs to do a broader quality-assurance assessment was a no-brainer. “The Cisco certification was one of the keys, but beyond that, when we looked at the cost of expanding our facilities [to more thoroughly test the application], it just wasn’t practical,” he explains, noting that he “wanted to test and look at latency across the country—to be able to test going across time zones and states—which we couldn’t do” with Rockwell’s existing infrastructure.

Rockwell’s IT budget is still tight, but Sullivan says he had surprisingly little difficulty selling upper management on an outside QA assessment, even though it wasn’t part of the original RFP.

“It was a no-brainer that we’re going to do this, and it was a key component to make sure this thing’s ready to go to sales,” he explains. “Once you open that door, you see how else you can expand the testing. I came back and said, ‘I need X dollars for this; it wasn’t part of the original project, but it’s something we need to do,’ and they approved it without difficulty.”

So what did Rockwell get for its money? “What we were looking for was more or less a report—almost an Excel spreadsheet—that would enable us to see what impact we would get on [the application] from our testing,” says Sullivan.

Aside from ferreting out bugs and other performance issues, Rockwell wanted to determine the limitations of its new call-center application, so that sales representatives wouldn’t over-sell its capabilities to customers. “We were looking for the kind of stuff like, ‘Here’s where you need to be [in terms of network resources or capacity] to make this effective in your environment.’”

Outside of overseeing the testing of a large, distributed application that touches a heterogeneous mix of other applications and networks, what does a third-party QA firm such as Keylabs bring to the table?

Sullivan invokes a familiar trope. “If you do this internally, I think you run the risk of your people becoming very familiar with the product, which means that you’re going to miss some things,” he observes. “In our case, I guess we can’t say enough about going across different firewalls and from one environment to the next, because we constantly go into a customer site and see something we weren’t able to reproduce because of traffic that we have no outside knowledge of.”

Jim Chamberlain, vice-president of sales with Keylabs, says that as IT budgets have recovered, he’s seen an uptick in interest in third-party QA, which he attributes to organizations' efforts to eliminate application performance issues before they become long-term drags on their yearly IT budgets.

“That’s certainly a trend that we’re seeing more and more, especially when they’re utilizing these open systems and layering those systems on top of customer environments. You find that those environments become very sophisticated and complex to work with,” he comments.

In fact, Chamberlain says, building custom applications on top of application architectures such as J2EE, .NET, or (in Rockwell’s case) Cisco CallManager can make testing several orders of magnitude more complicated.

“We recently calculated for one of our customers that the number of potential test cases for their environment is approaching 84 trillion, and that’s just a massive amount,” he notes, stressing that this is just as true for an enterprise IT organization as it is for a services provider such as Rockwell.

“You look at some stacks of software and you’ll see 10, 12, 20 different vendors in an application stack, and what happens when one of those needs to be upgraded to affect some specific functionality? Things get really complicated, that’s what, and I think a lot of enterprises are seeing that they don’t necessarily have the capability to do that level of testing, and that’s where we come in to do the assessments for them.”
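Chamberlain’s point about test-case counts is ultimately multiplicative: each independently varying layer in a stack multiplies the number of configurations that could, in principle, be tested. The sketch below illustrates the arithmetic with purely hypothetical layer and version counts (these numbers are invented for illustration and are not Keylabs’ or Rockwell’s figures):

```python
from math import prod

# Hypothetical test matrix: each layer of an application stack can vary
# independently, so the total number of distinct configurations is the
# product of the option counts per layer.
stack_options = {
    "os_versions": 4,
    "database_versions": 5,
    "middleware_versions": 6,
    "patch_levels": 7,
    "browser_clients": 8,
    "hardware_profiles": 9,
    "network_conditions": 10,
    "locales": 12,
}

total = prod(stack_options.values())
print(f"{total:,} possible configurations")  # over 7 million, from just 8 layers
```

Even these modest made-up numbers yield millions of combinations; add a few more layers, or a couple more vendors per layer, and the count climbs into the trillions—which is why exhaustive in-house testing of a 10-to-20-vendor stack quickly becomes impractical.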

Sullivan agrees. “It used to be that you could go into any environment, and this [application downtime] was almost just accepted. You just can’t live with that anymore,” he says. “We’re taking our products and layering them over Microsoft and other applications, where everything is so integral that we need to know how we sit with all of these. So from our perspective, this is essential.”

About the Author

Stephen Swoyer is a Nashville, TN-based freelance journalist who writes about technology.