In-Depth

Troubleshooting Application Performance: IT Still Stuck in the Stone Age

Most organizations rely on resource-intensive and manual processes to identify and fix their application-performance woes

When it comes to troubleshooting application performance problems, most organizations are still stuck in reactive mode.

That’s one upshot of a recent Forrester Research study co-sponsored by application-testing tools vendor Compuware Corp. The Forrester survey found that most organizations rely on resource-intensive and manual processes to identify and fix their application-performance woes. The results were based on responses from 430 senior IT executives at Global 2000 companies in the United States and the European Union.

This jibes with the results of a study conducted nearly two years ago, when Compuware Corp. first asked Forrester to take a look at the manner in which organizations typically dealt with application performance issues. At the time, Forrester concluded that companies were still troubleshooting application performance the old-fashioned way—at the enterprise help desk.

Compuware isn’t without an angle of its own here. To the extent it’s able to highlight the problems posed by poor application performance, it’s also more likely to find a receptive audience for its Vantage application service-management product. In fact, Compuware recently announced a new Vantage 9.8 release, keyed to what officials describe as the problem of optimizing the end-user application experience. With this in mind, Forrester surveyed CIOs and executive decision makers about the importance of delivering application services to users and the challenges they face with service management.

Recognition and Inaction

The results, Forrester found, were largely predictable: by an overwhelming margin (88 percent to 12 percent), CIOs recognize the importance of managing the application service, but by an almost equally overwhelming margin (73 percent to 27 percent), most have yet to take steps to do so.

But they’ve got every reason to, says Lloyd Bloom, a senior product manager with Compuware. “In this year’s survey we asked [CIOs] to estimate the cost of downtime as well … and what we found was that it typically came to $10,000 to $100,000 per hour for 43 percent of the organizations. These are large organizations, Global 2000 [companies],” he notes. “In fact, seven percent of them said [downtime] cost them more than $1 million an hour.”

The new results jibe in some respects with those of the older survey. In that assessment, Forrester found that the enterprise help desk was often the first to know about application-availability problems or application-performance issues. Compuware said this was proof positive of a reactive service model. In 18 months, not much has changed, Bloom says.

“Two years ago, 60 percent [of CIOs] said they were monitoring both availability and response time, which was interesting because [in the 2003 survey] 73 percent of them said they weren’t getting that information, that they were still relying on the help desk for it,” he observes. “The unfortunate result is that there hasn’t been much improvement over the last two years in terms of how organizations track end-user experience problems. In general they’re still relying on the help desk, and 63 percent [say they still get this information from the help desk], little changed from two years ago.”

That’s where a tool like Compuware’s Vantage 9.8 comes in, says Bloom. The new version has been overhauled to address more directly the application-performance issues that matter most to end users: measuring application response time and other performance metrics from the end user’s desktop, for example. To that end, Vantage 9.8 supports agent-less end-user performance monitoring alongside traditional active and passive agents. It also supports real-time monitoring of an organization’s Web site infrastructure, analysis of HTTP and HTTPS applications, and improved J2EE monitoring and troubleshooting.
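To make the measurement concrete, consider what per-request response-time instrumentation looks like at its simplest. The sketch below is generic servlet-filter Java, not Compuware’s code, and the class name ResponseTimeFilter is hypothetical; products such as Vantage automate this kind of collection (plus desktop- and network-level timing) without requiring changes to application code.

// Illustrative only: a minimal J2EE servlet filter that logs how long the
// server takes to handle each HTTP request. Not Compuware Vantage code.
import java.io.IOException;
import java.util.logging.Logger;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

public class ResponseTimeFilter implements Filter {
    private static final Logger LOG = Logger.getLogger(ResponseTimeFilter.class.getName());

    public void init(FilterConfig config) throws ServletException { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        long start = System.nanoTime();
        try {
            chain.doFilter(req, res);  // let the application handle the request
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1000000L;
            String uri = (req instanceof HttpServletRequest)
                    ? ((HttpServletRequest) req).getRequestURI()
                    : "unknown";
            LOG.info("Served " + uri + " in " + elapsedMs + " ms");
        }
    }

    public void destroy() { }
}

A filter like this captures only the server’s side of the transaction; the point of end-user monitoring, whether agent-based or agent-less, is to capture what the user actually experiences at the desktop, including network and browser time.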

Compuware’s new agent-less monitoring capability comes by way of Adlex, which the company acquired in May. In the past, Compuware had championed the value of agent-based monitoring, but Bloom says each approach has its place.

“It’s true that some organizations don’t like or don’t trust agents, but the opposite is also true; we’re polarized everywhere. There are [organizations] that prefer agent-less, there are organizations that prefer agent-based, but there are specific needs that dictate the use of each technology. With agent technology, I can get out to sites I can easily reach, such as my main corporate offices and my headquarters. I can put robots in those sites, and I can put passive agents on desktops and bring that in to my main console, but I can’t do that for my remote users.”
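The “robots” Bloom refers to are active agents: scripted clients that periodically exercise an application from a known location and record how long it takes to respond. As a rough illustration only (the target URL is a placeholder and the class name SyntheticCheck is hypothetical, not part of any Compuware product), such a check can be as simple as this:

// Illustrative sketch of an active agent, or “robot”: a scripted client that
// requests a page once a minute and records status and response time.
import java.net.HttpURLConnection;
import java.net.URL;

public class SyntheticCheck {
    public static void main(String[] args) throws Exception {
        URL target = new URL("https://intranet.example.com/app/login");  // hypothetical endpoint
        while (true) {
            long start = System.currentTimeMillis();
            HttpURLConnection conn = (HttpURLConnection) target.openConnection();
            int status = conn.getResponseCode();  // sends the request and reads the status line
            long elapsedMs = System.currentTimeMillis() - start;
            conn.disconnect();
            System.out.println("status=" + status + " responseTimeMs=" + elapsedMs);
            Thread.sleep(60 * 1000);  // repeat the check once a minute
        }
    }
}

Agent-less monitoring, by contrast, typically watches real user traffic passively from the network, which is why it can cover remote users that scripted robots and desktop agents cannot reach.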

Land of Confusion

If IT organizations are always among the last to know when there’s an application-performance problem, they’re typically among the first to overreact, too.

Forrester reports that IT groups frequently respond to application-performance or -availability issues by throwing as many people at the problem as possible. In fact, almost every survey respondent admitted doing so. Why? In large part, Forrester says, because they’re unable to isolate the problem by means of their existing diagnostic tooling. That’s according to no less than 86 percent of respondents. (For the record, 94 percent of respondents admitted that they involve multiple people in problem resolution, while 38 percent copped to involving more than six people to resolve problems.)

A reactive model like this amplifies the damage done when application downtime or performance problems do occur. Forrester found that the cost of an hour of downtime varies from organization to organization, but, in general, downtime is expensive: 43 percent of respondents said application unavailability cost them between $10,000 and $100,000 per hour, while 35 percent said downtime cost them more than $100,000 per hour.

At the same time, Bloom argues, this survey’s results are slightly more encouraging than those of its predecessor. If nothing else, he suggests, they show that CIOs have accepted the importance of application service management, a sign that things may improve further still.

“[T]he survey two years ago showed that 66 percent of the time [downtime] involved more than six people, [which improved] to 38 percent [this year], although 24 percent of the time they’re still involving ten or more [people],” he says. “CIOs are more in tune with what the issue is, and that sets it up so that our salespeople can go in and, instead of evangelizing, sell more often than not to somebody who understands the problem.”

About the Author

Stephen Swoyer is a Nashville, TN-based freelance journalist who writes about technology.
