In-Depth

When Application Management Collides with SOA

As applications get more distributed and complex, enterprises need to consider tools that handle application service modeling and mapping for change management.

by Terry Sweeney

In IT, pain is relative. There's the pain of a stack of trouble tickets in your inbox. And there's the pain of a 30-hour marathon conference call, like the one a multi-state health care provider (which asked not to be named) recently endured while trying to troubleshoot a highly distributed application that had gone haywire.

While it might be tempting to write off that experience as isolated, enterprises across all industry sectors are bumping up against the management realities of distributed applications built on service-oriented architecture (SOA), Web services, and other virtualized application schemes. These models have all gone mainstream, and with good reason: they're more cost-effective, flexible, and robust than basic client-server apps.

They're also incredibly complex. In many instances, these composite applications are built on top of platforms such as BEA WebLogic, IBM WebSphere, and Oracle Fusion Middleware. Moreover, few enterprises are able to track the changing relationships among workgroup servers, databases, Web servers, enterprise service buses, and metadata repositories.

As a result, this composite approach to building and delivering applications (not to mention keeping the enterprise operational) renders conventional systems management tools -- SNMP monitoring, application performance monitoring, transaction tracing, and infrastructure/application component mapping tools -- largely ineffective. Each can see part of the overall picture, but that visibility gap means they can't reliably track end-to-end performance or handle change management, and they offer no application service modeling or mapping, according to Julie Craig, senior analyst at Enterprise Management Associates in Boulder, Colo.

The fallout from poorly managed change can slam an enterprise hard, Craig says. "For enterprises with a highly structured IT management environment, 25 percent of their changes result in some kind of production problem. For those who aren't as structured, that figure can run as high as 80 percent," she notes.

Several factors keep management functions artificially compartmentalized. For one thing, different middleware technologies and off-the-shelf code components hide key relationships between business functions such as inventory, fulfillment, and accounts receivable.

Second, traditional performance and systems management tools collect and display metrics at the code level, without any business context.

Finally, this new generation of applications gets reconfigured frequently, sometimes weekly or even daily.

In addition, traditional application performance management solutions can take months to implement. That's a real problem in the middle of a seasonal rush, or, in the case of the health care company, when patient data can't be readily accessed.

It's the same issue Ingram Micro Inc. of Santa Ana, Calif., faced: a search function on its Web site that let customers find (and buy) products was malfunctioning. Barney Sene, corporate VP and CTO, said he and his staff spent six weeks trying to identify the culprit. The job was complicated by all the third-party software the company has either acquired or integrated into its data center.

"We brought up a new version of a client's software, but we didn't see an issue till we got more volume on it, and then it crossed our whole Web tier," Sene explained. "We first assumed it was a database with connection pulling, specific to our application."

Within 24 hours of deploying its selected solution, Ingram was able to follow the full context of transactions through the systems it had developed, as well as into third-party tiers and applications. That, in turn, allowed the team to isolate the problem. "This is important, since we spent so much time debating with the vendor" over who needed to fix the problem, Sene added. "When you start doing virtualization and sharing resources, it's important to diagnose trouble sources."

This requirement for greater visibility is buttressed by recent EMA research that uncovered IT executives' two most pressing concerns: the high cost of application support and the lack of management tools. In parallel, pressure from the executive suite to align IT more tightly with the dynamics of a competitive business environment has never been more intense. That means IT must have a robust arsenal that allows it to manage internal and external resources in a modular, automated way.

Never ones to miss out on an emerging opportunity or market, vendors are responding with ways to address these gaps in distributed application management; they include ClearApp Inc., Collation (now part of IBM), Relicore Inc. (acquired by Symantec), and Tideway Systems. "These vendors go out and discover where apps are running and map them to the underlying infrastructure so you always have an accurate way to see how your apps relate to the foundational hardware and software," according to Craig.

That stands in contrast to more traditional discovery tools from Hewlett-Packard, IBM and Symantec, which tell IT what servers they have, what software runs on them, and to a lesser extent, the interactions at the network level, says Rob Greer, vice president of marketing for ClearApp, Mountain View, Calif. What these tools fail to capture are the service entry and exit points associated with a particular application or transaction. "You might have application component services running on three different servers," Greer observes, "but which handles shipping or the quote functions? This gets harder to discern as applications leverage shared components on middleware platforms."
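
To make that distinction concrete, here is a minimal sketch in Python of the kind of service-to-infrastructure map such tools aim to maintain; the service and server names are entirely hypothetical, not any vendor's actual data model. Instead of a bare server inventory, each business function is tied to its entry point, its downstream dependencies, and the hosts currently running its components.

    from dataclasses import dataclass, field

    @dataclass
    class ServiceMapping:
        """One business-level service and where its components actually run."""
        name: str                    # e.g. "shipping" or "quote"
        entry_point: str             # URL or queue where requests enter the service
        exit_points: list[str]       # downstream calls the service makes
        servers: set[str] = field(default_factory=set)  # hosts running its components

    # Hypothetical snapshot a discovery pass might produce: the same servers host
    # components of two different business services, which a plain server
    # inventory cannot tell apart.
    service_map = {
        "shipping": ServiceMapping(
            name="shipping",
            entry_point="http://app.example.internal/ship",
            exit_points=["jdbc://orders-db", "jms://carrier-queue"],
            servers={"app01", "app02"},
        ),
        "quote": ServiceMapping(
            name="quote",
            entry_point="http://app.example.internal/quote",
            exit_points=["jdbc://pricing-db"],
            servers={"app02", "app03"},
        ),
    }

    def services_on(server: str) -> list[str]:
        """Which business functions does this box serve?"""
        return [s.name for s in service_map.values() if server in s.servers]

    print(services_on("app02"))   # ['shipping', 'quote']

With a structure like this in place, Greer's shipping-versus-quote question becomes a simple lookup rather than detective work across three servers.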

What should IT buyers look for as they consider their options? The approaches are dissimilar enough, and the underlying issues so complex, that there are a number of features and capabilities to keep in mind, according to analysts and vendors:

-- Tools should be able to generate a pre-deployment model of an application by analyzing code, instrumented monitoring points, and metadata sources such as XML files and Business Process Execution Language (BPEL) definitions.

-- Any models that a tool generates should update themselves automatically as underlying components and relationships change, without requiring manual intervention (a rough sketch of this kind of reconciliation follows the list).

-- Any modeling engine should handle both application change management and virtualized applications; it should also support a wide variety of third-party middleware such as Oracle Fusion Middleware and SOA Suite, BEA WebLogic, IBM WebSphere, Red Hat JBoss, and Apache Tomcat.

-- Rapid deployment time, ideally within hours, as well as no need to loop in developers or someone with J2EE expertise.
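
As a rough illustration of the self-updating models mentioned above -- again with hypothetical names, not any particular vendor's implementation -- the sketch below reconciles a freshly discovered topology with the stored model and reports what changed. A real product would derive the discovered snapshot from instrumentation and metadata such as the XML or BPEL descriptors noted earlier.

    def update_model(model: dict[str, set[str]],
                     discovered: dict[str, set[str]]) -> dict[str, dict]:
        """Reconcile the stored service model with a newly discovered topology.

        Both arguments map a service name to the set of servers hosting it.
        Returns the changes applied, so they can be logged or handed to
        change management.
        """
        changes = {}
        for service, servers in discovered.items():
            old = model.get(service, set())
            if servers != old:
                changes[service] = {"added": servers - old, "removed": old - servers}
                model[service] = set(servers)
        for service in set(model) - set(discovered):   # services that disappeared
            changes[service] = {"added": set(), "removed": model.pop(service)}
        return changes

    # Example: the quote service moved off app03 onto a new host between passes.
    model = {"shipping": {"app01", "app02"}, "quote": {"app02", "app03"}}
    discovered = {"shipping": {"app01", "app02"}, "quote": {"app02", "app04"}}
    print(update_model(model, discovered))
    # {'quote': {'added': {'app04'}, 'removed': {'app03'}}}

The point of the diff is that change management gets an explicit record of what moved where, rather than finding out from a trouble ticket.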

Some sort of application management tool that handles mapping and modeling is quickly moving from luxury to necessity, especially if any business-critical apps run over SOA or Web services, says EMA's Craig. "You can either invest in people and throw them at the troubleshooting issue or you can invest in tools that are going to automate the management," she points out. The people approach might have worked five years ago, when system complexity was a far cry from what it is now, but most organizations can't afford to have an e-commerce app down for an hour while people try to diagnose the source.

"Things aren't going to get any less complex in the next few years with increasing distribution and virtualization," Craig notes. "Automation gives you a much better chance of keeping your business-critical services up and running."

Far away from any marathon troubleshooting sessions, IT surely hopes.

- - -

Terry Sweeney is a freelance writer. You can reach Terry at terry@tsweeney.com
