Controlling Application Performance: What Every CIO Should Know

Knowing how to measure, manage, and improve application performance should be the goal of every chief information officer. The CIO must address the concerns and costs of poor performance management and the consequences of inadequate approaches to application performance management.

Editor’s note: The following article was republished courtesy of the ICCM, a Demand Technology company.

Performance is a vital component of software quality that is frequently forgotten. The top 10 problem applications get plenty of attention. But what about the remainder? By adopting a reactive approach to performance management, most companies succeed in adding cost to the enterprise:

  • Untuned application software costs companies millions of dollars annually in unnecessary processing.
  • Untuned application software results in unnecessary and costly hardware upgrades due to excess CPU utilization, increased batch processing time, and/or unacceptable response times.
  • Untuned application software disrupts business operations. Latent performance problems appear during peak business periods when mission-critical information systems become overtaxed.
  • Untuned application software undermines mission-critical business practices and the ability to sell or deliver core products or services.

    The Key Role of the CIO

    CIOs face complicated issues of data center infrastructure, a massive portfolio of widely differing application workloads, and rapidly changing business requirements. Strategically, CIOs need to react quickly to competition. Tactically, they must create common processes across business units to make the collection and dissemination of information simpler, and to slash the cost of new applications. The Year 2000 problem alone requires significant changes to the application portfolio just to stay in business.

    Compared with these concerns, application performance lacks glamour. Nevertheless, unlike a Y2K project, a systematic program of performance monitoring and tuning almost always produces surprising cost savings for large companies. CIO sponsorship is vital for such a program to succeed.

    In the first place, only the CIO can negotiate the transfer of funds between the capital equipment, salary and software budgets. Second, management prioritization and employee motivation are required to convince lower-level managers and technical professionals of the importance of application performance. Only the CIO can provide the necessary emphasis.

    New systems run into performance problems so often that nearly every company has experienced a performance "disaster." Trade journals often print stories of important projects gone bad, of millions of dollars lost due to a critical application that, at the last minute, proved unworkable.

    Right behind disasters comes a much larger class of applications that actually are placed into production, only to become a continual source of irritation and complaints. Although they are functionally correct, their sluggish behavior has costly side effects by wasting expensive computing resources every time they are run, and frustrating employees who need these systems to do their jobs. Productivity drops and attrition rates rise.

    Untuned applications become the real "performance disaster" story. A large enterprise depends on thousands of programs. Even a moderate-sized enterprise can amass an application portfolio of hundreds of programs. Only a few are important enough to get the attention of busy systems staff. The remainder simply continue to operate "normally" while invisibly draining valuable computing resources, month after month, year after year.

    The cumulative cost of untuned applications is staggering. It is not unusual for an enterprise to spend 25 to 50 percent of its total computing budget on unnecessary processing.

    Inefficient processing has indirect costs, too. Unresponsive business systems can lead to lost business when the applications that were supposed to support a business process cannot keep pace with it.

    Effective CIOs aim to have all the right information resources and services available promptly to support interactions with customers. When vital information processing applications run slowly, customers are often directly affected. Conversely, efficient systems afford a positive experience that attracts and retains customers.

    Responsibility

    One of the central challenges of performance management is this: Although performance involves everyone, usually no one feels fully responsible for it.

    In information systems, there are many different jobs, and people must specialize to be effective. A major division exists between those whose focus is applications and those whose job is to manage the systems. The difference in skills between the two groups grows ever wider. Systems specialists struggle to keep up with new technology; application developers seek to understand the business. Packaged software development environments improve developer productivity while isolating them from how systems actually work.

    Performance management is affected by this division of skills because application performance is determined by both internal and external influences.

    Internally, an application’s design and implementation determine the load it places on the computer system’s resources. Externally, the computing environment determines the time it takes to process the application’s workload.

    Application developers are not trained to take all these factors into account, while systems specialists know little about what an application does, know even less about what it is supposed to be doing, and know nothing at all about why.

    Both groups influence performance but neither has an explicit focus on application performance.

    Managing application performance demands a set of skills that cuts across departmental boundaries. For this reason, performance management is often addressed in a reactive mode.

    However, some companies have adopted a more systematic approach, based on performance as a team responsibility. They either establish a group of skilled performance specialists with the right backgrounds (the Performance Department) or, with a matrix management approach, bring together the necessary skills from different departments (the Performance Review Board).

    Ten Key Challenges of Performance Management

    Since no company wants to waste money, why are performance problems so common? Here are 10 reasons:

    1. No Senior Management Sponsor Exists. It is hard to demonstrate the benefits of application performance management to someone until he or she experiences the cost of not doing it. Risk denial ("There won’t be any problems!") and cavalier attitudes ("Someone else will fix them!") are common.

    Clear direction from senior management determines attitudes lower in the organization. When senior management deems good service levels essential, lower-level managers and technical staff will treat performance problems seriously.

    2. Commitment and Coordination. A key difficulty in effective performance management is that no one person can do the whole job. Effectiveness requires an array of skills and perspectives that spans several departments.

    It is not easy to explain why organizations that manage complex projects find it so hard to manage application performance. Lack of CIO focus in this area is a contributing factor.

    3. Communication Barriers Abound. Whenever people with different skills and backgrounds need to cooperate, communication is a challenge.

    The division of skills, although inevitable, makes organizations less efficient in tackling performance issues. Systems specialists have tools that capture details of an application’s behavior but lack the functional knowledge to fix in-depth application performance problems. But the application developers who understand function find performance tools incomprehensible.

    Unless an organization can establish shared performance goals, a common language for discussing those goals, and a way to bridge the gaps in skills between departments, there is little hope of effective cooperation.

    4. Those Responsible Adopt Reactive Management Styles. Traditionally, IS departments have created new application solutions in response to business demands. The CIO’s reactive approach trickles down to lower-level managers and affects the response to performance management issues.

    Introducing systematic performance management tools and processes, and requiring that key managers adopt a more proactive approach, is the way to break the pattern.

    5. Systems Managers See No Immediately Visible Return on Systems Performance Management. Systems managers are governed by "the tyranny of the urgent." It is difficult for systems managers to assign skilled staff to track the performance of application software that is not visibly causing a problem. More urgent and visible problems demand immediate attention. Performance management can wait until tomorrow. The only effective counter to tactically oriented resource allocation is a more strategic view by the CIO.

    6. Little Focus on Performance During Development. Many performance problems arise because of developer inexperience or lack of tools. Even projects with good people, good managers and good methods suffer from a performance Achilles’ heel. The designers of failed applications built them in good faith, believing they were workable. Unfortunately, performance problems arose but were discovered too late to be fixed.

    Necessary tools do exist, and organizations possess the skills to use them, but they choose to focus instead on other priorities, such as function, schedules or costs. Although acceptable application performance is certainly not the only goal of software development, it should not be ignored either.

    7. Development Managers See No Visible Return on Application Performance Management. A major obstacle to application performance management is that development managers are asked to invest time and resources early on, with no visible return on the investment. The benefits come later, in more efficient systems that reduce hardware costs and staff costs generated by tuning and rework.

    Many managers say in effect, "There won’t be any problems, and even if there are, someone else will have to fix them." These attitudes persist as long as the focus is on delivering timely application functions within budget.

    For the development manager, caught between demand for new features and pressure to deliver, performance inevitably gets squeezed. Only the CIO can alter this equation, by factoring in the later costs of poorly performing applications.

    8. For Developers, Performance Is Not Exciting. Many developers want to use the latest hot tools and techniques. In the rush to embrace new technologies, application performance concerns can be overlooked. Rapid application development (RAD) tools and methods focus on the user’s interaction and the screen’s appearance, while ignoring resource demands and application performance.

    A CIO has the opportunity to address the separate issues of staff attrition and costs at once. Employees need to share a sense of contributing to the success of their organization. By giving more visibility to the real costs of poor application performance, CIOs can show developers that their technical efforts have a direct impact on the bottom line.

    9. Developers Regard Performance as a "Systems" Issue. Many developers act as if performance is unrelated to functional design. While an enterprising programmer may notice an inefficiency and take the initiative to do "extra" tuning work, others assume that performance issues are the responsibility of the database administrator or systems specialist.

    These misconceptions interfere with making performance an integral element of application quality. They must be countered by a development process that incorporates a view of performance based on the need for applications to use computing resources efficiently and to meet response objectives.

    10. There Are Too Many Programs to Track Manually. Systems specialists often work on performance-related issues, but applications receive scant attention. There are so many programs in the typical production environment that only the largest, most frequently run, most poorly performing programs are tracked on a regular basis. Reactive approaches to performance management make this inevitable.

    A proactive approach demands a comprehensive, automated way to monitor applications coupled with a systematic review process.

    Requirements for a Performance Management Environment

    Performance issues touch many different areas of the enterprise, and disputes about service levels can lead to turf wars. The process stalls when everyone involved says, "The problem is not in my area."

    Entrenched positions can determine how and when performance issues are tackled. No one may be willing to fight a political battle when "everyone knows it costs too much to manage application performance across the board."

    Service-level management approaches solve this kind of problem by establishing shared ownership of performance and service-level issues and a common way of thinking about performance and business goals. It helps enormously to have a high-level management sponsor who establishes the importance of these goals, helps resolve conflicts, and cuts through the organization’s political barriers.

    Ultimately, money spent on running applications is the concern of the CIO and senior IS executives. These individuals need to step up to the challenge and focus on this issue.

    An effective way to do this is to mandate that the internal cost of running the company’s applications be reduced by a percentage over a defined time period.

    Performance management begins with knowledge of how applications are using computing resources – that is, a set of application performance profiles. To create application performance profiles on an ongoing basis, we need the following:

  • Measurement tool(s) to gather the raw data about application behavior
  • Analysis tools to filter and summarize measurement data and develop profiles
  • A database of application performance history
  • Analysis tools to read the database and simplify trend analysis

    To overcome communication barriers, it is essential for team members to speak "a common language." Here, a common set of tools is very helpful. If team members use the same tools to look at the same data, they are much more likely to understand each other.
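    As a concrete illustration, the profile-building step can be sketched in a few lines. This is a hypothetical example, not any vendor's tool: the record fields (`app`, `cpu_seconds`, `response_seconds`) and the summary statistics chosen are assumptions made for illustration.

```python
# Hypothetical sketch: summarize raw measurement records into
# per-application performance profiles. Field names are illustrative,
# not taken from any specific monitoring product.
from collections import defaultdict
from statistics import mean

def build_profiles(records):
    """Group raw measurement records by application and summarize them."""
    by_app = defaultdict(list)
    for rec in records:
        by_app[rec["app"]].append(rec)
    profiles = {}
    for app, recs in by_app.items():
        profiles[app] = {
            "runs": len(recs),
            "total_cpu_seconds": sum(r["cpu_seconds"] for r in recs),
            "avg_response_seconds": mean(r["response_seconds"] for r in recs),
            "peak_response_seconds": max(r["response_seconds"] for r in recs),
        }
    return profiles

records = [
    {"app": "billing", "cpu_seconds": 40.0, "response_seconds": 1.2},
    {"app": "billing", "cpu_seconds": 55.0, "response_seconds": 2.0},
    {"app": "payroll", "cpu_seconds": 10.0, "response_seconds": 0.4},
]
profiles = build_profiles(records)
print(profiles["billing"]["total_cpu_seconds"])  # 95.0
```

    Storing each period's profiles in the performance history database, and comparing successive profiles, is what makes the trend analysis in the last list item possible.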

    The tools in question need to focus on application response time, the one performance metric users care about and the one aspect of performance everyone understands.

    Finally, the performance measurement and analysis activities must be carried out systematically. The ideal performance management process would involve the following:

  • Establishing application and system performance objectives
  • Measuring application performance against those objectives on a systematic basis
  • Automatically measuring a large number of applications and recording and analyzing the resulting data
  • Tracking a high percentage of programs in the application portfolio
  • Automatically highlighting exceptions to the defined performance objectives or recent history
  • Establishing measurement and analysis controls accessible to both development and systems groups
  • Recording performance information in formats that are useful to both development and systems groups

    A performance management department (or cross-functional team) should adopt this list of objectives.
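    One item on the list above, automatic exception highlighting, reduces to a simple comparison: check each application's measured response time against its stated objective and report the misses. The function and data names in this sketch are hypothetical.

```python
# Illustrative sketch of automatic exception highlighting: flag every
# application whose average response time exceeds its objective.
# Objectives and measurements here are invented sample data.

def highlight_exceptions(measurements, objectives):
    """Return (app, measured, objective) tuples for missed objectives."""
    exceptions = []
    for app, avg_response in measurements.items():
        objective = objectives.get(app)
        if objective is not None and avg_response > objective:
            exceptions.append((app, avg_response, objective))
    return exceptions

objectives = {"order_entry": 2.0, "reporting": 10.0}    # seconds
measurements = {"order_entry": 3.4, "reporting": 8.1}   # seconds
print(highlight_exceptions(measurements, objectives))
# [('order_entry', 3.4, 2.0)]
```

    Run daily against the whole portfolio, a report like this keeps attention on the exceptions rather than on the handful of applications that happen to be visible.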

    A Routine Program

    The key to realizing these cost savings is a systematic, ongoing program of monitoring at three levels:

    Level 1 – Continuous, low-cost exception monitoring. Most well-engineered software subsystems like operating systems, databases, and transaction monitors maintain their own counts of both normal activities and exception conditions. Sample these periodically. Abnormalities often indicate performance changes without the need to start any special traces that might affect normal processing.
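    A minimal sketch of this kind of low-cost exception monitoring: periodically sample a subsystem's cumulative exception counter and flag any interval whose increment far exceeds the recent average. The counter source, window size, and threshold factor are all assumptions for illustration.

```python
# Hypothetical Level 1 monitor: track per-interval growth of a cumulative
# exception counter and flag abnormal jumps against a rolling baseline.
from collections import deque

class ExceptionMonitor:
    def __init__(self, window=10, factor=3.0):
        self.deltas = deque(maxlen=window)  # recent per-interval increments
        self.last = None                    # previous counter reading
        self.factor = factor                # "abnormal" multiplier (assumed)

    def sample(self, counter_value):
        """Record one counter sample; return True if growth looks abnormal."""
        if self.last is None:
            self.last = counter_value
            return False
        delta = counter_value - self.last
        self.last = counter_value
        baseline = sum(self.deltas) / len(self.deltas) if self.deltas else None
        self.deltas.append(delta)
        # Flag when this interval's increment far exceeds the recent average.
        return baseline is not None and delta > self.factor * max(baseline, 1)

mon = ExceptionMonitor()
for reading in [100, 102, 104, 106, 150]:   # last interval jumps by 44
    abnormal = mon.sample(reading)
print(abnormal)  # True
```

    Because it only reads counters the subsystem already maintains, a check like this costs almost nothing and needs no special traces.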

    Level 2 – Regular, targeted performance tracking. Monitor key performance indicators regularly for a cross section of critical applications. Record application response times, processor utilizations, and other critical resource utilizations. Track these key indicators over time for signals of impending performance problems.

    Level 3 – Occasional, focused performance audit. Occasionally, focus attention on a single performance-critical application, software component or processor. Create a profile and compare it with previous profiles to detect changes and hidden or potential problems as workload volumes increase.

    Managers accustomed to a performance crisis mentality of "all hands on deck" tend to overestimate the cost of systematically monitoring performance across the board. A performance crisis demands an immediate, massive, and widespread performance audit (level 3).

    The routine activities of levels 1 and 2, by comparison, are more focused and less disruptive; they make lower demands on resources.

    About the Author: Chris Loosley is employed as Senior Internet Consultant for Keynote Systems. Previously, Chris was the Managing Director of the Institute for Computer Capacity Management’s Web site, www.iccmforum.com. He can be reached at cloosley@keynote.com.
