Emerging Trends in Managing IT Service
Financial Services is an application-intensive industry uniquely reliant on end-user system availability.
There is no other vertical market that relies more heavily on mission-critical applications than financial services. Can you imagine a bank, brokerage, or insurance company deciding to do business the old-fashioned way and eliminating its trading-desk software or electronic credit card processing? Of course not! In fact, in many ways their business IS the application or mix of applications they use, and how well their applications perform dictates how well their business performs.
Consequently, maintaining application availability and performance is a top job for IT (along with maintaining data security and ensuring regulatory compliance). For years, companies have monitored the performance of applications through server- and network-monitoring products. While systems tools can identify obvious performance issues, such as a server failure or poor network performance between sites, they really have no idea how the applications are performing on the desktop itself. These tools cannot measure performance degradation across the infrastructure or the impact of multiple applications running concurrently on the desktop.
The result is a mismatch between the true end-user experience and what IT thinks it is. In financial services this mismatch could mean lost productivity and, in many cases, directly impact the bottom line. A better way to understand and manage applications is by enabling IT to view performance as the end user does.
Any end-user perspective must consider the performance of all applications used by the end-user community. Often IT does not have a complete picture because users download new applications from the Web or internal sites and may not use the standard set of applications and versions installed on the desktop. So while IT attempts to provide a “standard” desktop or “common image” to ensure performance, they find that very few desktops remain “standard” after they're deployed.
This is especially true in financial services, where most users have personal preferences they think make them more productive and most desktops typically have multiple applications that include both third-party and home-grown applications, which can be either Web-based or thick client. This makes the job of ensuring performance more challenging and requires visibility into exactly what is happening on each desktop before a problem can be diagnosed and resolved.
As an example, one solution to measure the performance of Web-based applications is to use synthetic transactions (or simply probes). The advantage of this approach is that IT understands basic availability and performance characteristics of an application in advance of use by the end user (such as the start of the trading day). The disadvantage is that the solution does not provide insight into what is actually occurring during real production use in the heat of the day.
Synthetic testing does not consider what other applications and processes are running on the end-user systems that may impact performance, nor does it take into account individual desktop issues, since synthetic testing occurs only between a small, select group of clients. Consequently, the IT perspective may be that the application is performing well; however, the end-user experience is quite different. In order to understand the true application performance as experienced by the end user, you must see and use real production data and measure it across every mission-critical desktop.
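A synthetic probe of this kind can be sketched in a few lines. The Python sketch below is illustrative only—the transaction, the two-second threshold, and the pre-market scheduling are assumptions, not a description of any particular product. It times a single scripted transaction and flags whether it met a response-time target, the sort of check IT might run before the start of the trading day.

```python
import time

def probe(transaction, threshold_s=2.0):
    """Run one synthetic transaction (any zero-argument callable,
    e.g. a scripted login or quote lookup) and time it.
    The 2-second threshold is an illustrative SLA target."""
    start = time.perf_counter()
    try:
        transaction()          # e.g. fetch a quote page, submit a test order
        ok = True
    except Exception:
        ok = False             # the transaction failed outright
    elapsed = time.perf_counter() - start
    return {"ok": ok, "elapsed_s": elapsed,
            "within_sla": ok and elapsed <= threshold_s}
```

Note what the sketch cannot capture: it says nothing about the other processes competing for resources on a real trader's desktop, which is exactly the blind spot described above.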
Becoming Proactive—A Strategic Necessity
Since applications are critical to a financial services firm's success, it follows that maintaining their peak performance by addressing issues before they impact the business is a strategic necessity. While applications vary across financial services organizations, the need for high availability and performance is the same. What a brokerage house uses to obtain real-time market data or place a trade through a trading application differs from how a credit card processor computes and stores transactions or a bank processes a loan. Yet these applications are similar in their absolute importance to the business.
Because the cost of downtime and lost productivity is so high in financial services organizations, a proactive approach to managing mission-critical applications is required. It is not sufficient to simply react more quickly when a user reports an error or calls the service desk to open a trouble ticket. You need to anticipate and prevent the problem ahead of time—before end-user problems disrupt revenue-generating activities.
A best practice methodology to become proactive and deal with the complexity of each individual desktop and application mix is to:
- Identify the applications that are critical to the business
- Prioritize the applications based on the strategic and economic value to your business
- Define acceptable service level metrics that meet business requirements
- Measure and monitor these metrics and trends 24x7, using historical analysis, group comparisons, and detailed diagnostics to spot problems
- Build and enforce a process to proactively identify and resolve issues with these applications—understanding how they interact with the network, systems, and other applications
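As a concrete illustration of the first three steps, the Python sketch below shows how prioritized applications with defined service levels might be evaluated against measured samples. The application names, priorities, thresholds, and metrics are hypothetical, chosen only to make the methodology tangible.

```python
from statistics import mean

# Hypothetical service-level definitions for mission-critical applications,
# prioritized by business value (step 2) with metrics from step 3.
SLAS = {
    "trading-desk":    {"priority": 1, "max_response_ms": 500,
                        "min_availability": 0.999},
    "card-processing": {"priority": 2, "max_response_ms": 1000,
                        "min_availability": 0.995},
}

def evaluate(app, response_times_ms, up_samples, total_samples):
    """Compare measured samples against an application's service levels.
    Returns the list of breached metrics (empty means within SLA)."""
    sla = SLAS[app]
    availability = up_samples / total_samples
    breaches = []
    if mean(response_times_ms) > sla["max_response_ms"]:
        breaches.append("response")
    if availability < sla["min_availability"]:
        breaches.append("availability")
    return breaches
```

Steps four and five—continuous monitoring and an enforcement process—would run this kind of evaluation on a schedule and route any breach to the support team before users call the service desk.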
To effectively do this, a new approach is required to monitor application performance and availability that begins at the desktop using unique data collection technology. It is not reasonable to expect a server-monitoring agent to collect the required depth of data at the desktop or understand end-user experience in totality by simply measuring Web page responses. What is needed is a "purpose designed" end-user application performance solution that unobtrusively monitors everything running on the desktop, analyzes all socket-level transactions, correlates application, network, and system-hardware performance affecting end-user productivity, and enables proactive management of mission-critical applications.
Let’s look at each of the technology requirements for collecting and analyzing performance data from the end-user perspective to enable proactive management of mission-critical applications.
- Purpose Designed. Every desktop is different: each has different applications, user preferences, suppliers, models, and versions. The same is true of laptops, with the added complexity of mobility—each laptop accesses the network in different ways, or in a different way each time. End users frequently multitask, keeping ten or more windows open simultaneously—running thick-client applications while accessing a Web server application and, perhaps, logging into a mainframe application. Any solution has to be built to address the heterogeneity and complexity of the end-user system environment, unlike the simpler monitoring requirements of a virtually pristine server environment. The solution requires a tool that is designed with the PC in mind—designed to deal with the complexity of the environment while consuming minimal CPU and memory, with no impact on end-user performance.
- Understand Change. The desktop is constantly changing in a financial services environment. Different lines of business deploy new versions of their specialized homegrown applications frequently, even more so in response to the latest security concerns. Users download new applications and executables, often looking to increase their own productivity without realizing that they could be downloading hidden spyware, which can be the source of performance issues for their mission-critical applications.
Any solution needs to understand every application used by the end user, without requiring special instrumentation or the need to constantly update configurations to match the changing end-user environment. The solution must continually track what has changed on the desktop (since we know a standard image does not remain the same) to help support resources identify the root cause of a problem and to enable preventive action when they are aware of a change that could become a problem.
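A minimal way to track such change is to diff periodic inventories of what is installed on each desktop. The sketch below uses a deliberately simplified data model—a name-to-version map, which is an assumption for illustration—to surface additions, removals, and version changes since the last known-good baseline.

```python
def diff_inventory(baseline, current):
    """Compare two {name: version} inventories of installed software.
    Returns what was added, what was removed, and what changed version."""
    added = {k: v for k, v in current.items() if k not in baseline}
    removed = {k: v for k, v in baseline.items() if k not in current}
    changed = {k: (baseline[k], current[k])
               for k in baseline.keys() & current.keys()
               if baseline[k] != current[k]}
    return added, removed, changed
```

When a mission-critical application slows down, a report that a new toolbar appeared or a homegrown client was upgraded that morning gives support staff an immediate place to start.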
- Correlate Network and System Resources with Application Performance. Because financial services applications stress desktop and laptop resources as well as the network, it is important that any solution be able to correlate hardware and network resource activity with application performance. To fully diagnose a problem and understand the complex interactions at each endpoint, it is important to track and correlate changes and resource utilization over time. A correlated database must be a core component of any desktop performance and availability solution.
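The kind of correlation meant here can be illustrated with a standard Pearson coefficient over paired samples—say, per-minute CPU utilization against application response time on the same desktop. The metric pairing is an assumption for illustration; any two time-aligned series would do.

```python
from statistics import mean, pstdev

def correlate(xs, ys):
    """Pearson correlation of paired samples, e.g. CPU utilization (%)
    vs. application response time (ms). Assumes equal-length,
    non-constant series (a constant series has zero deviation)."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))
```

A strong positive correlation between, say, paging activity and response time points diagnosis at the endpoint's hardware rather than the network; no correlation suggests looking elsewhere.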
If you’re responsible for managing financial services applications, you need to know about system and application performance issues before the user does and address them proactively. By avoiding outages, delays, crashes, and errors, you can directly improve business performance. To get ahead of problems in this complex, heterogeneous, constantly changing environment, a purpose-designed performance and availability management solution that provides an end-user perspective is required.
Lou Shipley is president and CEO of Reflectent Software, Inc.; the company develops enterprise scalable End-user Systems Management™ software that provides IT with an integrated, real-time view of enterprise applications from the end-user's perspective.