E-business, e-commerce, e-everything!

In the fields of Computer Performance Evaluation (CPE) and Capacity Planning (CP), rapid technology developments are common, as are corporate mergers and acquisitions. It's easy to get caught up in day-to-day crises and to ignore the big picture. So every once in a while it helps to stop what you're doing, look around, and evaluate where you are. Essentially, figuring out where you should be going in the future depends on where you've been in the past and where you are in the here and now.

As the new millennium arrives, now is an appropriate time for a 'stop-and-look'. Our objective is simply to look at where CPE has been, to outline the strategic problems that business in general faces and those that CPE and CP still face, and to offer, where possible, strategies for success.

Introduction

Orientation and Attitude - Business Perspectives Affecting Everything. The business landscape has been undergoing drastic change and will continue to do so. The single most dramatic element contributing to this change is the rapid introduction of new information technology. Creative business models will continue to emerge that take advantage of what the technology has to offer. We would like to take a few moments to orient you to the prevailing attitude.

Several new and radical business perspectives are pervasive, and they can (and will) affect everything.

Why It's Better For Startup Businesses To Have Too Little Capital, Rather Than Too Much - Historically, lots of capital implied that inventory would be amply stocked on opening day. But in current IT environments, a large inventory needlessly ties up capital. With an aggressive E-commerce approach, orders to suppliers can be generated and delivered electronically, minimizing the need to stock lots of product. Suppliers who can respond quickly to orders will replace suppliers that cannot.

How Studying Customers - Rather Than Your Competition - Gives You A Competitive Edge - Studying the methods and practices of one's competition sometimes provides insight into their strengths and weaknesses. The keyword is sometimes - there's never a guarantee that a weakness can be found and exploited. With Web technologies (e.g. cookies), an organization can study and remember a customer's likes and dislikes, and use that information to improve its product and service offerings. Streamlining the buying process, especially on the Web, will make a positive impact on customers.

Why It Can Be Life-Threatening To Your Organization to Pursue Too Many Good Ideas, Or To Grow Too Fast - Because Web technologies are relatively new, it's easy to fall into a trap of simultaneously exploring too many different technologies for possible exploitation. Doing one thing really well will have a greater positive impact on customers than barely being able to do many things. Customers remember positive experiences and will return for repeat business, while negative experiences send them to the competition.

Why Your People Pose A Greater Threat to the Health of Your Business than Your Competition Does - Valued, trusted employees: that's what all organizations want. Over time, these employees will enjoy different measures of success. But success is a lousy teacher - it gives one a false sense of "I can do no wrong." Ingenuity and creativity are not naturally fostered in an environment that has enjoyed moderate success. People naturally want to repeat what they've done in the past to attain success once again. Blindly following past successes or industry trends can be devastating to an organization. Consider 8-track tapes, Beta VCR tapes or vacuum-tube televisions as examples of good technologies gone bad. Today's emerging technologies demand creative thought to figure out how best to apply them to improve products and services for customers.

How Integrating Your Business Virtually Can Make the Difference Between Being Quick And Being Dead - It all comes down to speed: a potential customer visiting a Web site and finding his/her needs met easily and quickly makes all the difference. Not finding the product or service, or not being able to connect to the Web site at all, will cause the customer to leave your Web site and visit the competition. All business processes need to be tied in with this philosophy. And while this "tying together business processes" philosophy may sound great, the not-so-obvious truth is that the underlying hardware, system software, and applications must be sized correctly and tuned effectively to provide efficient operation.

All of the above maxims ring true as computing enters the next millennium. The challenge to CPE and CP is two-fold: first, to understand how to use the technology to find solutions to classic problems, and second, to model and effectively manage the new applications that emerge.

Scalability

In the 1950s and first half of the 1960s, many companies were trying to establish themselves as leaders in the computer industry. Each unique model of a computer from a company had a unique design and required its own operating system and application software. Even within the same company, models were not necessarily compatible. Computers at different price levels had different designs - some were dedicated to scientific applications, others to business applications. It took a great deal of energy and time to get software that ran on one computer to run on another. This was the trend: just keep building different machines and operating systems.

The initiative that revolutionized the industry came out of seeing a real business need. Organizations did not want to keep re-inventing the wheel as their capacity needs grew bigger. And certainly they did not want to keep converting software so that they could say that they were at the "leading edge" or that they were "state-of-the-art."

The key, first used by Tom Watson of IBM, was to develop a scalable architecture. All of the computers in IBM's System/360 family, no matter what size, would respond to the same set of instructions. All of the models would run the same operating system. Customers would be able to move their applications and their hardware peripherals freely from one model to the next. IBM's notion of a scalable architecture reshaped the industry.

During a similar period, Ken Olsen created the minicomputer industry when he founded Digital Equipment Corporation (Digital). He offered the world the first small computer - the PDP-1. Purchasers now had a choice: they could pay millions for an IBM "Big Iron" System/360, or pay about $120,000 for a PDP-1. While not as powerful as the mainframe, it could still be used for a wide variety of applications that didn't need all that computing power. In 1977, Digital introduced its own scalable-architecture platform, the VAX, which ranged from desktop systems to mainframe clusters, and scalability did for Digital in minicomputers what it had done for IBM in mainframes.

What's the lesson here? Companies like IBM and Digital were successful then because they saw a need that business had: to fill incremental computing capacity needs in different ways, without wasting prior investments in IT. That same need is still with us today. If a company needs more computing power, it ought to be able to get more power, so long as its mission-critical application software can still run! From a hardware perspective, scalable systems have become more attainable than ever before, and the ever-declining cost of hardware is the single biggest factor contributing to this reality.

But our industry now faces a new challenge: building scalable software architectures. Historically, software engineering and object-oriented principles have helped software architects build more reliable and maintainable applications, and universities have integrated both software engineering and object-oriented programming courses into their Computer Science curricula. But no one really knows how to build a scalable application. Let's look at this problem a bit closer.

Consider an organization with just two locations: one doing lots of business (which translates into a high number of business transactions), and the other doing a modest business (i.e. implying a much lower transaction count). Clearly, as these two locations may be geographically very far apart, it may make sense to have two separate systems configured for each locality. But it also makes sense to have the ability to look across both systems to get a global view of sales activity. At those times, we need to integrate statistics captured from each system. Obviously, the two-location problem becomes much more pronounced as more locations are added. And if the different locations run applications built differently for different platforms, then we are faced with the additional problem of understanding how to access data from other applications/systems.

In this scenario, having the application software and underlying database management system (DBMS) be the same at both sites would simplify building software that would analyze activities across both sites.

But how should, for example, the DBMS be configured at the smaller site vs. the larger site? Different indexes, for example, might be needed at the larger site. Unique, customized application reporting may be required for each site. As one or both sites grow, how does the capacity planner recognize that changes to underlying application logic (e.g. to build and use new DBMS indexes) are required to maintain good performance? And at what point should the application be ported from smaller hardware to bigger/faster hardware? Current design methodologies don't address these questions, and the organizations that recognize this and attack the problem first will likely find success and a competitive advantage to be reckoned with.
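To make the "global view" concrete, here is a minimal sketch, assuming (purely for illustration) that each site's identically structured system can answer the same simple summary query; the site names and figures are hypothetical.

```python
# A minimal sketch of building a global view across two sites, assuming each
# site's (identically structured) system can return a simple sales summary.
# Site names and figures are hypothetical.

def site_summary(site_name):
    # In practice this would query the site's DBMS; figures are stubbed in here.
    sample_data = {
        "large_site": {"transactions": 250_000, "revenue": 4_750_000.00},
        "small_site": {"transactions": 12_000, "revenue": 310_000.00},
    }
    return sample_data[site_name]

def global_view(site_names):
    totals = {"transactions": 0, "revenue": 0.0}
    for name in site_names:
        summary = site_summary(name)
        totals["transactions"] += summary["transactions"]
        totals["revenue"] += summary["revenue"]
    return totals

print(global_view(["large_site", "small_site"]))
```

The sketch only works because both sites expose the same shape of data; differently built applications at each site would turn this simple loop into a data integration project.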

Compatibility

Computers were once intentionally designed to be incompatible with those from other companies - the manufacturer's objective was to make it difficult and expensive for existing customers to switch brands. Amdahl, Hitachi and other mainframe clone companies ended the mainframe monopoly IBM held. In addition, a cottage industry emerged in the storage arena where companies like StorageTek or EMC could supply completely compatible disk drives for the generic mainframe. Market-driven compatibility proved to be an important lesson for the computer industry.

This notion of market-driven compatibility extended into software and operating systems. While UNIX was once the darling operating system of the academic community, many hardware manufacturers including Sun, Digital and HP embraced it. With its proliferation on many machines, even IBM could not ignore its presence. We see today that MVS, the proprietary operating system of the mainframe, now includes many functions and features to make communication with other UNIX-based systems seamless.

IBM dominated the PC market at its beginnings; then many PC-compatible makers emerged to take part of the pie - as long as their clone PCs ran PC-DOS or MS-DOS applications, they had a chance. Even Apple had to give in and build software that would allow DOS and Windows applications to run on their hardware. Market-driven compatibility is partly responsible for the exponential growth and acceptance of the Internet. Killer programs like Mosaic and its successors, Netscape Navigator and Microsoft's Internet Explorer, allow organizations to share information across different hardware platforms, each running different operating systems.

Perhaps the most critical business IT problem - software portability across platforms - has been solved too, by Sun's JAVA. Efforts like IBM's Systems Application Architecture (SAA) and consortium initiatives like those from the Open Software Foundation (OSF) tried to define an infrastructure common to all, but they failed miserably. With JAVA, an object may be defined on a Sun SparcStation, clipped to a Web page on an HP 9000, cached on NT and fired on a Mac or Network Computer. JAVA makes dynamic distributed systems possible, where we can readily move objects around for optimal placement at development time, deployment time, and even run time. Perhaps compatibility across platforms is really here.

Note, though, that JAVA is not without its drawbacks. How many times have you visited a Web site containing JAVA applets that take an exorbitant amount of time to load and/or run? JAVA compilers do not generate pure, executable object code. Rather, they generate byte code - code for an abstract JAVA Virtual Machine that does not exist as physical hardware. Platform-specific browsers take the downloaded byte code and complete the object-code generation process. Thus, a Netscape Navigator browser on a Macintosh can download and execute the same byte code as an Internet Explorer browser running on an Intel-based PC.

In some sense, the browser is acting like an interpreter, in that it must create the object code required to run. Any interpretive process by its very nature will be at least an order of magnitude slower than a comparable "pure-compilation" process. In addition, the speed at which the byte code is received from the Web is orders of magnitude slower than merely accessing the file on a local disk. Thus, both factors contribute to making applet execution much slower than we would like - in other words, while we gain greater compatibility across platforms, we lose precious response time seconds.

Scalable architectures and market-driven compatibility are the concepts that drive capacity planning for heterogeneous computing systems. The key here is the network - the glue that connects the seemingly different components of the architecture. Thus, capacity planning becomes less a function of, say, counting available MIPS, and more a function of the anticipated new business that has to be processed. New business implies that:

  • We want to scale our applications up to process more work;
  • We want them to run on any new hardware we acquire;
  • We need to connect applications (i.e. data) that currently exist on different platforms; and
  • We don't want to re-invent or convert anything, if we can help it, to keep our costs down and our productivity up.

Incremental Capacity Costs

Presumably, a great advantage of modern heterogeneous systems is being able to buy needed capacity in small increments. Small systems have exhibited this characteristic for some time; this allows a more accurate sizing of necessary equipment to business needs. And over the past decade, we've seen the introduction of CMOS processors on the mainframe having the same effect. All of this translates into less excess capacity and therefore lower overall cost - or so one might think.

As we've said before, scalability is key to capacity planning for heterogeneous systems. Economically, scalability is a primary contributor to reduced cost. The theory is that you buy enough capacity to do your processing now; if additional capacity is required in the future, it is acquired at a reduced unit cost because of the constant improvement in price/performance ratios.

Is there a fallacy in this thinking? Consider what happens during the entire life cycle of equipment - especially client/server equipment. You buy what you need today - and incur both acquisition and installation costs. Over time, you also incur operational costs (licensing fees, support personnel, and maintenance). But is that the complete list of costs? What happens when additional capacity is needed, i.e. one server needs more capacity? Yes, you acquire a bigger server, but what do you do with the old one? Do you throw it out? Most companies would roll the server over to a new place - that is, the old server is likely to replace an even smaller machine somewhere else in the organization. This may cause a cascading effect, and there are costs that must be incurred when installing each old machine in its new place, e.g. installing/licensing new software, testing, support personnel costs, etc. On a smaller scale, think about what you do with that old PC at home when you purchase a new one. Do you throw it out, or do you invariably give it to the needy - your spouse, children, sister, brother, mother, etc.?

In the mainframe environment, these rollover costs were seldom encountered. Processors were generally exchanged or added. But in the distributed environment, a processor swap can cause multiple rollovers. If the rollover costs become significant, it may become uneconomic at some point to do the rollover!

While this may be surprising, the situation does point out that we should understand the actual magnitude of rollover cost when building a financial model over the life span of a distributed system. James Cook proposed such a financial model: the life-cycle cost equals the initial acquisition cost, plus the operational cost, plus N times the installation cost, where N is the total number of swaps in the rollover series. The net impact of a first processor swap may be an increase of nearly 50 percent over the original acquisition cost, 100 percent for the second swap, and 150 percent for the third swap! Note, too, that at some point rollover costs will consume any savings to be gained from cheaper MIPS in the future. Thus, spending a little more on capacity initially may actually avoid a processor swap (and its costs) later.
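As a worked illustration of Cook's model, the sketch below plugs in assumed dollar figures (chosen so that one swap's installation cost is roughly half the original acquisition cost, echoing the percentages above); none of the numbers come from the article.

```python
# Cook's life-cycle cost model, sketched with assumed dollar figures:
#   life-cycle cost = acquisition + operational + N * installation
# where N is the number of swaps in the rollover series.

def life_cycle_cost(acquisition, operational, installation, swaps):
    return acquisition + operational + swaps * installation

acquisition = 100_000.0   # initial purchase (assumed)
operational = 60_000.0    # licenses, support, maintenance over the life span (assumed)
installation = 50_000.0   # install/license/test a rolled-over machine (assumed)

for n in range(4):
    total = life_cycle_cost(acquisition, operational, installation, n)
    added = n * installation / acquisition
    print(f"{n} swap(s): ${total:,.0f} total; "
          f"rollover adds {added:.0%} of the original acquisition cost")
```

With these assumed figures, three swaps add 150 percent of the original acquisition cost in installation work alone - exactly the kind of cost that acquisition-focused budgeting never sees.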

The bottom line: the focus of financial management strategies for IT has long been on acquisition. But the realities of the life cycle of equipment in distributed systems dictate that ongoing operational costs (that address rollover) demand more attention. Old, conventional wisdom just doesn't apply to distributed systems.

Vertical and Horizontal Integration

Compatibility forces standards, and because new systems are built following those standards, all of the pieces (hardware, software, and communications) required are easier to put together. Thus, a major shift in the computer industry has made end-to-end business solutions more feasible. The computer industry has realigned itself from having a vertical orientation (single company vendors providing chips, systems built using those chips, the operating system, and the network hardware) to a horizontally integrated, customer-driven set of solutions. In the vertical orientation, customers would buy almost everything from a single vendor, e.g. IBM, HP, Digital, NCR and others. Integration among vendors was difficult and expensive. The costs for switching were very high since every piece of the solution would likely have to be changed.

The horizontal orientation gives customers a choice in each of the infrastructure layers (chips, system software, business applications, etc.). While many companies operate in more than one layer, a customer can choose any vendor in any layer, which maximizes flexibility.

Within Computer Performance Evaluation (CPE) and Capacity Planning (CP), the same need exists. The 70's and 80's saw a cottage performance industry emerge, with tools like performance monitors and performance databases. Early on, it was common for an organization to get all of its tools from one vendor - often that vendor was the only one with a specific type of tool. Hence the background behind the vertical orientation for CPE and CP tools.

But competition drives creativity, and this resulted in organizations changing their acquisition strategy for CPE and CP tools. Today, it is rare to find an organization that has all of its tools from one vendor.

Unfortunately, the problem of analyzing data collected from a diverse set of tools has not been solved completely. Consider the following scenario: your organization has a classic multi-tiered application. At the back end is a mainframe with a large DB2 database and a subsystem like CICS. In the middle are smaller UNIX or NT servers, and at the client level lie Windows 9x or NT workstations, and perhaps other desktop systems like Macs. Perhaps the clients all use browsers to access data on the corporate intranet. In this configuration, a capacity planner would like to see historical data on workload growth. Can transactions be tracked across all of these platforms? More specifically, can the number of I/Os on each platform be captured and reported? Finally, what about the response time that the user actually sees?

Don't think that nothing has been done to address this problem. Twenty-year-old tools like CA-MICS from Computer Associates and Merrill's MXG act as repositories for diverse types of measurement data captured from different operating platforms, thus simplifying the 'where to get the data' question. The Landmark Corporation recently patented a technique to capture true end-to-end response time, and competitors like Candle offer similar functionality. But reporting packages still report on activity one system at a time; all too often they do not offer reports that describe activity across the diverse platforms. Many popular modeling packages still focus on modeling each server individually, hand-waving around the larger problem of component-to-component interactions. BMC Software (http://www.bmc.com/) offers the BEST/1 family of analytic modeling products that span a diverse set of platforms, while companies like NetFusion (www.netfusioninc.com), SES (http://www.ses.com/) and Datametrics (http://www.datametrics.com/) each offer simulation-based modeling that allows component-to-component interactions to be described.

But the real problem is not just that there are no standards for performance data. Tool vendors produce what they want, in formats they themselves define. There are no templates defined for multi-system performance reporting. What's a template? Consider it analogous to a profit-and-loss statement from the accounting world: even though companies are organized differently, sell different products, offer diverse services, etc., profit-and-loss statements offer consistency and a means for comparing companies' performance. There are also no defined public interfaces for accessing the (sometimes proprietary) data produced by many software monitors. Object-oriented design allows a public interface to be defined that would hide implementation details or changes, and in an age where object-oriented-everything has come into its own, this will no doubt change, and change soon. In the modeling world, capacity planners still rely on tools like Microsoft's EXCEL spreadsheet program to manually prepare data for input to a model. But as more and more computing objects have to be modeled, the time required for manual data preparation becomes exorbitant. This dictates that model vendors provide greater data filtering/selection/cleaning functionality as part of their overall product offering.
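To illustrate what such a public interface might look like, here is a hypothetical sketch; the class and method names are inventions for illustration, not any vendor's actual API.

```python
# A hypothetical public interface for performance data, hiding each monitor's
# proprietary format behind a common set of methods. All names are illustrative.
from abc import ABC, abstractmethod

class PerformanceSource(ABC):
    """What a monitor vendor could expose, regardless of internal data format."""

    @abstractmethod
    def metrics(self):
        """Return the names of the metrics this source can supply."""

    @abstractmethod
    def samples(self, metric, start, end):
        """Return (timestamp, value) pairs for one metric over an interval."""

class MainframeSource(PerformanceSource):
    # Implementation details (record parsing, file layouts) stay hidden here.
    def metrics(self):
        return ["cpu_busy_pct", "io_rate", "response_time_s"]

    def samples(self, metric, start, end):
        return []   # a real source would read its own data store

def cross_platform_report(sources, metric, start, end):
    """One report spanning every platform that can supply the metric."""
    for source in sources:
        if metric in source.metrics():
            data = source.samples(metric, start, end)
            print(f"{type(source).__name__}: {len(data)} samples of {metric}")
```

The point is not the particular methods but the separation: reporting and modeling tools would program against the interface, while each vendor keeps its internal formats private.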

The Past: Measurement, Analysis, Reporting and the Performance Database

Historically, the philosophy behind measurement has been simple: if you can capture it, then you should save it to a file. Our industry has been rather anal about this philosophy, resulting in an incredible glut of performance metrics. Unfortunately, not much thought went into answering the question "what can we do with this metric?" Analysis has classically been rooted in statistical techniques such as regression (for workload trending) or cluster analysis (for workload characterization). Reporting has long consisted of classic tabular reports with many columns of boring data. Many of us remember having job responsibilities that involved poring over these reports looking for anomalies. And these reports often come from data stored in a performance database, which grows in size each day, presenting us with the problem of devising an effective archiving scheme.

Shouldn't the future address these long-standing problems? Data dictionaries go a long way toward helping us understand the properties of specific metrics. But perhaps we should apply statistical analyses to the stored metrics to identify those that actually relate to the performance users experience. If meaningful metrics were identified, along with their relationships to other meaningful metrics, we could likely cut down on the number of metrics we store in the performance database. Ideally, we could eliminate useless reports.
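A minimal sketch of that idea, assuming we already have samples of candidate metrics alongside the response time users actually experienced; the metric names, the generated data, and the 0.5 cutoff are all assumptions.

```python
# Prune the metric glut: keep only metrics whose correlation with user
# response time is material. Data, names and the cutoff are assumptions.
import numpy as np

def keep_informative_metrics(metric_samples, response_time, min_abs_corr=0.5):
    kept = []
    for name, values in metric_samples.items():
        corr = np.corrcoef(values, response_time)[0, 1]
        if abs(corr) >= min_abs_corr:
            kept.append((name, round(float(corr), 2)))
    return kept

rng = np.random.default_rng(0)
response = rng.normal(2.0, 0.5, 200)                        # seconds users saw
metric_samples = {
    "cpu_busy_pct": response * 30 + rng.normal(0, 3, 200),  # tracks response time
    "paging_rate": rng.normal(50, 10, 200),                 # unrelated noise
}
print(keep_informative_metrics(metric_samples, response))
```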

In addition, analyses should identify critical thresholds for specific metrics, and more importantly, combinations of thresholds among multiple metrics that indicate exceptions. With systems becoming more complex, the focus on reporting must change to an alert orientation, with a drill-down capability to indicate why the alert was issued, what the underlying evidence is, and (ideally) what should be done to correct the condition. We'll discuss this again in the section entitled "The Future".
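As a hedged sketch of what an alert built from a combination of thresholds might look like (the metric names, limits, and suggested action are illustrative assumptions, not a real product's rules):

```python
# An exception-oriented alert: fire only when user response time and at least
# one resource metric breach their thresholds, and carry the evidence along
# for drill-down. Metric names and limits are illustrative assumptions.

THRESHOLDS = {"response_time_s": 3.0, "cpu_busy_pct": 90.0, "queue_length": 12}

def evaluate(sample):
    breaches = {name: sample[name] for name, limit in THRESHOLDS.items()
                if sample.get(name, 0) > limit}
    if "response_time_s" in breaches and len(breaches) > 1:
        return {
            "alert": "service degradation",
            "evidence": breaches,                 # the drill-down detail
            "suggestion": "investigate the resource metrics listed in evidence",
        }
    return None                                   # no exception, no report

print(evaluate({"response_time_s": 4.2, "cpu_busy_pct": 96.5, "queue_length": 4}))
```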

The Present: Capacity Planning for the Web and Heterogeneous Systems

E-commerce and Web-based applications are coming into their own. Web sites now offer online catalogs for customers to browse, while service-oriented organizations are building information kiosks that quickly direct visitors to the services and information they seek. Government agencies are embracing the Web as well, offering information that was previously hard to find. Bureaucracy, the single greatest problem that bogs down companies and government agencies, is finally being attacked and minimized thanks to the focus on understanding and building streamlined procedures for implementation on the Web.

But within this exciting new world, the capacity planner is faced with an old problem posed a new way.

Consider the following "dinosaur" scenario: a transaction involves using CICS to access data from a DB2 database. CICS is up, but the DB2 database is down. Is the application considered available? Performance reports addressing individual subsystems will show that CICS was available. But since DB2 was down, the transaction couldn't run.

Consider a common e-commerce application from a retail-type Web site: someone places an order via the Web for a complete PC package: monitor, printer, system unit, etc. Perhaps the monitor comes from one supplier, the printer from another, and the other components come from yet a third supplier. Once the transaction is entered, orders are placed electronically with the appropriate suppliers. What if the Web site of one of the selected suppliers is down or just very slow? Is the application considered available?
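The point can be made with back-of-the-envelope arithmetic: when every component must be up for the transaction to complete, end-to-end availability is roughly the product of the component availabilities. The percentages below are assumptions for illustration.

```python
# End-to-end availability when every component in the chain must be up.
# The component availability figures are assumptions for illustration.

def serial_availability(component_availabilities):
    result = 1.0
    for availability in component_availabilities:
        result *= availability
    return result

# CICS up 99.5% of the time, DB2 up 98.0% of the time:
print(f"CICS + DB2 transaction: {serial_availability([0.995, 0.980]):.1%}")

# An order that touches three supplier Web sites, each 97% available:
print(f"Three-supplier order:   {serial_availability([0.97] * 3):.1%}")
```

Under these assumptions the three-supplier order completes only about 91 percent of the time, even though each individual site looks respectable on its own.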

We have all visited Web sites that are just too slow. How many times are you willing to go back and visit them? The key to success in the new information age is not only attracting visitors to a Web site, but making them want to come back again and again. So while your organization's Web site might be up, function-rich, and performing well, its dependence on a poorly performing Web site elsewhere can be disastrous for you.

E-commerce and similar Web-based applications involving external links are highly dependent on the availability and performance of other Web sites - often not under the control of the originating application! Expect emerging e-commerce applications to adopt strategies that automatically switch to another supplier (i.e. Web site) if no response is received within some preset amount of time. Additionally, expect suppliers who often exhibit poor performance and low availability to be relegated to low-priority positions on an organization's list of suppliers. All of this will involve capturing response time information for linked Web sites, storing those times, analyzing them, and ultimately automating the policy decision about using a supplier in the future.
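A hypothetical sketch of such a policy is shown below; the supplier URLs, the two-second timeout, and the logging scheme are placeholders for illustration, not a description of any real application.

```python
# Try the preferred supplier first, fall back after a preset timeout, and log
# response times so suppliers can be re-ranked later. URLs, the timeout and
# the logging scheme are placeholders for illustration.
import socket
import time
import urllib.error
import urllib.request

SUPPLIERS = ["https://supplier-a.example/order",
             "https://supplier-b.example/order"]   # ordered by current preference
TIMEOUT_SECONDS = 2.0
response_log = []   # (supplier, seconds, succeeded) kept for later analysis

def place_order(payload):
    for url in SUPPLIERS:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, data=payload,
                                        timeout=TIMEOUT_SECONDS) as response:
                response_log.append((url, time.monotonic() - start, True))
                return response.status
        except (urllib.error.URLError, socket.timeout):
            response_log.append((url, time.monotonic() - start, False))
    raise RuntimeError("no supplier responded within the timeout")
```

The interesting part for the capacity planner is the log, not the order itself: analyzed over time, it is the evidence that drives re-ranking or dropping a supplier.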

One more thought: modeling tools are not yet sophisticated with respect to modeling Web sites. If your organization has a popular Web site, and hit counts are increasing rapidly, the capacity planner is faced with the age-old problem of determining what will be needed to process the increased workload effectively while providing acceptable performance. The availability and performance issues just discussed must be included in any modeling exercise as well. Expect modeling tools to begin to focus on these questions.
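One deliberately simple illustration of the kind of question such tools must answer, using a textbook single-server (M/M/1) approximation rather than any vendor's method; the per-hit service time and the hit rates are assumptions.

```python
# A textbook M/M/1 approximation (not any product's algorithm) showing how
# response time degrades as the hit rate approaches a server's capacity.
# The per-hit service time and the hit rates are assumptions.

SERVICE_TIME_S = 0.05    # seconds of server work per hit -> capacity of 20 hits/s

def mm1_response_time(hits_per_second):
    utilization = hits_per_second * SERVICE_TIME_S
    if utilization >= 1.0:
        return float("inf")              # demand exceeds capacity
    return SERVICE_TIME_S / (1.0 - utilization)

for rate in (5, 10, 15, 18, 19.5):
    print(f"{rate:>5} hits/s -> average response time {mm1_response_time(rate):.2f} s")
```

Even this toy model shows the non-linear behavior that surprises Web site owners: response time barely moves until utilization gets high, then climbs steeply as hit counts approach capacity.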

The Future: Exploiting New Technologies

The future is often difficult to predict. Yet technologies exist today that would appear to be ready to change many aspects of how business will be conducted.

Data Mining

Analytical software enables you to shift human resources from rote data collection to value-added customer service and support, where the human touch makes a profound difference. Intelligent software will continuously scan sales data, tracking trends and noticing which products are selling and which are not. No longer will sales associates plow through fat paper reports to find out whether sales are going well. If sales are proceeding as desired, no human intervention is needed; only exception reports are generated. The implementation and performance of this type of support software will be paramount to the success of an organization.

Clearly, there is a natural resistance to giving up any decision-making function and letting a machine do it. But manually managing systems that generate exponentially growing volumes of performance data is simply impossible. With the increasing number of systems and the complexity that this brings, we're incapable of recognizing patterns in such large amounts of data by hand.

Using data mining techniques, we can find useful patterns in large amounts of data. OLAP (online analytical processing) was the first target for data mining. Data originally collected for accounting and bookkeeping purposes was recognized as a potential mine of information for modeling, prediction, and decision support. This is very similar to what happened in the performance arena: measurements designed for accounting purposes that were collected on the mainframe were subsequently used for early performance analyses. Companies began creating corporate data stores, or data warehouses to satisfy new demands for business analysis. Sophisticated data mining tools can navigate through an information rich environment without requiring users to be experts in statistics, data analysis or databases.

The not-so-distant future in CPE and CP is bright for the application of data mining. Historical data abounds, with an abundance of metrics. The challenge practitioners face is the formulation of key questions. Data mining should be able to integrate sales-type figures with performance metrics to provide new insights into business relationships. We should (hopefully) be able to define real business transactions and Natural Forecasting Units (NFUs) that would allow queries such as "how much more memory do I need if my sales of red widgets increase by 20 percent?"
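A hedged sketch of how such a query might be answered: fit a simple regression between an NFU (weekly red-widget sales) and a resource figure (memory in use), then project. The historical numbers are made up for illustration.

```python
# Relate a Natural Forecasting Unit (weekly red-widget sales) to a resource
# figure (memory in use) with a simple linear fit, then answer the
# "20 percent more sales" question. All historical figures are made up.
import numpy as np

weekly_sales = np.array([800, 950, 1100, 1300, 1500, 1750])   # red widgets sold
memory_mb = np.array([410, 455, 520, 600, 690, 800])          # memory observed

slope, intercept = np.polyfit(weekly_sales, memory_mb, 1)

current_sales = weekly_sales[-1]
extra_sales = 0.20 * current_sales
print(f"A 20% sales increase implies roughly {slope * extra_sales:.0f} MB more memory")
```

The hard part, of course, is not the arithmetic but capturing business and performance figures in a form where they can be joined at all.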

Training and Distance Learning

Training should be available at the employee's desk as well as in the classroom. All training resources should be online, including systems to provide feedback on the training. We've all heard about the use of correspondence courses by professionals who wanted to be trained in a new area. Originally, a correspondence course was conducted by mail, with assignments being received by a student, worked on at their own pace, and returned via mail when completed. This educational model was never really as good as the traditional classroom, but it was effective.

With current multimedia and Web technologies, the notion of a correspondence course has been redefined. Distance learning now allows geographically dispersed students to interact with each other as well as with an instructor. Multimedia lectures are being created that can be viewed at any time by a student. Static Web pages are being used to describe assignments, to identify required reading, and to supplement an audio or video lecture. Listservs and newsgroups are being used to allow students to respond asynchronously to questions posed by an instructor, and everyone in a class can see both the question and the answer. Although asynchronous, these technologies allow for interaction in a virtual classroom.

There are some advantages to this mode of learning, too. How many times have you sat in a 'real' class with a question in your head, but felt a bit intimidated about speaking up? In an asynchronous mode, all eyes will not be on you, and some of that intimidation should be gone. In addition, we expect chat rooms to be set up for distance-learning classes, where all students in a class connect at the same time and interact in real time. Chat rooms would supply a synchronous mode of learning, similar to the traditional 'live' classroom experience.

Tools such as Microsoft Camcorder (packaged as part of Windows 98), which capture mouse movements and clicks as well as keystrokes, have been available for some time. They produce .avi files that can be played back on a variety of platforms. Thus, we would expect training in the use of specific tools to use this technology. Such tools also allow an audio track to be recorded, annotating what the user is seeing on the screen.

What does distance learning imply with respect to performance and capacity? Expect dedicated training servers to emerge, since audio and video files can be unruly with respect to size. And because those files are potentially very large, capacity planners will likely have to concentrate on sizing network bandwidth correctly. This may, in fact, become the key issue in developing a true distance-learning-based training environment.
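The arithmetic involved is simple but unforgiving; the stream bit rate, class size, and overhead factor below are assumptions chosen only to show the shape of the calculation.

```python
# Rough sizing of the network bandwidth a distance-learning service would need.
# The stream bit rate, peak class size and overhead factor are assumptions.

video_kbps_per_stream = 300      # assumed encoding rate for one lecture stream
peak_concurrent_students = 120   # assumed peak concurrency
overhead_factor = 1.25           # protocol overhead plus headroom (assumed)

required_mbps = (video_kbps_per_stream * peak_concurrent_students
                 * overhead_factor) / 1000
print(f"Peak bandwidth needed: roughly {required_mbps:.0f} Mbps")
```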

Reduced Time for Data Collection

Portable devices and wireless networks extend your information systems into the factory, warehouse and other areas. Small portable devices are emerging that make data gathering easy. Both Saturn and Boeing have implemented trouble-ticket reporting using handheld PCs, resulting in dramatic reductions in turnaround time for problems. Wireless inventory reporting systems allow for improved accuracy and cut data collection time in half. Ultimately, these systems will proliferate throughout organizations and must be considered within any significant capacity planning exercise. In effect, the workload that these devices present is dramatically different in quantity from traditional key-in transactions, and the effective transaction arrival rate will be different as well.

Artificial Intelligence and Pro-Active Alert Policies

Many IT organizations make extensive use of automated event detection systems to discover potentially harmful software, hardware and environmental events. But along with the use of event management is an exposure to alarm showers - a situation that occurs when a large number of events are triggered at the same time and the management interface becomes overwhelmed. Administrators can no longer ascertain where the most severe problems reside, let alone their root cause.

Component-based applications are built using applets, Java, or ActiveX and are assembled dynamically at runtime through a browser interface. As the number of these applications increases, new events will need to be tracked, such as component collisions, lack of space for required components, missing components, component version dependencies, expired components and component corruption. An IT organization already overwhelmed with alarm showers will likely be buried by the introduction of component-based applications. It is not humanly possible to monitor and manage all the events, or state changes, in a distributed enterprise of component-based applications.

To address this problem, the IT staff must manage at a higher level. Management tools must be able to dynamically adapt and learn about new applications and technology introduced into the environment. Rather than manually building management rules for each new application, IT needs a management tool that can automatically learn about new applications, filter out the superfluous information, and infer when the application is shifting from a desired state. Furthermore, it would be advantageous if the management tool could proactively predict when there might be a change in the desired state of an application.

The only hope in this arena is the application of Artificial Intelligence (AI) to performance. AI agents would accumulate a large set of statistics from the system being monitored. Via real-time analyses, the agent would learn which system states are deemed "good" and "bad." Once a significant number of observations has been accumulated, an AI agent would be able to heuristically determine whether the system has departed from its desired state. If so, a management interface (e.g. console, pager, e-mail, etc.) could be alerted. This approach has multiple benefits, including vastly reduced network traffic, minimized CPU overhead on the systems being managed, and the ability to predict when a system is about to transition out of a "good" state. Expect IT organizations to efficiently manage multiple component-based enterprise applications using this type of approach.
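A minimal sketch of that behavior, using a simple three-sigma rule as a stand-in for whatever heuristics a real agent would apply; the sample values are invented.

```python
# An agent that accumulates observations taken while the system is deemed
# "good", then flags samples that drift far from what it learned. The
# three-sigma rule and the sample values are simplifying assumptions.
import statistics

class StateAgent:
    def __init__(self):
        self.good_observations = []

    def learn(self, value):
        """Record an observation made while the system was in a good state."""
        self.good_observations.append(value)

    def departed_from_good_state(self, value, sigmas=3.0):
        mean = statistics.mean(self.good_observations)
        spread = statistics.pstdev(self.good_observations) or 1e-9
        return abs(value - mean) > sigmas * spread

agent = StateAgent()
for healthy_response_time in (1.1, 0.9, 1.0, 1.2, 1.05, 0.95):
    agent.learn(healthy_response_time)

if agent.departed_from_good_state(2.6):
    print("alert the management interface (console, pager, e-mail, ...)")
```

Because the analysis runs next to the system being watched, only the alert (not the raw statistics) needs to cross the network - one of the benefits claimed above.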

The ability to predict when there will be state changes has many potential benefits. This technology can be used to ascertain when a job stream, for example a backup, is unlikely to complete in the desired timeframe because of anticipated network traffic. The job stream could be automatically modified to start the backup one hour earlier. AI analysis can provide the ability to learn from experience - experience vastly more complex than humans can assimilate. This type of analysis will make recommendations on changing existing management policies to improve the overall service provided.

Potentially, "what if" scenarios could be posed and subsequently simulated to determine the potential effects of configuration changes and technology introductions without committing to suffering their consequences. With this potential simulation/evaluation ability, IT can be more responsive in adapting the IT infrastructure to remain in harmony with the organization's changing business requirements.

Summary and Final Thoughts

"I don't see information technology as a stand-alone system. I see it as a great facilitator. And maybe most important, it's a reason to keep asking yourself the question - why, why why." - Paul O'Neill, Chairman and CEO of Alcoa.

The simple truth is that information technology enables reengineering. And advances in information technology will force organizations that want to be competitive to ask O'Neill's "why, why, why" question.

Capacity planners and performance professionals have been, and will continue to be, confronted with the challenge of understanding the impact that new technology will have on an organization. Perhaps that is the one constant that permeates CPE and CP. Our objective here was to enlighten you to some of the issues coming down the superhighway.

Finally, consider some of the following business lessons as we approach the next millennium; it is likely that they, too, will impact performance.

  • A lousy process will consume ten times as many hours as the work itself requires. A good process will eliminate the wasted time; technology will speed up the remaining real work.
  • A CEO must regard IT as a strategic resource to help the organization generate revenue and/or remain viable.
  • The CIO must be an integral part of developing the business strategy and must be able to articulate in plain language what IT can do to help that strategy.
  • PCs and connectivity make new educational and training approaches possible.
  • Training costs should be treated as part of your basic infrastructure costs, especially in an era of rapid introduction of new technologies. A critical component of competitive advantage is having personnel trained to take maximum advantage of new technologies.
  • In business as well as the government, (s)he who has the shortest procurement and deployment cycle wins.

Remember - "An organization's ability to learn and translate that learning into action rapidly is the ultimate competitive advantage" - Jack Welch, Chairman, General Electric.
