Year 2000 Testing: Managing to Beat the Odds

With regard to managing your Y2K project, there’s good news and bad news. The bad news is that the deadline is firm, testing costs will be massive, most of your information systems will be affected and your customers are going to hold you 100 percent accountable for the effectiveness of your testing program.

So what’s the good news? It turns out that there are cost-effective, workable methodologies for streamlining Y2K testing. By applying the proper amount of caution and planning at the start of your testing schedule, you can efficiently achieve the kind of performance and accuracy you need, the documentation you require as evidence of due diligence, and — as a kind of built-in bonus — a testing model that can be adapted to future application development and enhancement projects.

Start at the Beginning

A well-managed Y2K testing project begins long before actual testing starts, at the stage when crucial decisions are being made about the entire renovation program. Early planning establishes a pattern of steadily diminishing tasks, which reduces bottlenecks, frees up the time and energy of personnel, and avoids last-minute planning crises, allowing all resources to focus primarily on test execution once testing has begun.

A number of specific decisions and assessments that lay the groundwork for test management are made during this pre-renovation phase. One of the most relevant and influential procedures is identifying the real-life business operations or processes (vs. program code) supported by date-affected systems. Here, the analysis relies heavily on the input of end users and investigation of the actual front-end functionality of business applications, since the original developers are rarely available to provide expertise "from within." A related process is application inventory, whereby the specific applications and program segments that require modification are delineated. Priorities must also be determined, establishing which applications should be handled early and which can wait until later in the renovation schedule. Choices here can affect the efficiency of the development and testing process, since the effective renovation of one application may depend on the prior revision and testing of a related system.

Other important decisions handled before revision include the remediation methodology (short-term windowing solution or long-term date field expansion), the physical infrastructure that will support applications being revised, and the standards by which the company will deem its systems to have achieved Y2K functionality. It is also very helpful to collect baseline data at this early stage, since the only alternative is to retain existing code for later testing alongside renovated applications. Finally, budget is a critical factor — test management is clearly affected by the extent of cost limitations on the revision process, including, for example, the volume of applications that can be renovated within the given time frame, the depth of revisions possible within the budget and performance expectations compared to cost considerations.
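The two remediation methodologies can be illustrated with a short sketch. This is not any particular vendor's implementation, just a minimal example; the pivot value of 30 is an assumed convention, and real renovations apply the chosen approach across every date-affected field.

```python
PIVOT = 30  # assumed pivot: two-digit years below 30 are read as 20xx

def window_year(two_digit_year):
    """Windowing (short-term fix): stored data keeps its two-digit
    format; only the interpretation changes, based on a fixed pivot."""
    if two_digit_year < PIVOT:
        return 2000 + two_digit_year
    return 1900 + two_digit_year

def expand_year(two_digit_year, century):
    """Date field expansion (long-term fix): the stored field is
    rewritten once as a full four-digit year, so no pivot logic is
    needed afterward."""
    return century * 100 + two_digit_year
```

Windowing is cheaper to apply but every windowed field carries the pivot assumption forward, whereas expansion converts data and code once and permanently.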

Ground Plan for Testing

The next step — creating the actual test plan — is, in part, a reflection of pre-renovation decisions, but also supplies new information that will help shape the test execution stage. The test plan is essentially the blueprint from which all tests will be constructed. Now is the time to make critical decisions, since changes are difficult and costly to implement once construction has begun. Furthermore, test management is dramatically simplified if the test plan is both clear and comprehensive.

Simply put, the content of the test plan should clearly delineate the what, when, where and how of the testing program. Components fall into three logical categories: Planning & Organization, Control and Analysis & Accountability.

In the first category, decisions start with the early identification of testable units. This is followed by the creation of test groups, which are the programs, processes and applications that are either alike enough to be tested together, or perform highly integrated functions and, therefore, must be tested in tandem to produce reliable performance data. Also significant are the date types (i.e., pre-Y2K, near-Y2K, post-Y2K and special dates) and field types — protected, inserted, or calculated — that are to be verified. The role of end users is yet another piece of the puzzle — pacing and scheduling partially depend on whether end users will write scripts themselves or simply provide input for script "authors" on the test team.
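The date-type categories above can be made concrete as a table of boundary dates. The specific dates listed here are illustrative assumptions; an actual test plan derives its dates from the applications under test.

```python
from datetime import date

# Illustrative boundary dates for each verification category.
TEST_DATES = {
    "pre-Y2K":  [date(1998, 12, 31), date(1999, 1, 1)],
    "near-Y2K": [date(1999, 12, 31), date(2000, 1, 1)],
    "post-Y2K": [date(2000, 12, 31), date(2001, 1, 1)],
    "special":  [date(1999, 9, 9),    # 9/9/99, often used as a sentinel value
                 date(2000, 2, 29)],  # 2000 is a leap year
}

def is_leap(year):
    """Full Gregorian leap-year rule. Code that omits the 400-year
    clause wrongly treats 2000 as a non-leap year -- one reason
    2000-02-29 belongs among the special test dates."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```

The special dates matter because they expose failure modes that ordinary rollover tests miss: sentinel values misread as real dates, and the century leap-year exception.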

The Control category covers issues such as whether to build an isolated test lab or run tests on actual production systems. This in turn helps determine whether tests will utilize current hardware or rely on special test equipment purchased separately. The database population method is another formative area; alternatives include duplicating the production database, assigning a subset of production data, or devising a separate test database that closely mirrors production systems. Various other Control issues include how to leverage off-peak hours with unattended test runs and how to ensure that different test teams can share information quickly and easily.
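Of the database population methods mentioned above, the "subset of production data" option can be sketched as follows. The schema, table name and sampling strategy here are hypothetical, chosen only to show the idea of mirroring production distributions at a fraction of the size.

```python
import sqlite3

def populate_test_db(prod_conn, test_conn, sample_every=100):
    """Copy every Nth production row into an isolated test database,
    so the test data reflects production content without duplicating
    the full database. 'accounts' is an assumed example table."""
    test_conn.execute(
        "CREATE TABLE IF NOT EXISTS accounts (id INTEGER, expiry TEXT)")
    rows = prod_conn.execute("SELECT id, expiry FROM accounts ORDER BY id")
    for i, row in enumerate(rows):
        if i % sample_every == 0:
            test_conn.execute("INSERT INTO accounts VALUES (?, ?)", row)
    test_conn.commit()
```

A deterministic sample like this also keeps test runs repeatable, which matters later when archived results must be compared across test cycles.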

Lastly, Analysis & Accountability involves significant decisions regarding reporting methodologies, progress and productivity tracking, and techniques for archiving and retrieving test data.

Technology: The Make or Break Point

Arguably, the element of testing that has the greatest impact on management is the testing technology, the backbone of the Y2K testing effort. Currently, test teams are faced with a diverse set of tools — each with its own particular combination of features and capabilities — and consider themselves lucky if the tools available support at least a majority of their testing requirements. However, from the standpoint of test management, certain features are more conspicuous than others in their power to positively or negatively affect the overall testing program. By carefully selecting tools for their management value, you help ensure the success of the project while also optimizing time, money and corporate resources.

For example, in the area of regression testing, highly flexible tools that are easy to learn and use tend to deliver the quickest, most accessible results. Some of the basic features to look for include a visual representation of the test plan, easy integration with your company’s PC-based email, simple ways to transfer data between test repositories and other corporate project management systems, and automatic information flow from test systems to your test management tool. In particular, the technology should provide an uncomplicated method for receiving test information from diverse locations and formats.

A comprehensive regression tool will also allow you to group diverse scripts for a single test run, integrate regression tests with data aging and date simulation tools, schedule automatic, unattended test runs at off-peak hours, choose between separate or grouped data repositories, and run manual tests if desired. Also important are security mechanisms that allow administrators to set permissions for test plans and test data. Finally, check for special technical capabilities, such as object-based functionality, the ability to apply a single script to multiple test runs using different data sets, and both windowing and date field expansion options.
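The "single script, multiple data sets" capability described above is the essence of data-driven testing. This minimal sketch is an assumption about how such a tool works internally, not any vendor's API; the function under test and its data sets are invented examples.

```python
def renovated_expiry_check(issue_year, term_years):
    """Example renovated routine under test: compute a four-digit
    expiry year for an account opened in issue_year."""
    return issue_year + term_years

# One script, several data sets spanning the Y2K boundary.
DATA_SETS = [
    {"issue_year": 1997, "term_years": 3, "expected": 2000},  # crosses boundary
    {"issue_year": 1999, "term_years": 5, "expected": 2004},  # post-Y2K result
    {"issue_year": 1995, "term_years": 2, "expected": 1997},  # pre-Y2K control
]

def run_script(script, data_sets):
    """Apply a single test script to every data set and record a
    pass/fail result for each run."""
    results = []
    for data in data_sets:
        actual = script(data["issue_year"], data["term_years"])
        results.append(actual == data["expected"])
    return results
```

Reusing one script across many data sets is what makes a large boundary-date matrix affordable: the test team writes the logic once and varies only the data.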

It is also helpful to view the technology from the vantage point of assessment and due diligence. From this perspective, tools should ideally include accurate spot and cumulative reports, as well as graphical ways to display test data and results. Other important capabilities include simple methods for archiving and retrieving test data, and the ability to generate progress and productivity reports on the test team.

Tests Almost Run Themselves

If earlier phases of the test management scenario have been carried out effectively, the only things left at this point are unforeseen emergencies and the straightforward administrative tasks connected with running the tests themselves. Having been relieved of the burden of late planning, the test team is free to devote all of its time and talents to time-consuming but routine matters, such as running the equipment, tracking progress, verifying schedules and deadlines, coordinating test groups, reporting to upper management and archiving test information. Although there is still a lot of work at this stage, it is no longer cluttered by unnecessary traumas and excess tasks that could otherwise send costs through the ceiling and threaten your testing timetable.

Benefits Far into the 21st Century

With a clear head and an eye focused squarely on management issues, your test team can implement a highly effective testing program that achieves Y2K compliance at the best cost, with optimum use of corporate resources. Not only that, you’ll end up with a well-proven model for future testing, and a storehouse of repeatable tests for future system enhancements. Finally, you’ll have all the records you need as evidence of due diligence, and can feel confident that you have brought your company safely into the year 2000. In effect, good planning equals good management equals good business — the turn of the century can start looking less like a threatening obstacle and more like a great business opportunity.



Jonathan Rende is Product Manager for Year 2000 Solutions at Mercury Interactive Corp. (Sunnyvale, Calif.), responsible for driving product strategy for the company’s Year 2000 testing tools.
