In-Depth

Productivity Measurement Across the Enterprise

Overheard in an elevator:

Tom: Good morning, Michele, I heard that your department implemented a new productivity program. Please tell me about it because we have some productivity issues too. A couple of our programmers quit yesterday, and I need to find a way to make up the lost time.

Michele: Hi, Tom. Well, it’s really great. We purchased a test data generator and an interactive debugging tool. I’m sure our productivity will improve significantly now. These tools will solve the problems that our programmers have been reporting.

Tom: Sounds great. I’ll look into it. Have a great day! (as he exits the elevator…)


Ever hear such conversations? Worse, have you ever been part of one? Unfortunately, too many of us have wandered down that narrow path of "keeping Johnny coding faster." If only Johnny (and Sally and Mark and Elizabeth…) were coding faster, we would finish all of our projects within budget – that’s the eternal wish. The problem is that management often puts the productivity burden on the back of the programmer. (We bought you this tool, so what’s the problem now?) In truth, programmer productivity is the end product of a process that emanates from the top of the organization and is manifested at the lowest level. If we are to "fix" the productivity problem, then we must address it at a corporate level, not at a programmer level. (Translation: We, as management, are part of the problem. Further translation: The solution to the productivity issue won’t be easy.)

Symptoms

Before we discuss an approach, let’s review some symptoms of productivity problems. These may not directly relate to your organization, but I believe you will recognize some similarities.

  • Several key people resign from the company after spending many months on a project. The remaining team is asked to work overtime and weekends to ‘hit the target date.’ After a few weeks of overtime, more people resign. The project slips, is implemented with bugs, and post-implementation support costs run higher than budgeted.
  • A new tool is implemented, but there is only token improvement. Training all of the programmers would have increased the implementation costs, so training was minimal or eliminated altogether. The programmers were not involved in the tool selection and claim no ownership of it.
  • A project starts out well, but many changes in specifications are made by the customer and IT management responds with "We can’t say no…" and the project slips and slips and slips.
  • A department has a mix of 286, 386, 486 and Pentium PCs, each with a different assortment of software versions. A document written by Tom can’t be read by Sally, and the new debugging tool won’t run on the 286 or 486 PCs. Programmers are asked to "work around it." (Yes, it’s true. I’ve seen programmers today working with 286 PCs – or with 3278 terminals – trying to keep pace on a project where the standard for documentation is Microsoft Office 97. Go figure.)

This isn’t rocket science. If we were running a trucking company, the need for adequate trucks, maintenance schedules, periodic parts replacement, adequate support facilities throughout the distribution area and trained, well-rested drivers would be obvious. Somehow, in IT, the perception is too often that it is the programmer’s responsibility to learn the new environment, to compensate for management’s poor estimates and to make up for the lack of effective tools.

In a trucking company, no one would ask the driver to repair the truck, to buy new tires, to find new routes or to drive faster. An old truck can’t carry the load, an ill-maintained truck won’t go the distance, a tired driver will have an accident and a schedule that is too aggressive will be undone by errors in execution – all of which impact delivery dates and escalate costs. Why aren’t we as smart as trucking company managers? Trucking managers know that the cost of a new truck is high, but it is part of doing business. We IT managers think the cost of training and staffing can be avoided if we "manage well." In such thinking, we’re just sniffing "management glue." It gives us a high and makes us feel the problem will go away – but it never does.

The challenge is to implement productivity measures that monitor all of us, not just the programmers. Doing that requires some honest discussion of problems, plus a review of some recent project failures. The measures don’t need to be complex, but they need to include the entire organization.

Some Background

Before implementing any of these measures, you will need to consider some implementation mechanics. Here are a few:

Support. Measurements should be reviewed to ensure that the measures to be implemented do not work against each other. For example, a measure that tracks speed of implementation conflicts with one that tracks quality of implementation. Both measures can be valid, but when one goes up, the other will likely go down. Be prepared for this.

Tracking. A simple means of tracking should be implemented parallel to the measurement so that data is captured as part of a normal routine. If a simple means cannot be identified, implementation of the proposed measure should be postponed until a capture mechanism is available.

Objective. The objective of the measures should be reviewed with key customers prior to implementation to ensure that the intended goal of the measure tracks a performance indicator of importance to the customer. For example, tracking mean-time-to-fail may not be important to the customer, whereas mean-time-to-repair may be critical.
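
To pin the two terms down, here is a minimal sketch (in Python, with invented incident records) of how each number would be computed; the record layout and figures are assumptions for illustration, not from any real system.

    # Minimal sketch: compute MTTR and MTTF from (failed, repaired) timestamps.
    from datetime import datetime

    incidents = [
        (datetime(1998, 3, 2, 9, 15), datetime(1998, 3, 2, 11, 45)),
        (datetime(1998, 3, 9, 14, 0), datetime(1998, 3, 9, 14, 30)),
        (datetime(1998, 3, 20, 8, 5), datetime(1998, 3, 20, 13, 5)),
    ]

    def hours(delta):
        return delta.total_seconds() / 3600.0

    # Mean-time-to-repair: average downtime per incident (what the customer feels).
    mttr = sum(hours(fixed - failed) for failed, fixed in incidents) / len(incidents)

    # Mean-time-to-fail: average uptime between one repair and the next failure.
    gaps = [hours(incidents[i + 1][0] - incidents[i][1]) for i in range(len(incidents) - 1)]
    mttf = sum(gaps) / len(gaps)

    print("MTTR: %.1f hours, MTTF: %.1f hours" % (mttr, mttf))

A system can fail rarely yet take days to repair; which number you publish should follow the customer’s priorities, not ours.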

Connectivity. No one measurement stands alone. A measure at the programmer level will have implications for the unit and department levels as well. Measures will help you track the symptoms of problems, not the problems themselves. This is especially true of training. The IT training program needs to connect to the corporate training program to provide a sense of continuity to the programming staff. It’s also one of the few mechanisms that let programmers feel a sense of ownership in learning and implementing new technologies.

Management Measures

A division’s management structure sets key elements of an organization’s performance. It is here that the foundation for productivity is laid. By committing to the following (or similar) objectives, senior management sends a positive signal throughout the company. Here are a few proposed measures for senior management (i.e., directors and above):

Availability of Resources: Corporate management should measure the percentage of projects for which they can provide skilled human resources within the time frame desired by the customer. Tied to this might be a similar measure on the percentage of projects where contractors are needed. A goal needs to be set and tracked. This measure helps address the issue of potential under-staffing for priority projects.
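
As a sketch of how little machinery this takes (the project records and field names below are hypothetical):

    # Percentage of projects staffed on time, and percentage needing contractors.
    projects = [
        {"name": "billing rewrite",     "staffed_on_time": True,  "used_contractors": False},
        {"name": "Y2K remediation",     "staffed_on_time": False, "used_contractors": True},
        {"name": "CICS screen cleanup", "staffed_on_time": True,  "used_contractors": True},
    ]

    total = len(projects)
    pct_staffed = 100.0 * sum(1 for p in projects if p["staffed_on_time"]) / total
    pct_contract = 100.0 * sum(1 for p in projects if p["used_contractors"]) / total
    print("Staffed on time: %.0f%%  Needed contractors: %.0f%%" % (pct_staffed, pct_contract))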

Contribution to corporate goals: Management should measure the percentage of projects and enhancement requests (ERs) that contribute to corporate goals. This may require research to determine the percentage allocation of staff to various functions. For example, if 20 percent of a department is to be devoted to production support, the ERs for the department should be part of the measurement. Goals therefore need to consider maintenance aspects, not just growth aspects. Tools (forms, etc.) need to be developed to give developers the ability to capture and report this information. A senior steering committee, documenting key projects for the coming year, removes much of the ambiguity here. For example, right now we are seeing Y2K projects being sidetracked by short-term business objectives that stretch resources unnecessarily. A steering committee could stop this, removing the pressure on department units to deliver.

Customer satisfaction: A survey needs to be developed and given to all customers at all levels (i.e., from VP to business analyst). The survey should follow a standard format and be conducted as part of routine business. It should include questions about factors within IT’s control as well as factors outside it. After the surveys are returned, a team should meet with each customer to review the comments and suggestions, to ensure that customers are being heard. Such a survey should be conducted twice a year and the numbers reported, identifying what IS and IS NOT under IT’s control. A separate survey needs to be developed for all ERs over three work months, with a different survey for routine maintenance ERs. These three surveys will keep a finger on the pulse of customer confidence in IT, and will allow management to monitor projects, maintenance and overall customer satisfaction. Some of these surveys would be part of the "Unit Measures" (documented below). They are soft measures, but they carry much weight because they are responses from customers, in customer language.
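
For the reporting step, here is a minimal sketch; the 1-to-5 scoring and the controllable/uncontrollable tagging are assumptions of mine, not a prescribed survey format.

    # Average survey scores, split by whether the item is under IT's control.
    responses = [
        {"question": "Status reports were clear",         "score": 5, "it_controls": True},
        {"question": "Estimates were confirmed promptly", "score": 4, "it_controls": True},
        {"question": "Requested scope fit the budget",    "score": 2, "it_controls": False},
    ]

    def average(scores):
        return float(sum(scores)) / len(scores) if scores else 0.0

    inside = average([r["score"] for r in responses if r["it_controls"]])
    outside = average([r["score"] for r in responses if not r["it_controls"]])
    print("Under IT's control: %.1f/5  Outside IT's control: %.1f/5" % (inside, outside))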

Tool availability: Management needs to assess its current technology inventory and strategy, and measure the effectiveness of tools and techniques available to the staff. For example, the inventory and business strategy might identify that CICS mainframe applications will continue for several years, which might indicate that automatic screen builders and COBOL design tools are needed. The measure then becomes ‘to what degree are these tools made available to and used by development staff?’ This also triggers a training initiative to ensure that adequate CICS skills are available for the anticipated life of the systems. Related to this, every change in technology needs a training component. For example, in my own experience, I see companies upgrading to COBOL/MVS, yet offering no training to staff on how to use the many new facilities.

Staff knowledge and growth: Management needs to assess needed skills in staff and develop a priority matrix on how training will be provided and for whom. The measure will show the percentage of staff that is trained on key initiatives within the year’s training objective. Accountability should rest here if a project begins with a staff of programmers who do not have the necessary skills.

Grade level performance: Documentation of the performance or skills expected at a given grade level is often not available. It needs to be defined, maintained and communicated to staff. If management does not know what to expect from a senior programmer versus a junior programmer, then job titles become moot. While this will remain a subjective arena, management needs to improve the quality of this documentation. Otherwise, programmers will continue to question why they aren’t being promoted or otherwise recognized. This measure will assist management’s training objective by identifying IT’s successes and shortcomings.

Architectural direction: Management needs to define an information systems strategy that supports the corporation’s strategic direction, identifying the components and the reasons for their selection. Whether this is client/server, Internet or host/PC is not the key here. The key issue is developing a strategy, setting a time frame, publishing the plan and then measuring performance against the goals. I visited a company where senior management purchased high-end PCs for a group of programmers and left them with the task of "finding the new road to application architecture." The programmers were provided with no tools, no software budget and no direction on corporate initiatives. Still, they were being asked to design new client/server and Internet tools for the corporation. As you might imagine, several key individuals quit and the project was a failure. Management cannot abdicate this responsibility, although many might wish to do so.

Unit Measures

Each unit within the IT department needs measures to monitor its performance. These have benefit only when measurements also exist at the corporate level; otherwise, unit measures tend to be just "finger-pointing exercises." By having measurements at all levels, managers can discuss status and future plans without feeling defensive. Some possibilities for unit measures are:

Staff flexibility: This measure tracks the staff’s skill/knowledge matrix against the desired matrix (which means the desired matrix must be developed first). Since different units require different knowledge, the content varies between units, but the measure remains the same for all unit managers. This measure ties back to the management measures above.
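
A gap report against the desired matrix can be nearly trivial to produce. Here is one sketch, with invented skills and headcounts:

    # Compare staff on hand per skill against the desired headcount.
    desired = {"COBOL": 10, "CICS": 6, "DB2": 4, "client/server": 3}
    actual  = {"COBOL": 11, "CICS": 4, "DB2": 2, "client/server": 0}

    for skill in sorted(desired):
        have, want = actual.get(skill, 0), desired[skill]
        flag = "  <-- training or hiring needed" if have < want else ""
        print("%-14s have %2d / want %2d%s" % (skill, have, want, flag))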

Project performance: This measure captures the percentage of scheduled projects that the unit committed to and completed properly, both on time and within budget. The goal of such a measure is not to track individual performance, but to track the impact of unexpected projects, with the objective of reducing their number. Such a measure needs to capture both planned and unplanned projects. It becomes a planning tool for customers who believe that planning is not necessary and that reaction is the only appropriate approach.

Quality of systems: This measure tracks bugs through the system development process in order to improve how they are found and resolved; it does not prevent bugs from happening. This might include processes and forms to capture the errors at each phase of development, including production. Measuring bugs should NEVER include measuring programmer performance. The two issues aren’t the same. Bug detection should be free from measurement up to the point of acceptance testing. Prior to that, collaborative team efforts to resolve bugs should be rewarded, not punished.
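
One sketch of the capture side, deliberately recording the phase of detection but no programmer name (the record layout is hypothetical):

    # Tally defects by the phase in which they were found. No programmer
    # names are recorded; per the caution above, bugs are a process
    # measure, not a people measure.
    from collections import Counter

    defects = [
        {"id": 101, "phase": "design"},
        {"id": 102, "phase": "unit test"},
        {"id": 103, "phase": "unit test"},
        {"id": 104, "phase": "acceptance test"},
        {"id": 105, "phase": "production"},
    ]

    counts = Counter(d["phase"] for d in defects)
    for phase in ("design", "unit test", "acceptance test", "production"):
        print("%-16s %d" % (phase, counts.get(phase, 0)))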

Customer communications: This series of measures tracks reports, customer responses to questions, confirmation of estimates, etc. While this might also be a survey item, it is similar to the customer service measure of answering phone calls. Customers are frequently upset, not because the project is off track, but because they don’t know the honest status. Specific communications to track the progress for a given project need to be identified as part of a project contract. This is valuable in itself, and will help set the culture within the IT department on which communications are key. (Translation: Just because management is issuing memos or conducting meetings does NOT mean that management is communicating.)

Performance within budget: This measure tracks specific projects or project categories and measures their performance against the allocated budget. The intent is to discern why budget estimates were wrong and learn from them. A management style of learning from failures and publishing the results and corrections will reinforce performance within the unit.
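
The variance arithmetic itself is trivial, as this sketch with invented figures shows; the value lies in publishing the variances and the reasons behind them:

    # Budget variance per project, as a percentage of the estimate.
    projects = [
        {"name": "order entry",    "budget": 120000, "actual": 150000},
        {"name": "report rewrite", "budget": 40000,  "actual": 38000},
    ]

    for p in projects:
        variance = 100.0 * (p["actual"] - p["budget"]) / p["budget"]
        print("%-14s budget $%-8d actual $%-8d variance %+.0f%%"
              % (p["name"], p["budget"], p["actual"], variance))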

Staff objectives: This measure ensures that the unit develops objectives for individuals to help each person contribute toward corporate goals. The percentage of initial objectives that change, the percentage of objectives withdrawn and the weighting of individual objectives against corporate goals together measure the capacity of the unit, the department and the corporation.

Individual Measures

With accountability for management and unit objectives established separately, the individual can also be measured, to assess where and how to improve that individual’s performance. I’ve never met a programmer who wasn’t willing to take accountability for their own performance, but I’ve also never met one who would take accountability for what they couldn’t control or predict. Here is a sampling of objectives that could be assigned to the individual once management and unit objectives are in place.

Quality of deliverables: This measure assesses the deliverables of individuals, allowing feedback from management on the individual’s contribution to job objectives (not to their "job title"). All measures should apply to deliverables that reach production, not to problems found during the development process.

Skill growth: This measure tracks the individual’s growth in skills based on the coming year’s predefined goals, in which the individual has control over the outcome. This ties to the skill objectives and needs identified by management.

Timeliness of deliverables: This measure varies by jobs and skills, and helps management and individuals identify aspects of individual performance to improve contribution and growth potential.

Customer communications: This varies by job, but a measure of the number of contacts, the type of contacts and the documentation of key contacts is important to customer satisfaction. Many programmers do not even know the names of the customers – or what the project is supposed to do.

Quantity of deliverables: I include this measure because it is expected. Whether you use function points or just count lines of code (LOC) is irrelevant from my perspective. So long as the programmers see it as a consistent measure, it can be useful. Not perfect, but useful. (Yes, programmers might theoretically code a single word per line in a LOC measurement, but even raising the possibility is a statement of distrust of the programmers, so don’t even think it.)
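
For what it’s worth, even a crude LOC count can be applied consistently by automating it. Here is a minimal sketch that assumes fixed-format COBOL source, where column 7 marks a comment line; the file name is hypothetical.

    # Count non-blank, non-comment lines in a fixed-format COBOL source file.
    # Assumes column 7 holds '*' or '/' on comment lines; adjust the convention
    # to your shop's standard, but apply it the same way every time.
    def count_loc(path):
        loc = 0
        with open(path) as src:
            for line in src:
                text = line.rstrip("\n")
                if not text.strip():
                    continue  # skip blank lines
                if len(text) >= 7 and text[6] in "*/":
                    continue  # skip comment lines
                loc += 1
        return loc

    # Example with a hypothetical file name:
    # print(count_loc("PAYROLL.cbl"))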

Becoming Part of the Solution

Productivity measurement is an issue that is on everyone’s plate. We cannot avoid it, and pretending it is a coding issue avoids our responsibilities. Only a concerted effort throughout an organization will yield any noticeable improvements. Our programmers are hoping we see the light soon and improve their work environment – with direction, commitment, training and tool replacement. If we do not make these improvements, we stand to lose our staff, lose our productivity, encounter rising costs and lose control of our direction. Companies that continue down the path of expecting unplanned overtime to achieve productivity goals will fail. Our programmers want to be part of the solution, not part of the problem.


About the Author:

David Shelby Kirk is an instructor, author and a successful consultant with MVS Training, Inc. His newest book, COBOL for OS/390 Power Programming with Complete Year 2000 Section, is now available. He can be reached at [email protected].
