The New Meaning of Software Quality

Building software faster doesn't mean the software is better. What your IT organization may need is an agile quality process.

By Neil Garnichaud

What's true in nature is true in business: Apply enough external pressure and fundamental change happens. Agile happened because -- in the face of tremendous market forces -- businesses needed to give real-time input to the development process to ensure business success.

An analogous change is occurring now in software quality -- also in response to market forces. Businesses need to mine real-time, quantifiable value from their quality processes to deliver better software faster, and "better" doesn't mean meeting users' requirements. It means meeting users' expectations. Bug counts don't matter. Uptime keeps the company's doors open -- it just has to happen.

Such metrics are table stakes. Winning means giving users an amazing, differentiating experience before they know what that experience is or find a competitor who will provide it. What delights the user today may be different tomorrow. The question is: How will your quality process keep up? The answer: agile quality.

How do you achieve agile quality -- an amazing and evolving experience in ever-shorter timeframes? You don't do it with quality silos, in which development runs its own process, then QA, then production, then monitoring. Everyone "owns" quality in the sense that each silo addresses one aspect of it, with its own tools and its own metrics -- which is like saying no one owns quality. As a result, user issues tend to get fixed retroactively, and usually only after a round of finger-pointing.

In that world, more quality simply means fewer trouble tickets. It doesn't mean a better user experience in terms of metrics meaningful to the business, such as number of users, number of click-throughs, abandonment rate, conversions, and (ultimately) sales.

Agile quality, on the other hand, encompasses all four of the classic quality silos in a single, complete process -- so organizations can fix process defects, not just code defects, leaving everyone with more time and fewer defects to fix. The defects that remain not only consume less time but are less likely to be bugs and more likely to be experience enhancements.

When monitoring is no longer a silo, everyone is accountable for fine-tuning the software based on real-time user experience metrics. The key is having tools that enable a holistic, agile quality process, with mechanisms available to quickly align that process with the actual user experience, rather than just fix technical issues.

Business-Centric Versus Technically Centric Teams

It won't matter how well your team performs technically if its members don't perform well collectively as a team. Business-centric teams focus on the health of the business, even as members contribute greatly through their individual technical skills. Technically centric teams, on the other hand, focus on their individual technical areas regardless of the health of the business. That's why people blame each other for defects: they all know they did their jobs well, regardless of the outcome.

That outcome is often a slower fix, an unhappy user, and, most important, the likelihood that this defect will happen again because they didn't fix the process that caused the defect. In a business-centric team, the issue would immediately trigger a response, not just at the help desk, but where the process broke -- be it development, QA, or somewhere else.

In fact, in a business-centric world, most "triggers" would not result from users at all but from IT itself in response to its own real-time monitoring of user experience metrics. The difference is that metrics are now more immediately actionable -- first, because they reveal how users respond to the software (not just how the software responds to users) and, second, because they inform other (former) silos about how their activities can improve quality. On both levels, the key question is less "How do we fix the bug?" and more "Where did we fail?"

Obviously, to get actual user behavior metrics means you must monitor actual user behavior. You must study both what users do (including click-throughs, page views, and abandonment rates) and what applications do (in terms of such measures as uptime, response rates, network latency, and functional failures). It also requires that both kinds of behavior -- user and application -- be monitored under real-world conditions with live production code. That could involve, for example, overlaying Web site analytics with functional and load test data gathered by dozens of monitoring agents globally distributed across the Internet.
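The overlay described above can be sketched in a few lines. The data, hour keys, and thresholds here are all hypothetical, purely to illustrate how an hour where poor application performance coincides with high user abandonment becomes immediately visible once the two data sets share a common key:

```python
# Hypothetical data: user-behavior analytics and application-performance
# monitoring samples, keyed by the hour in which each was collected.

analytics = {   # hour -> abandonment rate from Web site analytics
    "09:00": 0.04, "10:00": 0.05, "11:00": 0.21, "12:00": 0.06,
}
monitoring = {  # hour -> p95 response time (seconds) from monitoring agents
    "09:00": 0.8, "10:00": 0.9, "11:00": 4.2, "12:00": 1.0,
}

def correlate(analytics, monitoring, abandon_limit=0.10, latency_limit=2.0):
    """Return hours where slow responses coincide with user abandonment."""
    flagged = []
    for hour in sorted(analytics):
        if (analytics[hour] > abandon_limit
                and monitoring.get(hour, 0.0) > latency_limit):
            flagged.append(hour)
    return flagged

print(correlate(analytics, monitoring))  # → ['11:00']
```

Real systems would join far richer data, but the principle is the same: once user metrics and application metrics line up on a common axis, the question "Where did we fail?" answers itself.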

Breaking the Silos

Overlaying live test data with live analytics is an example of where actionable data informs action -- by breaking down traditional silos. QA is technically centric when tests are run only against pre-deployment software under laboratory conditions. QA is business-centric when the same test assets (e.g., scripts) are deployed out on the Internet along with the applications they test. Rather than reinvent monitoring assets in production, you reuse test assets from QA. Using the same assets in both places makes it more likely that issues found in production can be reproduced in QA.
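A minimal sketch of what "one asset, two uses" can look like, with the URL and page text entirely hypothetical. The health check is split from the network fetch so the same logic runs as a QA test in the lab and as a scheduled probe in production:

```python
# One test asset reused in QA and in production monitoring.
# The URL and expected page text below are hypothetical examples.

import urllib.request

def page_is_healthy(status, body):
    """The shared check: did the login page render correctly?"""
    return status == 200 and "Sign in" in body

def login_page_check(base_url):
    """Fetch the live page and apply the same check used in QA."""
    with urllib.request.urlopen(base_url + "/login", timeout=10) as resp:
        return page_is_healthy(resp.status,
                               resp.read().decode("utf-8", "replace"))

# In QA: run against a staging build as part of the test suite, e.g.
#   assert login_page_check("https://staging.example.com")
# In production: distributed monitoring agents invoke the same
# function on a schedule, so any failure they report can be
# reproduced in the lab with the identical script.
```

Because the check logic is one function, a production failure and a lab failure are, by construction, the same failure -- the common reference point the next section describes.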

Production and QA then have a common reference point. Functional issues, code dependency issues, and load conditions that were unanticipated or not identified in the lab now become clear to everyone throughout IT. QA tests and procedures can, therefore, be made more lifelike, deployment processes can be updated, similar issues can be avoided in the future, and the overall quality process (not just the code) can be improved. Because analytics can be plotted against performance data, both QA and production can correlate their activity directly to actual business outcomes, so those business outcomes now drive QA and production. That just leaves development to bring quality improvement full circle.

Better than Agile

Agile development has made agile quality an oxymoron. Agile development reduces the time available to find bugs -- yet bugs found in development have the biggest impact on software quality. When developers can get informed about quality issues faster (i.e., fast enough to stay agile), it is typically a huge quality win. What's required is a collaborative environment where developers review and comment on each other's code online as it is developed. This gives developers the expert information they need to improve a specific piece of code on the spot and the information is captured so other developers can avoid (or easily learn to overcome) similar issues in the future -- hence, improving both the code and the process.

Contrast that scenario with how code reviews typically take place now: developers gather in a physical room around piles of printouts, examining code line by line. The process is neither agile nor conducive to institutional learning.

In an automated, collaborative environment, developers spend less time fixing bugs and more time writing code, so agility is enhanced rather than undermined. Furthermore, the information exchange that takes place (both among developers and between developers, QA, production, and monitoring) becomes less about fixing bugs and more about how to make the customer happier. That's true even if what makes the customer happy today is different from what made the customer happy yesterday.

A Final Word

Ultimately, as with agile development, market forces will determine how organizations build software that is better, not just faster -- but first they will have to build a more agile quality process.

Neil Garnichaud is the VP and general manager of hosted solutions at SmartBear, a supplier of software quality tools for over a million developers, testers, and operations professionals. Mr. Garnichaud has served as COO/CTO for Digital Mediums and PCLender.com, and was VP of operations at Standard & Poor's as well as Aztec Technology Partners. He also served BankBoston for 16 years in project management. You can contact the author at Neil.Garnichaud@smartbear.com.