Q&A: Automating Application Testing

Automated testing tools can relieve the tedium of testing, but are they appropriate for every testing task?

In this frank discussion, Ed Schwarz, cofounder and vice president of engineering at Gorilla Logic, explains the challenges of application testing, when automated testing can help, and when such tools are inappropriate.

Enterprise Strategies: In light of newer technology (cloud computing, virtualization) and IT requirements (ever-larger data sets, narrower processing windows), what are the biggest challenges in testing applications today?

Ed Schwarz: I’d say there are three major challenges.

First, there are too many deployment platforms. Folks in the Web business are already accustomed to the hassles of cross-browser compatibility and broadband-versus-dialup bandwidth. Windows developers in particular are accustomed to different OS versions, screen resolutions, and competing installed bases. With the advent of mobile platforms, it's become even worse. The number of devices is mushrooming and (much as I love the platform from a developer standpoint) the proliferation of Android variants is adding another layer of complexity.

The second problem is hi-fi simulation. For connected apps (in contrast to single-player games or books, for example), getting a realistic mix of usage and load is a great challenge. If you can run a real-world beta, that's great, but for most situations it’s not realistic in terms of time, cost, and user-base management. So you need to rely on a test harness, and designing it is almost like making another little version of the application. You need business types to describe the anticipated usage patterns, architects who understand the system to craft suites that won't just end up testing the cache, QE experts who can track and manage the process so there are meaningful outputs, operations staff who can monitor the systems under load, and, of course, developers who can diagnose and try to fix test failures or scalability/performance problems.
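
To make that concrete, here is a minimal sketch of the kind of load harness Schwarz describes, written in Python against a hypothetical system under test. The base URL, endpoints, usage mix, and think times are all assumptions for illustration; the point is that the mix comes from the business side and the randomized inputs keep the run from simply exercising the cache.

```python
# Minimal load-harness sketch (illustrative only; endpoints and usage mix are hypothetical).
import random
import threading
import time
import urllib.request

BASE_URL = "http://localhost:8080"           # hypothetical system under test
USAGE_MIX = [("/search?q=%d", 0.7),          # business-defined mix: 70% searches,
             ("/account/%d", 0.3)]           # 30% account lookups

def pick_path():
    # Choose an operation according to the weighted usage mix, with a random
    # input so repeated requests don't just hit the cache.
    r, total = random.random(), 0.0
    for template, weight in USAGE_MIX:
        total += weight
        if r <= total:
            return template % random.randint(1, 100000)
    return USAGE_MIX[-1][0] % random.randint(1, 100000)

def virtual_user(results):
    for _ in range(20):                       # each simulated user issues 20 requests
        path = pick_path()
        start = time.time()
        try:
            with urllib.request.urlopen(BASE_URL + path, timeout=10) as resp:
                ok = resp.status == 200
        except OSError:
            ok = False
        results.append((path, time.time() - start, ok))
        time.sleep(random.uniform(0.5, 2.0))  # "think time" between user actions

results = []
threads = [threading.Thread(target=virtual_user, args=(results,)) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
failures = sum(1 for _, _, ok in results if not ok)
print("requests=%d failures=%d" % (len(results), failures))
```

Even a toy harness like this needs the roles Schwarz lists: someone to define the mix, someone to watch the systems under load, and someone to diagnose what the failure counts actually mean.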

Third, and leading right on from my last point: enterprises underestimate the effort and investment required to get a good test in place.

What are the biggest mistakes testers make in testing applications?

They don’t run enough tests!

Furthermore, there’s an over-reliance on unit tests versus black-box functional testing. Under-automating black-box functional tests leads to low repeatability and makes faults harder to isolate. The last mistake I’d point out is having developers perform the testing.

What best practices can you recommend to avoid these mistakes?

First, I’d put Quality Engineering on the team from the day the development environment is set up, and I’d dedicate people to QE.

I recommend that you automate black-box functional tests of the primary use cases of the system. In addition, you should develop load test simulations early, even before there is infrastructure in place to handle real-world loads.
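
As an illustration of automating a primary use case at the black-box level, here is a short Selenium WebDriver sketch in Python. The URL, element ids, credentials, and expected text are hypothetical; what matters is that the test drives the application the way a user would and asserts on what the user sees, not on internal code.

```python
# Black-box functional test of a hypothetical "log in and see the dashboard" use case.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("http://localhost:8080/login")           # hypothetical app under test
    driver.find_element(By.ID, "username").send_keys("qa_user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()

    # Assert on what the user actually sees after logging in.
    heading = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "dashboard-title"))
    )
    assert heading.text == "Dashboard", heading.text
finally:
    driver.quit()
```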

Your company sells automated functional testing tools. What part of the testing process can you handle with automation, and which areas don't benefit from automation (that is, what CANNOT be automated)?

It's hard to automate the "look and feel" of the user interface. Although image-based tools can do some of this, they don't do as well when there are multiple target platforms with different resolutions, font technologies, and so on. Similarly, a "sluggish" overall response (vs. "unacceptable" response) can be hard to detect.

Another thing that can’t be automated is usability. Automated tests simply don't care about this. They also don’t do well with dynamics -- for example, sound levels and synchronization. It's not that these things cannot be automated, but the automation is difficult, time-consuming, and brittle.

Load testing requires a combination of automated and non-automated tasks. Load tests involve a lot of pieces working together, so even when the tests themselves are automated, they typically benefit from babysitting. It can be cost-prohibitive to build a "bullet-proof," context-retaining test harness for every piece of a complex integrated transaction -- often it's much simpler to "watch the console." Even if a system gives some evidence of failure or degradation, without someone watching it's often impossible to reconstruct "what was going on" across the whole chain when it happened.

Are there downsides to automated testing? For example, can using such tools give testers a false sense of security so they think they've covered every possibility?

Absolutely, especially with over-reliance on unit testing versus black-box testing. This is why, for interactive applications, automated testing needs to be complemented with real-person tests, especially for new features.

Automated testing is great for regression testing, "smoke testing," and to some extent for performance testing. Real people are needed to see if a system "works" -- for example, automated testing will never discover errors in specifications.

What features make your company’s product, FlexMonkey, unique in the market?

FlexMonkey records and plays back real-world user interactions, so it can be used by non-developers. As an option, you can generate native Flex code so tests can be modified or extended in the same language as the application. With FlexMonkey, you can run headless and fully automated in a continuous integration environment.

Budget-conscious IT will appreciate that the product is free and open-source and can easily be extended for your custom components and interactions.

I’d also point out that tests can be reused for load testing, and FlexMonkey supports robust id-based tests rather than brittle "click-at-these-coordinates" tests. You can validate any property of any object on the screen. Finally, there’s Flexmonkium integration with Selenium, so you can combine browser and Flex testing.
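
FlexMonkey's own API isn't shown here, but the id-based-versus-coordinates distinction is easy to see in plain Selenium terms. In this sketch the element ids and the expected value are hypothetical; locating a control by id survives layout changes, whereas clicking at fixed coordinates does not.

```python
# Sketch of the id-based style (an illustration in Selenium, not FlexMonkey's own API).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("http://localhost:8080/checkout")        # hypothetical app under test

    # Robust: find the control by its id, wherever it happens to be rendered.
    driver.find_element(By.ID, "submit-order").click()

    # Validate a property of an object on the screen, not a pixel position.
    status = driver.find_element(By.ID, "order-status")
    assert status.text == "Order confirmed", status.text
finally:
    driver.quit()
```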
