Please note – As of March 2013, I have rewritten this post in the light of further experience and discussions. The updated post is available here.
I feel like I’ve spent most of my career learning how to write good automated tests in an agile environment. When I downloaded JUnit in the year 2000 it didn’t take long before I was hooked – unit tests for everything in sight. That gratifying green bar is near-instant feedback that everything is as expected, my code does what I intended, and I can continue developing from a firm foundation.
Later, starting in about 2002, I began writing larger granularity tests for whole subsystems; functional tests, if you like. The feedback that my code does what I intended, and that its functionality works, has given me confidence time and again to release updated versions to end-users.
Often, I’ve written functional tests as regression tests, after the functionality is supposed to work. In other situations, I’ve been able to write these kinds of tests in advance, as part of an ATDD or BDD process. In either case, I’ve found that the regression tests you end up with need certain properties if they’re going to be useful in an agile environment moving forward. I think the same properties are needed for good agile functional tests as for good unit tests, but they are much harder to achieve: your mistakes are amplified as the scope of the test increases.
I’d like to outline four principles of agile test automation that I’ve derived from my experience.
Coverage
If you have a test for a feature, and there is a bug in that feature, the test should fail. Note I’m talking about coverage of functionality, not code coverage, although these concepts are related. If your code coverage is poor, your functionality coverage is likely also to be poor.
If your tests have poor coverage, they will continue to pass even when your system is broken and functionality unusable. This can happen if you have left out needed test cases, or when your test cases don’t properly check what the system actually did. The consequence of poor coverage is that you can’t refactor with confidence, and need to do additional (manual) testing before release.
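To make that second failure mode concrete, here is a minimal JUnit sketch; the `Order` class and both tests are invented for this illustration. The first test exercises the discount code but never checks the outcome, so it keeps passing even when the feature is broken. The second asserts on what the system actually did.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical class under test, invented for this illustration.
class Order {
    private double total;
    Order(double total) { this.total = total; }
    void applyDiscount(double fraction) { total -= total * fraction; }
    double total() { return total; }
}

public class OrderDiscountTest {

    // Poor coverage: the discount code runs, but nothing checks the
    // result, so this test passes no matter what applyDiscount does.
    @Test
    public void discountCanBeApplied() {
        new Order(100.0).applyDiscount(0.10);
    }

    // Better coverage: this test fails if the calculation is wrong.
    @Test
    public void tenPercentDiscountReducesTotalByTenPercent() {
        Order order = new Order(100.0);
        order.applyDiscount(0.10);
        assertEquals(90.0, order.total(), 0.001);
    }
}
```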
The aim for automated regression tests is good Coverage: If you break something important and no tests fail, your test coverage is not good enough. All the other principles are in tension with this one – improving Coverage will often impair the others.
Readability
When you look at the test case, you can read it through and understand what the test is for. You can see what the expected behaviour is, and what aspects of it are covered by the test. When the test fails, you can quickly see what is broken.
If your test case is not readable, it will not be useful. When it fails you will have to dig through other sources outside of the test case to find out what is wrong. Quite likely you will not understand what is wrong, and you will rewrite the test to check for something else, or simply delete it.
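As a sketch of what I mean, here is a small JUnit test with an invented `Account` class. The test name states the expected behaviour, and the body reads as given / when / then, so when it fails you can see straight away which behaviour is broken.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical class under test, invented for this illustration.
class Account {
    private int balance;
    Account(int openingBalance) { balance = openingBalance; }
    void withdraw(int amount) {
        if (amount > balance) {
            throw new IllegalArgumentException("insufficient funds");
        }
        balance -= amount;
    }
    int balance() { return balance; }
}

public class AccountTest {

    @Test
    public void withdrawalReducesBalanceByTheAmountWithdrawn() {
        Account account = new Account(100);  // given an account holding 100
        account.withdraw(30);                // when we withdraw 30
        assertEquals(70, account.balance()); // then 70 remains
    }

    @Test(expected = IllegalArgumentException.class)
    public void withdrawalLargerThanBalanceIsRejected() {
        new Account(50).withdraw(100);
    }
}
```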
As you improve Coverage, you will likely add more and more test cases. Each one may be fairly readable on its own, but taken all together it can become hard to navigate and get an overview.
Robustness
When a test fails, it means the functionality it tests is broken, or at least is behaving significantly differently from before. You need to take action to correct the system or update the test to account for the new behaviour. Fragile tests are the opposite of Robust: they fail often for no good reason.
Robustness problems you often run into are tests that are not isolated from one another, duplication between test cases, and flickering tests. If you run a test by itself and it passes, but it fails in a suite together with other tests, then you have an isolation problem. If one broken feature causes a large number of test failures, you have duplication between test cases. If you have a test that fails in one test run, then passes in the next when nothing has changed, you have a flickering test.
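The isolation problem in particular is easy to show in code. Here is a JUnit sketch with an invented registry example: the static field in the comment is the kind of shared state that lets a test pass alone but fail in a suite, and rebuilding the fixture in a `@Before` method is the usual fix.

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Before;
import org.junit.Test;

public class UserRegistryTest {

    // A common isolation problem: a static field that survives from one
    // test to the next, so startsEmpty() passes when run alone but fails
    // after holdsOneUserAfterAdd() has run in the same suite:
    //
    //     private static List<String> registry = new ArrayList<String>();
    //
    // The fix is to rebuild the fixture before every test:
    private List<String> registry;

    @Before
    public void createFreshRegistry() {
        registry = new ArrayList<String>();
    }

    @Test
    public void startsEmpty() {
        assertEquals(0, registry.size());
    }

    @Test
    public void holdsOneUserAfterAdd() {
        registry.add("alice");
        assertEquals(1, registry.size());
    }
}
```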
If your tests often fail for no good reason, you will start to ignore them. Quite likely there will be real failures hiding amongst all the false ones, and the danger is you will not see them.
As you improve Coverage you’ll want to add more checks for details of your system. This will give your tests more and more reasons to fail.
Speed
As an agile developer you run the tests frequently: (a) every time you build the system, and (b) before you check in changes. I recommend time limits of 2 minutes for (a) and 10 minutes for (b). This fast feedback gives you the best chance of actually being willing to run the tests, and of finding defects when they’re cheapest to fix.
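One way to stay within those limits is to split the suite so the slowest tests don’t run on every build. The sketch below uses JUnit 4’s categories; `SlowTests`, `CheckoutEndToEndTest` and `FastBuildSuite` are invented names, and in a real project each public class would live in its own file.

```java
import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.ExcludeCategory;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

// Marker interface used to label slow tests.
interface SlowTests {}

// A slow end-to-end test, tagged so it can be left out of the
// per-build run; the body is a placeholder for this sketch.
class CheckoutEndToEndTest {
    @Test
    @Category(SlowTests.class)
    public void wholePurchaseFlowSucceeds() {
        // ... start the system and drive a complete purchase ...
    }
}

// The suite to run on every build: everything except the slow tests.
// The full suite, including SlowTests, still runs before check-in.
@RunWith(Categories.class)
@ExcludeCategory(SlowTests.class)
@SuiteClasses({ CheckoutEndToEndTest.class /* , other test classes */ })
public class FastBuildSuite {}
```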
If your test suite is slow, it will not be used. When you’re feeling stressed, you’ll skip running them, and problem code will enter the system. In the worst case the test suite will never become green. You’ll fix the one or two problems in a given run and kick off a new test run, but in the meantime someone else has checked in other changes, and the new run is not green either. You’re developing all the while the tests are running, and they never quite catch up. This can become pretty demoralizing.
As you improve Coverage, you add more test cases, and this will naturally increase the execution time for the whole test suite.
How are these principles useful?
I find it useful to remember these principles when designing test cases. I may need to make tradeoffs between them, and it helps just to step back and assess how I’m doing on each principle from time to time as I develop.
I also find these principles useful when I’m trying to diagnose why a test suite is not being useful to a development team, especially if things have got so bad they have stopped maintaining it. I can often identify which principle(s) the team has missed, and advise how to refactor the test suite to compensate.
For example, if the problem is lack of Speed, you have some options and tradeoffs to make:
- Invest in hardware and run tests in parallel (costs $)
- Use a profiler to optimize the tests for speed the same as you would production code (may affect Readability)
- Push down tests to a lower level of granularity where they can execute faster, as sketched below (may reduce Coverage and/or increase Readability)
- Identify key test cases for essential functionality and remove the other test cases (sacrifices Coverage to gain Speed)
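To illustrate the third option, here is a hedged sketch: a rule that was previously checked end-to-end, pushed down to a direct unit test of the class that implements it. The `ShippingCalculator` class and its rule are invented for this example.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical class implementing the pricing rule, invented for
// this illustration.
class ShippingCalculator {
    double costFor(double orderTotal) {
        return orderTotal >= 50.0 ? 0.0 : 4.95; // free shipping from 50
    }
}

public class ShippingCostTest {

    // Previously this rule might have been checked by starting the whole
    // application and driving a purchase through the UI: slow, and a
    // failure could come from anywhere in the stack. Checked directly
    // where it is implemented, the same rule runs in milliseconds and a
    // failure points at one class.

    @Test
    public void ordersOfFiftyOrMoreShipFree() {
        assertEquals(0.0, new ShippingCalculator().costFor(50.0), 0.001);
    }

    @Test
    public void smallerOrdersPayTheFlatShippingFee() {
        assertEquals(4.95, new ShippingCalculator().costFor(20.0), 0.001);
    }
}
```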
Explaining these principles can promote useful discussions with people new to agile, particularly testers. The test suite is a resource used by many agile team members – developers, analysts, managers etc. – in its role as “Living Documentation” for the system (see Gojko Adzic’s writings on this). This emphasizes the need for both Readability and Coverage. Automated tests in agile are quite different from those in a traditional process, since they are run continually throughout the process, not just at the end. I’ve found many traditional automation approaches don’t lead to enough Speed and Robustness to support agile development.
I hope you will find these principles will help you to reason about the automated tests in your suite.
Gaurav Bansal says:
Nice Post.
2012-08-07, 15:37

Johannes Brodwall says:
We must have talked about this at some point, as the list I would produce matches yours exactly.
The only thing I’d add myself is one aspect of robustness: If a test breaks after a refactoring, it probably could be more robust.
2012-08-12, 10:12

Emily Bache says:
Johannes, yes we’ve probably talked about this before 🙂
I agree with your point – if you’re refactoring you’d expect robust tests to keep passing. They won’t though if they use an interface which you modify in the refactoring, and some refactorings do legitimately change interfaces. That’s usually fine, but can become a problem if many tests rely on an interface that changes often. Like a UI.
2012-08-13, 07:40

Rock Den says:
Management and customers are most interested in knowing how much cost saving can be expected and achieved by implementing test automation. However, it is not always true that cost savings are achieved. Thank you for sharing good content, nice job, keep it up.
2012-12-20, 11:50