As I mentioned in my last post, I chaired a fishbowl discussion at SDC2010 with the title “Should a professional developer always use Test Driven Development?”. I was delighted that the invited panelists Michael Feathers, Geoff Bache and Andrew Dalke all turned up, along with a few dozen other conference participants. As I predicted, we had a lively and interesting debate.
Michael half-jokingly complained that Bob Martin goes around making these controversial statements all the time, which Michael then gets to go around defending. Michael has a much more conciliatory attitude than Bob, and his take was that every truly professional developer must at least have given TDD a good try and learnt the technique, even if they then decide not to use it.
Geoff’s main point was that we need to widen the definition of TDD to include any process that involves checking in tests at the same time as the code, rather than restricting it to the classic Red-Green-Refactor style with tests written in the same language as the code.
Michael was largely receptive to this view, or at least agreed that the soundbite description “never write any code until you have a failing test” is probably too brief to encompass the whole of TDD. He did argue, though, that the classic TDD style leads to code with good design characteristics: high cohesion, loose coupling, small classes and methods, and so on, and that he had not found any other design technique that led to better code than TDD. He was not keen to move to a TDD approach without unit tests and lose these design benefits, even if such an approach does result in good tests.
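For readers less familiar with that classic style, here is a minimal sketch of the cycle (the function and test are my own hypothetical illustration, runnable with pytest, not code from the discussion): the failing test comes first, then just enough code to make it pass, then a refactoring step with the test as a safety net.

```python
# Step 1 ("red"): write a test for behaviour that does not exist yet,
# and watch it fail.
def test_initials_are_uppercased():
    assert initials("ada lovelace") == "AL"

# Step 2 ("green"): write just enough code to make the test pass.
def initials(full_name: str) -> str:
    return "".join(word[0].upper() for word in full_name.split())

# Step 3 ("refactor"): tidy the code while the now-passing test guards
# against regressions, then repeat with the next failing test.
```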
Andrew argued that TDD is not sufficient by itself to produce a good suite of tests, and that there are other, better ways to produce those tests. He pointed out that he had examined Fitnesse, a codebase that Bob Martin (and some others) created using TDD, and that he found several bugs in it, including security holes. Michael’s counterargument was that with TDD you get tests that are only as good as you are capable of writing – if you are not skilled in, or aware of, security issues, then you won’t test for security holes, whatever process you use to create your tests.
Another of Andrew’s arguments was that he often likes to write tests that he expects to pass, to verify that his code works as expected – for example, that he has implemented an algorithm correctly. In the narrow definition of TDD, you are only allowed to write tests you expect to fail. Michael’s take was that this was indeed too narrow a definition of TDD. He said that he frequently writes tests as a way of asking questions of his code, and this often leads to tests that pass straight away.
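A small hypothetical illustration of that kind of test (my own, not an example from the panel, runnable with pytest): the algorithm is written first, and the test that follows it is expected to pass straight away, confirming the author’s understanding rather than driving out new behaviour.

```python
import math

# Hypothetical example: a test written *after* the algorithm, asking
# "did I implement round-half-to-even correctly?" -- expected to pass.

def bankers_round(value: float) -> int:
    """Round to the nearest integer, with halfway cases going to even."""
    floor = math.floor(value)
    diff = value - floor
    if diff > 0.5:
        return floor + 1
    if diff < 0.5:
        return floor
    # Exactly halfway: choose the even neighbour.
    return floor if floor % 2 == 0 else floor + 1

def test_bankers_round_matches_expectations():
    # These assertions verify the finished implementation; the author
    # expects every one of them to pass on the first run.
    assert bankers_round(2.4) == 2
    assert bankers_round(2.6) == 3
    assert bankers_round(0.5) == 0
    assert bankers_round(1.5) == 2
    assert bankers_round(2.5) == 2
```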
Some of the “audience” also stepped up to the microphones and joined in. Brian Marick pointed out that forcing yourself to write the test first is a very good way of ensuring you actually do write the test, instead of being lazy and just writing more code. The counter to that was that there are other processes for arriving at a good test suite, which take different kinds of discipline. Andrew cited the sqlite project, which boasts 100% branch coverage of its code by its test suite. Publishing your coverage figures and refusing to let them slip is a way of preventing developer laziness too.
Brian Marick wrote an article about coverage and tests over a decade ago, and he summarized it for us, which was interesting but, I think, slightly beside the point. I think he was trying to argue that measuring coverage alone is not enough to guarantee you have a good test suite, but I don’t think that was what Andrew was claiming. Simply doing TDD is not a guarantee that you will end up with a good test suite either.
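To make that concrete (a hypothetical example of my own, not one raised at the session, runnable with pytest): a tiny function can have 100% line and branch coverage while still containing a bug of omission, because no coverage metric can point at code that was never written.

```python
# Hypothetical example: full coverage, yet a bug of omission remains.

def shipping_cost(weight_kg: float) -> float:
    """Flat rate up to 2 kg, then a per-kilo rate above that."""
    if weight_kg <= 2:
        return 5.0
    return 5.0 + (weight_kg - 2) * 1.5

def test_light_parcel():
    assert shipping_cost(1.0) == 5.0   # exercises the first branch

def test_heavy_parcel():
    assert shipping_cost(4.0) == 8.0   # exercises the second branch

# Both branches are exercised, so line and branch coverage are 100%,
# but nothing rejects a negative weight: shipping_cost(-3.0) quietly
# returns 5.0. The missing validation is invisible to the coverage
# figures because the code for it was never written.
```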
For me, the interesting outcome of the discussion was the point that the alternatives to TDD are not only “cowboy coding”, “test later (i.e. never)” or “bad tests”: there are other legitimate ways to come up with a good test suite, and professional developers may choose to use them instead of classic TDD. TDD is a discipline which all professional developers should perhaps have in their repertoire, though. I think we agreed it is also a teaching aid for learning to write good tests.
Happily, we all definitely agree that creating a good automated test suite alongside the code is important. The precise method a professional developer should always use to produce it was not agreed upon, though.
Sofia Jonsson says:
Thank you for putting together a very interesting discussion session, Emily. I was one of the silent ones in the audience. 🙂
For me, the most interesting thing about this discussion was that it made me reflect upon the main reason for doing TDD. Is it to catch bugs earlier, to have a regression test suite so that you feel secure in making changes, or is it to encourage better design in your code?
For me, the effect TDD has on the design (loose coupling, small classes etc) has always been a (very nice) side-effect. The regression test suite (feeling secure when you introduce changes later on) and instant feedback has been the main reason. Therefore I’m not so strict in always writing my test cases first (although I always think of test aspects very early). I agree with Geoff that we need a broader definition of TDD.
However, it was interesting to hear that for some people the main reason for doing TDD seems to be the reverse. If I understood Michael and Brian correctly, they were also the ones emphasizing the importance of actually writing the test cases first. Perhaps if your main reason for doing TDD is the impact it has on design, then it is important to be more strict and follow the classical red-green-refactor cycle?
IMHO there are other, equally good ways of arriving at a good design, but there are no better ways to get instant feedback or to feel secure when introducing changes later on.
2010-03-18, 08:02
Emily Bache says:
Thanks for your encouraging and insightful comments, Sofia! I think you should have been braver and taken up a microphone yourself in the discussion 🙂
2010-03-18, 20:00
Sofia Jonsson says:
Maybe next time 🙂 But actually I didn’t arrive at these conclusions until afterwards.
2010-03-18, 20:07
Brian Marick says:
My point was that a code coverage goal (“reach X% coverage”) easily becomes a test design technique, in which the goal of the next after-the-code test is simply to increase coverage. Since an important class of bugs is not at all reliably detected by increasing the coverage of existing code, it’s dangerous to have that goal. But when it’s the only quantitative goal around, people slip into that mistake easily. So: check code coverage late; ignore the particular line of missed code; ask what it was about the way you design tests that made you miss that line.
2010-03-19, 18:37