Posts tagged ‘Code Kata’

Do you automatically get better design with TDD? Does an otherwise average software developer produce superior designs if they write the tests first rather than afterwards? Does it make a difference what style of TDD you use?

Incident #1

I was at a session at XP2012 with J.B. Rainsberger called “Architecture without Trying”. He demonstrated how he could develop a software system for Point-of-Sale terminals using TDD, and how the design naturally tended towards an MVC pattern as he did it. He claimed that purely by doing TDD, focusing on two things (removing duplication and improving names), a good design would naturally emerge.

Incident #2

I heard a talk by Luca Minudel at Agile Testing Days 2011 called “TDD with Mock Objects: Design Principles and Emergent Properties”. He was talking about a study he had done where he got people with varying levels of experience at TDD to do four short exercises. He also got them to answer a questionnaire about their knowledge of SOLID principles, and TDD. He then evaluated how well the designs they came up with in the exercises adhered to SOLID principles, and tried to correlate that with their TDD skill. He found that the people skilled in TDD did better in the exercises than those who only knew the theory of SOLID principles. The practice of TDD seems to help people with design. Luca also found that those more experienced with the London School of TDD did even better than other TDDers.

Incident #3

I was working at a client recently when I met a developer from a different department. He came to see me several times over a period of a couple of weeks, and asked for advice about TDD. On about his fourth visit he told me he had written some code, and now that it was basically working, he wanted to write tests for it. He said he was having difficulty since he’d written a lot of static “helper” methods. I advised him that static methods make code quite hard to test, and can often be a sign of a weak object-oriented design.

He suggested we should invest in a fancy mocking tool that would enable him to easily replace these static methods in the tests. I told him a better investment would be for him to learn to write the tests first, get better at OO design, and not use static methods in the first place. I was probably a bit blunt, and he was quite polite, all things considered. He protested that he shouldn’t have to change the production code in order to get it under test, then left. That was the last time he came to me for advice.
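To illustrate the kind of thing I was talking about, here is a small Python sketch (the names are my own invention, nothing to do with his actual code). A static “helper” that reaches straight for the database is hard to replace in a test, whereas passing the collaborator in makes the dependency visible and trivial to fake:

```python
class PriceHelper:
    @staticmethod
    def current_discount(customer_id):
        # Database stands in for some real persistence layer (hypothetical);
        # any test of code calling this helper ends up hitting it too
        return Database.connect().fetch_discount(customer_id)

def invoice_total(customer_id, amount):
    return amount * (1 - PriceHelper.current_discount(customer_id))

# Passing the collaborator in instead makes it easy to substitute a fake in a test:

def testable_invoice_total(amount, discount_source):
    return amount * (1 - discount_source.current_discount())

class FakeDiscounts:
    def current_discount(self):
        return 0.5

assert testable_invoice_total(200, FakeDiscounts()) == 100.0
```

A powerful enough mocking tool can patch the static call in place, but the second version needs no special tool at all.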

Discussion

So does doing TDD guarantee better design? Well it should certainly help. I’ve presented before about the way TDD gives you early feedback on your design and plenty of opportunities to refactor. It’s less help though if you don’t know what a good design looks like in the first place. I think J.B. goes too far in his claims – if you don’t know MVC or SOLID principles then I’d be surprised if they started turning up in your code with any consistency.

“No tool nor technique can survive inadequately trained developers”

(A quote attributed to Steve Freeman). I think you do need to invest in learning good design techniques independently of TDD. If you lack basic OO design skills you probably won’t be able to do TDD in the first place, London School or otherwise.

I’ve been learning and improving my practice of TDD, including the London School, for many years now, and I was intrigued by Luca’s claim that it leads to better adherence to SOLID principles than classic TDD. The London School involves an outside-in approach to design that makes heavy use of mocks to check interactions between objects, in contrast to the more classic TDD style, which prefers to verify that the code works by checking the state of an object after an interaction. I wouldn’t claim to be an expert in the London School of TDD, but I think I understand the basics and can adopt this style when I feel the problem is appropriate for it.
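To show roughly what I mean by the two styles, here is a minimal sketch in Python with a made-up Order example (my own illustration, not code from Luca’s exercises):

```python
import unittest
from unittest.mock import Mock

# Classic style: exercise the object, then verify its state afterwards.
class Order:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

class ClassicStyleTest(unittest.TestCase):
    def test_added_item_is_stored(self):
        order = Order()
        order.add("book")
        self.assertEqual(["book"], order.items)  # check resulting state

# London style: design outside-in, using a mock to specify the interaction
# the object under test should have with its (perhaps not yet written) collaborator.
class OrderService:
    def __init__(self, notifier):
        self.notifier = notifier

    def place(self, order):
        self.notifier.order_placed(order)

class LondonStyleTest(unittest.TestCase):
    def test_placing_an_order_notifies_the_listener(self):
        notifier = Mock()
        OrderService(notifier).place("order-1")
        notifier.order_placed.assert_called_once_with("order-1")  # check interaction

if __name__ == "__main__":
    unittest.main()
```

The London-style test pins down the conversation between objects, which tends to push interfaces and responsibilities outwards; the classic test only cares about the end result.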

I tried out Luca’s four problems, (here on github) to see how I did. Luca very kindly gave me some feedback on my code, and I found I hadn’t done as well as I had hoped in adhering to SOLID principles. I’d got the code under test, but in a few places I could have improved the design more. I also slightly misunderstood the requirements for two of the problems, which led me to fork the repo and improve the instructions 🙂

I think in the cases where I could have done better with the design, it’s possible that using the London School of TDD would have led to the improvements. I feel there might be something in Luca’s conjecture. On the other hand, these problems might be so small and abstract that I didn’t behave the same as I would in a real codebase. Certainly in one case I felt it wasn’t worth extracting an interface when there was only one implementation of it. In a real system maybe it would be more obvious that more implementations were likely, and that adding the interface would lead to a more decoupled design. Or then again maybe I’m just too used to Python, where explicit interface classes don’t tend to be used. Or maybe I’m just making excuses! In any case, doing these exercises has made me more interested in improving my knowledge and practice of the London School TDD style.
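For what it’s worth, “extracting an interface” in the sense I mean might look something like this in Python (purely illustrative names, not the exercise code):

```python
from abc import ABC, abstractmethod

class RateProvider(ABC):
    """The extracted interface: callers depend on this, not on a concrete class."""
    @abstractmethod
    def rate_for(self, currency):
        ...

class FixedRateProvider(RateProvider):
    # the single implementation that made me question whether
    # the abstraction was worth having yet
    def rate_for(self, currency):
        return 1.0

def convert(amount, currency, provider):
    return amount * provider.rate_for(currency)

print(convert(100, "SEK", FixedRateProvider()))  # 100.0
```

In idiomatic Python you would often skip the abstract base class entirely and rely on duck typing, which is part of why the decision felt less natural to me.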

I think these exercises are interesting little code katas in their own right, quite apart from Luca’s study on TDD. I think you can use them to learn about the SOLID principles, and practice some of the refactorings you often have to do to get badly designed code under test.

I’m working on a python translation of the exercises so we can try them out at the Gothenburg Python User Group meeting next week. Feel free to fork the repo and have a go at them yourself.

For a little while now I’ve been collecting Refactoring Kata exercises in a github repo, (you’re welcome to clone it and try them out). I’ve recently facilitated working on some of these katas at various coding dojo meetings, and participants seem to have enjoyed doing them. I usually give a short introduction about the aims of the dojo and the refactoring skills we’re focusing on, then we split into pairs and work on one of these Refactoring Katas for a fixed timebox. Afterwards we compare designs and discuss what we’ve learnt in a short retrospective. It’s satisfying to take a piece of ugly code and after only an hour or so make it into something much more readable, flexible, and significantly smaller.

Test Driven Development is a multifaceted skill, and one aspect is the ability to improve a design incrementally, applying simple refactorings one at a time until they add up to a significant design improvement. I’ve noticed in these dojo meetings that some pairs do better than others at refactoring in small steps. I can of course stand behind them and coach to some extent, but I was wondering if we could use a tool that would watch more consistently, and help pairs notice when they are refactoring poorly.

I spent an hour or so doing the GildedRose refactoring kata myself in Java, and while I was doing it I had two different monitoring tools running in the background, the “Codersdojo Client” from http://content.codersdojo.org/codersdojo_client/, and “Sessions Recorder” from Industrial Logic. (This second tool is commercial and licenses cost money, but I actually got given a free license so I could try it out and review it). I wanted to see if these tools could help me to improve my refactoring skills, and whether I could use them in a coding dojo setting to help a group.

Setting it up and recording your Kata
The Codersdojo Client is a ruby gem that you download and install. When you want to work on a kata, you have to fiddle about a bit on the command line getting some files in the right places (like the junit jar), then modify and run a couple of scripts. It’s not difficult if you follow the instructions and know basically how to use the command line. You have a script running all the time you are coding, and it runs the tests every time you save a source file.

The Sessions Recorder is an Eclipse plugin that you download and install in the same way as other Eclipse plugins. It puts a “record” button on your toolbar. You press “record” before you start working on the kata.

Uploading the Kata for analysis
When you’ve finished the kata, you need to upload your recording for analysis. With the Codersdojo Client, when you stop the script, it gives you the option of doing the upload. When that’s completed it gives you a link to a webpage, where you can fill in some metadata about the Kata and who you are. Then it takes you to a page with the full final code listing and analysis.

The Sessions Recorder is similar. You press the button on the Eclipse toolbar to stop recording, and save the session in a file. Then you go to the Industrial Logic webpage, log into your account, and go to the page where you upload your recorded Session file. You don’t have to enter any metadata, since you have an account and it remembers who you are (you did pay for this service, after all!). It then takes you to a page of analysis.

Codersdojo Client Analysis
The codersdojo client creates a page that you can make public if you want – mine is available here. It gives you a graph like this:

[screenshot: the Codersdojo Client’s graph of my moves]

It’s showing how long you spent between saving the file (a “move”), and whether the tests were red or green when you saved. There are also some general statistics about how long you spent in each move, and how many modifications there were on average. It points out your three longest moves, and has links to them so you can see what you were doing in the code at those points.

I think this analysis is quite helpful. I can see that I’m going no more than two or three minutes between saving the file, and usually if the tests go red I fix them quickly. Since it’s a refactoring kata I spend quite a lot of moves at the start where it’s all green, as I build up tests to cover the functionality. In the middle there is a red patch, and this is a clear sign to me that I could have done that part of the kata better. Looking over my code, I can see I was doing a major redesign at that point, and I should have done it in a way that kept the tests passing in the meantime.

Towards the end of the kata I have another flurry of red moves, as I start adding new functionality for “Conjured” items. I tried to move into a more normal TDD red-green-refactor cycle at that point, but it actually doesn’t look like I succeeded very well from this graph. I think I rushed past the “green” step without running the tests, then did a big refactoring. It worked in the end but I think I could have done that better too.

Sessions Recorder Analysis
The Sessions Recorder produces a page which is personal to me, and I don’t think it allows me to share it publicly on the web. On the page is a graph that looks like this:

[screenshot: the Sessions Recorder graph of passing tests, failing tests and compiler errors]

As you can see it also shows how long I spend with passing and failing tests, in a slightly different way from the Codersdojo Client’s graph. It also distinguishes compiler errors from failing tests, (pink vs red).

This graph also clearly shows the areas I need to improve: the long pink patch in the middle where I do a major redesign in too large a step, and the red bit at the end where I’m not doing TDD all that well.

The line on the graph is a “score” where it is awarding me points when I successfully perform the moves of TDD. Further down the page it gives me a list of the “events” this score is based on:

[screenshot: an excerpt of the Sessions Recorder’s scored events list]

(This is just some of the events, to show you the kinds of things this picks up on.) “New Green Test” seems to score zero points, which is a bit disappointing, but adding a failing test gets a point, and so does making it pass. “Went green, but broke other tests” gets zero points. It’s clearly designed to help me successfully complete red-green-refactor cycles, not reward me for adding test coverage to existing code, then refactoring it.

There is another graph, more focused on the tests:

[screenshot: the Sessions Recorder’s test-focused graph]

This graph has mouseover texts, so when you hover over a red dot it shows all the compilation errors you had at that point, and if you hover over a green dot it tells you which tests were passing. It also distinguishes “compiler errors” from a “compiler rash”. The difference is that a “compiler rash” is a more serious compilation problem that affects several files.

You can clearly see from this graph that in the first part of the kata I was building up the test coverage, then leaning on those tests and refactoring for the rest. It hasn’t noticed that I had two @Ignore’d tests until the last few minutes though. (I added failing tests for Conjured Items near the start, then left them until I had the design suitably refactored near the end.)

I actually found this graph quite hard to use to work out what I need to improve on. There seem to be three long gaps in the middle, full of compilation errors where I wasn’t running the tests. Unlike with the Codersdojo Client, there isn’t a link to the actual code changes I was making at those points. I’m having trouble working out just from the compiler errors what I should have been doing differently. I think one of these gaps is the same major redesign I could see clearly in the Codersdojo Client graph as a too big step, but I’m not so sure what the other two are.

There are further statistics and analysis as well. There is a section for “code smells” but it claims not to have found any. The code I started with should qualify as smelly, surely? I’m not sure why it hasn’t picked up on that.

Conclusions
I think both tools could help me to become better at Test Driven Development, and could be useful in a dojo setting. I can imagine pairs comparing their graphs after completing the kata, discussing how they handled the refactoring steps, and where the design ended up. Pairs could swap computers and look through someone else’s statistics to get some comparison with their own.

The Codersdojo Client is free to use, and works with a large number of programming languages, and any editor. You do have to be comfortable with the command line though. The Sessions Recorder tool only supports Java and C# via Eclipse. It has more detailed analysis, but for this Refactoring Kata I don’t think it was as helpful as it could have been.

The other big difference between the tools is about openness. The Sessions Recorder keeps your analysis private to you, and if you want to discuss your performance, it lets you do so with the designers of the tool via a “comment on this page” function. I haven’t tried that out yet so I’m not sure how it works, that is, whether you get feedback from a real person as well as the tool.

The Codersdojo Client also lets you keep your analysis private if you want, but in addition lets you publish your Kata performance for general review, as I have done. You can share your desire for feedback on twitter, g+ or facebook. People can go in and comment on specific lines of code and make suggestions. That wouldn’t be so needed during a dojo meeting, but might be useful if you were working alone.

Further comparison needed
On another occasion I tried out the Sessions Recorder on a normal TDD kata, and found the analysis much better. For example this graph of me doing the Tennis kata from scratch:

[screenshot: the Sessions Recorder graph from my Tennis kata session]

This shows a clear red-green pattern of small steps, and steadily increasing score rewarding me for doing TDD correctly. Unfortunately I didn’t do a Codersdojo Client session at the same time as this one, for comparison. A further blog post is clearly needed for this case… 🙂

A little while back I was back in Stockholm facilitating another Code Retreat (see previous post), this time as part of the Global Day of Code Retreat (GDCR). (Take a look at Corey Haines’ site for loads of information about this global event, which comprised meetings in over 90 cities worldwide, with over 2000 developers attending.)

I think the coding community could really do with more people who know how to do TDD and think about good design, so I’m generally pretty encouraged by the success of this event. I do feel a bit disappointed about some aspects of the day though, and this post is my attempt to outline what I think could be improved.

I spent most of the Global Day of Code Retreat walking around looking over people’s shoulders and giving them feedback on the state of their code and tests. I noticed that very few pairs got anywhere close to solving the problem before they deleted the code at the end of each 45 minute session. Corey says that this is very much by design. You know you don’t have time to solve the problem, so you can de-stress: no-one will demand delivery of anything. You can concentrate on just writing good tests and code.

In between coding sessions, I spent quite a lot of effort reminding people about simple design, SOLID principles, and how to do TDD (in terms of states and moves). Unfortunately I found people rarely wrote enough code for any of these ideas to really be applicable, and TDD wasn’t always helping people.

I saw a lot of solutions that started with a “Cell” class that had an x and y coordinate, and a boolean to say whether it was alive or not. Then people tended to add a “void tick(int liveNeighbourCount)” method, and start to implement code that would follow the four rules of Conway’s Game of Life to work out if the cell should flip the state of its boolean or not. At some point they would create a “World” class that would hold all the cells, and “tick()” them. Or some variant of that. Then (generally when the 45 minutes were running out) people started trying to find a data structure to hold the cells in, and some way to find out which were neighbours with each other. Not many questioned whether they actually needed a Cell class in the first place.
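Sketched in Python, the typical design looked roughly like this (invented names, not anybody’s actual code from the day):

```python
class Cell:
    def __init__(self, x, y, alive):
        self.x, self.y, self.alive = x, y, alive

    def tick(self, live_neighbour_count):
        # the four rules of Conway's Game of Life, applied to this one cell
        if self.alive:
            self.alive = live_neighbour_count in (2, 3)
        else:
            self.alive = live_neighbour_count == 3

class World:
    def __init__(self, cells):
        self.cells = cells

    def tick(self):
        for cell in self.cells:
            # still needs some way to work out each cell's live neighbour count,
            # which is the part most pairs never got to
            pass
```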

Everyone at the code retreat deleted their code afterwards, of course, but you can see an example of a variant on this kind of design by Ryan Bigg here (a work in progress, he has a screencast too).

Of course, as facilitator I spent a fair amount of time trying to ask the kind of questions that would push people to reevaluate their approach. I had partial success, I guess. Overall though, I came away feeling a bit disillusioned, and wanting to improve my facilitation so that people would learn more about using TDD to actually solve problems.

At the final retrospective of the day, everyone seemed to be very positive about the event, and most people said they learnt more about pair programming, TDD, and the language and tools they were working with. We all had fun. (If you read Swedish, Peter Lind wrote up the retrospective in Valtech’s blog.) This is great, but could we tweak the format to encourage even more learning?

I think to solve Conway’s Game of Life adequately, you need to find a good data structure to represent the cells, and an efficient algorithm to “tick” to the next generation. Just having well named classes and methods, although a good idea, probably won’t be enough.
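To make that concrete, here is one well-known approach, sketched in Python purely for illustration (I’m not claiming it’s the only good one): represent the world as a set of live cell coordinates, so the grid is effectively unbounded, and count neighbours when computing the next generation.

```python
from collections import Counter

def neighbours(cell):
    x, y = cell
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def next_generation(live_cells):
    # count how many live neighbours each candidate cell has
    counts = Counter(n for cell in live_cells for n in neighbours(cell))
    # a cell is alive next generation if it has 3 live neighbours,
    # or 2 live neighbours and was already alive
    return {cell for cell, count in counts.items()
            if count == 3 or (count == 2 and cell in live_cells)}

blinker = {(0, 1), (1, 1), (2, 1)}
print(next_generation(blinker))  # the blinker flips to {(1, 0), (1, 1), (1, 2)}
```

With a structure like this the neighbour question more or less answers itself, and the four rules collapse into a single expression.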

For me, TDD is a good method for designing code when you already mostly know what you’re doing. I don’t think it’s a good method for discovering algorithms. I’m reminded of the debate a few years back when Peter Norvig posted an excellent Sudoku solver (here) and people thought it was much better than Ron Jeffries’ TDD solution to the same problem (here). Peter was later interviewed about whether this proved that TDD was not useful. Dr Norvig said he didn’t think that it proved much about TDD at all, since

“you can test all you want and if you don’t know how to approach the problem, you’re not going to get a solution”
(from Peter Siebel’s book “Coders At Work”, an extract of which is available in his blog)

I felt that most of the coders at the code retreat were messing around without actually knowing how to solve the problem. They played with syntax and names and styles of tests without ever writing enough code to tell whether their tests were driving a design that would ultimately solve the problem.

Following the GDCR, I held a coding dojo where I experimented with the format a little. I only had time to get the participants to do two 45-minute coding sessions on the Game of Life problem. At the start, as a group we discussed the Game of Life problem, much as Corey recommends for a Code Retreat introduction. However, in addition, I explained my favoured approach to a solution: the data structure and algorithm I like to use. I immediately saw that they started their TDD sessions with tests and classes that could lead somewhere. I feel that if they had continued coding for longer, they would have ended up with a decent solution. Hopefully it would also have been possible to refactor it from one data structure to another, and to switch elements of the algorithm without breaking most of the tests.

I think this is one way to improve the code retreat. Just give people more clues at the outset as to how to tackle the problem. In real life when you sit down to do TDD you’ll often already know how to solve similar problems. This would be a way for the facilitator to give everyone a leg-up if they haven’t worked on any rule-based games involving a 2D infinite grid before.

Rather than just explaining a good solution up front, we could spend the first session doing a “Spike Solution”. This is one of the practices of XP:

“the idea is just to drive through the entire problem in one blow, not to craft the perfect solution first time out.”
From “Extreme Programming Installed” by Jeffries, Anderson, Hendrickson, p41

Basically, you hack around without writing any tests or polishing your design, until you understand the problem well enough to know how you plan to solve it. Then you throw your spike code away, and start over with TDD.

Spending the first 45 minute session spiking would enable people to learn more about the problem space in a shorter amount of time than you generally do with TDD. By dropping the requirement to write tests or good names or avoid code smells, you could hopefully hack together more alternatives, and maybe find out for yourself what would make a good data structure for easily enumerating neighbouring cells.

So the next time I run a code retreat, I think I’ll start with a “spiking” session and encourage people to just optimize for learning about the problem, not beautiful code. Then after that maybe I’ll sketch out my favourite algorithm, in case they didn’t come up with anything by themselves that they want to pursue. Then in the second session we could start with TDD in earnest.

I always say that the code you end up with from doing a Kata is not half as interesting as the route you took to get there. To this end I’ve published a screencast of myself doing the Game of Life Kata in Python. (The code I end up with is also published, here). I’m hoping that people might want to prepare for a Code Retreat by watching it. It could be an approach they try to emulate, improve on or criticise. On the other hand, showing my best solution like this is probably breaking the neutrality of how a code retreat facilitator is supposed to behave. I don’t think Corey has ever published a solution to this kata, and there’s probably a reason for that.

I have to confess, I already broke all the rules of being a facilitator, when I spent the last session of the Global Day of Code Retreat actually coding. There was someone there who really wanted to do Python, and no-one else seemed to want to pair with him. So I succumbed. In 45 minutes we very nearly had a working solution, mostly because I had hacked around and practiced how to do it several times before. It felt good. I think more people should experience the satisfaction of actually completing a useful piece of code using TDD at a Code Retreat.

I’m speaking next week at ScanDev on Tour in Stockholm on the subject of “Software Development Craftsmanship”, and as part of my research I read both “The Clean Coder” by Robert C. Martin and “Apprenticeship Patterns” by Dave Hoover & Adewale Oshineye. These are very different books, but both aimed at less experienced software developers who want to learn about what it means to be a professional in the field. In this article I’d like to review them side by side. First some text from each preface on what the authors think the books are about:

Apprenticeship Patterns

“This book should help you through the tough decisions you face as a newcomer to the field of professional software development. “ (preface xi)

The Clean Coder

“This book is about software professionalism. It contains a lot of pragmatic advice” (preface xxii)

The Content
Both books contain a lot of personal stories and anecdotes from the authors’ careers, and begin with a short autobiography. Some of the advice is also similar. Both advise you to practice with “Kata” exercises, to read widely and to find suitable mentors. I think that’s mostly where the similarities end though.

Dave and Ade don’t say much about how to handle unreasonable managers imposing impossible deadlines. Bob Martin devotes several chapters to this kind of issue: handling pressure, time management, estimation, making commitments and so on.

Dave and Ade talk more about how to get yourself into situations optimized for learning and progress in your career. They advise you to “Be the worst”, “Find mentors”, seek “Kindred Spirits”. In other words, join a team where you’re the least skilled but you’ll be taught, look for mentors in many places, and get involved in the community.

Bob talks about a lot of specific practices and has detailed advice. He mentions that “… pairing is the most efficient way to solve a problem” (p164). Later in the chapter he suggests the optimal composition of job roles in a gelled team (p169). He also has some advice about how to successfully argue with your boss and go over their head when necessary (p35).

The Advice
Those few examples perhaps illustrate that these two books are miles apart when it comes to writing style, approach and world view. Dave&Ade have clearly spent a lot of time talking with other professionals about their material, acting on feedback and testing their ideas for validity in real situations. The book is highly collaborative and, while full of advice, is not prescriptive.

Bob Martin on the other hand loves to be specific, provocative and extreme in his advice. “QA should find nothing.” (p114) “You should plan on working 60 hours per week.” (p16) “Avoid the Zone.” (p62) “The jury is in! … TDD works” (p79) These are some of his more surprising pieces of advice, which I think are actually fairly doubtful propositions when taken to extremes like this. Mixed in are more reasonable statements: “You do not have to attend every meeting to which you are invited” (p123); “The professional developer is calm and decisive under pressure.” (p150)

The way everything is presented as black-and-white, do-or-do-not-there-is-no-try is actually pretty wearing after a while. He does it to try to make you think, as a rhetorical device, to promote healthy discussion. But I think it all too easily leads the reader to throw the baby out with the bathwater: you can’t accept one of his recommendations, so you throw them all out.

Some of Dave&Ade’s advice is actually just as hard to put into practice. Each of their patterns is followed by a call to action. Things like re-implementing a program you’ve written in an imperative language in a functional language (p21). Join or start a user group (p65). Solve the same coding exercise once a week for the next four weeks (p79). None of these things is particularly easy to do, but they seem to me to be interesting and useful challenges.


Collaboration
Bob has also clearly not collaborated very widely when preparing his material. One part that particularly sticks out for me is a footnote on page 75:

“I had a wonderful conversation with @desi (Desi McAdam, founder of DevChix) about what motivates women programmers. I told her that when I got a program working, it was like slaying the great beast. She told me that for her and other women she had spoken to, the act of writing code was an act of nurturing creation.” (footnote, p75)

Has he ever actually run his “programming is slaying a great beast” thing past any other male programmers? Let me qualify that – non-fantasy-role-playing male programmers? Thought not. This is in enormous contrast to Dave&Ade, whose book is full of stories from other people backing up their claims.

Stories
Bob’s book is full of stories from his own career, and he is very honest and open about his failures. This is a very brave thing to do, and I have a great deal of respect for him for doing so. It’s also really interesting to hear about the history of what life was like when computers filled a room and people used punch cards to program them. Dave&Ade’s stories are less compelling and not always as well written.

Bob’s book is not just about his professional life; he shares his likes and dislikes. He recommends cycling or walking to recharge your energy, or “focus-manna” as he calls it (p127), and reading science fiction as a cure for writer’s block (p66). Listening to “The Wall” while coding could be bad for your design (p63). When describing “Master” programmers he likens them to Scotty from Star Trek (p182).

All this is very cute and gives you a more rounded picture of what software professionalism is about. Maybe. Actually it really puts me off the idea. I know a lot of software developers like science fiction and fantasy role playing, but it really isn’t mandatory. He usually says that you may have other preferences, and you don’t have to do like he does, but I just don’t think it helps all that much. The rest of the book is highly dogmatic about what you should and shouldn’t do, and it kind of rubs off.

Conclusions
The bottom line is, I wouldn’t recommend “The Clean Coder” to any young inexperienced software developer, particularly not if she were a woman. There is too much of it written from a foreign culture, in a demanding tone, propounding overly extreme behaviour. The interesting stories and good pieces of advice are drowned out.

On the other hand, I would recommend “Apprenticeship Patterns”. I think it is humbly written and anchored in real experience from a range of people. I agree with them when they say you need to read it twice to understand it. The first time to get an overview, the second time to understand how the patterns connect. It’s not as easy to read as it might be. But still, I think the content is interesting, and it gives a good introduction to what being a professional software craftsman is about, and how to get there.

I’ve been working on a kata called “Tennis”*, which I find interesting, because it is quite quick to code, yet is a big enough problem to be worth doing. It’s also possible to enumerate pretty much all the allowed scores, and get very comprehensive test coverage.

What I’ve found is that when I’m using TDD to solve the Kata, I actually tend to enumerate only a very small number of the test cases. I generally end up with something like:

Love-All
Fifteen-All
Fifteen-Love
Thirty-Forty
Deuce
Advantage Player1
Win for Player1
Advantage Player2

I think that’s enough to test-drive a complete implementation, built up in stages. I thought it would be enough tests to also support refactoring the code, but I actually found it wasn’t. After I’d finished my implementation and mercilessly refactored it for total readability, I went back and implemented exhaustive tests. To my horror I found three (of 33) that failed! I’d made a mistake in one of my refactorings, and none of my original tests found it. The bug only showed up with scores like Fifteen-Forty, Love-Thirty and Love-Forty, where my code instead reported a win for Player 2. (I leave it as an exercise for the reader to identify my logic error 🙂)
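For what it’s worth, here is a sketch of how that kind of exhaustive check can be written, assuming the implementation is exposed as a tennis_score(player1_points, player2_points) function; the name and signature are my own invention, the code in the repo has its own interfaces.

```python
import itertools

from tennis import tennis_score  # hypothetical module holding the code under test

NAMES = ["Love", "Fifteen", "Thirty", "Forty"]

def expected_score(p1, p2):
    # reference answer for every score where neither player has reached advantage
    if p1 == p2:
        return "Deuce" if p1 >= 3 else NAMES[p1] + "-All"
    return NAMES[p1] + "-" + NAMES[p2]

def test_every_ordinary_score():
    # covers all sixteen combinations, including Love-Thirty, Love-Forty and
    # Fifteen-Forty, the three scores my refactored code got wrong
    for p1, p2 in itertools.product(range(4), range(4)):
        assert tennis_score(p1, p2) == expected_score(p1, p2), (p1, p2)
```

The advantage and win cases would need a similar, slightly longer enumeration, but even this small loop would have caught my bug.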

So what’s the point of TDD? Is it to help you make your design good, or to protect you from introducing bugs when refactoring? Of course it should help with both, but I think doing this practice exercise showed me (again!) that it really is worth being disciplined and careful about refactorings. I also think I need to develop a better sense for which refactorings might not be well covered by the tests I have, and when I should add more.

This is something that my friend Andrew Dalke brings up when he criticises TDD. The red-green-refactor iterative, incremental rhythm can lull you into a false sense of security, and means you forget to stop and look at the big picture, and analyze if the tests you have are sufficient. You don’t get reminded to add tests that should pass straight away, but might be needed if you refactor the code.

So in any case, I figured I needed to practice my refactoring skills. I’ve created comprehensive tests and three different “defactored” solutions to this kata, in Java and Python. You can get the starting code here. You can use this to practice refactoring with a full safety net, or, if you’re feeling brave, without. Try commenting out a good percentage of the tests, and do some major refactoring. When you bring all the tests back, will they still all pass?

I’m planning to try this exercise with my local python user group, GothPy, in a few weeks time. I think it’s going to be fun!

* Tennis Kata: write a program that if you tell it how many points each player has won in a single game of tennis, it will tell you the score.