Posts tagged ‘TDD’

I am really interested to find out more about this concept of “clean code”, and in particular how it relates to programming languages. To this end, I’m still chewing on KataArgs.

My latest idea is to start from Bob Martin’s Java implementation and translate it as directly as possible into python. The plan is then to refactor it to be more pythonic, and see if it ends up looking anything like his Ruby implementation.

I have put up some code on launchpad, which is my attempt at a direct translation of the Java. It was really interesting to do, actually. Of course I had read the Java before, and followed all the steps in the book to create it, but actually translating it made me understand the code on another level. When I tackled this Kata from scratch I also got a much better understanding of it, but this was different.

One thing that jumped out at me was the error handling. It’s much more comprehensive than anything I’ve produced in my own solutions, and more comprehensive than his Ruby solution too. So I think it’s a bit misleading of him to say “I recently rewrote this in Ruby and it was 1/7 the size”. Of course it is smaller. It does less. Although to be fair, in a way it does more too…

One thing I found awkward to translate was the use of the enum for the list of error codes. Python doesn’t have a direct equivalent, being as dynamic as it is. The other awkwardness was the Java iterator. In python, iterators are built into the language, but unlike Java’s they don’t let you backtrack or get the current index. I was surprised to find how extensively the tests rely on this functionality. To my mind, they probe the internals of the args parser too much.
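
To make the awkwardness concrete, here is the kind of wrapper you need in python if you want Java-style backtracking and a current index over the list of arguments (a sketch for illustration, not the actual code on launchpad):

class ArgsIterator:
    # a rough stand-in for Java-style iteration, for illustration only
    def __init__(self, items):
        self.items = list(items)
        self.index = 0

    def has_next(self):
        return self.index < len(self.items)

    def next(self):
        item = self.items[self.index]
        self.index += 1
        return item

    def previous(self):
        # backtrack one step, something a plain python iterator cannot do
        self.index -= 1
        return self.items[self.index]

    def current_index(self):
        return self.index

It works, but it is extra machinery whose only purpose is to preserve the shape of the Java tests.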

By far my most interesting observation, though, is the one I want to explore more. This code is well written Java, but directly translated, it makes pretty poor python. Why is that? What, specifically, are the smells, and what are the antidote refactorings?

I will no doubt post more on what I find (with the help of my friends at GothPy, of course).

I’m having fun with this KataArgs. In my last post, I took a closer look at Bob Martin’s Java and Ruby solutions to it. Since then, we have tackled this Kata at a couple of meetings of GothPy (my coding dojo); the code is here.

Several of us did some more work on the kata individually after the meetings, and a lively discussion on our mailing list ensued. I also challenged the local Ruby user group Got.rb to have a go at it, and one person posted his solution there too.

It’s all good fun, anyway, and hopefully we’re all learning something about what we mean by “clean code” along the way.

Over Christmas I finished reading the book “Clean Code” by Robert C. Martin. I thoroughly recommend the book, which is highly practical, technical and well written. In it, Bob seeks to present the “Object Mentor school of clean code”, as he puts it, “in hideous detail”.

The book is full of code examples, clean and less clean, and detailed advice about how to transform the latter into the former. All the examples are written in Java, though, which leaves me wondering a little whether “clean code”, in the Object Mentor meaning of the word, looks the same in other languages.

In Chapter 14 of the book, there is a fully worked example of a little coding problem that I would call a code Kata. It’s a little program for parsing command line arguments. I know, there are loads of libraries that do this already. But never mind. It’s a non-trivial problem, yet small enough to code up fairly quickly. One thing that caught my attention was the footnote on page 200, just after he has presented his best Java solution to the Kata: “I recently rewrote this module in Ruby. It was 1/7th the size and had a subtly better structure.” So where is the code, Bob? What is this subtly better structure?

I had nothing better to do on Boxing Day than sit around and digest leftover-turkey-curry, so I sent a little mail to Bob asking him for the code. To my delight, I got a mail back only a few hours later, with a message that I was welcome to it, and the URL where he’d put it on github. Evidently Bob also had time on his hands on Boxing Day.

I have had a look at the Ruby code, and although my Ruby is fairly ropey, I think I can follow what it does (surely a sign of clean code?). The design is very similar to the Java version presented in the book, with a couple of finesses. (The next part of the post will make most sense if you first look at Bob’s Java version and Ruby version).

The first finesse I spotted is that the Ruby version defines the argument “schema” in a much more readable fashion. Rather than “l,p#,d*” as in the Java version, it reads:


parser = Args.expect do
  boolean "l"
  number "p"
  string "d"
end

i.e. the program expects three flags, l, p, and d, indicating a boolean, a number and a string respectively. You can do this in Ruby but not in Java, because the language allows you to pass a code block to a method invocation (the stuff between “do” and “end” is a code block, and the class “Args” has a method “expect”). I think the Ruby version is rather more readable, don’t you?
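
For comparison, here is a hypothetical python version of the same declaration. It is just a sketch to illustrate the point; the names Args and expect are made up here and it is not Bob’s code, but python’s keyword arguments give you a reasonably readable alternative to Ruby’s code blocks:

class Args:
    # hypothetical sketch: collect the schema from keyword arguments
    @classmethod
    def expect(cls, **schema):
        parser = cls()
        parser.schema = schema   # e.g. {"l": bool, "p": int, "d": str}
        return parser

parser = Args.expect(l=bool, p=int, d=str)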

The second finesse I can see is that the argument marshallers dynamically add themselves to the parser, rather than being statically declared as in the Java version. This means that if you discover a new argument type that you want to support, in the Java version you have to crack open the code for Args.java and add another branch to the “parseSchemaElement” method, as well as adding the new argument marshaller class. In the Ruby version, you just add the new class; there is no need to modify an existing one. Bob has long championed the Open-Closed principle, so I guess it’s not so surprising to see him following it 🙂

So in Args.java:


private void parseSchemaElement(String element)
    throws ArgsException {
  char elementId = element.charAt(0);
  String elementTail = element.substring(1);

  // long if/else statement to construct all the marshallers
  // cut for brevity
  [...]
  else if (elementTail.equals("#"))
    marshallers.put(elementId, new IntegerArgumentMarshaller());
  else if (elementTail.equals("*"))
  [...]

or in the Ruby code, each marshaller just tells the parser to add itself:


class NumberMarshaler
  Parser.add_declarator("number", self.name)
  [...]

in the Parser class:


def self.add_declarator(name, marshaler)
  method_text = "def #{name}(args) declare_arguments(args, #{marshaler}) end"
  Parser.module_eval(method_text)
end

def declare_arguments(args, marshaler)
  args.split(",").each {|name| @args[name] = marshaler.new}
end

You can do this in Ruby but not in Java, since in Ruby you can dynamically construct and execute arbitrary strings as code, and add methods to classes at runtime. (The string declared as “method_text” is constructed with the details of the new marshaler, then executed as Ruby code on the next line by Parser.module_eval.) This is an example of metaprogramming.
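
For what it’s worth, python can get a similar effect without building up a string of code, since you can attach functions to a class at runtime with setattr. Here is a sketch with made-up names, not a translation of Bob’s code:

class Parser:
    def __init__(self):
        self.args = {}

    @classmethod
    def add_declarator(cls, name, marshaler):
        # dynamically add a method called e.g. "number" to the Parser class
        def declarator(self, flags):
            for flag in flags.split(","):
                self.args[flag] = marshaler()
        setattr(cls, name, declarator)

class NumberMarshaler:
    pass

# the marshaler registers itself with the parser, as in the Ruby version
Parser.add_declarator("number", NumberMarshaler)

parser = Parser()
parser.number("p,q")   # parser.args now maps "p" and "q" to NumberMarshaler instances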

So it seems to me that the “subtly better structure” that Bob refers to in his footnote, is made possible by powerful language features of Ruby, such as metaprogramming and closures.

Of course my favourite programming language is Python, which also has these powerful language features. I am rather interested to see if I can come up with an equally clean solution in Python. I am also interested to see whether any hotshot Java or Ruby programmers out there can improve on Bob’s solutions. To this end, I have added a description of this Kata to the catalogue on codingdojo.org. We had a go at it at our last GothPy meeting, without any great success, although I hope we might do better at a future meeting.

So please have a go at KataArgs and see if you can write some really really clean code. Do let me and the community on codingdojo.org know how you get on!

At agile2008 I attended a session with Dan North about Behaviour Driven Development. Someone on the agile sweden mailing list was asking about it, so I decided to write up my notes here.

Most cellphone and computer software is delivered late and over budget. The biggest contributing factor to cost bloat is building the wrong thing. So what software and business people need is “a shared understanding of what done looks like”.

Test Driven Development is about design, conversations, and writing examples for a system that doesn’t yet exist. It’s not really about testing. However, once the system exists, your examples turn into tests, as a rather useful side effect.

A User Story is a promise of a conversation, and it is in that conversation that things go wrong. The customer and developer rarely agree on what “enough” and “done” look like, which leads to over- or under-engineering.

Dan suggests a format for User Story cards which aims to prevent this communication gap.

On the front of the User Story index card, you have the title and narrative. The narrative consists of a sentence in this format:

As a <stakeholder>
I want <feature>
so that <benefit>

where the <benefit> is something of value to the <stakeholder>.

On the back of the card, you have a table with three columns:

Given this context | When I do this | Then this happens

Then you have 4 or 5 rows in the table, each detailing a scenario. (If you need more than that, the story is too big and should be split.)

Dan finds that in his work, this leads to conversations about User Stories where “done” and “enough” are discussed, and defined.

User Stories should be about activities, not features. In order to check that your User Story is an activity, you should be able to do a thought experiment where you implement the story as a task to be performed by people on rollerblades with paper. You must think about it as a business process, not a piece of software.

When creating the story cards, the whole team should be involved, but it is primarily the business/end user stakeholders and business analysts who write the title and narrative on the cards. They then bring in a tester to help them write the scenarios.

Are people familiar with the V model of software testing? When it was conceived, the whole process was expected to take two years and span the whole project. Dan usually does it in two days, many times per project.

Then Dan offered to show us how to do BDD using plain JUnit. He requested a pair from the audience, so I volunteered. At this point my notes dry up, and I am working from memory, but I think the general idea is like this.

You talk about “behaviour specs” not tests. The words you use influence the way you think, and “behaviour specification” gives much better associations than “tests”.

Each behaviour specification should be named to indicate the behaviour it is specifying: not “testCustomerAccountEmpty” but rather “customerAccountShouldBeEmpty”.

In the body of the spec, you can start out by typing in the prose of one of the scenarios you have on the user story, as a comment.

// given we have a flimble containing a schmooz
// when we request the next available frooble
// then we are given a half baked frooble and the schmooz.

Then you can fill in code after the “given” comment. When you have code that does what the comment says, delete the comment. Repeat with the “when” and “then” comments.

In this way, you build up a behaviour specification that drives your development of the system. A few minutes later (hopefully) you have a system which implements the specification, and at that point your spec helpfully turns magically into a regression test which you can run. At that point you can start calling it a test if you like. But actually it is more helpful to your brain to continue to think of it as a behaviour specification. It leads to much more constructive conversations about the system.
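
The session used plain JUnit, but the same idea carries straight over to python’s unittest, for example. Here is a sketch; the flimble/schmooz/frooble names are nonsense placeholders as in Dan’s example, and the whole thing is invented for illustration:

import unittest

def next_available_frooble(flimble):
    # a minimal implementation, just enough to satisfy the spec below
    return "half baked frooble", flimble["contents"]

class FroobleDispenserBehaviour(unittest.TestCase):

    # unittest insists the name starts with "test", but the rest of the name
    # describes the behaviour rather than "testing" anything
    def test_should_give_a_half_baked_frooble_and_the_schmooz(self):
        # given we have a flimble containing a schmooz
        flimble = {"contents": "schmooz"}
        # when we request the next available frooble
        frooble, extra = next_available_frooble(flimble)
        # then we are given a half baked frooble and the schmooz
        self.assertEqual("half baked frooble", frooble)
        self.assertEqual("schmooz", extra)

if __name__ == "__main__":
    unittest.main()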

At the XP2007 conference, Geoff and I presented a workshop entitled “The coder’s dojo: Acceptance Test Driven Development in python”. (Geoff also presented the same workshop at agile2007). We had three aims with this workshop: the first was to use the meeting format of a coder’s dojo, the second was to do some coding in python, and the third was to demonstrate how you can do Acceptance Test Driven Development using TextTest. We felt the workshop went well; we had around 30 participants and we were able to do a little of everything we had set out to do.

Perhaps the most important thing was what we learnt from the experience. The workshop participants gave us some very useful feedback. One thing people said was that there were too many new ideas presented to expect as much audience participation as we did, and that instead of trying to do a Randori-style kata, we should have done it as a Prepared kata. There also seemed to be a view that the Kata we had chosen (KataTexasHoldEm) was quite a difficult one. Another very valuable piece of feedback was that we were doing Test Driven Development (TDD) with much larger steps than people were used to.

What I have done is to create a screencast in an attempt to address this feedback, and open up our workshop material to a larger audience. In it, I perform a Prepared Kata of KataMinesweeper, doing TDD with TextTest and python. Geoff and I have been developing this approach to testing for a few years now, and we think it deserves consideration by the wider agile testing community. There are important advantages and disadvantages compared to classic TDD with xUnit.
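
To give a flavour of what I mean by the approach: instead of asserting on objects with xUnit, the program under test writes plain text about what it does, and each test compares that text with a previously approved version. The few lines of python below show only the bare idea; TextTest itself does a great deal more (test organisation, filtering, reporting and so on), and the file names here are made up:

import subprocess

def check_against_golden_copy():
    # run the program under test and capture everything it prints
    result = subprocess.run(
        ["python", "minesweeper.py", "board.txt"],   # hypothetical program and input
        capture_output=True, text=True, check=True,
    )
    # compare with the previously approved ("golden") output
    with open("expected_output.txt") as expected:
        assert result.stdout == expected.read()

if __name__ == "__main__":
    check_against_golden_copy()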

I will write an article about this approach and provide links to the screencast in my next post on this blog.