Posts tagged ‘conferences’

I was in Finland recently, at the European Testing Conference. I both attended the conference and presented a workshop about “Approval testing with TextTest”. I won’t say any more about that, since Ben Linders did a brilliant write-up already that was published on InfoQ. There were several other highlights, and I wanted to just share a paragraph or so about each.

Mob Testing is what happens when your development team decides to work together on testing tasks as a Mob. I took part in a workshop where Maaret Pyhäjärvi facilitated two different mobbing exercises, one where we automated some UI tests using Selenium, and one where we practiced Test-Driven Development on the FizzBuzz kata. I have already done some Mob Programming and this felt very similar, except the focus was on developing tests rather than production code. It seems to have similar benefits – you have access to all the knowledge of everyone in the team, and you can learn things you didn’t even know to ask about. It makes pairing seem like a slow way to share good working practices.

JUnit 5 is on the horizon, and has several useful improvements over the previous version. Generally the syntax clutter is reduced, and the way you create parameterized tests has been overhauled. The most significant change, though (especially for people like me who work on developing other testing tools), seems to be that the test-running engine is being separated out so you can re-use it to run other kinds of tests. Any infrastructure that works with JUnit will then be able to run these other tests as well. In principle it opens up JUnit’s success as a platform to be re-used by other test frameworks. Thanks to Nicolai Parlog for this useful summary of the next generation of one of the most widely-used tools in the Java world.
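
To give a flavour of the reduced clutter, here is a minimal sketch of a parameterized test using the new Jupiter API (assuming the junit-jupiter-params module; the exact annotations were still being finalized at the time). The little fizzBuzz method is only there to make the example self-contained.

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.CsvSource;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class FizzBuzzTest {

        // Each CSV row becomes one invocation of the test method - no custom
        // runner or constructor plumbing as with JUnit 4's Parameterized runner.
        @ParameterizedTest
        @CsvSource({"1, 1", "3, Fizz", "5, Buzz", "15, FizzBuzz"})
        void convertsNumberToExpectedWord(int number, String expected) {
            assertEquals(expected, fizzBuzz(number));
        }

        // A tiny FizzBuzz implementation, just to keep the sketch self-contained.
        private static String fizzBuzz(int n) {
            if (n % 15 == 0) return "FizzBuzz";
            if (n % 3 == 0) return "Fizz";
            if (n % 5 == 0) return "Buzz";
            return String.valueOf(n);
        }
    }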

Joel Hynoski has worked at many of the tech giants in our industry, including Google, Twitter, Apple, and now Lyft. He spoke about some of the engineering challenges they had overcome, specifically in the area of testing. One thing I liked was their tool that detects flaky tests and puts them in ‘jail’. (A flaky test is one that sometimes passes and sometimes fails when run against the same code. They are a pain and can be a huge waste of time.) When a test is in ‘jail’, it’s no longer run in the build pipeline, so it doesn’t block new releases. Instead it gets flagged as needing maintenance. They then have an SLA that says how long a test is allowed to remain in jail before an engineer needs to look at it and fix the flakiness – a day or two, I think.

I can feel a little in awe of someone who has worked in those kinds of famous engineering organizations, working at web scale with some of the best developers in our industry. What I found most encouraging about talking to Joel was that he was very down to earth about the problems these organizations face. They still battle with legacy code, despite it often only being a few years old. They have trouble creating reliable automated tests. The developers don’t always trust the test automation. They still have production bugs and hotfixes…

Alex Schladebeck spent the first ten minutes of her presentation giving a splendid rant about the bad reputation of UI testing. To summarize (each criticism she hears about UI tests, followed by her response):

  • “UI tests give slow feedback” -> slow but valuable feedback; it doesn’t have to come after every build
  • “They need more infrastructure and machines” -> yes, deal with it
  • “They’re at the top of the test pyramid” -> they are in the pyramid! You can’t ignore them. They find different things than unit tests. Consider your context.
  • “They’re flaky” -> they’re not as bad as they used to be! Could it be that your app isn’t designed for testability? Or that your test design is poor?
  • “Small changes in the app cause lots of work in the tests” -> that happens in development work too! It also happens more if you design the tests badly.

She then went on to give some excellent advice about how to design your UI tests. It was mostly about layering your test code in different levels of abstraction, and getting a good collaboration going between developers and testing specialists.
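
To make the layering idea concrete, here is a rough sketch of my own (not Alex’s example) using Selenium WebDriver and JUnit in Java. The page URL, element ids and class names are all invented; the point is that the test at the top reads like a requirement, while the page objects underneath own the locators and clicks.

    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import static org.junit.Assert.assertEquals;

    public class LoginUiTest {
        private final WebDriver driver = new ChromeDriver(); // assumes a local chromedriver

        // Top layer: the test reads like the requirement it checks.
        @Test
        public void registeredUserSeesAGreetingAfterLoggingIn() {
            LoginPage loginPage = new LoginPage(driver);
            DashboardPage dashboard = loginPage.logInAs("alice", "correct-password");
            assertEquals("Welcome, alice", dashboard.greeting());
        }

        // Lower layer: page objects hide the locators and WebDriver calls, so a
        // small change to the UI is absorbed in one place rather than in every test.
        static class LoginPage {
            private final WebDriver driver;
            LoginPage(WebDriver driver) { this.driver = driver; }

            DashboardPage logInAs(String user, String password) {
                driver.get("https://example.test/login"); // hypothetical URL
                driver.findElement(By.id("username")).sendKeys(user);
                driver.findElement(By.id("password")).sendKeys(password);
                driver.findElement(By.id("login-button")).click();
                return new DashboardPage(driver);
            }
        }

        static class DashboardPage {
            private final WebDriver driver;
            DashboardPage(WebDriver driver) { this.driver = driver; }

            String greeting() {
                return driver.findElement(By.cssSelector(".greeting")).getText();
            }
        }
    }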

Conferences are about meeting people, and the organizers of this conference had very deliberately scheduled sessions to encourage this. We had a ‘speed dating’ session where you talk to about 8 random people for five minutes each. We had a ‘lean coffee’ session, where all the speakers were each asked to facilitate a discussion table. I thought this worked particularly well as a way to find people with similar interests and get them talking about their experiences. The hands-on workshops all ran in the same time slot, so you had to join one rather than just attending talks the whole time. There was also an open space, scheduled so it would not clash with any other kind of session. I thought all this together made for a pretty welcoming conference where you were bound to get to know new people.

Overall I had a really good time at this conference and I’d recommend it to both testers and developers with a strong quality focus.

I forget exactly when, but I think it was 2008 or 2009. Anyway, I was at a software conference, chatting with a developer after one of the sessions about cool new technologies and stuff. I don’t remember what hot new thing it was we talked about; all I remember is the shoes he was wearing!

credit: flickr, Steve Hodgson

This is rather an unusual style of running shoe. At the time, I’d never seen any like this before, and I was intrigued. It turns out that this developer I was talking to was, like me, something of a serial early adopter; it’s just that shiny new programming tools and technologies weren’t the only things he picked up.

At the time, I was running in a pair of shoes with thick heel padding that the shop assistant had assured me would correct my bad posture and foot “pronation”. This guy’s “five finger” shoes had none of that; in fact, quite the opposite. I was looking at disruptive running technology.

The conversation quickly switched from the latest programming tools and frameworks, as this guy explained the essential benefits of his shoes:

  • Lightweight
  • Your toes can spread out, giving better push-off from the ground
  • Thin sole – you adapt your stride to the surface because you can feel it
  • No heel padding means you strike the ground with the whole foot simultaneously.

And of course, most importantly for a technology enthusiast:

  • People stare at your feet!

Following this conversation, a little googling about, and watching the odd video by a “running style expert”, I became convinced. To be honest, it wasn’t much of a contest – shiny new technology, being in with the cool kids – I bought some new shoes, with toes and everything!

It took me several weeks to get used to them. You have to start with short distances, build up some new muscles in your foot, and learn to strike the ground with the whole foot at once. In my old shoes, I had a tendency to strike heel-first because of the huge wad of padding on the sole, but in these minimal shoes, that just hurt. It was useful feedback: the whole-foot-at-once gait is supposed to be better for your knees.

After a few weeks of running shorter distances, slower than before, I gradually found my stride, and really started to enjoy running my usual 7km circuit of the local forest in my eye-catching five-fingered shoes.

Unfortunately it didn’t last. Maybe two or three months later something happened. I think the technical term for it is Swedish Autumn. It turns out that forest tracks gain a surprising number of cold, muddy puddles at that time of year! Shoes with a very thin sole, which isolate and surround each individual toe in waterlogged fabric, mean absolutely freezing feet 🙁

So I’m back on the internet, looking for new, shiny technology to fix this problem, and of course, I buy some new shoes. This time I got a pair of minimalist shoes in waterproof Gore-Tex, with basically all the features of my old shoes, minus the individual toes.


I was back out on the forest track, faster than ever, with dry, comfortable feet – win! The only problem was, people were no longer staring at my eye-catching toes. So you can’t have it all!

So this is normally a blog about programming. What’s going on?

Test Driven Development as a Disruptive Technology

I’ve been thinking about this, and it seems to me that, as with running shoes that have toes, TDD is something of a disruptive technology. Just as I haven’t seen the majority of runners switch to shoes with toes, I also haven’t seen the majority of developers using TDD yet. Neither seems to have crossed Geoffrey Moore’s “chasm”.

Geoffrey Moore's technology adoption distribution showing the chasm

Lots of developers write unit tests, but I think that’s slightly different. I’m talking about TDD where developers primarily use tests to inform and direct design decisions, and rely on them for minute-by-minute feedback as they work. In 2009, Kent Beck made an observation in his blog that “the data suggests that there are at most a few thousand Java programmers actively practicing TDD”. I don’t think the situation is radically different today.

So can we learn anything about TDD from the story about running shoes? A couple of points I find relevant:

  • Early adopters will try a new technology based on really very flimsy evidence, and will persevere with it, even if it slows them down in the short term.
  • Early adopters like to look cool and stick out.

You may think that last point is just vanity, but actually being a talking point helps drive adoption, primarily amongst other, similarly minded technology geeks.

I remember a while back I was at work, writing some code, when a guy from another team came over to ask me something. He was about to leave when he did a double-take and stared at my screen for a moment. “Are you doing TDD? I’ve never seen anyone actually do that in production code. Do you mind if I watch?” So, you see: eye-catching shiny new technology, and I’m one of the cool kids, about to be emulated by the guy in the next team. 🙂

The other part of this story, is of course the compromise I made when the cool technology met the reality of a muddy Swedish forest track. The toes went, but the shoe I ended up with is still radically different from the one I had before. I think that for TDD to reach the mainstream, it may need to become a little less extreme, a little more practical – but without losing the essential benefits.

What are the essential benefits of TDD? Well, I would say something like this:

  • Design: useful feedback, pushing you away from long methods and tightly-coupled classes, because they’re hard to test.
  • Refactoring: quickly detecting regression when you make a mistake
  • Productivity: helping you to manage complexity and work incrementally

So is it possible to get these things in another way? Without driving development minute-by-minute with tests? Well, that’s probably the subject of another blog post…

You might be interested to watch a video of my recent keynote speech at Europython, where I told this story.

I was recently at the Software Craftsmanship Conference at Bletchley Park in the UK. This is a one-day conference for software developers, attended by around 150 programmers. All proceeds from the event go to support Bletchley Park, which is of historical interest to programmers in particular – it is the site where Alan Turing and others cracked the Enigma code in the Second World War. It was the fifth time this conference has been run, and the first time I attended. This is a short experience report.

In the morning I ran a workshop titled “Outside-In, with or without Mocks?”. We were about 50 people in the Ballroom in the Mansion, a very grand room, and it was really great to see so many people working in pairs at laptops, puzzling over some code and tests and how to do Test-Driven Development. We were looking at a code kata I’ve designed called “Train Reservation”. It’s in no way a beginner exercise, and the crowd at Bletchley seemed to get on with it rather well on the whole. I’m just sorry I didn’t get round to talking to each pair very often; with 24 pairs I only had a couple of conversations with each during the 2-hour session!

I set up the exercise more or less to force people to use some kind of mock, fake or stub to replace the Booking Reference Service and the Train Data Service, because I am interested in how different people use these. I’ve observed that some programmers avoid using test doubles whenever possible, while others use them frequently. I’ve also observed that some people prefer to work outside-in, starting with a guiding test, while others prefer to start with the business rules at the heart of the problem and work outwards from there. At this particular workshop, there were all sorts of approaches being used. Some started with the guiding test and stubbed the services. Others started with the business logic around the seat selection rules. Different approaches, as I had hoped! Overall I feel encouraged that this exercise is a useful one, and people seemed to get on better with it than the last time I ran it, at XP2013. It’s still rather too big a problem to tackle in a half-day workshop though. I’ll be updating it some more before I run it again, although I don’t have any fixed plans for when that will be yet.
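
If you haven’t met the exercise, here is a small sketch in Java of the kind of test double I mean. The BookingReferenceService interface, the stub and the TicketOffice class are names I’ve invented for this illustration (the kata deliberately leaves the design open); the point is that a hand-rolled stub lets the test run without the real HTTP services.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class ReservationTest {

        // Hypothetical interface standing in for the real Booking Reference Service.
        interface BookingReferenceService {
            String nextReference();
        }

        // Hand-rolled stub: always hands out the same reference, no HTTP involved.
        static class StubBookingReferenceService implements BookingReferenceService {
            public String nextReference() {
                return "75bcd15";
            }
        }

        // Hypothetical class under test; the real kata leaves this design up to you.
        static class TicketOffice {
            private final BookingReferenceService referenceService;

            TicketOffice(BookingReferenceService referenceService) {
                this.referenceService = referenceService;
            }

            String reserve(String trainId, int seatCount) {
                // A real implementation would also consult the Train Data Service
                // and apply the seat selection rules; that part is elided here.
                return trainId + ":" + referenceService.nextReference();
            }
        }

        @Test
        public void reservationCarriesTheReferenceFromTheService() {
            TicketOffice office = new TicketOffice(new StubBookingReferenceService());
            assertEquals("express_2000:75bcd15", office.reserve("express_2000", 4));
        }
    }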

In the afternoon, I went to a session by Ivan Moore and Mike Hill, “Inheritance to Composition”. They gave us a demo of this particular refactoring using a very simple codebase, before launching us into a much more complex one – Fitnesse (starting from the branch “revised-ResponderFactory”). The idea was to take some classes that were using Inheritance – specifically the Template Method pattern – and convert them to instead use Composition – specifically the Strategy pattern. They also helpfully provided us with a sheet of instructions – 6 steps to complete the refactoring with minimal risk and code breakage.
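
In case you haven’t met this refactoring before, here is the general shape of it in a made-up Java miniature (these are not the actual Fitnesse classes):

    // Before: Template Method. The abstract base class fixes the skeleton of the
    // algorithm, and subclasses inherit it and override the step that varies.
    abstract class Responder {
        String respond(String request) {
            return header() + body(request);
        }
        String header() { return "HTTP/1.1 200 OK\n\n"; }
        abstract String body(String request);
    }

    class EchoResponder extends Responder {
        String body(String request) { return request; }
    }

    // After: Composition. The varying step is pulled out into a Strategy object
    // that is passed in, so behaviour is chosen at construction time rather than
    // by subclassing.
    interface BodyStrategy {
        String body(String request);
    }

    class ComposedResponder {
        private final BodyStrategy strategy;

        ComposedResponder(BodyStrategy strategy) { this.strategy = strategy; }

        String respond(String request) {
            return "HTTP/1.1 200 OK\n\n" + strategy.body(request);
        }
    }

A ComposedResponder given an “echo” strategy behaves just as the EchoResponder subclass did, but the behaviour is now plugged in rather than inherited, which is what the six steps work towards.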

My pair and I got on fairly well with the refactoring, and by the end of the session we were on step 5 with the goal in sight. The experience was of using Eclipse’s refactoring tools extensively, and relying a great deal on the compiler. The tests we had to lean on took a minute and a half to run, and actually, the tests for the classes we were working on were more mini-integration tests than unit tests as such. It meant there were relatively few updates to the tests as we did the refactoring, but the feedback loop was slow. I thought that was really interesting, and was wondering how the experience of the refactoring would change in a language like Python. There you don’t have a compiler, or very much help from refactoring tools.

So after the workshop, I set about trying to construct a similar problem in Python. Perhaps understandably, I didn’t want to translate the whole of Fitnesse to Python (!), so I tried to re-write only the elements of it essential to this exercise. You can have a look at what I’ve come up with in my new repo “WikiSearchKata”. I’m still working on preparing this properly as an exercise (the instructions are still rather thin), but I plan to try it out at a GothPy meeting sometime soon.

After the conference sessions had ended, we were treated to a guided tour of the National Museum of Computing, which was, for me, the highlight of the day! Our enthusiastic guide showed us all sorts of ancient computers and storage devices and punch cards… a few I recognized from my childhood. My dad used to bring home old punch cards, and my mum used to write her shopping lists on them when she went to the supermarket. They had a 48K ZX Spectrum with rubber keys – just the same as the one I wrote my first program on! They had a Cray supercomputer similar to the one I remember seeing once when I visited my dad’s work as a child. It’s a similar size to (the outside view of) a Tardis, with a big red button on the front. I don’t think we found out what the red button does, but the guide did say we probably have more computing power in the smartphone in our pocket! I found the changes in storage capacity actually even more impressive. They had these washing-machine-sized boxes and dinner-plate-sized metal disks that together made a hard drive. I think it held something like 4K.

The highlight of the tour was the WITCH computer – the oldest working computer in the world. It was brilliant! You could actually see what it was doing while it read in a paper tape punched with holes – the program – and loaded values into registers and did calculations. It made this fantastic whirring noise as it ran, and had all these little whizzy flashing lights. It works in decimal rather than binary, so each number is represented by a little “dekatron” – a glass tube with a red light inside that moves between positions 0-9 in a circle. So you can read which number is in a register by looking at the position of each light in the array. They also had this little button you could press to make it step through the program one instruction at a time. I got to press it, and single-step a computer from 1951!

Compared with other conferences I’ve been to, this one was rather short, just one day, and with rather long sessions – half or whole day. It was hard work coding and facilitating all day, but in general the people and the coding exercises were very interesting. A second day would have made the trip more worthwhile for me. In any case, my thanks to Jon Dickinson for organizing it.

Last week I was in Oxford at “Iverson College”, which is a conference on the topic of Array Language Programming. There were about 25 programmers there, most of whom are expert in one or more of APL, J, K, or Q. It’s not my usual comfort zone, put it that way! I’m fairly competent with a number of programming languages, notably Python and Java, but nothing I know is really much like these array languages. It’s been a huge culture shock, but in a good way, I think.

My main discoveries are that Array Programming is different again from Object Oriented Programming and Functional Programming, (although it has a lot in common with functional programming), and that this community contains some exceptional programmers. The total number of array language programmers is however extremely small and their work seems to be pretty much unknown to the wider programming community.

Array Programming Languages
I mentioned four languages before: APL, J, K and Q. They are similar to each other, kind of like Ruby and Python are similar to each other. I’ve gone through an introductory training in each language this week, largely given by the language designers themselves. I’d like to relate a little of what I’ve discovered about them.

APL
This is the oldest of the array languages, invented by Ken Iverson in the 1960s. It’s notorious for using an alphabet of funny-looking symbols to represent the built-in functions. You can try it out at http://tryapl.org – an interactive REPL (Read-Evaluate-Print-Loop) where you can put in snippets of code and see what the symbols do.

I thought at first that APL looked really intimidating and unnecessarily weird. Now, having got to know it a little, I can see the benefits of the little symbols. They make the code really concise and unambiguous, and it doesn’t take long to learn their names. Once you can pronounce each symbol in your head as you read the code, it’s not much different from writing out the names in full in the editor.

The variant of APL that most of the conference attendees use is produced by the company Dyalog. I first met the CTO, Morten Kromberg, at an XP conference in 2006. He’s shown me some APL before, but this time I really got a chance to sit down with him and look at how he writes code. Dyalog APL has a powerful IDE including a REPL, where Morten showed me how he plays around with data and code in order to come up with some useful APL expressions. When he’s got something working, he transfers code from the REPL into a file, to make it re-usable and shareable. It’s a familiar way of working to me: many Python programmers code this way, flipping between the REPL and a script file. It was a real pleasure to code with Morten – he is an extremely skilled programmer. Dyalog APL looks nice too: it has a fully-fledged IDE and interfaces with .NET, Excel spreadsheets, ASP.NET and more. Basically, it would fit nicely into the technology stack of many IT departments.

J
This was Ken Iverson’s next language, created together with Roger Hui, who now continues development of it. J is similar to APL in many ways, but is open source, and uses only ASCII characters. They’ve made an effort to make it open and less intimidating to newcomers, and probably for that reason, it’s the one I chose to download and try to learn before the conference.

I met Roger at breakfast on the first day of the conference, knowing nothing about who he was; he just said he was a programmer. I confessed that I’d downloaded J and made some joke about hoping I’d get on better with it than Ron Jeffries. (Ron wrote articles in his blog about his efforts to learn J, and later gave up, finding it too hard!) Roger genuinely didn’t know who Ron Jeffries is, although he did know of the agile manifesto. He was very kind and keen to help me understand J, though (and Ron, if he wants!)

Despite my head start with J, by the end of the conference I found APL code easier to grasp – J seems more extreme to me. Roger calls J “executable mathematical notation”, and I’ve always been a bit more of an engineer than a mathematician.

K
K was invented by Arthur Whitney, who was also at the conference. I didn’t really get a feel for how the language works, other than that it’s extremely terse.

Arthur gave a talk at the conference about his new project, KOS. He and two other guys are writing an operating system pretty much from scratch, using K, C, and bits of the Linux kernel (although they’d like to remove those). He showed us how you write applications for this new OS in K, by demonstrating building a text editor. He began from the alpha version of the OS with just a window manager, and a plain new window canvas that didn’t respond to any keyboard or mouse events.

Arthur added a line of K code to let you enter text into the window – a listener to key presses. Then a line of code to move the caret around with the arrow keys. Then a line of code for changing the font size. Then scrolling. Code to handle Ctrl-C and Ctrl-V to copy and paste text… In the space of less than half an hour, he had an equivalent of Notepad working. No compilation, no reloading. And the code was… phenomenal. You can see a version of it here. I can’t really read it; it looks mostly like line noise to me. All his variable and function names are one or two characters, and K just seems pathologically terse.

I raised my hand and asked Arthur if he thought his code would be more readable if he used longer variable names? He thought for a moment, looking surprised and a little bewildered by the question. Then shook his head and said slowly “No. no. I don’t think I need that. I want to see all my code on the screen at once”. Needless to say, that was a big culture shock moment for me!

The size of the codebase is something all the array language programmers seem really concerned about, even if Arthur’s code is considered extreme even in that community. One thing I did later that week was to take a piece of code from Robert C. Martin’s book “Clean Code” (Args.java), as an example of clean Java code, and show it to the group. There were general exclamations of “aargghh! that hurts my eyes!”, but after a little while, as I explained the structure of the code, they seemed to appreciate it a little better. What they did say that intrigued me, though, was that they automatically scanned the page looking for the symbols – the >, !, = signs – the parts that do something, as they put it. The other text, they said, obscures the structure and distracts the eye. Yes, that’s right. Having names for the functions and variables makes the code less readable.

KDB+ and Q
KDB+ is a very small and fast commercial database largely used by financial institutions, also originally created by Arthur Whitney. Q is a kind of domain-specific language built on top of K that you use to query the data in a KDB+ instance.

I sat down with Attila Vrabecz, an experienced Q and KDB+ programmer, and we coded together for a couple of hours. We tackled a problem I’d previously coded with Morten in APL, to help me see what was different. There were many similarities – the workflow was the same for example – experimenting in the REPL before transferring the code into a more permanent, reusable form. I noticed Q has many more English words in it, fewer strange symbols, and Attila made more use of library functions than Morten did. It seems Q is designed to be approachable for a former SQL programmer, although once you scratch the surface, it’s much more like APL than SQL.

Test-Driven Development
I gave a talk at the conference about TDD. My aim was to provoke discussion, and I argued that writing automated tests using TDD is the best approach. I was definitely successful at sparking a discussion! Actually, it didn’t seem that the idea that programmers should write automated tests for their code was all that controversial, especially amongst the more seasoned developers present. We got way more hung up on how large a chunk of code counts as a “unit” for your unit test, and what clean code looks like in an array language. To my eye, their units are large and their clean code is terse.

A challenge for the future
Dave Thomas, former lead developer for the Eclipse project, and general software visionary, is also an APL and K programmer. He flew in for just one day of the conference, and his talk functioned as the keynote address for the week – it was a clear challenge to the Array Languages community.

Dave painted a vision for the future where people will be living in a sea of big data they don’t understand, and lack adequate tools to query. He sees a great opportunity for array languages, which are generally very good at handling large amounts of data.

He ended his talk with an ambitious challenge to this community to get its act together, start being seen as a credible alternative, and grow. I could only applaud and agree – I found his advice insightful, and I hope the array languages community will do as he suggests.

I spent one very pleasant evening chatting with a woman who is about to embark on her PhD in atmospheric science. She’s hoping to use array languages to help her create software models that will execute quickly on huge arrays of multi-dimensional climate data. Her work sounds fascinating, and I hope it’s a sign of array languages starting to be used beyond their traditional niche in finance.

So I’m leaving the conference carrying a huge tome entitled “A complete introduction to Dyalog APL”, some pieces of code I’ve written, and good intentions to study further. I do find it fascinating that even with the little I know of it, APL allows me to think about and solve a problem differently than I do in Python. I anticipate I’ll find plenty of people in the Array Languages community willing to help me if I do continue to try to learn it. They’re an opinionated, quirky, mature, gentle, yet small bunch of extremely skilled programmers, and I’m glad to have met and coded with them.

I’ve been working on this Kata “Gilded Rose” at a few different coding dojos lately. There is even a video of a session I did at the “Tampere Goes Agile” conference recently. In the video, you can see me talking about my Principles of Agile Test Automation, which I have just written about, and updated in my last blog post.

I think these test automation principles are useful to think about when you’re doing the Gilded Rose kata. The basic plot of the Kata is that you’ve just been hired to look after an existing system, and the customer wants a new feature. Having a look at the code, you can see you’re going to want to refactor it a little before adding the new feature, and before you do that, you’re going to want some automated tests.

So the first part of the Kata is to add automated tests to the existing code. You’ve got a requirements document the customer has given you, and you can use it to identify test cases. You’ve also got the code which you can read and execute and work out what it does. The customer is happily using the code in production right now, so you can assume that the behaviour it has is the behaviour they want to keep, whatever it says in the requirements document. (hint!)

Warning – spoilers lie ahead! You should probably try the Gilded Rose kata for yourself before reading on!

When I’ve done this exercise with various groups, I’ve spent a lot of time discussing with people how to make their test cases really readable and express the requirements clearly, while at the same time being useful as regression protection when refactoring the code later.

When you design a test suite you have two main aims: to help you understand what the code should do (and what it does now), and to protect against regression failures when you update it. It can be a bit tricky to do both with the same test suite. If you focus solely on describing the requirements in an executable way, you tend to miss edge cases and leave gaps in the regression protection. If you focus only on regression protection, you’ll spend time analysing the edge cases and measuring code coverage to see how well you’re doing, but the test cases can become quite hard to read and understand.

You can see for yourself by comparing this test case by Bobby Johnson with this text-based approval test. (It was written by several people at a GothPy meeting). Bobby’s test case is extremely readable and expresses the requirements clearly. He’s done pretty well on the edge cases, but I think he’s missing one or two*. With the text-based approval tests, it’s not so easy to understand what the underlying business rules are, although the regression protection is very good.
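
To make the contrast concrete, here is roughly what a couple of requirement-style test cases look like, assuming the common Java translation of the kata (a GildedRose class that wraps an array of Items and has an updateQuality method). The second test is the kind of edge case such suites easily miss; it covers the rule referred to in the footnote below.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class GildedRoseRequirementsTest {

        // Requirement-style: the test name and body spell out one business rule,
        // namely that "Aged Brie" actually increases in quality as it gets older.
        @Test
        public void agedBrieIncreasesInQualityAsItGetsOlder() {
            Item brie = new Item("Aged Brie", 5, 10);
            new GildedRose(new Item[] { brie }).updateQuality();
            assertEquals(11, brie.quality);
            assertEquals(4, brie.sellIn);
        }

        // An edge case that requirement-style suites easily miss: quality is capped
        // at 50, even for backstage passes close to the concert date.
        @Test
        public void backstagePassQualityNeverRisesAbove50() {
            Item pass = new Item("Backstage passes to a TAFKAL80ETC concert", 3, 49);
            new GildedRose(new Item[] { pass }).updateQuality();
            assertEquals(50, pass.quality);
        }
    }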

When I do this kata with a group, we spend some time discussing the various test cases we’ve come up with, and showing them on the projector. When we did this last week at the Booster Conference, people commented that showing these different test cases had given them a better understanding of “readability” and “regression protection”, and many went on to improve their test suites.

Once you’re reasonably happy with your test suite, the next task is to do the refactoring and add the new feature. How useful are your test cases for regression protection? It’s very easy to make refactoring mistakes in this kata, and you will be testing your tests! You may discover while refactoring that there are more test cases that you want to add. Version control can be pretty useful, so you can run the new test cases against the original code.

There’s also an interesting restriction on your refactoring options – the “Item” class is owned by a nasty-sounding goblin and he doesn’t want you to change his code, so if you do, you have to be prepared for some serious consequences! When comparing refactored solutions at the end of the dojo, this is often an interesting discussion point – did you change the Item class? Is your new design so great that you’re prepared to argue with the goblin for it?!

I haven’t tried this, but I would actually like to try running the text-based approval test against all the refactored solutions at the end of the coding dojo, as input to the retrospective. I think this test covers all the edge cases very well, and would reveal any refactoring mistakes that were not caught by the tests people had developed themselves. That would be interesting feedback to have!

If you haven’t tried the Gilded Rose kata yourself, I do recommend it for practicing writing good test cases. I’d be happy to get a pull request from you if you want to translate the exercise into your favourite programming language, or you can do it in the original C#, as Bobby suggests.

If you’re interested in taking part in a coding dojo with me, I’ll be at several conferences later this year: ACCU in Bristol, XP2013 in Vienna and Test Automation Day in the Netherlands.

* I believe he’s missing a check that the quality of backstage passes doesn’t increase past 50