Archive for April, 2012

I’ve heard Joseph Wilk speak before, so I was delighted when he accepted our invitation to come to Scandinavian Developer Conference and share some ideas around “Acceptance testing in the land of the Startup”. In this post I aim to summarize my understanding of what Joseph said. I think what he is doing at Songkick is really interesting and innovative, and is applicable beyond that particular environment.

Introduction

Joseph began by introducing what he meant by “Startup” using Eric Ries’ definition:

“a human institution designed to create new products and services under conditions of extreme uncertainty”

“Acceptance testing” is usually about checking the product meets the previously agreed requirements – but if everything is changing all the time, that is not as simple as it sounds. Joseph’s acceptance tests reflect what “done” looks like in conditions of extreme uncertainty.

Joseph talked about the Cynefin model (by Dave Snowden) and how working in a startup feels like you’re in chaos a lot of the time, although parts of the business may be merely complicated or even simple. An effective strategy in the “chaos” part of the Cynefin model is “Act, Sense, Respond”, which he related to the lean startup cycle of “Build, Measure, Learn”.

For their software product, the “Build” part of that cycle often takes more time than the other parts, and the work usually needs to be coordinated between several people. During the “Build” phase you can use automated tests to help with communication and feedback. In the “Measure” phase, you’re primarily looking for feedback from the actual end users, rather than from automated tests.

The rest of the talk was largely about how Songkick use automated tests during the “Build” phase, and how they do acceptance testing with end users in the “Measure” phase.

The startup

The software Joseph is working on is for the startup Songkick, which is devoted to live music. They are about 20-30 people, and the majority are programmers. They have two dedicated UX specialists and two QA specialists. Most people are multi-skilled, care deeply about the quality of the product, and are not afraid to learn how to help outside of their speciality.

Songkick holds three principles dear:

  • Embrace Change
  • Usability
  • Testing

The rest of his talk was about the concrete practices they use around this last principle – Testing. Joseph stressed that he was reporting what they had found to work, and not holding up any “best practice” or industry gold standard.

Features

At Songkick, they use Behaviour Driven Development. They are continually building and improving a “ubiquitous language” to describe their system, by writing Features and Scenarios in a semi-formal syntax. The tool Cucumber is used to turn these features and scenarios into Executable Specifications (aka Automated Tests).

Joseph gave a short intro to how Cucumber works – take a look at this site if you’re not familiar with it. In short, Cucumber lets you define many “Features”, each of which defines a number of “Scenarios” that exemplify the desired feature behaviour. It requires you to use a particular syntax called “Gherkin”. The gist of it is that you use the following formats when writing features and scenarios:

Feature:
“In order to… I want… So that…”

Scenario:
“Given… When… Then…”

If you put in some extra work and also write “Step Definitions”, your scenarios become executable, and when run they automatically check that the application behaves as expected.
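
As a rough sketch of what that looks like (my example, not Songkick’s code – I’m assuming a Ruby Cucumber setup with Capybara driving the application and RSpec-style assertions), a couple of step definitions might be:

  # features/step_definitions/signin_steps.rb
  # Each step definition binds a regular expression to a block of Ruby code.
  # When a scenario step matches the expression, Cucumber runs the block.
  Given /^I am on the login page$/ do
    visit "/login"                          # Capybara: open the page
  end

  When /^I sign in as "(.*)"$/ do |email|
    fill_in "Email", :with => email         # Capybara: fill in the form
    fill_in "Password", :with => "secret"
    click_button "Sign in"
  end

  Then /^I should see "(.*)"$/ do |text|
    page.should have_content(text)          # assert on the resulting page
  end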

The primary purpose of the Cucumber specifications is to promote communication within the team: the words are carefully chosen to have clear meanings related to the domain of the system you’re building.

Hypotheses

In Lean Startup you don’t talk so much about building a “Feature”, more about having a “Hypothesis”. As in “if we add a pink button here, do we get more traffic to the site?”

Joseph showed the following example of a Feature with a Hypothesis:

Facebook signup
In order to increase signup
I want visitors to sign up via Facebook
So that we see a 10% increase in signups

That last line is the measurable part of the hypothesis; it is what makes this more than just a normal Cucumber Feature. When you’ve delivered a system that has this feature, it should lead to 10% more signups. If it doesn’t, the system might be modified to remove this feature. Features written in this form (In order to… I want… So that…) are prioritized in the product backlog.

When a developer is ready to start working on a feature, she calls a meeting to discuss it. At the meeting there should be representatives from four different job roles: a developer, tester, business analyst and usability expert. Together they discuss what the feature is, the details of how it should work, and perhaps create some wireframes or UI mockups. They will also brainstorm perhaps 7-8 scenarios, such as:

  • successful signup via facebook
  • failed signup via facebook
  • facebook changes their API and no one can sign in any more

After the meeting, the developer starts working on the code to implement the feature, bearing in mind the discussion and the scenarios. The people in the other job roles will also get involved in development, and in assessing whether the feature is ready for deployment. Some or all of the scenarios they have come up with will be automated as executable with Cucumber.
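
As an illustration (this is my own sketch of how such a scenario might look, not one of Songkick’s actual scenarios), the first of those could end up written in Gherkin something like this:

  Scenario: successful signup via facebook
    Given I am a visitor without a Songkick account
    When I choose to sign up via Facebook
    And I authorize the application on Facebook
    Then I should have a Songkick account linked to my Facebook profile
    And I should see a welcome message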

There is an overhead to automating scenarios with Cucumber: implementing “Step Definitions” takes time, and afterwards the scenarios are relatively slow to execute. Joseph said that, more and more, they were only automating a few scenarios for each feature – often just one happy path and one sad path. The other scenarios would perhaps be used in unit tests, or in exploratory testing. The most important benefit of writing many scenarios was to discover issues with a feature before implementation.

All their Cucumber features and scenarios are published in a searchable web interface called “Relish” which he said encouraged developers to use more business-friendly language, since they knew non-programmers would look there. Joseph said features containing incomprehensible technobabble like “Given the asynchronous message buffer queue has been emptied” (!) do appear occasionally, but don’t last long. Their ubiquitous language is highly visible and in daily use.

Actual Acceptance Tests

Songkick uses a Continuous Delivery approach, where anyone in the company can deploy the latest code at the push of a button. They have invested a lot of effort to make it all automated, so that when anyone checks in code changes, Jenkins builds the code, runs all the tests (including the Cucumber specs), and allows anyone to deploy any successful build. He said deployment was so easy even a dog could do it (and on one occasion, had!).

Joseph explained that many features were “turned off” in production for the few days it took to build them. Only once the whole feature was implemented would they activate the code, and perhaps at first only for a subset of all the visitors to the site. In this way you can build a feature that takes a few days to implement, and still release the new code every few hours. The half-built feature is being deployed; it’s just switched off.
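
Joseph didn’t go into the mechanics, but the simplest version of such a switch is just a flag checked in the code. A minimal sketch – the flag names, percentages and helpers are all invented for illustration:

  # A hypothetical feature switch: the code for the half-built feature is
  # deployed, but only runs when the flag is on, and then perhaps only for
  # a percentage of visitors.
  FEATURE_FLAGS = {
    :facebook_signup => { :enabled => false, :percentage => 10 }
  }

  def feature_enabled?(name, user)
    flag = FEATURE_FLAGS[name]
    return false unless flag && flag[:enabled]
    # The same visitor always gets the same answer, because the decision
    # is based on their id rather than a random number.
    (user.id % 100) < flag[:percentage]
  end

  # Somewhere in a (Rails-style) view or controller:
  if feature_enabled?(:facebook_signup, current_user)
    render "signup/facebook_button"
  end

A real feature-flag system would normally keep the flags in a database or configuration service, so they can be changed without another deployment.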

The reason for all this effort to get code out of the door is that a feature can’t be considered Done until its hypothesis has been validated. As Joshua Kerievsky said:

“A story isn’t done until it’s being used by real users in production and has been validated to be a useful part of a product”

By deploying often, you get that confirmation or rejection as soon as possible, so you can evaluate whether the feature was worth building. “Build, Measure, Learn”. The “Measure” part happens after an actual customer has got hold of the code and started using it. That is where the real acceptance testing happens – not in the Cucumber scenarios.

So that’s the way they work today, but it hasn’t always been quite like that. Joseph also talked about why they have arrived at this process.

The build time issue

The Cucumber scenarios are run at every checkin, and Joseph said that at one point it took 4 hours to run them all. This was a serious problem, delaying feedback and slowing the continuous deployment process to a crawl.

Their first approach was to use their technical skills to divide up the tests and run them in parallel on the cloud. The build time went down from 4 hours to just 16 minutes, but at a cost of about $7000 per month in cloud servers. That proved too expensive to be sustainable, so they reconsidered.
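
The splitting itself doesn’t have to be sophisticated; a naive sketch (mine, not Songkick’s actual setup) just deals the feature files out across N workers and lets each machine run only its share:

  # split_features.rb -- print the feature files assigned to one worker.
  # Usage: ruby split_features.rb WORKER_INDEX TOTAL_WORKERS
  worker = ARGV[0].to_i
  total  = ARGV[1].to_i

  files = Dir.glob("features/**/*.feature").sort
  slice = files.select.with_index { |_, i| i % total == worker }

  # Each worker then runs only its slice, for example:
  #   cucumber `ruby split_features.rb 3 20`
  puts slice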

The automated tests are there to give confidence that the system works before you deploy. They realized that some of the tests gave more confidence than others, so they took something like 60% of them out of the main build and ran these less valuable tests much less frequently. This took them off the critical path to deployment, and enabled more frequent releases. So far this strategy has led to only insignificant problems in production.
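
One way to arrange that kind of split (my sketch – not necessarily how Songkick did it) is with Cucumber tags: mark the scenarios that give the most confidence and run only those on the path to deployment, leaving the rest for a slower, less frequent build:

  @critical
  Scenario: successful signup via facebook
    Given I am a visitor without a Songkick account
    When I choose to sign up via Facebook
    Then I should have a Songkick account

  # On the deployment pipeline, run only the high-confidence scenarios:
  #   cucumber --tags @critical
  # In a nightly (or otherwise infrequent) build, run everything else:
  #   cucumber --tags ~@critical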

They also re-evaluated their policy for which scenarios to automate in Cucumber, and started putting much more emphasis on unit tests. When you have comprehensive Cucumber tests, it’s easy to have lots of confidence that the system works, and not bother with unit tests. Joseph said they were missing the design benefits of unit testing though – unit tests force your code to be decomposed into units! These days they write fewer Cucumber scenarios, and more unit tests.
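
To illustrate the contrast (my own example, not Songkick’s code): a Cucumber scenario drives the whole application from the outside, whereas a unit test like the one below can only be written if there is a small object that stands on its own – which is exactly the design pressure Joseph was talking about:

  require "test/unit"

  # A hypothetical little object, small enough to test in isolation:
  # did we get the 10% increase in signups the hypothesis asked for?
  class SignupGoal
    def initialize(baseline, current)
      @baseline, @current = baseline, current
    end

    def met?
      # Integer arithmetic avoids floating point surprises.
      @current * 100 >= @baseline * 110
    end
  end

  class SignupGoalTest < Test::Unit::TestCase
    def test_goal_met_when_signups_grow_by_ten_percent
      assert SignupGoal.new(100, 110).met?
    end

    def test_goal_not_met_when_signups_are_flat
      assert !SignupGoal.new(100, 100).met?
    end
  end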

Duplicating effort with QA

In addition to the automated Cucumber scenarios, the QA people do manual testing before deployment. At one point the developers were surprised to learn that maybe 60-70% of these manual tests were covering functionality that was already well covered by Cucumber specs. They realized that the QA people had little confidence in the automated tests.

Once they had noticed the problem, they began to include the QA people more in the scenario writing process, so they could learn what the tests did and how they worked. This gave QA the confidence to remove some of the manual tests, so now they only test the most crucial functionality manually before deployment.

Metrics as feedback

Joseph invented a tool called “Limited Red” that records metrics from every test run. It shows the failure rate of each scenario, and correlates it with changes to the corresponding feature file, using the Git log. Using this data, he can plot graphs that show each feature, how many times it has failed, and how often it has been edited.

For example, he might find one of the features has failed 16 times, while the code in the feature was only changed 4 times. This could indicate the feature is testing an area of the code that is poorly implemented – the code is broken more often than the test is invalid. This gives developers feedback that they can act on to improve the quality of both the code and the Cucumber scenarios.
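
Purely to illustrate the idea (this is not Limited Red itself, and the failure counts here are assumed to come from your own CI records), a crude version of that ratio could be computed like this:

  # Crude sketch: failures per feature versus how often the feature file
  # has been edited, according to the git history.
  failures = {
    "features/facebook_signup.feature" => 16,   # hypothetical CI data
    "features/event_listing.feature"   => 3,
  }

  failures.each do |feature_file, failure_count|
    # Number of commits that touched this feature file.
    edits = `git log --oneline -- #{feature_file}`.lines.count
    ratio = failure_count.to_f / [edits, 1].max
    puts "#{feature_file}: #{failure_count} failures, #{edits} edits " \
         "(#{'%.1f' % ratio} failures per edit)"
  end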

Joseph has a blog post with more detail about this tool and metrics approach.

My Concluding Thoughts

Most startups fail, and I have little insight into whether Songkick will be successful in the long run. I am fascinated though, by the way they are working. It seems to me they have a great process that promotes communication and feedback, coupled with enough introspection to adapt it as they learn more.

I’m interested to note that Songkick has extended the “Rule of Three” introduced by Lisa Crispin and Janet Gregory. In their book on “Agile Testing” they encourage you to include a tester in all discussions between a developer and customer. At Songkick they appear to have business analysts in the customer role, and additionally include a UX (usability) expert in the discussion.

Joseph didn’t mention it, but I suspect that his experiences automating fewer scenarios with Cucumber may be one of the reasons Liz Keogh wrote her blog post “Step away from the tools”. Or maybe the blog came first, I don’t know. The advice is the same, in any case – BDD is about communication first, test automation second.

The “Limited Red” tool is quite new, and I think Joseph is still learning how to act on the data he’s gathering. Having said that, I have high hopes that eventually this kind of measurement and feedback will find its way into more automated testing tools and Jenkins builds. It seems to me that finding out which of your tests are giving the best value for money and which are costing the most to maintain will be generally useful.

My next challenge is to work out how to apply these kinds of ideas to the situation I find myself in right now. It’s a large company not a startup, the customer seems totally inaccessible, the release cycle is years not hours, the build is slow (when it works at all), and UX experts are thin on the ground. Hmm. As Joseph said, his story is about what they’ve found to work, not some universal “best practices”. It’s given me some new ideas and encouragement – now I just have to work out what principles and practices I can reasonably introduce where I am.

Notes 2013-5-13: one of Joseph’s colleagues wrote a post “From 15 hours to 15 seconds” explaining how they got the build time so low, a few months after I wrote this post. I also got a note from Liz Keogh explaining that her blog post “Step away from the tools” was written well before the events at Songkick, in March 2011, and that she consulted with them briefly after that, discussing the build time problem amongst other issues.