This post was originally published on Praqma’s blog

 

A short story about Pre-tested Integration

 

Three programmers
 

Continuous Integration and Code Review are strongly correlated with success. Many use Pull Requests for code review, but for co-located teams this can be an obstacle for CI. Is there a better way?

There are three developers on the (fictitious) team: Annika, Boris and Carol. Annika is a recent hire, fresh from university, Boris is the team lead, and Carol has been around the longest. Each of them is working on a different task. They all synchronized their work with their shared master branch when they arrived at the office today, and now it’s approaching morning coffee time. They have all made some changes in the code which they’d like to share with the rest of the team.

Annika is working on a local branch called ‘red’. She checks it’s up-to-date with master and pushes it to a remote branch named ‘ready/red’. It’s similar for Boris and Carol. They are on blue and orange branches respectively and push their changes to ready/blue and ready/orange.
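
As a concrete illustration, here is roughly what that step could look like with plain git, wrapped in a small Python helper; the function name and the use of subprocess are my own invention, not part of the team’s actual tooling:

```python
import subprocess

def git(*args: str) -> None:
    """Run a git command and stop if it fails."""
    subprocess.run(["git", *args], check=True)

def publish_for_integration(local_branch: str) -> None:
    """Bring the branch up to date with master, then push it to a
    ready/ branch so the build server will pick it up."""
    git("fetch", "origin")
    git("checkout", local_branch)
    git("merge", "origin/master")          # make sure the latest master is included
    git("push", "origin", f"{local_branch}:ready/{local_branch}")

if __name__ == "__main__":
    publish_for_integration("red")         # what Annika does with her 'red' branch
```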

The Build Server is set up so that it detects new branches on the Version Control Server that follow a naming convention. Any branch beginning with ‘ready/’ is scheduled for integration, and only one of these integration builds runs at a time. The Build Server delegates builds to one or more agents, and since the ready-job agent is idle, it picks up the ‘ready/red’ change straight away and leaves the other two ready-branch builds in the queue.

 

The build job has several steps. First, the agent merges the ready-branch into a local copy of the master branch. Annika’s changes get a simple fast-forward. The agent performs a full build, static analysis, code style check, and unit test. Everything goes well, so the agent pushes the merge result up to the Version Control Server and posts a message on the team message board.
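
A rough sketch of what the agent’s integration job could look like, assuming plain git and a placeholder `make check` standing in for the real build, analysis and test steps; none of this is taken from any actual plugin implementation:

```python
import subprocess

def git(*args: str) -> None:
    subprocess.run(["git", *args], check=True)

def checks_pass() -> bool:
    """Stand-in for the full build, static analysis, code style check and
    unit tests; returns True only if every step succeeds."""
    return subprocess.run(["make", "check"]).returncode == 0   # assumed build command

def integrate(ready_branch: str) -> bool:
    """One integration attempt: merge the ready-branch into a local copy of
    master, verify it, and publish the result only if everything passes."""
    git("fetch", "origin")
    git("checkout", "-B", "master", "origin/master")
    try:
        git("merge", f"origin/{ready_branch}")       # fast-forwards when possible
    except subprocess.CalledProcessError:
        subprocess.run(["git", "merge", "--abort"])  # conflict: hand it back to the developer
        return False
    if not checks_pass():
        git("reset", "--hard", "origin/master")      # discard the failed merge
        return False
    git("push", "origin", "master")                  # share the verified result
    return True
```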

Things start out similarly for Boris’ ready/blue branch. The build agent takes a copy of master from the git server and merges in the ready-branch. This isn’t a fast-forward merge, since there is a new commit in master for the ‘red’ changes, but it’s still ok. So long as the agent can do the merge without finding any conflicts the build can continue.

The agent then proceeds to the next build steps. Unfortunately, Boris hadn’t noticed that one of his changes caused a test failure. The team has previously agreed that the code in master should always pass the tests, so this means Boris’ changes shouldn’t be shared. The build agent sends a message to Boris telling him about the failed tests, discards its merged branch, and moves on. Carol’s ready/orange branch is up next. The build agent starts again with a fresh copy of the latest master taken from the git server. Carol’s changes also merge without difficulty and this time both build and tests pass. The build agent pushes the merge commit to the server and notifies the team.

 

Boris and Carol are having a cup of coffee while they wait for the build server to integrate their changes. Annika is chatting with the Product Owner about the new feature she plans to work on next, ‘cyan’. When they get back to their desks they see the messages from the build server.

Annika is happy to see her changes integrated successfully. She fetches the latest master from the remote git server. She’s now completed the work on the ‘red’ task, and her changes should undergo a code review. She marks the ‘red’ task as finished in the issue tracker and adds an agenda item to the team’s next scheduled code review meeting, which is later that week. Annika selects a new task to work on and checks out a local branch from master called ‘cyan’.

Boris sees the message about his failed tests and realizes immediately what he missed. He’s a little embarrassed about his mistake, but happy his teammates are not affected. They may not even notice what’s happened. Boris takes the opportunity to merge the latest changes from master into his ‘blue’ branch. He is quickly able to address the problem with the tests and pushes an update to ready/blue. The build agent gets to work straight away.

Carol is not finished with the ‘orange’ task, but is happy to see her initial changes integrated successfully. She fetches master and merges it into ‘orange’ before continuing work there. She’s noticed a design change that would make her task easier. She plans the refactoring in steps so she can push small changes frequently as she completes the re-design. Sharing her changes with the team often will make it easier for everyone to avoid costly merges.

Later that week, in the code review meeting, the team looks at Annika’s changes for the ‘red’ task. It represents a couple of days’ work. The code review tool presents a summary of all the commits involved and they discuss all the changes in the development of the ‘red’ feature.

Unfortunately, Boris and Carol are not happy with a part of the design Annika has made and the code formatting needs improving in places. The outcome of the meeting is that they agree to pair program with Annika on a refactoring of the design, and encourage her to initiate informal design discussions more often during development. The idea is that the more experienced developers, Boris and Carol, should help Annika to learn better design skills. The team finds the code-formatting issues a bit annoying since this kind of detail shouldn’t be in focus for a code review meeting. They create a task to improve the code-style checker in the pre-tested integration build to catch any similar code formatting problem in future.

Commentary

This development process is working really well for Annika, Boris and Carol, and pre-tested integration is a small but important piece. They are not using pull requests, but they have checks on what code is allowed into the master branch, and they have a good code-review culture. Integration to master happens at a faster cadence than the flow of work items. That’s important. Integration is less painful the more often you do it, and you might not want to break your work items down to the same small granularity that would be best for code changes. You also don’t necessarily want to delay your integration by waiting for a teammate to review your pull request.

Strictly speaking, this process is not Trunk-Based development, since there are more branches involved than just trunk, but so long as integration is frequent, in practice it’s indistinguishable. The benefit over Trunk-Based development is, of course, that Boris, or any other developer, can’t unwittingly break master for the rest of the team.

If you’re using Jenkins you can easily automate the integration process with our Pretested Integration plugin. It’s not difficult to implement this functionality yourself for other build servers. Whichever approach your team chooses, I recommend you settle on a process that results in frequent integration together with collaborative and constructive code-reviews.

 

This post was originally published on Praqma’s blog

Continuous Integration is now synonymous with having a server set up to build and test any change submitted to a central repository. But this isn’t the only way to do it – it isn’t even how CI used to work. What did we do before DVCSs and Jenkins?

What is the connection between cricket and pre-tested integration? Pre-tested integration is about setting up your Continuous Integration server so that, metaphorically, it catches the balls you drop – the tests that fail – just as in cricket.

When a team first sets up a build server, this is the typical way to do it – you have it build any change in the mainline branch, and notify you if it fails. Developers are supposed to run the build and test before they push the code; the build server is supposed to be a kind of back-stop, catching any balls you drop. Any developer who manages to break the build is singled out by having their name displayed on the ‘information radiator’ screen in the corner. Just a little ridicule, some good-natured teasing, enough so that they are shamed into remembering to run the tests next time.

The thing is, it’s quite easy to make a mistake that causes the build to fail. People can get upset with you for breaking the build, and it can be embarrassing to have your name in lights. You can get into a bad cycle of stress, making mistakes more likely, causing you to break the build more often, making you more stressed…

Even a short length of time with a broken master can disrupt anyone else who is trying to integrate their changes, and discourage them from doing so. You can get into a bad cycle of people mistrusting the master and hoarding their changes. This causes bigger commits, more likely failed builds, and less trust… if it gets really bad you can find yourself spending several miserable days resolving three-way merges. Believe me, you don’t want to go there!

I think we can learn something from how they used to do Continuous Integration in the days before CI servers became widespread. I think they had a more humane process that was less stressful, and led to more virtuous cycles than downward spirals.

The whole point of Continuous Integration is to ensure that all the developers are working on essentially the same version of the source code. If you compared my working copy with my colleagues’, you should see no more differences than the work either I or they have done today – ideally, only the changes we’ve made in the last couple of hours. The point of this is to make integrating our work a trivial task that takes seconds and is hard to mess up.

If you’re interested, you can read James Shore’s description of CI without a build server, but I will summarize it below:

When a developer was ready to integrate their changes they would physically walk to a designated ‘integration machine’, load their changes there, and perform a full build and test. No-one else was allowed to begin their integration until they were done. If the integration build passed, they would announce in a loud voice that the other developers should pull the changes from that machine. If there was a problem with the integration, and they couldn’t fix it quickly, they would revert their changes on the integration machine and return to their workstation to fix the problem.

It’s a process that clearly only works if everyone is sitting in the same room. On the other hand, it works fine even when there are no atomic commits, no build servers, and no automated merges. Let me highlight some aspects of this process:

  • Integration to mainline is serialized – one set of changes is integrated and verified at a time.
  • Developers are notified when there are changes in mainline, and are expected to integrate them locally soon after.
  • If the integration fails the rest of the team is not affected, mainline is not broken, no-one else need know.
  • Developers can’t start working on the next task until the integration is completed, because they are sitting at the integration machine, not their workstation.
  • Developers can supervise the integration, fixing small problems as they arise.

I think those points have been lost in the way I see most teams set up their Continuous Integration build server. Basically – too much shaming, and not enough collaboration!

What we at Praqma have done with several of our customers is use a technique we’re calling ‘pre-tested integration’. I think it’s closer to the original CI process. You might have heard it called ‘validated merges’ or ‘pre-verified commits’.

When a developer has local changes that are ready to be integrated to the mainline, they first push them to a remote branch named ‘ready/xxx’ (replace xxx with whatever you like: a Jira task number, a feature name, a random string…). The build server is triggered whenever it detects a new remote branch that obeys this naming convention. It puts these branches in a queue and works on integrating them one at a time.
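
To illustrate the mechanics (this is not how the Jenkins plugin is implemented), a build server only needs two ingredients: a filter for the naming convention, and a single worker so that integrations happen one at a time. A minimal sketch, with invented function names:

```python
import queue
import subprocess
import threading

READY_PREFIX = "ready/"
ready_branches: queue.Queue = queue.Queue()

def remote_branches() -> list[str]:
    """List the branch names currently on the central repository."""
    out = subprocess.run(
        ["git", "ls-remote", "--heads", "origin"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.split("refs/heads/")[-1] for line in out.splitlines()]

def enqueue_new_ready_branches(seen: set) -> None:
    """Anything matching the naming convention goes into the queue, once."""
    for branch in remote_branches():
        if branch.startswith(READY_PREFIX) and branch not in seen:
            seen.add(branch)
            ready_branches.put(branch)

def integrate(branch: str) -> None:
    """Placeholder for the actual integration job: merge, build, test, push."""
    print(f"integrating {branch}")

def integration_worker() -> None:
    """A single worker thread, so only one integration runs at a time."""
    while True:
        integrate(ready_branches.get())
        ready_branches.task_done()

seen: set = set()
threading.Thread(target=integration_worker, daemon=True).start()
# A real server would poll or use webhooks; here you would call
# enqueue_new_ready_branches(seen) whenever you want to check for new work.
```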

The build server takes a copy of the latest mainline from the central version control repository, and updates it with the changes from the ready-branch. For this to succeed, any merge must be straightforward – i.e. possible to perform automatically. The build server then runs an automated build and test. If everything succeeds, the server pushes the integrated changes up to the central code repository. The build server also updates a webpage or Slack channel, notifying the team that new code is available in mainline. The ready-branch is now integrated, finished with, and can be deleted.

On the other hand, if the merge or the build fails, the developer has more work to do – their work wasn’t ready to integrate after all. The build server notifies them so they can go back and fix the problem and submit a new ready-branch. Crucially, the rest of the team is completely unaffected. Someone else is free to integrate their changes instead.

This pre-tested integration process is very similar to the CI process without a build server described earlier. We’ve added a little automation, we’re making use of some features of modern version control systems, but on the whole I think it’s closer to the old manual process than the usual way CI servers are set up.

  • We ensure integration is serialized through build server configuration rather than a physical token.
  • The build server notifies other developers when new code is successfully integrated – I think a post in a Slack channel works much like talking in a loud voice in the team space.
  • If an integration fails the shared mainline is not broken, only the developer who submitted the change is affected.

The other two points do differ though:

  • Nothing forces developers to wait for the integration step before starting on a new task, but it is usually more convenient to do so. You wait for the build server chat room message, then pull your integrated change from the remote master before continuing work.
  • Developers can’t fix problems in the integration step while it is ongoing.

During the integration step the build server is busy, but your developer machine is not. You could be tempted to move on to your next task before the previous one is finished and integrated. If the integration fails you’ll have to context-switch back, which may take more time than the idling you avoided.

There are upsides to having an integration process that actually can’t be supervised, though. You get into a virtuous cycle where you are more successful if you submit small changes that can be integrated automatically, and you are more successful when you integrate more often. In short, you have put some automation in place that will draw you in the right direction. Smaller, more frequent commits are exactly the behaviour we’d like to encourage.

If you’d like to find out more please take a look at Praqma’s pretested integration plugin for Jenkins. It’s one way to implement the ideas described in this post. It’s a free and open-source project, and we’re currently working on something of an overhaul. This development is funded through our CoDe Alliance, “where ambitious customers meet around continuous delivery”.

The latest version has support for Jenkins Pipeline. This will bring the plugin up to date with modern usage – many people use Pipeline rather than freestyle jobs these days. The new version will also allow for much more flexibility in job design, with matrix and job combinations. It’s exactly the kind of tool I need for the team I’m working with right now.

Fundamentally though, succeeding with Continuous Integration is not really about tools, it’s about collaboration. You should choose tools that let developers feel safe, tools which encourage them to synchronize their work often, integrating small pieces continually in their work together.

Fictitious people that might get real – and sue you!

Finding realistic data for testing is often a headache, and a good strategy is often to fabricate it. But what if your randomly generated data turns out to belong to a real person? What if they complain and you get fined 4% of global turnover?!

Please note: This story was originally posted on Praqma’s site in October 2017.

GDPR

I am of course referring to the new GDPR regulation, which comes into force next year. Its aim is to prevent corporations from misusing our personal data, which generally seems like a good thing. Many companies use copies of production databases for testing new versions of their software. Sometimes that means testers and developers get access to a lot of data that they probably shouldn’t be able to see. Sometimes it means you’re testing with old and out-of-date versions of people’s information, or even testing with customers who cancelled their contracts with you long ago.

Generally I’m in favour of tightening up the rules around this, since I think it will better protect people’s privacy. I think it will also force companies to improve their testing practices. In my experience it’s easier to get reliable, repeatable automated tests if each test case is responsible for creating all the test data it uses.

If you instead write a lot of tests assuming what data is in the database from the start, maintaining that data can get really expensive. I’ve seen this happen first hand – at one place I worked, maintaining test data became such a big job that a whole test suite had to be thrown away along with the data!

Data-Driven Testing

At my last job, part of the test strategy involved data-driven testing. In my test code I’d access an internal API to create fictitious companies in the system. Then, simply by varying the exact configuration of these companies, I could exercise many different features of the system under test. Every test case created different, unique companies. This meant you could run each case as many times as you liked. You could also run lots of tests together in parallel, since they all worked with different data. Each time you ran the tests, you had the opportunity to start over with a fresh, almost empty database – a small memory footprint that meant a fast system and speedy test results.
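
The internal API I used there isn’t public, so the sketch below only shows the shape of the idea, with a hypothetical create_company helper: each test builds its own uniquely named data, which is what makes repeated and parallel runs safe.

```python
import uuid

def create_company(name: str, **configuration) -> dict:
    """Hypothetical stand-in for the internal company-creation API."""
    return {"name": name, **configuration}

def unique_company(**configuration) -> dict:
    """Each call creates a brand new, uniquely named fictitious company,
    so no test ever depends on shared or pre-existing data."""
    return create_company(f"TestCo-{uuid.uuid4()}", **configuration)

def test_invoicing_company_can_be_configured():
    company = unique_company(invoicing=True)
    assert company["invoicing"] is True    # placeholder for a real assertion
```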

Fictitious People

When I came to my current client I was planning to use a similar approach. The difference here is that instead of fictitious companies, I needed to create fictitious people. So, I got out my random number generator and started creating fictitious names, addresses, and, of course, Swedish Personal Numbers. That’s when things started getting tricky… Let me introduce those of you who aren’t Swedish to the Personal Number.

It’s a bit like a social security number, unique for each person, and the government and other agencies use it as a kind of primary key when storing your data. If you know someone’s number, (and you know where to look), you can find out their official residence, how much tax they paid, credit rating, that sort of thing.

You’ll understand why, then, we started getting worried about GDPR at this point. A Swedish personal number is definitely a ‘personally identifiable’ piece of data, and GDPR has a lot to say about those.

A primary key containing your age

The other thing about the personal number is that it’s not just a number. When they first came up with the idea, in the 1940s, they thought it would be a great idea to encode some information in this number. (I’m not sure database indexing theory and primary keys were very well understood then!) So, they decided that the first part should be your birthdate, then three digits specifying where in Sweden you were born – the last of which also indicates your gender – and the final figure is a checksum, so numbers can be validated.

These days, lots of Swedish people are born outside Sweden, and not everyone has a gender that matches the one they had at birth, so they had to relax some of these rules. Sometimes too many children are born on one day and they run out of numbers for it, in which case you might get assigned the day before or after. However, the number still always includes your birth year, and hence your age.

Never ask a woman her age (just get her personal number)

These days it’s quite common for retail chains to ask you to join a ‘customer club’ so you can accrue points on your purchases. When you buy something the assistant will often ask for your club membership number. Almost always the membership number is the same as your personal number, so you end up telling them (and everyone in the queue behind you) exactly how old you are. It’s disconcerting, if not rude!

Fictitious Test People get real and sue you

Back to my test data problem. I’m generating fictitious people, but the system under test both validates the checksum, and uses the personal number to determine their age. I have to generate realistic numbers from the last hundred years or so. This is where things start to get tricky. Someone in the legal department objected to this. He told me that if I was to generate a number belonging to a real person we could really get into trouble because of GDPR. If that person discovered we were using their personal number, and had changed their name to “Test Testsson” living at “Test gata 1, 234 56 Storstad”, that might not be too bad. Unfortunately, if we were using them for a tricky scenario, they might also discover our test database linked their personal number to a dreadful credit rating and a history of insolvency. At that point they might start to get insulted. In the worst case, they could sue us and GDPR would mean we’d have to pay them an awful lot of money!

Keeping fictitious test people imaginary

However unlikely that scenario might be, my client is understandably unwilling to take the risk, and I really need to restrict the test system to genuinely fictitious personal numbers. The authorities do provide a list of these, exactly for testing purposes, but the trouble is there are only about 1000 numbers on the official list. With the kind of automated testing I’m planning to do, that is not enough. Not nearly enough! Assuming my system has about 500 features, I want to write 5 scenarios per feature, and I have 25 developers running all the tests 25 times a day… I need something like half a million unique numbers. Fewer if I can wipe the db more often than once a day, but still, a thousand doesn’t go very far.

So, here’s a startup idea for you – I’m pretty sure my client isn’t the only company out there who would pay good money for a list of a million personal numbers, for people aged less than 100, that are guaranteed not to exist in reality. If you have such a list, and would be willing to maintain it for me such that it remains guaranteed fictitious, please get in touch!

Playing with time

Right, so I’m back to generating personal numbers again, not having found that list yet. I have a couple of ideas about how to keep them fictitious, though. The first idea is to use really old numbers. While I’m testing the system, I can set the clock back to 1900, and only use generated numbers from 1800-1890. Since everyone over 125 can be assumed to have passed on, they are unable to sue us. For some scenarios I need under-18-year-olds, and by changing the date that the system clock thinks is ‘today’, I can generate children born in the 1870s. Fictitious children complete with bank accounts, mobile phone contracts, student debt, acne… and everything will work just fine!
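
One way to do this kind of time-shifting is to stop the code under test from asking the operating system for today’s date and hand it a clock object instead. A minimal sketch, with names of my own invention:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FixedClock:
    """A test double for 'today', so tests can pretend it is 1900."""
    today: date

def age_on(clock: FixedClock, birth_date: date) -> int:
    """Age in whole years as of the clock's idea of today."""
    t, b = clock.today, birth_date
    return t.year - b.year - ((t.month, t.day) < (b.month, b.day))

# With the clock wound back to 1900, a person 'born' in 1875 is a 25-year-old
# and one born in 1885 is a minor - no real, living person shares their number.
clock = FixedClock(today=date(1900, 6, 1))
assert age_on(clock, date(1875, 3, 14)) == 25
assert age_on(clock, date(1885, 12, 24)) < 18
```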

Hack the checksum

The other idea I had was to use personal numbers with invalid checksums. After the date of birth and the three random numbers, the last figure in the personal number is calculated from the others, using the Luhn algorithm. While I’m testing the system, I can modify the ‘PersonalNumber.validateChecksum()’ function to instead accept all invalid numbers, and reject all valid ones. Then all my tests can generate perfectly invalid personal numbers, and the test database will get filled with people that definitely can’t exist in reality.
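
Below is a minimal sketch of such a generator, assuming the standard Luhn calculation over the nine digits that precede the check digit; the function names are mine and not taken from the real system.

```python
import random
from datetime import date, timedelta

def luhn_check_digit(nine_digits: str) -> int:
    """Standard Luhn check digit over YYMMDDNNN, as used for personal numbers."""
    total = 0
    for position, ch in enumerate(nine_digits):
        product = int(ch) * (2 if position % 2 == 0 else 1)
        total += product // 10 + product % 10      # sum the digits of each product
    return (10 - total % 10) % 10

def fictitious_personal_number(earliest: date, latest: date, valid: bool) -> str:
    """Generate a personal number for a random birth date in the given range.
    With valid=False the final digit is deliberately wrong, so the number
    cannot belong to any real person."""
    birth_date = earliest + timedelta(days=random.randrange((latest - earliest).days + 1))
    birth_number = f"{random.randrange(1000):03d}"
    nine = birth_date.strftime("%y%m%d") + birth_number
    check = luhn_check_digit(nine)
    if not valid:
        check = (check + random.randrange(1, 10)) % 10   # any other digit is invalid
    return f"{nine[:6]}-{nine[6:]}{check}"

# e.g. an 'adult' born 1970-2000, with a checksum that can never validate:
print(fictitious_personal_number(date(1970, 1, 1), date(2000, 12, 31), valid=False))
```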

As with any well-designed system, there is only one place in the code where I calculate the checksum, so it’s actually a really small change just for testing. It also comes with insurance – if I ever accidentally deploy this code change in production, absolutely everything will stop working, since all those valid numbers already in the production database will suddenly get rejected. I think we’d notice pretty quickly if that happened!

Is there a better way?

I haven’t actually implemented any of these strategies yet – we’re still at the planning stage. I’d be very interested to hear if anyone else has solved this problem in a better way. In the meantime, I’ve written some code that can generate personal numbers with and without valid checksums, and also lets you specify an age range – it’s available on my github page.

We can’t be the only people worried about GDPR and personal numbers. That threat of a fine of 4% of global turnover is a pretty mighty incentive to focus some effort on cleaning up test data.

As a woman programmer, I have noticed there is something of a gender imbalance in my profession. It’s an issue that’s interested me for a while, not least because people often ask me about what we can do to improve the situation. For myself, I enjoy writing code and I think it’s a great career. The sexism I’ve been aware of has not made a big impact on my life, although I know not everyone has been so fortunate. Susan Fowler’s blog really shocked me earlier this year. I have had some bad experiences, but not like that.

I recently read an article about the history of women in programming. It shows a graph comparing the percentage of women in different university subjects in the US. It’s quite stark:

The percentage of women studying Computer Science suffers a trend reversal in the mid 80’s, while the other subjects don’t. The explanation given is that it’s about then that home computers began to appear on the market, sold as a toy for boys. I lived through that time, and yes, my family bought a ZX Spectrum in the mid 80’s when I was about 10 years old, and yes, my younger brother learnt to program it and I didn’t. Fortunately I managed to learn to program later on anyway.

All this got me thinking about my current situation. I live in Sweden, and it’s a very different culture from the US. For example, I was reading about the concept of ‘male privilege’. One of the examples given is that men have the privilege of keeping their name when they marry, while women are questioned if they keep theirs. The thing is, in Sweden, this is not true. Either partner may change their name, and it’s not remarkable for both to keep their original names, or for both to swap to something entirely different. That’s something of a trivial example, but I do think it is a sign of a wider cultural difference. Privilege is experienced in a social context, and Sweden has a much more feminist society in many ways. (See for example this page about gender equality in Sweden.)

So I became curious to see whether the same thing happened in Sweden – did the proportion of women computer scientists also drop in the 80’s? I discovered that the Swedish statistical authority collects and publishes data on this kind of thing, and you can search it via a web GUI. I started poking around on it and soon I was hooked. Loads of really interesting data lying around waiting to be analysed!

Here is the plot I came up with; it shows roughly equivalent data to the US graph I showed earlier:

(If you want to check my data, I got it from statistikdatabasen.scb.se, from the table “Antal examina i högskoleutbildning på grundnivå och avancerad nivå efter universitet/högskola, examen, utbildningslängd, kön och ålder. Läsår 1977/78 – 2015/16”)

Although the proportion of women in engineering is pretty low compared to the other subjects, it’s encouraging to note that it has increased more than in the other subjects. It’s now at a similar level to where doctors, lawyers and architects were thirty-five years ago. (I was disappointed not to find any data for Natural Sciences. I’m not sure why that’s excluded from the source database.) Anyway, I’m not seeing the 80’s trend change here; the curve rises fairly smoothly. I suspect the subject breakdown isn’t detailed enough to pick out Computer Science from the wider Engineering discipline, and that could explain it.

So I’ve done some more digging into the data to try to find if there was a turning point in the mid 80’s for aspiring women programmers. I think something did happen in Sweden, actually. This is the graph that I think shows it:

(I’m using the data sources “Anställda 16-64 år i riket efter yrke (3-siffrig SSYK 96), utbildningsinriktning (SUN 2000), ålder och kön. År 2001 – 2013” and “Antal examina i högskoleutbildning på grundnivå och avancerad nivå efter universitet/högskola, examen, utbildningslängd, kön och ålder. Läsår 1977/78 – 2015/16”, the SSYK codes I used are shown in the title of the graph)

If you look at the blue curve for 2001, you can see it peaks at age 35-39 years – that is, there was a higher proportion of women programmers at that age than at other ages. If you were 35-39 in 2001, you were probably doing your studies in the mid to late 80’s. Notice that the proportion of women at younger ages is lower. The green and yellow curves for 2005 and 2010 continue to show the same peak, just moved five years to the right. The proportion of women coming in at the younger age groups remains lower. The orange curve for 2015 is a little more encouraging – at least the proportion of women in the youngest two age groups has levelled off and is no longer sinking!

So it looks to me like there was a trend change in the mid to late 80’s in Sweden too – the proportion of women entering the profession seems to drop from then on, based on this secondary evidence. I imagine that computers were also marketed here as a boy’s toy. I really hope that things are changing today in Sweden, and that more women are studying computer science than before.

For reference, I did similar curves for several other professions, using the same dataset.

So there are a lot of women lawyers out there, and the proportion looks to be continuing to increase.

Male nurses seem to have things worse than female programmers, unfortunately. Plus I can’t see any real trend in this graph – the situation is bad and fairly stable.

The proportion of women police officers levelled off for a while but they’ve managed to turn things around, and it is now increasing again.

So programming is the only profession I discovered with this decreasing trend in women’s participation, even if it has now levelled off. Let’s hope that changes to an upward trend soon – my daughters will be applying to university in about ten years’ time…

 

 

Note: this was originally posted on Praqma’s blog

The life of a consultant has drawn me back, but perhaps surprisingly, this time it’s not a return to my one-person firm. Rather than reinvigorating Bache Consulting, I’ve decided to join Praqma, the Continuous Delivery and DevOps Company. I was pretty happy at Pagero, and I’ve successfully run my own business before. Some of my friends think I’m crazy to leave a good job with a great team working with exciting technologies. Others think I’m crazy not to want to be my own boss again.

So why Praqma?

Code that gets used

Writing code is really good fun. Writing code that people actually use is even more fun. That’s one of the reasons I like setting up Continuous Delivery pipelines for a development organization. Anything you change gets used almost straight away, and you get feedback from people around you all the time. It’s a fantastic feeling when you implement something that means the whole development team can move forward more quickly, confident in the quality of what they’re producing.

The kind of work I’m talking about is setting up build and delivery pipelines using a CI tool like Jenkins or Go.CD. It’s automating all the installation steps needed for a new machine in a staging environment. It’s moving all the code into a modern DVCS like Git, and setting up automated processes to help keep the master branch in a working state.

In my experience, this kind of work gives you the opportunity to raise the productivity of so many other people, it can have far more impact than if you stayed in your cube hacking on a product by yourself.

Amplify your effect

Another activity I really enjoy is facilitating and teaching. Preferably that kind of teaching that you do from the back of the room, where your job is to set up situations where learning is inevitable. I’ve spent a lot of time over the years facilitating Coding Dojo meetings, and more recently I’ve started doing more Mob Programming with teams. My aim is usually to get people to experience effective development practices like Test-Driven Development (TDD), and understand the difference it makes.

Again, the focus is on raising the productivity of other people, this time through coaching and training. Actually, I’m not directly raising anyone’s skill level. What I’m trying to do is to activate people’s innate motivation to get better at their job, and to show them a way to do that. It goes hand in hand with making the technical environment they’re working in conducive to good practice by having good automation infrastructure. TDD makes a lot more sense when you have a CI system to run the tests, and information radiators that show you when the build is broken.

Raise your game

So I’ve just joined Praqma, which is a consultancy focused on Continuous Delivery and DevOps. I now have a host of colleagues who are also really skilled with this kind of technical coaching role I just talked about, and know how much fun it can be. We’re really good with tools like Jenkins, Docker, Artifactory and Git, we set up Continuous Delivery pipelines, and we coach developers in how to use them. What we’re finding though, is that once we’ve got all that set up, the next step usually involves improving the automated testing in the pipeline. We need to be there coaching developers in TDD, and getting automated system tests set up.

That’s where I’m hoping my joining Praqma will help us all to raise our game. Starting with the consulting work Praqma already does, which of course lays the foundations with Continuous Delivery, we can start doing more coaching in test automation. I have long experience of that, teaching TDD in particular. I have much less experience of some of the other things that Praqma does though. What I’m hoping is that by working together with the other consultants at Praqma, and collectively sharing our toolboxes, we can all have more fun and achieve more for our clients. I don’t mind working alone at a client, I’ve done it before, but having colleagues is so much better. We’re going to have a blast!

Join us

Praqma is expanding throughout Scandinavia right now, with offices in places like Copenhagen, Oslo and Stockholm already. I’m starting the office in Göteborg. If the kind of technical coach role I’ve just described appeals to you, I’d be delighted to tell you more – send a mail to emily.bache@praqma.com, or a tweet to @emilybache.

We’re not only looking for senior people with lots of experience, by the way. If you have the right attitude and willingness to learn, that can take you a long way. We’re looking for people to join Praqma at all our offices, please send your resume and a covering letter to jobs@praqma.com. Of course I’d be particularly pleased if you wanted to join me in Göteborg.