It Doesn’t Help To Push AI Into A Crappy Process

by | Apr 29, 2026 | Augmented Coding

This post is based on a video with the same name on the Modern Software Engineering Channel

Not everyone is seeing improvements from introducing AI in the development process. For the teams that were already doing well, it seems adding AI makes them go even faster. For those who were already struggling, it seems to be making poor performance worse. I want to talk about the fundamentals of how we work minute by minute, and how to take advantage of these tools in your development process.

Let me tell you a story about getting the fundamentals right.

I once knew a guy called Scott who played the trombone. He was very good at it – he played a lot of jazz professionally. The trouble was, he had back pain. If you looked at the way he stood normally you could actually anticipate that. He was still young, but tended to slouch and round his shoulders a little. Now, trombones are heavy – I don’t know if you’ve ever tried to play one. If you start with that kind of posture, adding a big weight on one side while the other arm moves the trombone slide, you can see that is going to make a bad situation worse.

What Scott did was take posture lessons in the Alexander Technique – just to learn to stand better. That way, when he added the trombone on top of that improved posture, his back would stay straight and able to balance the weight across his shoulders. His back pain improved, and so did his trombone playing.

What I’m of course trying to talk about is software developers, our existing process or posture, and how we hang this huge new trombone – agentic AI – onto it. What we’re observing is that those with a good existing process are much better able to handle it. I want to dig into the details of why that is in practice.

Existing process

I’ve spent much of my career helping ordinary developers write better code and adopt a TDD process. I’ve learnt that before you can change someone’s behaviour, it helps to understand how they do it already. This is my best description of the most common development process I observe people using without AI – I’ll call it Coding-Driven Development.

  1. Read the story or ticket
  2. Find a place to start coding (read the code until you find somewhere), then start coding and change some things.
  3. Run the code somehow – probably set a breakpoint in your new code and run the system in the IDE, or open up the browser running the app and click around a bit.
  4. Evaluate. Is my new code working?
    1. No, not good yet – so go back to step 2: debug, explore, read more code, find somewhere else
    2. Yes it’s good
      1. Possibly – write a unit test
      2. Possibly – improve the design, add comments
  5. Decide the story or ticket is done, commit and push, create pull request.

Downsides of this process

That process takes anything from a few hours to a few days. It does work – you get new features in your software. Often though there are significant downsides to this process. 

There tends to be time pressure that leads you to skimp on those steps after coding – writing unit tests and improving the design. That can lead to quite a lot of bugs even in the short term, but more insidiously the design tends to degrade over time. It gets slower and slower to add new features as the codebase grows – step 2 – find a place to start coding – takes longer as you have more code to look through. And changing the code – the other half of step 2 – risks breaking existing functionality, especially if the unit tests are inadequate.

What I’ve spent many years doing as a technical coach is helping people change to a more effective development process. Basically, TDD changes all of those steps.

Adding AI to the Coding-Driven Development process

I want to look carefully at what happens if you take this existing Coding-Driven Development process then hang a trombone on top of it – an AI tool.

  1. Read the story or ticket – give this text to the AI tool and ask it to fix it

Sometimes this works. If your ticket is well explained and small in scope, the code is well organized and the agent is able to work out what to do and where to do it, this can be a good strategy – the AI fixes the issue, you review the code changes and push. Fantastic! Job done.

Unfortunately, in many teams, that isn’t going to happen very often. The work items are not small or well defined enough, and/or the codebase is too large and disorganized for the agent to find where and what to do.

Pleading with the agent to ‘do it again, and no mistakes this time’ doesn’t actually help. What you’re missing here is the part of TDD where you slice the feature using examples, and make a test list of smaller development tasks.

Most people sigh – the stupid AI couldn’t fix my feature – and simply continue with their normal process.

  2. Find a place to start coding, and start coding there – well, you prompt the AI to write the code there.

This is a smaller task, so it might be more likely for the AI to succeed in writing some code. The trouble is, this process tends to degrade from agentic AI – using tools in a loop – to fancy autocomplete, where you are the tool in the loop. You’re in the editor with the AI, directly writing code together with its inline suggestions. You can’t set up an agentic AI loop unless you can specify the outcome you want before the code exists. That’s a skill many developers have never practiced, and of course it’s a big part of TDD. 

In practice this coding activity still takes hours – you look around trying to figure out which bit of code to change, outsourcing smaller and smaller steps to the AI, getting less and less value from it.

  3. Run the code somehow.

You might have your AI agent set up so it can run the code directly – perhaps it can use an MCP tool to click around and verify some workflows, doing some “manual” testing. Which workflows it tests will depend on how well you have described your intention and acceptance criteria. Again, most developers have very little practice at doing this. So you’re probably still doing at least some of the evaluating by hand, in the old-fashioned way, without AI.

  4. Evaluate. Is my new code working?
    1. No, not good yet – so go back to step 2

Continue prompting the agent to write more code. 

Hopefully at some point you get to 

    2. Yes it’s good
      1. Possibly – write a unit test

Prompt the AI to do that if it didn’t already. The thing is, the kinds of unit tests the AI writes in this situation will be different from the kinds of unit tests you get in TDD. They are written from the point of view of: the implementation exists, let’s ensure it works.

In TDD you write tests to express your intent about success criteria, and to ensure a testable and usable design. The tests are playing a fundamentally different role here, you’re getting less benefit from them by writing them afterwards.
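To make the contrast concrete, here is a minimal sketch in Python with invented names (`apply_discount` and its discount rule are hypothetical, not from the post): a test written after the fact tends to restate the implementation, while a test written first states the success criterion in domain terms.

```python
def apply_discount(price, customer_years):
    """Loyal customers (3+ years) get 10% off."""
    if customer_years >= 3:
        return round(price * 0.9, 2)
    return price

# Test-after style: written once the code exists, it mirrors the
# implementation detail (the 0.9 multiplier) rather than the rule.
def test_multiplies_by_point_nine():
    assert apply_discount(100.0, 5) == 100.0 * 0.9

# Test-first style: written from examples agreed up front, it
# expresses intent about the behaviour.
def test_loyal_customer_gets_ten_percent_off():
    assert apply_discount(100.0, 5) == 90.0

def test_new_customer_pays_full_price():
    assert apply_discount(100.0, 1) == 100.0
```

The test-after version has to change whenever the implementation detail changes; the test-first versions only change if the business rule itself does.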

      2. Possibly – improve the design

This has got to get through a code review!

Prompt the AI to do some design improvement. It will be restricted to changing the design in ways that don’t break the tests. As I mentioned, the tests are probably very tied to the implementation and might not allow much improvement.

Then you can commit and push in step 5 (Decide the story or ticket is done, commit and push, create pull request) as you would before. Various AI tools can make that easier for you.

AI supported Coding-Driven Development compared with ordinary Coding-Driven Development

The AI tool will probably have helped you to go faster. In his recent video Dave Farley quoted research he was involved in that noted a 30-50% speedup on this kind of coding task with AI assistance.

What you’re getting by hanging an AI tool onto your existing posture, is tickets resolved more frequently, but all the other problems are still there. You’re still probably not writing enough of the right kind of tests and probably still not achieving great code quality. You’re mostly doing what you were doing before, faster.

Jason Gorman refers to this as “attaching a code generation firehose to your existing development process”. If there are bottlenecks downstream of your code changes, like testing and support – they will get worse.

Change the initial posture – starting from a TDD process  

What you need to do is to fix the initial posture of your development process, before you add this new heavy instrument. Let me outline how a TDD process is amplified by the addition of AI. 

Let me explain where this description comes from. As well as relating my own experience, I also asked half a dozen people who I know were good TDD practitioners in the “before times” to explain how they work now, together with AI. All of them are very skilled engineers. None of them are currently using swarms of genies or multi-agent setups. Everyone I spoke to said they write little or no code by hand any more. The precise mechanics of TDD are different now, but I still recognize the overall process. Let me explain.

TDD or Behaviour Driven Development begins with 

  1. Understand the problem, read the story, talk to stakeholders, come up with concrete examples.

AI can help you find good examples and help explore your understanding of the problem. It can also help you come up with a quick prototype or several prototypes to validate ideas with stakeholders.

  2. Sketch out a development plan or ‘test list’ that comprises examples that will turn into test cases

AI can help you slice the problem into pieces. This is a crucial step in TDD with or without AI. Making a plan lets you reduce the scope of the problem you have to keep in your context at any one time.
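As a rough illustration, a test list can be as simple as real tests for the first slice plus a plain-text list of the rest. Everything here is invented for the sketch (`shipping_cost`, the threshold, the rates) – the point is the structure, not the domain rules.

```python
def shipping_cost(order_total):
    # Minimal implementation covering the first slice only:
    # orders of 50.00 or more ship free, otherwise a flat rate.
    if order_total >= 50.0:
        return 0.0
    return 4.95

def test_free_shipping_over_threshold():
    assert shipping_cost(order_total=60.0) == 0.0

def test_flat_rate_below_threshold():
    assert shipping_cost(order_total=10.0) == 4.95

# Test list – remaining slices, each small enough to hand to an
# agent one at a time:
# - oversized items add a surcharge
# - international orders use carrier rates
# - discount codes apply before the free-shipping check
```

Each comment line is a future scenario; only the slice currently being built needs to be in the agent’s context.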

  3. Pick the first slice and scenario or example – then build just that piece.

The “Red” and “Green” steps of TDD tend to get combined in an agentic workflow: it both writes the test and makes it pass in the same step. You give the AI a clear description of what the code should do, a fast feedback loop, and a suitably sized problem to work on. It’s really good at that.
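A single slice handled this way might look like the following hypothetical example (`is_valid_password` and the scenario are invented): the scenario is specified up front, and the test plus the minimal code that passes it come out of one combined red-plus-green step.

```python
# Scenario given to the agent before any code exists:
#   "A password shorter than 8 characters is invalid;
#    8 or more characters is valid."

def is_valid_password(password):
    # Minimal code to satisfy the tests below – nothing speculative.
    return len(password) >= 8

def test_short_password_rejected():
    assert is_valid_password("abc") is False

def test_long_enough_password_accepted():
    assert is_valid_password("longenough") is True
```

The key skill is the same one TDD teaches: being able to specify the outcome before the code exists.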

  4. Refactor and improve the design

A year or so ago I was very skeptical about the ability of AI to refactor safely. Recently it has got a lot better, and these days it actually has access to deterministic tools for things like rename and extract method. You do have to prompt it to do design improvement though. Often the initial code works but is bloated or lacking in other quality characteristics. The AI can refactor and keep your tests passing, and by this point your tests are usually already pretty reliable.

  5. Check in and push.

That is perhaps the biggest difference between this and the Coding-Driven Development process I talked about before. You are ready to commit and push much more frequently – cycle time is measured in minutes, not hours. The TDD practitioners I’ve been talking to are all reporting cycle times similar to or faster than they used to get without AI, typically under 10 minutes. Developing the whole feature still takes time, but you can commit and push small slices very frequently.

That is a key characteristic of TDD, and there are so many benefits to these frequent commits that I will probably need to cover them in another video. Suffice it to say your code generation firehose is producing higher-quality code and tests more frequently – in the long run you will go faster.

The other thing about the TDD process I want to highlight is that you can take advantage of different agents, with different skills in their context, at each step. You can make use of an Augmented Coding pattern called “knowledge documents”, which means your development tools get better with every coding cycle. Also a big topic for another video.

Conclusions

Everyone is excited about the prospect of using AI tools to speed up the development process and get better software into the hands of users sooner. I’ve described in some detail how a pre-AI Coding-Driven Development process looks, which is what I have observed as the most common way people approach development tasks. Think about my friend Scott and his trombone. Adding an AI tool on top of that doesn’t tend to produce the results you’re looking for. You probably need to work on your underlying posture. The people who are really getting the most out of these tools are the ones who are starting from a solid engineering process and have a TDD mindset.

Happy Coding!

Hi – I’m Emily!

I am a consultant with Bache Consulting and chair of the Samman Technical Coaching Society. As a technical coach I work with software development organizations who want to get better at technical practices like Test-Driven Development, Refactoring and Incremental Design. I also write books and publish videos. I live in Gothenburg, Sweden, although I am originally from the UK.


