The Kind of Training that Actually Works for Learning Agentic AI

by | Feb 4, 2026 | Opinion

Agentic AI-supported software development is probably the biggest shift in our field since high-level languages appeared in the 1950s and 60s. As a software developer you can’t ignore this change, and everyone has AI tools nowadays. The big problem I have is: how on earth do you learn to do good software engineering with agentic AI? Is Prompt Engineering actually a proper skill that you can teach and learn? Let’s explore how you gain expertise as a software developer in the new landscape of AI tooling.

If you look, there are plenty of courses about how to integrate AI into your product, so that you get machine-learning-enabled features. That’s not really what I mean. I’m interested in how you learn to use AI tools to develop ordinary software.

You can find courses on ‘how to make an MCP server’ or ‘how to implement RAG’ or ‘add hooks to Claude Code’. That’s more like it – valuable skills for developers – but still not really what I’m talking about. Courses on ‘how to use AI’ tend to focus on specific techniques, not generalizable skills.

Developers in all kinds of organizations are still being told to just try out the AI tools, experiment with them, and figure out how to use them in their production code. Trial and error and personal experience are of course valuable teachers, but there must be a way to get a leg up from someone who’s already done that. I said earlier that the shift to AI tools is probably as big as the shift from assembler to higher-level languages. When that happened, I don’t think the best approach was just to give everyone a FORTRAN compiler and tell them to work out the rest by themselves.

OK, I wasn’t around when that shift happened – I’m not quite that old – but I did see the early days of Object-Oriented design, which was another significant skills shift, when C++ and Java became mainstream. There were a lot of experts going around offering training and coaching; there were books and courses. In some places though, people didn’t get any of that. They just changed the compiler from C to C++ and carried on writing code exactly the same way as before. I saw a lot of truly terrible OO code written around that time, and it wasn’t only C++. It turns out ‘you can write FORTRAN in any language’, and people wrote a lot of Java that way too.

I’m convinced that with this kind of paradigm shift, you’ll get much better results if developers get training in AI tools from experts. Which brings up one of the problems: how do you find experts in a technology that is this new?

Agentic AI

Although Large Language Models have been around for several years now, and coding assistants have been in your IDE for nearly as long, it’s less than a year since developers started getting access to Agentic AI. This is qualitatively different from what we had before, which makes training particularly valuable.

Agentic AI is a tool that you can ask to take on a whole task: it works in a loop with access to other tools, able to run tests, research topics, try things out and update code across your repository. You can also have multiple agents working on different aspects of a task.
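To make that loop concrete, here is a deliberately tiny sketch of the idea. Everything in it is made up for illustration: a real agent calls an LLM where `fake_model` stands, and has many tools besides `run_tests`.

```python
# A minimal sketch of an agentic loop: the model repeatedly chooses an
# action, the harness runs the matching tool, and the result is fed back
# into the transcript until the model decides it is done.

def run_tests():
    """A stub 'tool' the agent can invoke (a real one would shell out)."""
    return "2 passed, 1 failed: test_export"

TOOLS = {"run_tests": run_tests}

def fake_model(history):
    """Stand-in for the LLM choosing the next action from the transcript."""
    return "done" if history else "run_tests"

def agent_loop(task):
    """Ask the model for an action, run the tool, record the result, repeat."""
    history = []
    while True:
        action = fake_model(history)
        if action == "done":
            return history
        history.append((action, TOOLS[action]()))

history = agent_loop("fix the failing export test")
print(history)  # → [('run_tests', '2 passed, 1 failed: test_export')]
```

The point of the sketch is the shape, not the details: the loop, the tool access, and the feedback are what separate an agent from a one-shot completion.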

It’s very different from having a sidebar in your IDE where you can ask questions, or AI-generated line completion in your editor. Agentic AI is a much bigger shift in the way developers work minute by minute.

I started looking seriously at these tools about six months ago, with a view to teaching them in my coaching work. I wanted to learn the state of the art in Prompt Engineering, Vibe Coding and AI-Assisted Engineering.

Software Engineering – optimize for learning and manage complexity

As a technical coach, I’ve spent many years teaching skills like Test-Driven Development, Clean Code and Refactoring – my focus is on improving the minute-by-minute processes and habits of coding. Agentic AI changes how that happens, although I think in essence the mindset is the same. Dave Farley has identified the two main concerns in modern software engineering: optimizing for learning and managing complexity.

When you’re working with agentic AI you’re continually learning about the problem you’re trying to solve, working in small steps and looking for feedback about whether you’re on the right track. Just like in TDD.

And you’re managing complexity, deciding how to approach the problem, how to partition the code and separate concerns. There’s an added dimension now of also managing the context that you give each of your AI agents – that is, dividing up the problem. It’s still very much like TDD.

Code Katas

My focus is still on teaching the TDD mindset, the engineering process that I use to both optimize for learning and manage complexity. The way I’ve been teaching TDD for years now is with the Samman method for technical coaching. We use code katas – small, fun coding exercises – and also pair and ensemble programming in real production code.

Code katas, unfortunately, turn out to be a terrible teaching aid for agentic AI. These problems are a great size for a human brain trying to learn new ways of working, but they are way too small to present any kind of challenge to AI. You can still use code katas to learn the mindset, but you have to turn off your AI tools first.

If you instead teach with a larger task, this problem doesn’t really go away. Self-contained training problems like ‘write an online TODO list’ or ‘a microblogging social media app’ sound like they might be a better-sized challenge. Unfortunately not. Jodie Burchell describes ‘data leakage’ as a big problem in cases like this. She explains that when you give an LLM a task that someone else has already published a solution to, the LLM will very easily be able to produce something very similar, because the published solution is in its training data. There are a lot of TODO list applications and similar published on GitHub, so any piece of software which has previously been used as a teaching example will be in the training data.

You can’t learn how to use a tool on an exercise if it’s going to behave totally differently in your closed-source, novel production code. Basically, I don’t think it’s possible to train skills for agentic AI on made-up examples – which are a major part of how we usually teach new tools and techniques. How can I prepare training materials without small coding exercises?

Augmented Coding Patterns

What about patterns, then? This is an old idea in software: that by studying existing solutions you can start to spot repeated elements, and understand the forces that would lead you not to one specific solution, but to a specific kind of solution – a pattern.

If Prompt Engineering were a skill you could learn, you’d expect there to be patterns in the kinds of prompts that work in particular situations. That is something you could describe with made-up examples; people can study them, then apply the same pattern in production code situations.

Example ‘Pattern’

The book “Beyond Vibe Coding” by Addy Osmani has had good reviews, and includes a section on patterns for prompt engineering. The book came out less than six months ago, and I was just reading through the patterns. For most of them my reaction was: this is unnecessary, or this is obvious, or this is superseded. I think it was probably good advice when the book came out, but the tools have moved on loads since then.

For example, one of the patterns was “Contextual prompting”: say you want the AI to write some code using a particular API; you could include a snippet from the API’s documentation in your prompt, so it would be more likely to use the API correctly in the generated code. These days, the AI agent is more than capable of looking up the documentation for an API by itself.
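To show the mechanics, here is a sketch of what “Contextual prompting” looked like in practice. The doc snippet, function name and task are all hypothetical, invented for illustration – this is not code from the book:

```python
# Sketch of the "Contextual prompting" pattern: paste the relevant API
# documentation into the prompt by hand, so the model is less likely to
# hallucinate the interface. Doc snippet and task are made up.

API_DOC_SNIPPET = """\
requests.get(url, params=None, timeout=None)
    Sends a GET request. `params` is a dict of query parameters;
    `timeout` is seconds to wait before raising an exception.
"""

def build_contextual_prompt(task: str, doc_snippet: str) -> str:
    """Combine a task description with pasted-in reference docs."""
    return (
        "Use the API exactly as documented below.\n\n"
        f"--- API documentation ---\n{doc_snippet}\n"
        f"--- Task ---\n{task}\n"
    )

prompt = build_contextual_prompt(
    "Write a function that fetches JSON from a URL with a 5 second timeout.",
    API_DOC_SNIPPET,
)
print(prompt)
```

The manual step in the middle – finding and pasting the documentation yourself – is exactly the part a current agent now does on its own.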

Newer patterns

There are newer patterns available that work much better with the current generation of agentic AI tools. I’m particularly impressed with the Augmented Coding Patterns compiled by Lada Kesseler, Ivett Ördög and others. These patterns are nicely described in a talk and on a website, and I’ve found a lot of value in using them.

Take the example I just shared, Osmani’s pattern “Contextual Prompting”. When I discussed it with Lada, she pointed out that it’s a vaguely-worded specific case of her pattern “Reference Docs” – basically on-demand reference documentation. I’m finding these ‘augmented coding patterns’ are well explained, address fundamental limitations of LLMs, and are based on a lot of actual experience of using agentic AI.

I also want to stress again how fast these AI tools are changing. The very newest models are already applying some of these patterns without you needing to prompt them. Take the pattern “Check Alignment”, for example. This pattern says: when you give the agent a new task, remind it to ask you questions and explain its plan before starting. This can stop the agent from going off in the wrong direction, so you discover sooner whether your ideas are aligned. Last time I used Claude Code, I didn’t need to tell it to do this. When I specified a new task, it spontaneously checked alignment with me.
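As a sketch of how you might apply “Check Alignment” yourself when the model doesn’t do it spontaneously, you can keep a standing preamble and prepend it to each new task. The wording below is my own illustrative phrasing, not the canonical pattern text:

```python
# Sketch of the "Check Alignment" pattern: a reusable preamble that tells
# the agent to surface questions and a plan before touching any code.

CHECK_ALIGNMENT_PREAMBLE = (
    "Before writing any code:\n"
    "1. Ask me clarifying questions about anything ambiguous.\n"
    "2. Present a short plan of the steps you intend to take.\n"
    "3. Wait for my confirmation before you start.\n"
)

def with_alignment_check(task: str) -> str:
    """Wrap a task prompt so the agent checks alignment first."""
    return CHECK_ALIGNMENT_PREAMBLE + "\nTask: " + task

wrapped = with_alignment_check("Add CSV export to the reporting module.")
print(wrapped)
```

In practice you would put an instruction like this in the agent’s standing configuration rather than repeat it per task, but the effect is the same: misalignment surfaces before the agent has written a hundred files.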

Although – it might not always do that. The tools are changing rapidly, and it’s hard to keep up sometimes.

What is worth learning then?

I’m still convinced that the core skills in software engineering are to optimize for learning and to manage complexity. Those fundamentals don’t change when our tools do. When I use agentic AI, it’s still me doing the engineering. I think Prompt Engineering is a skill within software engineering: it’s part of the wider set of tactics you use to get the agentic AI to work in small steps and to stay in control of the complexity. “Augmented coding” is probably a better name for it.

A mindset of Test-Driven Development is still the best advice I have for an effective process. However the details of how to do it are different with Agentic AI.

Patterns for augmented coding are the most valuable specific AI skills I advise you to learn right now. I’m working closely with collaborators like Ivett Ördög, Lada Kesseler, Llewellyn Falco and others in the Samman society to bring these engineering patterns into Samman coaching and better teach Agentic AI. If you haven’t already, do subscribe to the Samman newsletter and the Modern Software Engineering channel to keep up to date.

My best advice for developers learning AI tools today: trial and error in real production code is a slow way to learn, but experts and trainers are in a difficult position when it comes to specific AI skills. Training on exercises is problematic, although you can use code katas to learn the TDD mindset. Find an expert in modern software engineering and prefer technical coaching in real production code. In particular, I encourage you to learn about and experiment with the latest augmented coding patterns.

Happy Coding!

Hi – I’m Emily!

I am a consultant with Bache Consulting and chair of the Samman Technical Coaching Society. As a technical coach I work with software development organizations who want to get better at technical practices like Test-Driven Development, Refactoring and Incremental Design. I also write books and publish videos. I live in Gothenburg, Sweden, although I am originally from the UK.


Practical Coaching –
Beyond the Blog

If you’re enjoying the insights shared here on the blog, you might enjoy my training too.

“Technical Agile Coaching with the Samman Method” offers a practical guide to improving how developers collaborate and write code. You’ll learn hands-on techniques for Test-Driven Development, Refactoring, and effective team coaching.

