Walking Skeletons, Delivery Pipelines & DevOps Drills

On my 3-day Code Craft training workshop (and if you’re reading this in January 2026, training’s half-price if you confirm your booking by Jan 31st), there’s a team exercise where the group need to work together to deliver a simple program to the customer’s (my) laptop so that I can acceptance-test it.

It’s primarily an exercise in Continuous Delivery, bringing together many of the skills explored earlier in the course like Test-Driven Development and Continuous Integration.

But it also exercises muscles that individual or pair-programmed exercises don’t reach. Any problem, even a simple one like the Mars Rover, tends to become much more complicated when we tackle it as a team. It requires a lot of communication and coordination. A team will typically take more time to complete it.

And it also exercises muscles that developers these days have never used before. In 2026, the average developer has never created, say, a command-line project from scratch in their tech stack. They’ve never set up a repo using their version control tool. They’ve never created a build script for Continuous Integration builds. They’ve never written a script to automatically deploy working software.

In the age of “developer experience”, a lot of people have these things done for them. Entry-level devs land on a project and it’s all just there.

That may seem like a convenience initially, but it comes with a sort of learned helplessness, with total reliance on other people to create and adapt build and deployment logic when it’s needed. A lot of developers would be on a significant learning curve if they ever needed to get a project up and running or to change, say, a build script.

It’s the delivery pipeline that frustrates most teams’ attempts to get any functionality in front of the customer in this exercise.

I urge them at the start to get that pipeline in place first. Code that can’t be used has no value. They may have written all of it, but if I can’t test it on my machine – nil points. Just like in real life.

They’re encouraged to create a “walking skeleton” for their tech stack – e.g., a command-line program that outputs “Hello, world!”, and has one dummy unit test.
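For a Python stack, that skeleton could be as small as the sketch below. This is just an illustration – the file names and the greeting function are placeholders I’ve chosen, not part of any prescribed structure:

    # app.py – the whole “product” at this stage: just enough to prove the pipeline end to end
    def greeting():
        return "Hello, world!"

    if __name__ == "__main__":
        print(greeting())

    # test_app.py – the one dummy unit test, so CI has something to run
    from app import greeting

    def test_greeting():
        assert greeting() == "Hello, world!"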

This can then be added to a new GitHub repository, and the rest of the team can be invited to collaborate on it. That’s the first part of the pipeline.

Then someone can create a build script that runs the tests, and is triggered by pushes to the main (trunk) branch. On GitHub, if we keep our technical architecture vanilla for our tech stack (e.g., a vanilla Java/Maven project structure), GitHub Actions can usually generate a script for us. It might need a tweak or two – the right version of Java, for example – but it will get us in the ballpark.

So now everyone in the team can clone a repo that has a skeleton project with a dummy unit test and a simple output to check that it’s working end to end.

That’s the middle of the pipeline. We now have what we need to at least do Continuous Integration.

The final part of the pipeline is when the food makes it to the customer’s table. I remind teams that my laptop is a developer’s machine, and that I have versions of Python, Node.js, Java and .NET installed, as well as a Git client.

So, they could write a batch script that clones the repo, builds the software (e.g., runs pip install for a Python project), and runs the program. When I see “Hello, world!” appear on my screen, we have lift-off. The team can begin implementing the Mars Rover, and whenever a feature is complete, they can ping me and ask me to run that script again to test it.
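To make that concrete, here’s roughly what such a script might look like – written in Python rather than as a batch file, purely for illustration. The repo URL is a placeholder, and it assumes Git and Python are already on the customer’s machine and that the project has a requirements.txt:

    # deploy.py – clone or update the repo, install dependencies, run the program
    import os
    import subprocess

    REPO = "https://github.com/example-team/mars-rover.git"  # placeholder URL
    WORKDIR = "mars-rover"

    if os.path.isdir(WORKDIR):
        subprocess.run(["git", "-C", WORKDIR, "pull"], check=True)
    else:
        subprocess.run(["git", "clone", REPO, WORKDIR], check=True)

    subprocess.run(["python", "-m", "pip", "install", "-r", "requirements.txt"],
                   cwd=WORKDIR, check=True)
    subprocess.run(["python", "app.py"], cwd=WORKDIR, check=True)  # should print "Hello, world!"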

And thus, value begins to flow, in the form of meaningful user feedback from working software. (Aww, bless. Did you think the software was the value? No, mate. The value’s in what we learn, not what we deliver.)

And, of course, in the real world, that delivery pipeline will evolve, adding more quality gates (e.g., linting), parallelising test execution as the suite gets larger, progressing to more sophisticated deployment models and that sort of thing, as needs change.

DevOps – the marriage of software development and operations – means that the team writing the solution code also handles these matters. We don’t throw it over the wall to a separate “DevOps” team. That’s kind of the whole point of DevOps, really. When we need a change to, say, the build script, we – the team – make that change.

But you might be surprised how many people who describe themselves as “DevOps Engineers” wouldn’t even know where to start. (Or maybe you wouldn’t.)

It’s not their fault if they’ve been given no exposure to operations. And it’s not every day that we start a project from scratch, so the opportunities to gain experience are few and far between.

Given just how critical these pipelines are to our delivery lead times, it’s surprising how little time and effort many organisations invest in getting good at them. It should be a core competency in software development.

It’s especially mysterious why so many businesses allow it to become a bottleneck by favouring specialised teams over T-shaped DevOps software engineers who can do most of it themselves rather than waiting for someone else to do it. Teams could still have a specialised expert on hand for the rare times when deep expertise is really needed.

If the average developer knew the 20% they’d need 80% of the time to create and change delivery pipelines for their tech stack(s), there’d be a lot less waiting on “DevOps specialists” (which is an oxymoron, of course).

Just as a contractor who has to move house frequently tends to become very efficient at it, developers who frequently have to get delivery pipelines up and running tend to be much better at the yak shaving it involves.

So I encourage teams to create these opportunities by doing regular “DevOps drills” for their tech stacks. Get a Node Express “Hello, world” pipeline up and running from scratch. Get a Spring Boot pipeline up and running from scratch. And so on.

Typically, I see teams doing them monthly, and as they gain confidence, varying the parameters (e.g., parallel test execution, deployment to a cluster and so on), and making the quality gates more sophisticated (security testing, linting, mutation testing and so on), while learning how to optimise pipelines to keep them as frictionless as possible.

Why Does Test-Driven Development Work So Well In “AI”-assisted Programming?

In my series on The AI-Ready Software Developer, I propose a set of principles for getting better results using LLM-based coding assistants like Claude Code and Cursor.

Users of these tools report how often and how easily they go off the rails, producing code that doesn’t do what we want and frequently breaking code that was working. As the code grows, these risks grow with them. On large code bases, they can really struggle.

From experiment and from real-world use, I’ve seen a number of things help reduce those risks and keep the “AI” on the rails.

  • Working in smaller steps
  • Testing after every step
  • Reviewing code after every step
  • Refactoring code as soon as problems appear
  • Clarifying prompts with examples

Smaller Steps

Human programmers have a limited capacity for cognitive load. There’s only so much we can comfortably wrap our heads around with any real focus, and when we overload ourselves, mistakes become much more likely. When we’re trying to spin many plates, the most likely result is broken plates.

LLMs have a similarly-limited capacity for context. While vendors advertise very impressive maximum context sizes of hundreds of thousands of tokens, research – and experience – shows that they have effective context limits that are orders of magnitude smaller.

The more things we ask models to pay attention to, the less able they are to pay attention to any of them. Accuracy drops off a cliff once the context goes beyond these limits.

After thousands of hours working with “AI” coding assistants, I’ve found I get the best results – the fewest broken plates – when I ask the model to solve one problem at a time.

Continuous Testing

If I make one change to the code and test it straight away, and a test fails, I don’t need to be a debugging genius to figure out which change broke the code. It’s either a quick fix or a very cheap undo.

If I make ten changes and then test, debugging is potentially going to take significantly longer. And if I have to revert to the last known working version, that’s 10x the work and the time lost.

An LLM is more likely to generate breaking changes than a skilled programmer, so frequent testing is even more essential to keep us close to working code.

And if the model’s first change breaks the code, that broken code is now in its context and it – and I – don’t know it’s broken yet. So the model is predicting further code changes on top of a polluted context.

Many of us have been finding that a lot less rework is required when we test after every small step rather than saving up testing for the end of a batch of work.

There’s an implication here, though. If we’re testing and re-testing continuously, the tests need to be very fast.

Continuous Inspection

Left to their own devices, LLMs are very good at generating code they’re pretty bad at modifying later.

Some folks rely on rules and guardrails about code quality which are added to the context with every code-generating interaction with the model. This falls foul of the effective context limits of even the hyperscale LLMs. The model may “obey” – remember, they don’t in reality, they match and predict – some of these rules, but anyone who’s spent more than a few minutes attempting this approach will know that they rarely consistently obey all of them.

And filling up the context with rules runs the risk of “distracting” the LLM from the task at hand.

A more effective approach is to keep the context specific to the task – the problem to be solved – and then, when we’ve got something that works, we can turn our attention to maintainability.

After I’ve seen all my tests pass, I then do a code review, checking everything in the diff between the last working version and the latest. Because these diffs are small – one problem at a time – these code reviews are short and very focused, catching “code smells” as soon as they appear.

The longer I let the problems build up, the more the model ends up wading through its own “slop”, making every new change riskier and riskier.

I pay attention to pretty much the same things I would if I was writing all the code myself:

  • Clarity (LLMs really benefit from this, because… language model, duh!)
  • Complexity – the model needs the code likely to be affected in its context. More code, bigger context. Also, the more complex it is, the more likely it is to end up outside of the model’s training data distribution. Monkey no see, monkey can’t do.
  • Duplication – oh boy, do LLMs love duplicating code and concepts! Again, this is a context size issue. If I duplicate the same logic 5x, and need to make a change to the common logic, that’s 5x the code and 5x the tokens. But also, duplication often signposts useful abstractions and a more modular design. Talking of which…
  • Separation of Concerns – this is a big one. If I ask Claude Code to make a change to a 1,000-line class with 25 direct dependencies, that’s a lot of context, and we’re way outside the distribution. Many people have reported how their coding assistant craps out on code that lacks separation of concerns. I find I really have to keep on top of it. Modules should have one reason to change, and be loosely-coupled to other parts of the system.

On top of these, there are all kinds of low-level issues – security vulnerabilities, hanging imports, dead code etc. – that I find I need to look for. Static analysis can help me check diffs for a whole range of issues that would otherwise be easy to miss, by me or by an LLM doing the code review. I’m seeing a lot of developers upping their game with linting as they use “AI” more in their work.

Continuous Refactoring

Of course, finding code quality issues is purely academic if we don’t actually fix them. And, for the reasons I’ve already laid out – we want to give the model the smoothest surface to travel on – I fix them immediately.

And I don’t fix all the problems at once. I fix one problem at a time, again for reasons already stated.

And after I fix each problem, I run the tests again, in case the fix broke anything.

This process of fixing one “code smell” at a time, testing throughout, is called refactoring. You may well have heard of it. You may even think you’re doing it. There’s a very high probability that you’re not.

Clarifying With Examples

Here’s an experiment you can try for yourself. Prepare two prompts for a small code project. In one prompt, try to describe what you want as precisely as possible in plain language, without giving any examples.

The total of items in the basket is the sum of the item subtotals, which are the item price multiplied by the item quantity

In the second version, give the exact same requirements, but using examples.

The total of items in a shopping basket is the sum of item subtotals:

item #1: price = 9.99, quantity = 1

item #2: price = 11.99, quantity = 2

shopping basket total = (9.99 * 1) + (11.99 * 2) = 33.97

See what kind of results you get with both approaches. How often does the model misinterpret precisely-described requirements vs. requirements accompanied by examples?

It’s worth knowing that code-generating LLMs are typically trained on code samples that are paired with examples like this. When we include examples, we’re giving the model more to match on, limiting the search space to examples that do what we want.

Examples help prevent LLMs grabbing the wrong end of the prompt, and many users have found them to greatly improve accuracy in generated code.

Harking back to the need for very fast tests, these examples make an ideal basis for fast-running automated “unit” tests (where “units” = units of behaviour). It would make good sense to ask our coding assistant to generate them for us, because we’re going to be needing them soon enough.
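For example, the basket examples above translate almost directly into a fast unit test like the sketch below. It assumes a Python project and pytest, and basket_total is a name I’ve invented for illustration, not an existing API:

    # test_basket.py – the worked example from the prompt, expressed as an executable test
    import pytest
    from basket import basket_total  # basket_total doesn't exist yet – that's the point

    def test_total_is_sum_of_price_times_quantity():
        items = [
            {"price": 9.99, "quantity": 1},
            {"price": 11.99, "quantity": 2},
        ]
        assert basket_total(items) == pytest.approx(33.97)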

Putting It All Together

If we were to imagine a workflow that incorporates all of these principles – small steps, continuous testing, continuous inspection, continuous refactoring, clarifying with examples – it would look very familiar to the small percentage of developers who practice Test-Driven Development.

TDD has been around for several decades, and builds on practices that have been around even longer. It’s a tried-and-tested approach that’s been enabling the rapid, reliable and sustainable evolution of working software for those in the know. If you look inside the “elite-performing” teams in the DORA data – the ones delivering the most reliable software with the shortest lead times and the lowest cost of change – you’ll find they’re pretty much all doing TDD, or something very like TDD.

TDD specifies what we want software to do using examples, in the form of tests. (Hence, “test-driven”).

It works in micro-iterations where we write a test that fails because it requires something the software doesn’t do yet. Then we write the simplest code – the quickest thing we can think of – to get the tests passing. When all the tests are passing, we review the changes we’ve made, and if necessary refactor the code to fix any quality problems. Once we’re satisfied that the code is good enough – both working and easy to change – we move on to the next failing test case. And rinse and repeat until our feature or our change is complete.
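Sticking with the hypothetical basket example from earlier, the first micro-iteration might go like this: the test fails because basket_total doesn’t exist, so we write the simplest code that makes it pass:

    # basket.py – the simplest thing that passes the current test
    def basket_total(items):
        return sum(item["price"] * item["quantity"] for item in items)

(Purists might hard-code 33.97 first and only generalise when a second example forces them to; either way, the cycle is the same – red, green, review, refactor, repeat.)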

TDD practitioners work one feature at a time, one usage scenario at a time, one outcome at a time, one example at a time and one refactoring at a time. Basically, we solve one problem at a time.

And we’re continuously running our tests at every step to ensure the code is always working. While automated tests are a side-effect of driving design using tests, they’re a damned useful one! And because we’re only writing code that’s needed to pass tests, all of our code will end up being tested. It’s a self-fulfilling prophecy.

Embedded in that micro-cycle, many practitioners also use version control to ensure they’re making progress in safe, easily-reverted steps, progressing from one working version of the code to the next.

Some of us have discovered the benefits of a “commit on green, revert on red” approach to version control. If all the tests pass, we commit the changes. If any tests fail, we do a hard reset back to the previous working commit. This means that broken versions of the code don’t end up in the context for the next interaction. (Remember that LLMs can’t distinguish between working code and broken code – it’s all just context.)
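As a rough illustration of that loop – assuming pytest for the tests and Git for version control, and adapting the commands to your own stack – it could even be scripted:

    # green_or_revert.py – run the tests; commit if green, hard-reset if red
    import subprocess

    tests = subprocess.run(["python", "-m", "pytest", "-q"])

    if tests.returncode == 0:
        subprocess.run(["git", "add", "-A"], check=True)
        subprocess.run(["git", "commit", "-m", "green: tests passing"])  # no-op if nothing changed
    else:
        # red: throw the broken changes away so they never reach the next prompt's context
        subprocess.run(["git", "reset", "--hard", "HEAD"], check=True)
        subprocess.run(["git", "clean", "-fd"])  # also remove any new, untracked files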

The beauty of TDD is that the benefits can be yours whether you’re using “AI” or not. Which is why I now teach it both ways.

The key to being effective with “AI” coding assistants is being effective without them.

Shameless Plug

Test-Driven Development is not a skill that you can just switch on, whether you’re doing it with “AI” or without. It takes a lot of practice to get the hang of it, and especially to build the discipline – the habits – of TDD.

An alarming number of TDD tutorials aren’t actually teaching TDD. (And the more people learn from them, the more bad tutorials we’ll no doubt see.)

If your team wants training in Test-Driven Development, including how to do it effectively using tools like Claude Code and Cursor, my 2-day TDD training workshop is half-price if you confirm your booking by January 31st.

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

In this series, I’ve explored the principles and practices that teams seeing modest improvements in software development outcomes have been applying.

More than four years after the first “AI” coding assistant, GitHub Copilot, appeared, the evidence is clear. Claims of teams achieving 2x, 5x, even 10x productivity gains simply don’t stand up to scrutiny. There’s no shortage of anecdotal evidence, but not a shred of hard data. It seems that when we measure it, the gains mysteriously disappear.

The real range, when it’s measured in terms of team outcomes like delivery lead time and release stability, is roughly 0.8x – 1.2x, with negative effects being substantially more common than positives.

And we know why. Faster cars != faster traffic. Gains in code generation, according to the latest DORA State of AI-Assisted Software Development report, are lost to “downstream chaos” for the majority of teams.

Coding never was the bottleneck in software development, and optimising a non-bottleneck in a system with real bottlenecks just makes those bottlenecks worse.

Far from boosting team productivity, for the majority of “AI” users, it’s actually slowing them down, while also negatively impacting product or system reliability and maintainability. They’re producing worse software, later.

Most of those teams won’t be aware that it’s happening, of course. They attached a code-generating firehose to their development plumbing, and while the business is asking why they’re not getting the power shower they were promised, most teams are measuring the water pressure coming out of the hose (lines of code, commits, Pull Requests) and not out of the shower (business outcomes), because those numbers look far more impressive.

The teams who are seeing improvements in lead times of 5%, 10%, 15%, without sacrificing reliability and without increasing the cost of change, are doing it the way they were always doing it:

  • Working in small batches, solving one problem at a time
  • Iterating rapidly, with continuous testing, code review, refactoring and integration
  • Architecting highly modular designs that localise the “blast radius” of changes
  • Organising around end-to-end outcomes instead of around role or technology specialisms
  • Working with high autonomy, making timely decisions on the ground instead of sending them up the chain of command

When I observe teams that fall into the “high-performing” and “elite” categories of the DORA capability classifications using tools like Claude Code and Cursor, I see feedback loops being tightened. Batch sizes get even smaller, quality gates get even narrower, iterations get even faster. They keep “AI” on a very tight leash, and that by itself could well account for the improvements in outcomes.

Meanwhile, the majority of teams are doing the opposite. They’re trying to specify large amounts of work in detail up-front. They’re leaving “AI agents” to chew through long tasks that have wide impact, generating or modifying hundreds or even thousands of lines of code while developers go to the proverbial pub.

And, of course, they test and inspect too late, applying too little rigour – “Looks good to me.” They put far too much trust in the technology, relying on “rules” and “guardrails” set out in Markdown files that we know LLMs will misinterpret and ignore randomly, barely keeping one hand on the wheel.

As far as I’ve seen, no team actually winning with the technology works like that. They’re keeping both hands firmly on the wheel. They’re doing the driving. As AI luminary Andrej Karpathy put it, “agentic” solutions built on top of LLMs just don’t work reliably enough today to leave them to get on with it.

It may be many years before they do. Statistical mechanics predicts it could well be never, with the order-of-magnitude improvement in accuracy needed to make them reliable enough (wrong 2% of the time instead of 20%) calculated to require 10^20 times the compute to train. To do that on similar timescales to the hyperscale models of today would require Dyson Spheres (plural) to power it.

Any autonomous software developer – human or machine – requires Actual Intelligence: the ability to reason, to learn, to plan and to understand. There’s no reason to believe that any technology built using deep learning alone will ever be capable of those things, regardless of how plausibly they can mimic them, and no matter how big we scale them. LLMs are almost certainly a dead end for AGI.

For this reason I’ve resisted speculating about how good the technology might become in the future, even though the entire value proposition we see coming out of the frontier labs continues to be about future capabilities. The gold is always over the next hill, it seems.

Instead, I’ve focused my experiments and my learning on present-day reality. And the present-day reality that we’ll likely have to live with for a long time is that LLMs are unreliable narrators. End of. Any approach that doesn’t embrace this fact is doomed to fail.

That’s not to say, though, that there aren’t things we can do to reduce the “hallucinations” and confabulations, and therefore the downstream chaos.

LLMs perform well – are less unreliable – when we present them with problems that are well-represented in their training data. The errors they make are usually a product of going outside of their data distribution, presenting them with inputs that are too complex, too novel or too niche.

Ask them for one thing, in a common problem domain, and chances are much higher that they’ll get it right. Ask them for 10 things, or for something in the long-tail of sparse training examples, and we’re in “hallucination” territory.

Clarifying with examples (e.g., test cases) helps to minimise the semantic ambiguity of inputs, reducing the risk of misinterpretation, and this is especially helpful when the model’s working with code because the samples they’re trained on are paired with those kinds of examples. They give the LLM more to match on.

Contexts need to be small and specific to the current task. How small? Research suggests that the effective usable context sizes of even the frontier LLMs are orders of magnitude smaller than advertised. Going over 1,000 tokens is likely to produce errors, but even contexts as small as 100 tokens can produce problems.

Attention dilution, drift, “probability collapse” (play one at chess and you’ll see what I mean), and the famous “lost in the middle” effect make the odds of a model following all of the rules in your CLAUDE.md file, or all the requirements for a whole feature, vanishingly remote. They just can’t accurately pay attention to that many things.

But even if they could, trying to match on dozens of criteria simultaneously will inevitably send them out-of-distribution.

So the smart money focuses on one problem at a time and one rule at a time, working in rapid iterations, testing and inspecting after every step to ensure everything’s tickety-boo before committing the change (singular) and moving on to the next problem.

And when everything’s not tickety-boo – e.g., tests start failing – they do a hard reset and try again, perhaps breaking the task down into smaller, more in-distribution steps. Or, after the model’s failed 2-3 times, they write the code themselves to get out of a “doom loop”.

There will be times – many times – when you’ll be writing or tweaking or fixing the code yourself. Over-relying on the tool is likely to cause your skills to atrophy, so it’s important to keep your hand in.

It will also be necessary to stay on top of the code. The risk, when code’s being created faster than we can understand it, is that a kind of “comprehension debt” will rapidly build up. When we have to edit the code ourselves, it’s going to take us significantly longer to understand it.

And, of course, it compounds the “looks good to me” problem with our own version of the Gell-Mann amnesia effect. Something I’ve heard often over the last 3 years is people saying “Well, it’s not good with <programming language they know well>, but it’s great at <programming language they barely know>”. The less we understand the output, the less we see the brown M&Ms in the bowl.

“Agentic” coding assistants are claimed to be able to break complex problems down, and plan and execute large pieces of work in smaller steps. Even if they can – and remember that LLMs don’t reason and don’t plan, they just produce plausible-looking reasoning and plausible-looking plans – that doesn’t mean we can hit “Play” and walk away to leave them to it. We still need to check the results at every step and be ready to grab the wheel when the model inevitably takes a wrong turn.

Many developers report how LLM accuracy falls off a cliff when tasked with making changes to code that lacks separation of concerns, and we know why this is, too. Changing large modules with many dependencies brings a lot more code into play, which means the model has to work with a much larger context. And we’re out-of-distribution again.

The really interesting thing is that the teams DORA found were succeeding with “AI” were already working this way. Practices like Test-Driven Development, refactoring, modular design and Continuous Integration are highly compatible with working with “AI” coding assistants. Not just compatible, in fact – essential.

But we shouldn’t be surprised, really. Software development – with or without “AI” – is inherently uncertain. Is this really what the user needs? Will this architecture scale like we want? How do I use that new library? How do I make Java do this, that or the other?

It’s one unknown after another. Successful teams don’t let that uncertainty pile up, heaping speculation and assumption on top of speculation and assumption. They turn the cards over as they’re being dealt. Small steps, rapid feedback. Adapting to reality as it emerges.

Far from “changing the game”, probabilistic “AI” coding assistants have just added a new layer of uncertainty. Same game, different dice.

Those of us who’ve been promoting and teaching these skills for decades may have the last laugh, as more and more teams discover it really is the only effective way to drink from the firehose.

Skills like Test-Driven Development, refactoring, modular design and Continuous Integration don’t come with your Claude Code plan. You can’t buy them or install them like an “AI” coding assistant. They take time to learn – lots of time. Expert guidance from an experienced practitioner can expedite things and help you avoid the many pitfalls.

If you’re looking for training and coaching in the practices that are distinguishing the high-performing teams from the rest – with or without “AI” – visit my website.

The AI-Ready Software Developer #20 – It’s The Bottlenecks, Stupid!

For many years now, cycling has been consistently the fastest way to get around central London. Faster than taking the tube. Faster than taking the train. Faster than taking the bus. Faster than taking a cab. Faster than taking your car.

All of these other modes of transport are, in theory, faster than a bike. But the bike will tend to get there first, not because it’s the fastest vehicle, but because it’s subject to the fewest constraints.

Cars, cabs, trains and buses move not at the top speed of the vehicle, but at the speed of the system.

And, of course, when we measure their journey speed at an average of 9 mph, we don’t see them crawling along steadily at that pace.

“Travelling” in London is really mostly waiting. Waiting at junctions. Waiting at traffic lights. Waiting to turn. Waiting for the bus to pull out. Waiting on rail platforms. Waiting at tube stations. Waiting for the pedestrian to cross. Waiting for that van to unload.

Cyclists spend significantly less time waiting, and that makes them faster across town overall.

Similarly, development teams that can produce code much faster, but work in a system with real constraints – lots of waiting – will tend to be outperformed overall by teams who might produce code significantly slower, but who are less constrained – spend less time waiting.

What are developers waiting for? What are the traffic lights, junctions and pedestrian crossings in our work?

If I submit a Pull Request, I’m waiting for it to be reviewed. If I send my code for testing, I’m waiting for the results. If I don’t have SQL skills, and I need a new column in the database, I’m waiting for the DBA to add it for me. If I need someone on another team to make a change to their API, more waiting. If I pick up a feature request that needs clarifying, I’m waiting for the customer or the product owner to shed some light. If I need my manager to raise a request for a laptop, then that’s just yet more waiting.

Teams with handovers, sign-offs and other blocking activities in their development process will tend to be outperformed by teams who spend less time waiting, regardless of the raw coding power available to them.

Teams who treat activities like testing, code review, customer interaction and merging as “phases” in their process will tend to be outperformed by teams who do them continuously, regardless of how many LOC or tokens per minute they’re capable of generating.

This isn’t conjecture. The best available evidence is pretty clear. Teams who’ve addressed the bottlenecks in their system are getting there sooner – and in better shape – than teams who haven’t. With or without “AI”.

The teams who collaborate with customers every day – many times a day – outperform teams who have limited, infrequent access.

The teams who design, test, review, refactor and integrate continuously outperform teams who do them in phases.

The teams with wider skillsets outperform highly-specialised teams.

The teams working in cohesive and loosely-coupled enterprise architectures outperform teams working in distributed monoliths.

The teams with more autonomy outperform teams working in command-and-control hierarchies.

None of these things comes with your Claude Code plan. You can’t buy them. You can’t install them. But you can learn them.

And if you’re ticking none of those boxes, and you still think a code-generating supercar is going to make things better, I have a Bugatti Chiron Sport you might be interested in buying. Perfect for the school run!

The AI-Ready Software Developer #19 – Prompt-and-Fix

For over a billion years now, we’ve known that “code-and-fix” software development, where we write a whole bunch of code for a feature, or even for a whole release, and then check it for bugs, maintainability problems, security vulnerabilities and so on, is by far the most expensive and least effective approach to delivering production-ready software.

If I change one line of code and tests start failing, I’ve got a pretty good idea what broke it, and it’s a very small amount of work (or lost work) to fix it.

If I change 1,000 lines of code, and tests start failing… Well, we’re in a very different ballpark now. Figuring out what change(s) broke the software and then fixing them is a lot of work, and rolling back to the last known working version is a lot of work lost.

Also, checking a single change is likely to bring a lot more focus than checking 1,000. Hence my go-to meme for after-the-fact testing and code reviews:

The usual end result of code-and-fix development is buggier, less maintainable software delivered much later and at a much higher cost.

And all things in traditional software development have their “AI”-assisted equivalents, of course.

I see developers offloading large tasks – whole features or even sets of features for a release – and then setting the agentic dogs loose on them while they go off to eat a sandwich or plan a holiday or get a spa treatment or whatever it is software developers do these days.

Then they come back after the agent has finished to “check” the results. I’ve even heard them say “Looks good to me” out loud as they skim hundreds or thousands of changes.

Time for the meme again:

Now, there’s no doubting that “AI”-assisted coding tools have improved a lot in the last 6-12 months. But they’re still essentially LLMs wrapped in WHILE loops, with all the reliability we’ve come to expect.

Odds of it getting one change right? 80%, maybe, with a good wind behind it. Chances of it getting two right? 65%, perhaps.

Odds of it getting 100 changes right? Effectively zero.
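The arithmetic is just compound probability. Taking the 80%-per-change figure above at face value, and treating each change as an independent bet:

    # compounding an 80% per-change success rate over a batch of changes
    p = 0.8
    for n in (1, 2, 10, 100):
        print(n, round(p ** n, 10))
    # prints: 1 0.8 / 2 0.64 / 10 0.1073741824 / 100 2e-10

Real changes aren’t independent, of course – errors compound and contexts degrade – but the shape of the curve is the point.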

Sure, tests help. You gave it tests, right?

Guardrails can help, when the model actually pays attention to them.

External checking – linters and that sort of thing – can definitely help.

But, as anyone who’s spent enough time using these tools can tell you, no matter how we prompt or how we test or how we try to constrain the output, every additional problem we ask it to solve adds risk.

LLMs are unreliable narrators, and there’s really nothing we can do to get around that except to be skeptical of their output.

And then there are the “doom loops”, when the context goes outside the model’s data distribution, and even with infinite iterations, it just can’t do what we want it to do. It just can’t conjure up the code equivalent of “a wine glass full to the brim”.

And the bigger the context – the more we ask for – the greater the risk of out-of-distribution behaviour, with each additional pertinent token collapsing the probability of matching the pattern even further. (Don’t believe me? Play one at chess and watch it go off that OOD cliff.)

So problems are very likely with this approach – which I’m calling “prompt-and-fix”, because I can – and finding them and fixing them, or backing out, is a bigger cost.

What I’ve seen most developers do is skim the changes and then wave the problems through into a release with a “LGTM”.

One more time:

This creates a comforting temporary illusion of time saved, just like code-and-fix. But we’re storing up a lot more time that’s going to be lost later with production fires, bug fixes and high cost-of-change.

The antidote to code-and-fix was defect prevention. We take smaller steps, testing and reviewing changes continuously, so most problems are caught long before finding, fixing or reverting them becomes expensive.

I have a meme for that, too:

The equivalent in “AI”-assisted software development would be to work in small steps – one change at a time – and to test and review the code continuously after every step.

Sorry, folks. No time for that spa treatment! You’ll be keeping the “AI” on a very short leash – both hands on the wheel at all times, sort of thing.

The other benefit of small steps is that they’re much less likely to push the LLM out of its data distribution. Keeping the model in-distribution means screw-ups happen less often, while immediate problem detection means less work is added or lost when things do go south. It’s a WIN-WIN.

I know that some of you will be reading this and thinking “But Claude can break a big problem down into smaller problems and tackle them one at a time, running the tests and linting the code and all that”.

Yes, in that mode, it certainly can. But every step it takes carries a real risk of taking it in the wrong direction. And direction, despite what some fans of the technology claim, isn’t an LLM’s strong suit. Remember, they don’t understand, they don’t reason, they don’t plan. They recursively match patterns in the input to patterns in the model and predict what token comes next.

Any sense that they’re thinking or reasoning or planning is a product of the Actual Intelligence they’re trained on. It may look plausible, but on closer inspection – and “closer inspection” is often the problem here – it’s usually riddled with “brown M&Ms”.

So, no, you can’t just walk away and let them get on with it. If they take a wrong turn, that error will likely compound through the rest of the processing.

Think of what happens in traditional software development when a misunderstanding or an incorrect assumption goes unchecked while we merrily build on top of that code.