Finally! Proof That Agentic AI Scales (For Creating Broken Software)

Some of the marketing choices made by the “AI” industry over the last few years have seemed a little… odd.

The latest is a “breakthrough” in “agentic AI” coding heralded by Cursor, in which they claim that a 3+ million-lines-of-code (MLOC) web browser was generated by 100 or so agents in a week.

It certainly sounds impressive, and many of the usual AI boosters have been amplifying it online as “proof” that agentic software development works at scale.

But don’t start ordering your uniform to fight in the Butlerian Jihad just yet. They might be getting a little ahead of themselves.

Did 100 agents generate 3 MLOC in about a week? It would appear so, yes. So that part of the claim’s probably true.

Did 100 agents generate a working web browser? Well, I couldn’t get it to work. And, apparently, other developers couldn’t get it to work.

Feel free to try it yourself if you have a Rust compiler.

And while you’re looking at the repo – and it surprises me it didn’t occur to them that anybody might – you might want to hop over to the Action performance metrics on the Insights page.

An 88% job failure rate is very high. It’s kind of indicative of a code base that doesn’t work. And looking at the CI build history on the Actions page, it appears it wasn’t working for a long time. I couldn’t go back far enough to find out when it became a sea of red builds.

Curiously, near to the end, builds suddenly started succeeding. Did the agents “fix” the build in the same way they sometimes “fix” failing tests, I wonder? If you’re a software engineering researcher, I suspect there’s at least one PhD project hiding in the data.

But, true to form, it ended on a broken build and what does indeed appear to be broken software.

The repo’s Action usage metrics tell an interesting story.

The total time GitHub spent running builds on this repo was 143,911 minutes. That’s more than three months of round-the-clock builds in about a week.

This strongly suggests that builds were happening in parallel – roughly 14 at a time on average, if the work spanned a week (143,911 ÷ 10,080 minutes) – and that strongly suggests agents were checking in changes on top of each other. It also suggests agents were pulling changes while CI builds were in progress.

This is Continuous Integration 101. While a build is in progress, the software’s like Schrödinger’s Cat – simultaneously working and not working. Basically, we don’t know if the changes being tested in that check-in have broken the software.

The implication, if our goal is to keep the code working, is that nobody else should push or pull changes until they know the build’s green. And this means that builds shouldn’t be happening in parallel on the same code base.

Your dev team – agentic or of the meat-puppet variety – may be a 100-lane motorway, but a safe CI pipeline remains a garden gate.

The average job queuing time in the Action performance metrics illustrates what happens when a 100-lane motorway meets a garden gate.

And the 88% build failure rate illustrates what happens when motorists don’t stop for it.

The other fact staring us in the face is that the agents could not have been doing what Kent Beck calls “Clean Check-ins” – only checking in code that’s passing the tests.

They must have been pulling code from broken builds to stay in sync, and pushing demonstrably broken code (if they were running the tests, of course).

In the real world, when the build breaks and we can’t fix it quickly, we roll back to the previous working version – the last green build. Their agentic pile-on doesn’t appear to have done this. It broke, and they just carried on 88% of the time.

Far from proving that agentic software development works at scale, this experiment has proved my point. You can’t outrun a bottleneck.

If the agents had been constrained to producing software that works, all their check-ins would have had to go in single file – one at a time through the garden gate.

That’s where those 143,911 total build minutes tell a very different story. That’s the absolute minimum time it would have taken – with no slip-ups, no queueing, etc. – to produce a working web browser on that scale.

Realistically, with real-world constraints and LLMs’ famous unreliability – years, if ever. I strongly suspect it just wouldn’t be possible, and this experiment has just strengthened that case.

Who cares how fast we can generate broken code?

The discipline of real Continuous Integration – that results in working, shippable software – is something we explore practically with a team CI & CD exercise on my 3-day Code Craft training workshop. If you book it by January 31st 2026, you could save £thousands with our 50% off deal.

Productivity Theatre

The value proposition of Large Language Models is that they might boost our productivity as programmers (when we use them with good engineering discipline). And there’s no doubting that there are things we can do faster using this technology.

It would be a mistake, though, to assume that we can do everything faster using them.

I’ve watched many developers prompting, say, Claude or Cursor to perform tasks that they could have done much faster – and more reliably – themselves using “classical” tools or just typing the damn code instead of a prompt.

For example, there have been times when I’ve seen developers writing prompts like “Claude, please extract lines 23-29 into a new method called foo that returns the value of x” when their IDE could do that with a few keystrokes.

In these moments, the tool isn’t making them more productive. It’s making them less productive. So we might, when we find ourselves doing it – and I certainly have – pause to reflect on why.

It could be that we just don’t know the easier way. You might be surprised at how many developers haven’t even looked at the refactoring menu in their IDE, for example. Or that we know there’s an easier way, but don’t want to take the time to learn it.

In the latter case, it’s true that it would probably take them longer the first time. So they continue doing it the long way. Arrested development – often under time pressure, or perceived time pressure – is a common condition in our profession.

But in many cases, it seems performative. We know there’s a quicker, easier way, but we feel we need to show that it can be done – a bit like those people who insist you can cook anything in a microwave. Yes, technically you can, but is that always the best or the easiest option?

Someone calling themselves an “AI engineer” or “AI-native” might feel the need to signal to the people around them that they can indeed cook anything in the proverbial microwave.

And then it ceases to be about productivity. It’s about making a point, and demonstrating prowess to peers, superiors and random strangers on LinkedIn. The technology has become part of their professional identity.

Sacrificing real productivity in service to a specific technology or a technique is nothing new, of course. Software developers have been applying the “if all you’ve got is a hammer” principle for many decades – “I don’t know how we’re going to solve this problem, but we’re going to do it with microservices” sort of thing.

Quite often, these decisions – conscious or unconscious – seem to be career and status-driven. If “AI-native” is hot in the job market, that’s what we want on our CV. “AI when it makes sense” is not hot right now. It may be rational, but it’s less in-demand.

I’m still very much unfashionably rational, having sustained a long career by avoiding getting pigeonholed in the many fads and fashions that have come and gone. I’m interested in what’s real and in what works.

You never know. One day that might catch on.

If you want to hone your “classical” software engineering skills for the times when those are the better option, as well as learn how to apply engineering principles to “AI”-assisted development in an evidence-based approach that more and more developers are discovering gets better results – if it’s better results you’re after, of course – then check out my training website for details of courses and coaching, and oodles of free learning resources.

The Gorman Paradox – Solution II: They’re In The Bin

Software development’s essentially a learning process. Most of the value in a product or system’s added in response to user and eventually market feedback.

With each iteration we get the design less wrong. With each iteration, we learn.

The effect of batch size on learning is profound.

I urge teams to work on the basis that every design decision is guesswork until it hits the real world. We can’t know with certainty that we made the right decisions.

Getting user feedback is the only meaningful mechanism we have to “turn the cards over” and find out if we guessed right. In this sense, learning is characterised as reducing or eliminating uncertainty in product design. Teams who do this faster will tend to out-learn their competition.

Imagine trying to guess a random 4-digit number in one go vs. guessing one digit at a time.

In both approaches, we start with the same odds of guessing it right: 1/10,000. But with each guess, the uncertainty collapses orders of magnitude faster when we’re guessing one digit at a time. The latter approach out-learns the former.

Even if we had an “AI” random 4-digit number generator that enabled us to make 10x as many guesses in the same time, guessing one digit at a time would still out-learn us.
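If you want to see just how big the difference is, here’s a quick simulation – a minimal, purely illustrative sketch in Python (the numbers are what matter, not the code):

import random

def guesses_all_at_once(secret):
    # Guess whole 4-digit codes, with only right/wrong feedback on each guess.
    candidates = list(range(10_000))
    random.shuffle(candidates)
    return candidates.index(secret) + 1

def guesses_one_digit_at_a_time(secret):
    # Guess digit by digit, with feedback on each digit.
    total = 0
    for digit in f"{secret:04d}":
        options = list("0123456789")
        random.shuffle(options)
        total += options.index(digit) + 1
    return total

trials = 1_000
secrets = [random.randrange(10_000) for _ in range(trials)]
print(sum(guesses_all_at_once(s) for s in secrets) / trials)          # roughly 5,000 guesses on average
print(sum(guesses_one_digit_at_a_time(s) for s in secrets) / trials)  # roughly 22 guesses on average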

The chances of a complete solution delivered in a single pass – guessing all 4 digits in one go – being even on the same continent as correct are vanishingly remote, and we learn very little because of the nature of user feedback.

If I deliver 50 changes (e.g., new features) in a single release and ask users “waddaya think?”, I won’t get meaningful feedback about all 50 changes.

Most likely I’ll get general feedback of the “LGTM” or “meh” variety, and maybe some specific feedback about things that stood out. (Bugs in a release tend to overshadow anything else, for example – the proverbial fly in the soup. “Waddaya think of the soup?” “There’s a fly in it!”)

If I deliver ONE change, they’ll probably have something meaningful to say about it. We can at least observe what impact that one change has on user behaviour (e.g., engagement, completing tasks etc).

So we learn faster when we iterate fewer changes into the hands of users at a time. This inevitably forces us to apply the brakes on the creation of code, because we need to wait for feedback, and we need to do that often.

I see many posts here from folks claiming to have generated entire applications in days or even hours using LLM-based coding tools. That’s the equivalent of “guessing all 4 digits at a time using an ‘AI’ 4-digit number generator”. That’s an entire application – hundreds of design decisions – created without any user feedback.

Creating an entire application in a single pass is every bit as “Big Design Up-Front” as wireframing or modeling the whole thing in UML in advance. And assumptions and guesses in your early decisions get compounded in later decisions, piling up uncertainty under a mountain of interconnected complexity. Failure is almost inevitable.

This is another potential solution to the Gorman Paradox.

Where are all the “AI”-generated apps? In the bin.

It just so happens that I train and mentor teams in the technical practices that enable them to learn faster from user and market feedback. I know, right! What are the chances?

And it also just so happens that any Codemanship training course booked by January 31st 2026 is HALF-PRICE. Which is nice.

“First, We Model The Domain…”

In my previous blog post talking about the preciseness of software specifications, I used an example from one of my training workshops to illustrate the value in adding clarity when we have a shared understanding of the problem domain.

Now, when many developers see a UML class diagram – especially ones who lived through the age of Big Architecture in the 1990s – they immediately associate it with BDUF (Big Design Up-Front). And to be fair, it’s understandable how visual modeling, and UML in particular, gained that reputation, given its association with heavyweight model-driven development processes.

But teams who reject visual modeling outright because it’s “not Agile” are throwing the baby out with the BDUF bathwater.

I recounted in my post how providing a basic domain model with the requirements dramatically reduced misinterpretations in the training exercise.

And I’ve seen it have the same effect in real projects, too. As a tech lead I would often take on the responsibility of creating visual models based on our actual code and displaying them prominently in the team space. As the code evolved, I’d regularly update the models so they were a slightly-lagging but mostly accurate reflection of our design.

Domain models – the business concepts and their relationships – have proven to be the most useful things to share, helping to keep the team on the same page in our understanding of what it is we’re actually talking about.

Most importantly, there’s no hint of BDUF in sight. I describe the domain concepts that are pertinent to the test cases we’re working on. The model grows as our software grows, working in vertical slices in tight feedback loops, and never getting ahead of ourselves. We don’t model the entire problem domain, just the concepts we need for the functionality we’re working on.

In this sense, to describe our approach to design as “domain-driven” might be misleading. The domain doesn’t drive the design, user needs do. And user needs dictate what domain concepts our design needs.

Let’s examine the original requirements:

• Add item – add an item to an order. An order item has a product and a quantity. There must be sufficient stock of that product to fulfil the order

• Total including shipping – calculate the total amount payable for the order, including shipping to the address

• Confirm – when an order is confirmed, the stock levels of every product in the items are adjusted by the item quantity, and then the order is added to the sales history.

I’d tackle these one at a time. The domain model for Add Item would look like:
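Sketched here as bare-bones code rather than the diagram I’d normally draw – an illustrative sketch of the concepts, not a definitive model:

# Illustrative sketch only: the concepts and relationships needed for Add Item.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Product:
    name: str
    stock: int               # needed to check there's sufficient stock

@dataclass
class OrderItem:
    product: Product
    quantity: int

@dataclass
class Order:
    items: List[OrderItem] = field(default_factory=list)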

Note that Product price isn’t pertinent to this use case, so it’s not in the model.

When I start working on the next use case, Total Including Shipping, the domain model evolves.

And it evolves again to handle the Confirm use case.

And the level of detail in the model, again, is only what’s pertinent. We do not need to know about the getters and the setters and all that low-level malarkey. We can look at the code to get the details. Otherwise it just becomes visual clutter, making the models less useful as communication tools.

Another activity in which visual modeling can really help is as an aid to collaborative design.

I’ve seen it so many times: developers or pairs picking up different requirements and going off into their respective corners, designing in isolation and coming up with some serious trainwrecks – duplicated concepts, mismatched architectures, conflicting assumptions, and so on.

It’s the classic “Borrow a video”/”Return a video” situation, where we end up with two versions of the same conceptual model that don’t connect.

It’s especially risky early in the life of a software product, when a basic architecture hasn’t been established yet and everything’s up in the air.

I’ve found it very helpful in those early stages to get everybody around a whiteboard, laying out the design for their specific requirement as part of the same shared model. So if somebody’s already added a Rental class, they add their behaviour around that, and not around their own rental concept.

As the code grows, maintaining a picture of what’s in it – especially domain concepts – gives the team a shared map of what things are and where things go, and a shared vocabulary for discussing and reasoning about problems together.

This is part of the wider discipline of Continuous Architecture, where understanding, planning, evaluating and steering software design is happening throughout the day.

The opposite of Big Design Up-Front.

If your team wants to level up their capability to rapidly, reliably and sustainably evolve working software to meet changing business needs, check out my live, instructor-led and very hands-on training workshops.

Yeah, About Your “Precise” Specification…

Increasingly, I see people who’ve been struggling with LLM-based coding assistants reaching the conclusion that what’s needed is “better” specifications.

If you were to ask me what might make a specification “better”, I’d probably say:

  • Less ambiguous – less open to multiple valid interpretations
  • More complete – fewer gaps where expected system behaviour and other properties are left undefined
  • More consistent – fewer contradictions (e.g., Requirement #1: “Users can opt in to notifications”, Requirement #77: “By default, notifications must be on”)

Of these three factors, ambiguity is top of my list. It can mask contradictions and paper over gaps. When requirements are ambiguous, that takes us into physicist Wolfgang Pauli’s “not even wrong” territory.

It’s hard to know what the software’s supposed to do, and hard to know when it’s not doing it. This is why so many testers tell me that a large part of their job is figuring out what the requirements were in the first place. (Pro tip: bring them into those discussions.)

An ideal software specification therefore has no ambiguity. It’s not open to multiple interpretations. This enables us to spot gaps and inconsistencies more easily. But more importantly, it enables us to know with certainty when the software doesn’t conform to the specification.

We can never know, of course, that it always conforms to the specification. That would require infinite testing in most cases. But it only needs one test to refute it – and that requires the specification to be refutable.

So I guess when I talk about a “better” specification, I’m talking mostly about refutability.

“Precise”. You Keep Using That Word.

Refutability requires precision. And this is where our natural languages let us down. Try as we might to articulate rules in “precise English” or “precise French” or “precise Cantonese”, these languages haven’t evolved for precision.

Language entropy – the tendency of natural language statements to have multiple valid interpretations, and therefore uncertain meaning – is pretty inescapable.

For completely unambiguous statements, we need a formal language – a language with precisely-defined syntax – with formal semantics that precisely define how that syntax is to be interpreted. Statements made with these can have one – and only one – interpretation. It’s possible to know with certainty when an example contradicts it.

Computer programmers are very familiar with these formal systems. Programming languages are formal languages, and compilers and interpreters endow them with formal semantics – with precise meaning.

I half-joke, when product managers and software designers ask me where they can find good examples of complete software specifications, that they should look on GitHub. It’s full of them.

It’s only half a joke because it’s literally true that program source code is a program specification, not an actual program. It expresses all of the rules of a program in a formal language that are then interpreted into lower-level formal languages like x86 assembly language or machine code. These in turn are interpreted into even lower-level representations, until eventually they’re interpreted by the machine itself – the ultimate arbiter of meaning.

It’s turtles all the way down, and given a specific stack of turtles, meaning – hardware failures notwithstanding – is completely predictable. The same source code, compiled by the same compiler, executed by the same CPU, will produce the same observable behaviour.

So we have a specification that’s refutable and predictable. The same rules will produce the same behaviour every time, and we can know with certainty when examples break the rules.

But, of course, a computer program does what it does. It will always conform to its program specification, expressed in Java or Python or – okay, maybe not JavaScript – or Go. That doesn’t mean it’s the right program.

So we need to take a step back from the program. Sure, it does what it does. But what is it supposed to do?

Remember those turtles? Well, it would be a mistake if we believed the program source code is at the top of the stack. In order to meaningfully test if we wrote the right program code, we need another formal specification (and I use those words most accurately) that describes the desired properties of the program without being part of the program itself.

Let’s think of a simple example. If I have a program that withdraws money from a bank account, and me and my customer agree that withdrawal amounts must be more than zero, and the account needs to have sufficient funds to cover it, we might specify that withdrawals should only happen when that’s true.

In informal language, a precondition of any withdrawal is that the amount must be greater than zero, and the balance must be greater than or equal to the amount being withdrawn. If the withdraw function is invoked when that condition isn’t met, the program is wrong.

To remove any ambiguity, I would wish to express that in a formal language. I could do it in a programming language. I could insert an assertion at the start of the withdraw function that checks the condition and, e.g., throws an exception if it’s not satisfied, or halts execution during testing and reports an error.

e.g. in Python “defensive programming” (we can talk in another blog post about what terrible UX design this is – yes, UX design. In the code. Bazinga!)

def withdraw(self, amount):
    if amount <= 0:
        raise InvalidAmountError()
    if self.balance < amount:
        raise InsufficientFundsError()
    self.balance -= amount

e.g., using inline assertions that are checked during testing

def withdraw(self, amount):
    assert amount > 0
    assert self.balance >= amount
    self.balance -= amount

These approaches are fine, but they’re not a great way to establish what those rules are with our customer in the first place. Are we going to sit down with them and start writing code to capture the requirements?

In the late 1980s, formal languages started to appear specifically with the aim of creating precise external specifications of correct behaviour that aren’t part of the code at all.

The first I used was Z. Z was a notation founded on predicate logic and set theory. Here’s an artist’s impression of a Z specification that ChatGPT hallucinated for me.

Not the most customer-friendly of notations. Other formal specification languages attempted to be more “business-friendly”, like the Object Constraint Language:

context BankAccount::withdraw(amount: Real)
pre: amount > 0
pre: balance >= amount
post: balance = balance@pre - amount

These OCL constraints were designed to extend UML models to make their meaning more precise. I remember being told that it was designed to be used by business people. I found that naivety endearing.

To cut a long story short, while formal specification certainly found a home in the niche of high-integrity and critical systems engineering, that same snow never settled on the plains of business and requirements analysis and everyday software development. We were expecting business stakeholders to become programmers. That rarely works out.

But for a time, I used formal specifications – luckily, my customers were electronics engineers and not marketing executives, so most already had programming experience.

Tests As Specifications

We’d firm up a specification using a combination of Z and the Object Modeling Technique (UML wasn’t a thing then) describing precisely what a feature or a function needed to do.

Then I’d analyse that specification and choose test examples.

BankAccount::withdraw

Example #1: invalid amount
    amount = 0
    Outcome: throws InvalidAmountError

Example #2: valid amount and sufficient funds
    amount = 50.0
    balance = 50.0
    Outcome: balance = 0.0

Example #3: insufficient funds
    amount = 50.01
    balance = 50.0
    Outcome: throws InsufficientFundsError

It turned out that business stakeholders can much more easily understand specific examples than general rules expressed in formal languages. So we flipped the script, and explored examples first, and then generalised them to a formal specification.

It was when I first started learning about “test-first design”, one of the practices of the earliest documented versions of Extreme Programming, that the lightbulb moment came.

If we’ve got tests, do we need the formal specifications at all? Maybe we could cut out the middle-man and go straight to the tests?

This often works well – exploring the precise meaning of requirements using test examples – with non-programming stakeholders.
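For the withdraw example, “going straight to the tests” might look something like this – a minimal pytest sketch, with the defensive-programming version of the account folded in so the snippet stands alone:

import pytest

class InvalidAmountError(Exception):
    pass

class InsufficientFundsError(Exception):
    pass

class BankAccount:
    def __init__(self, balance=0.0):
        self.balance = balance

    def withdraw(self, amount):
        if amount <= 0:
            raise InvalidAmountError()
        if self.balance < amount:
            raise InsufficientFundsError()
        self.balance -= amount

def test_invalid_amount():
    with pytest.raises(InvalidAmountError):
        BankAccount(balance=50.0).withdraw(0)

def test_valid_amount_and_sufficient_funds():
    account = BankAccount(balance=50.0)
    account.withdraw(50.0)
    assert account.balance == 0.0

def test_insufficient_funds():
    with pytest.raises(InsufficientFundsError):
        BankAccount(balance=50.0).withdraw(50.01)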

And many people are discovering that including test examples in our prompts helps LLMs match more accurately by reducing the search space of code patterns. It turns out that models are trained on code samples that have been paired with usage examples (tests, basically), so including examples in the prompt gives them more to match on.

So, if you were to ask me what might make a specification for LLM code generation “better”, I’d definitely say “tests”. (And there was you thinking it was the LLM’s job to dream up tests.)

Visualising The Gaps

That helps reduce ambiguity and the risk of misinterpretation, but what of completeness and consistency?

This is where some kind of generalisation is really needed, but it doesn’t have to take us down the Z or OCL road. What we really need is a way to visualise the state space of the problem.

One simple technique I’ve used to good effect is a decision table. This helps me to see how the rules of a function or an action map to different outcomes.
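For the withdraw function, it looks something like this (rendered here as a plain-text table):

amount > 0 ? | balance >= amount ? | outcome
-------------+---------------------+--------------------------------
no           | no                  | throws InvalidAmountError
no           | yes                 | throws InvalidAmountError
yes          | no                  | throws InsufficientFundsError
yes          | yes                 | balance is reduced by amount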

Here, I’ve laid out all the possible combinations of conditions and mapped them to specific outcomes. There’s one simplification we can make – if the amount isn’t greater than zero, we don’t care if the account has sufficient funds.

That maps exactly on to my three original test cases, so I’m confident they’re a complete description of this withdraw function.

Mapping it out like this and exploring test cases encourages us to clarify exactly what the customer expects to happen. When the amount is greater than the balance, exactly what should the software do? It forces us and our customers to consider details that probably wouldn’t have come up otherwise.

Other tools we can use to visualise system behaviour and rules include Venn diagrams (have we tested every part of the diagram?), state transition diagrams and state transition tables (have we tested every transition from every state?), logic flow diagrams (have we tested every branch and every path?), and good old-fashioned truth tables – the top half of a decision table.

Isn’t This Testing?

“But, Jason, this sounds awfully like what testers do!”

Yup 🙂

Tests are to specifications what experiments are to hypotheses.

If I say “It should throw an error when the account holder tries to withdraw more than their balance” before any code’s been written to do that, I’m specifying what should happen. Hypothesis.

If I try to withdraw £100 from an account with a balance of £99, then that’s a test of whether the software satisfies its specification. It’s a test of what does happen. Experiment.

This is why I strongly recommend teams bring testing experts into requirements discussions. You’re far more likely to get a complete specification when someone in the room is thinking “Ah, but what if A and B, but not C?”

You can, of course, learn to think more like a tester. I did, so it can’t be that hard.

But there’s really no substitute for someone with deep and wide testing experience in the room.

If a function or a feature is straightforward, we can probably figure out what test cases we’d need to cover in our heads. My initial guesses at tests for the withdraw function were pretty good, it turned out.

But when they’re not straightforward, or when the scenario’s high risk, I’ve found these techniques very valuable.

As a bottom line, I’ve found that tests of some kind are table stakes. They’re the least I’ll include in my specification.

Shared Language

Another thing I’ve found that helps to minimise misinterpretations is establishing a shared model of the concepts we’re talking about in our specifications.

In a training exercise I run often, pairs are asked to use Test-Driven Development to create a simple online retail program. They’re given a set of requirements expressed in plain English and the idea is that they agree tests with the customer (one of them plays that role) to pin down what they think the requirements mean.

e.g.

Add item – add an item to an order. An order item has a product and a quantity. There must be sufficient stock of that product to fulfil the order

Total including shipping – calculate the total amount payable for the order, including shipping to the address

Confirm – when an order is confirmed, the stock levels of every product in the items are adjusted by the item quantity, and then the order is added to the sales history.

A couple of years back, I changed the exercise by giving them a “walking skeleton” – essentially a “Hello, world!” project for their tech stack with a dummy test and a CI build script set up and ready to go – to get them started.

And in that project I added a bare-bones domain model – just classes, fields and relationships – that modeled the concepts used in the requirements.

In UML, it looked something like this.

Before I added a domain model, pairs would come up with distinctly different interpretations of the requirements.

With the addition of a domain model, 90% of pairs would land on pretty much the same interpretation. Such is the power of a shared conceptual model of what it is we’re actually talking about.

It doesn’t need to be code or a UML diagram – but some expression, in a form we can hopefully all understand, of the concepts in our requirements and how they’re related evidently cuts out a lot of misunderstandings.

Precision In UX & UI Design

And, of course, if we’re trying to describe a user interface, pictures can really help there. Wireframes and mock-ups are great, but if we’re trying to describe dynamic behaviour – what happens when I click that button? – I highly recommend storyboards.

A storyboard is just a sequence of snapshots of the UI in specific test scenarios that illustrates what happens with each user interaction. Here’s a great example.

Source: Annie Hay Design https://anniehaydesign.weebly.com/app-design/storyboarding

It’s another way of visualising a test case, just from the user’s perspective. In that sense, it can be a powerful tool in user experience design, helping stakeholders to come to a shared understanding of the user’s journey, and potentially revealing problems with the design early.

Precision != BDUF

Before anybody jumps in with accusations of Big Design Up-Front (BDUF), a quick reminder that I would never suggest trying to specify everything, then implement it, then test it, then merge and release it in one pass. I trust you know me better than that.

When clarity’s needed, I have a pretty well-stocked toolbox of techniques for providing it, as and when it’s needed in a highly iterative process delivering working software in thin slices – one feature at a time, one scenario at a time, one outcome at a time, and one example at a time. Solving one problem at a time in tight feedback loops.

Taking small steps with continuous feedback and opportunities to steer is highly compatible with working with LLM-based coding assistants. It’s actually kind of essential, really. Folks talking about specifying e.g., a whole feature “precisely” and then leaving the agent(s) to get on with it are… Well, you probably know what I think. I’ve seen those trains come off the rails so many times.

And with each step, I stay on-task. I’ll rarely, for example, model domain concepts that aren’t involved in the test cases I’m working on. I’m not one of these “First, I model ALL THE THINGS, then I think about the user’s goals” guys.

And using tests as specifications goes hand-in-glove with a test-driven approach to development, which you may have heard I’m quite partial to.

Believe it or not, agility and precision are completely compatible. How precise you’re being, and the size of the steps you’re taking that end in user feedback from working software, are orthogonal concerns. If you look in the original XP books, you’ll even find – gasp! – UML diagrams.

Hopefully you get some ideas about the kinds of things we can include in a specification to make it more precise, more complete and more consistent.

But at the very least, you might begin to rethink just how good your current specifications actually are.

Prompts Aren’t Code and LLMs Aren’t Compilers

One final thought. The formal systems of computer programming – programming languages, compilers, machine code and so on – and the “turtles” in an LLM-based stack are very different.

Prompts – even expressed in formal languages – aren’t code, and LLMs aren’t compilers. They will rarely produce the exact same output given the exact same input. It’s a category mistake to believe otherwise.

This means that no matter how precise our inputs are, they will not be processed precisely or predictably. Expect surprises.

But less ambiguity will – and I’ve tested this a lot – reduce the number of surprises. And refutability gives us a way to spot the brown M&Ms in the output more easily.

It’s easier to know when the model got it wrong.

Clean Contexts

You’ve probably heard of “clean code” (and the “clean coder”, and “clean architecture”, and other things Bob Martin has added the word “clean” in front of to get another book out of it).

In this dawning age of “AI”-assisted software development, I’d like to propose clean contexts.

What is a “clean context”? Well, I’m glad you asked.

A clean context:

  • Addresses one problem – one failing test, one code quality rule, one refactoring etc.
  • Is small enough to stay inside the model’s effective context limit – which is going to be orders of magnitude smaller than the advertised maximum context
  • Uses clear and consistent shared language – if you’ve been calling it “sales tax”, don’t suddenly start calling it “VAT”
  • Clarifies with examples that can be used as success criteria (i.e., tests) – the code samples used in training were paired with usage examples, so it improves matching
  • Only contains information pertinent to the task – don’t divert the model’s attention (literally)
  • Only contains accurate information (“ground truth”) – the code and the architecture as it is now (not as it was a bunch of changes ago, when you last asked the tool to summarise it), the test failure message, the mutation testing results and so on. Ground your interactions in reality.
  • Only contains working code – if the model breaks the code, don’t feed it back to it. It can’t tell broken code from working code, and you’ll pollute the context. Revert and try again. The exception to this is bug fixes, of course. But if the model introduced the bug – git reset --hard
  • Contains code that doesn’t go outside the model’s data distribution – LLMs famously choke on code that lacks clarity, is overly complex and lacks separation of concerns because it’s far outside the distribution of examples they were trained on. When it comes to gnarly legacy code, I’ve had more success breaking it down myself initially before letting Claude loose on it. Y’know, like how an adult bird chews the food first before feeding it to its chicks.

And remember that a prompt isn’t the entire context. Claude Code and Cursor will use static analysis to determine what source code needs to be added. Context files may be added (e.g., CLAUDE.md). And of course, everything in the conversation – your (or your agent’s) prompts and the model’s responses – is part of the context. When an LLM “hallucinates”, that becomes part of the context, and the model has no way of determining fact from its own fiction. It’s all just context to a language model.

This is why I purge and then construct a new, task-specific context with each interaction. Many users are reporting how much more accurate LLMs tend to be with a fresh context.

Our goal with a clean context is to minimise ambiguity and the risk of misinterpretation, to minimise attention dilution and context drift, context pollution and context “rot”, and as much as possible, stay within the LLM’s training data distribution.

Basically, we’re aiming to maximise the chances of an accurate prediction from the LLM, and spend less time cleaning up mistakes and digging the tool out of “doom loops”.

Importantly, working in small steps – solving one problem at a time – opens up many more opportunities after each step to get feedback from testing, code review and merging, so clean contexts are highly compatible with much more iterative approaches.

Just as Continuous Delivery enables us to make progress by putting one foot in front of the other, ensuring a working product after every step, we also aim to start every step with a clean context that significantly reduces the risk of a stumble.

The Great Filter (Or Why High Performance Still Eludes Most Dev Teams, Even With AI)

In my post about The Gorman Paradox, I compare the lack of any evidence of “AI”-assisted productivity gains to be found out here in the Real World™ with the famous Fermi Paradox that asks, if the universe is teeming with intelligent life, where is everybody?

It’s been over 3 years, and we’ve seen no uptick in products being added to the app stores. We’ve seen no rising tide on business bottom lines. We’ve seen no impact on national GDPs.

There is a likely explanation, and it’s the most obvious one: “AI”-assisted coding doesn’t actually make the majority of dev teams more productive. For sure, it produces more code. But, on average, it creates no net additional value.

The DORA data does find some teams reaping modest gains in terms of software delivery lead times without sacrificing reliability, and – interestingly – the data shows that those high-performing teams using “AI” were already high-performing without it.

For the majority of teams, the data showed that “AI” actually slowed them down – and these were the teams that were already pretty slow before “AI”. Attaching a code-generating firehose to the process just made them marginally slower.

The differentiator? Are the high-performing teams super-skilled programmers? Are they getting paid more? Are they putting something in the office water supply?

It turns out that what separates the teams who get a negative boost from the teams who get a positive boost is that the latter have addressed the bottlenecks in their development process.

Blocking activities, like detailed up-front design, after-the-fact testing, Pull Request code reviews, and big merges to the main branch, have been turned into continuous activities.

Teams work in much smaller batches and in much tighter feedback loops, designing, testing, inspecting and merging many times an hour instead of every few days.

Work doesn’t sit in queues waiting for someone’s attention. There are very few traffic lights between the developer’s desktop and the outside world to slow that traffic down.

And this means that changes can make it into the hands of users very rapidly, with highly automated, highly reliable, frictionless delivery pipelines that – as the supermarket ads used to say – get the peas from the farmer’s field to your table in no time at all.

The just-in-time grocery supply chains of supermarkets are a good analogy for the processes high-performing teams are using. Supermarkets don’t buy a year’s supply of fresh peas once a year. They buy tomorrow’s supply today, and their formidable logistical capabilities get those peas on the shelves pronto.

Those formidable logistical capabilities didn’t just appear, either. They’re the product of many decades of investment. Supermarket chains have sunk billions into getting better at it, so they can maximise cash flow by minimising the amount of working capital they have committed at any time.

They don’t want millions of pounds-worth of produce sitting in warehouses making them no money.

And businesses don’t want millions of pounds-worth of software changes sitting in queues waiting to be released. They want them out there in the hands of users, creating value in the form of learning what works and what doesn’t. Software that can’t be used has no value.

Walk into any large organisation and take a snapshot of how much investment in developed code is “in progress”. For some, it literally is millions of pounds-worth – tens or hundreds of thousands of pounds, multiplied by dozens or hundreds of teams.

The impact on a business of being able to out-learn the competition can be so profound, we might ask ourselves “Why isn’t everybody doing this?” Can you imagine a supermarket chain deciding not to bother with JIT supply? They wouldn’t last long.

It’s come into focus even more sharply with the rise of “AI”-assisted software development. It’s quite clear now that even modest productivity gains lie on the other side of the spectrum with teams who have addressed their bottlenecks and have low-friction delivery pipelines.

I see a “Great Filter” that continues to prevent the large majority of dev teams making it to that Nirvana. It requires a big, ongoing investment in the software development capability needed.

We’re talking about investment in people and skills. We’re talking about investment in teams and organisational design. We’re talking about investment in tooling and automation. We’re talking about investment in research and experimentation. We’re talking about investment in talent pipelines and outreach. We’re talking about investment in developer communities and the profession of software development.

Typically, I’ve seen that companies who manage to progress from the bottleneck-ridden ways of working to highly iterative, frictionless methods needed to invest 20-25% of their entire development budget in building and maintaining that capability.

And building that kind of capability takes years.

You can’t buy it. You can’t install it. You can’t have it flown in fresh from Silicon Valley.

And, like organ transplants, any attempt to transplant that kind of capability into your business will be met with organisational anti-bodies protecting the status quo.

And that, folks, is The Great Filter.

Most organisations are simply not prepared to make that kind of commitment in time, effort and money.

Sure, they want the business benefits of faster lead times, more reliable releases, and a lower cost of change. But they’re just not willing to pony up to get it.

On a daily basis, I see people online warning us not to “get left behind by AI”. The reality is that the people who really are getting left behind are the ones who think that the bottlenecks and blockers they’ve struggled with in the past will magically get out of the way of the code-generating firehose.

Low-performing teams, now grappling with the downstream chaos caused by “AI” code generation, will probably always be the norm. And the value of this technology will probably never be realised by those businesses.

If you’re one of the few who are serious about building software development capability, my training courses in the technical practices that enable rapid, reliable and sustained evolution of software to meet changing needs are half price if you confirm your booking by Jan 31st.

Yes, Maintainability Still Matters in “AI”-assisted Coding

A couple of people have asked, in relation to my 2-day Software Design Principles training course, whether maintainability matters anymore.

Perhaps they’ve read some of the wrong-headed posts here about why LLM-generated code doesn’t need to be understandable or maintainable by humans.

Putting aside the undeniable fact that these tools are nowhere near that reliable, in reality, code maintainability matters just as much – if not more – when LLMs are working with it.

First, and hopefully you’ve figured this out by now, “AI”-assisted programming without a good suite of fast-running regression tests is very, very risky. Fast tests have such a huge impact on the cost and the risk of changing code that Michael Feathers defines “legacy code” as code that lacks them.

More teams are discovering that they need to be constantly assessing the “strength” of the automated tests their “AI” assistant generates – they’re notorious for weak tests, and for cheating to get tests passing.

I highly recommend regular mutation testing to check for gaps in your test suites.
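To make “weak test” concrete, here’s a contrived illustration of the kind of gap a mutation testing tool (mutmut, PIT, Stryker and friends) will flag:

def apply_discount(price, rate):
    return price - (price * rate)

def test_discount_weak():
    # A "weak" test: it runs the code and boosts coverage, but pins nothing down.
    # Mutate the arithmetic (e.g., '-' to '+') and it still passes.
    assert apply_discount(100.0, 0.2) is not None

def test_discount_strong():
    # A stronger test: mutating the arithmetic makes this one fail.
    assert apply_discount(100.0, 0.2) == 80.0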

Clarity matters, because… well… language models. If I’m asking Claude to add a premium tier to video rentals pricing, but the code’s talking about “vd_prc_1” and “tr_rate_fs”, it hasn’t got much to match on. Concepts need to be clearly signposted and consistent with the language we use to describe our requirements.
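A contrived before-and-after (not real code from anywhere) to illustrate the point:

# Not much for a language model - or a human - to match "premium tier" against:
def calc(vd_prc_1, tr_rate_fs):
    return vd_prc_1 * tr_rate_fs

# The same logic, signposted in the language of the requirements:
def rental_price(base_price, tier_rate):
    return base_price * tier_rate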

Duplication’s a problem, because logic repeated 5x takes up 5x the context, and also models might not actually “spot” the repetition, so there’s a risk of drift.

Complexity’s a big problem. LLMs don’t like complex patterns. Overly complex code is likely to fall outside the data distribution, leading to low-confidence matches and low-accuracy predictions.

And then there’s separation of concerns…

LLMs are trained on a huge amount of code snippets of the Stack Overflow variety that contain little or no modularity. That’s their comfort zone, and code they generate will tend to be like that, too.

The irony is that, while they suck at generating effectively modular code – cohesive, loosely-coupled modules that localise the ripple effect of changes – they also suck at modifying code that isn’t highly modular. The wider the ripple effect, the more code gets brought into play, and the further out-of-distribution the context grows.

In this way, they’ll tend to paint themselves into a corner as the code grows. So we really need to keep on top of modular design.

So, yes, maintainability matters in “AI”-assisted coding. A LOT.

<shameless-plug>

If you think your team could use some levelling up or a refresher on software design principles, my training's half-price if you confirm your booking by Jan 31st. Link in my profile.

</shameless-plug>

Walking Skeletons, Delivery Pipelines & DevOps Drills

On my 3-day Code Craft training workshop (and if you’re reading this in January 2026, training’s half-price if you confirm your booking by Jan 31st), there’s a team exercise where the group need to work together to deliver a simple program to the customer’s (my) laptop where I can acceptance-test it.

It’s primarily an exercise in Continuous Delivery, bringing together many of the skills explored earlier in the course like Test-Driven Development and Continuous Integration.

But it also exercises the muscles that individual or pair-programmed exercises don’t reach. Any problem, even a simple one like the Mars Rover, tends to become much more complicated when we tackle it as a team. It requires a lot of communication and coordination. A team will typically take more time to complete it.

And it also exercises muscles that developers these days have never used before. In 2026, the average developer has never created, say, a command-line project from scratch in their tech stack. They’ve never set up a repo using their version control tool. They’ve never created a build script for Continuous Integration builds. They’ve never written a script to automatically deploy working software.

In the age of “developer experience”, a lot of people have these things done for them. Entry-level devs land on a project and it’s all just there.

That may seem like a convenience initially, but it comes with a sort of learned helplessness, with total reliance on other people to create and adapt build and deployment logic when it’s needed. A lot of developers would be on a significant learning curve if they ever needed to get a project up and running or to change, say, a build script.

It’s the delivery pipeline that frustrates most teams’ attempts to get any functionality in front of the customer in this exercise.

I urge them at the start to get that pipeline in place first. Code that can’t be used has no value. They may have written all of it, but if I can’t test it on my machine – nil points. Just like in real life.

They’re encouraged to create a “walking skeleton” for their tech stack – e.g., a command-line program that outputs “Hello, world!”, and has one dummy unit test.
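In Python, for example, that skeleton needn’t be much more than this (an illustrative sketch – any tech stack’s equivalent will do):

# hello.py - the whole "application" for now
def greeting():
    return "Hello, world!"

if __name__ == "__main__":
    print(greeting())

# test_hello.py - the one dummy unit test, runnable with pytest
from hello import greeting

def test_greeting():
    assert greeting() == "Hello, world!"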

This can then be added to a new GitHub repository, and the rest of the team can be invited to collaborate on it. That’s the first part of the pipeline.

Then someone can create a build script that runs the tests, and is triggered by pushes to the main (trunk) branch. On GitHub, if we keep our technical architecture vanilla for our tech stack (e.g., a vanilla Java/Maven project structure), GitHub Actions can usually generate a script for us. It might need a tweak or two – the right version of Java, for example – but it will get us in the ballpark.

So now everyone in the team can clone a repo that has a skeleton project with a dummy unit test and a simple output to check that it’s working end to end.

That’s the middle of the pipeline. We now have what we need to at least do Continuous Integration.

The final part of the pipeline is when the food makes it to the customer’s table. I remind teams that my laptop is a developer’s machine, and that I have versions of Python, Node.js, Java and .NET installed, as well as a Git client.

So, they could write a batch script that clones the repo, builds the software (e.g., runs pip install for a Python project), and runs the program. When I see “Hello, world!” appear on my screen, we have lift-off. The team can begin implementing the Mars Rover, and whenever a feature is complete, they can ping me and ask me to run that script again to test it.
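Sketched in Python for consistency with the rest of these examples (the team’s real version might be a .bat or shell script, and the repo URL and file names here are just placeholders):

# deploy.py - the customer-side "last mile": clone, build, run.
import subprocess

REPO_URL = "https://github.com/example-team/mars-rover.git"   # placeholder

subprocess.run(["git", "clone", REPO_URL, "mars-rover"], check=True)
subprocess.run(["pip", "install", "-r", "requirements.txt"], cwd="mars-rover", check=True)
subprocess.run(["python", "hello.py"], cwd="mars-rover", check=True)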

And thus, value begins to flow, in the form of meaningful user feedback from working software. (Aww, bless. Did you think the software was the value? No, mate. The value’s in what we learn, not what we deliver.)

And, of course, in the real world, that delivery pipeline will evolve, adding more quality gates (e.g., linting), parallelising test execution as the suite gets larger, progressing to more sophisticated deployment models and that sort of thing, as needs change.

DevOps – the marriage of software development and operations – means that the team writing the solution code also handles these matters. We don’t throw it over the wall to a separate “DevOps” team. That’s kind of the whole point of DevOps, really. When we need a change to, say, the build script, we – the team – make that change.

But you might be surprised how many people who describe themselves as “DevOps Engineers” wouldn’t even know where to start. (Or maybe you wouldn’t.)

It’s not their fault if they’ve been given no exposure to operations. And it’s not every day that we start a project from scratch, so the opportunities to gain experience are few and far between.

Given just how critical these pipelines are to our delivery lead times, it’s surprising how little time and effort many organisations invest in getting good at them. It should be a core competency in software development.

It’s especially mysterious why so many businesses allow it to become a bottleneck by favouring specialised teams over T-shaped DevOps software engineers who can do most of it themselves rather than waiting for someone else to do it. Teams could have a specialised expert on hand for the rare times when deep expertise is really needed.

If the average developer knew the 20% they’d need 80% of the time to create and change delivery pipelines for their tech stack(s), there’d be a lot less waiting on “DevOps specialists” (which is an oxymoron, of course).

Just as a contractor who has to move house a lot tends to become very efficient at it, developers who regularly have to get delivery pipelines up and running tend to be much better at the yak shaving it involves.

So I encourage teams to make these opportunities by doing regular “DevOps drills” for their tech stacks. Get a Node Express “Hello, world” pipeline up and running from scratch. Get a Spring Boot pipeline up and running from scratch. etc.

Typically, I see teams doing them monthly, and as they gain confidence, varying the parameters (e.g., parallel test execution, deployment to a cluster and so on), and making the quality gates more sophisticated (security testing, linting, mutation testing and so on), while learning how to optimise pipelines to keep them as frictionless as possible.

Why Does Test-Driven Development Work So Well In “AI”-assisted Programming?

In my series on The AI-Ready Software Developer, I propose a set of principles for getting better results using LLM-based coding assistants like Claude Code and Cursor.

Users of these tools report how often and how easily they go off the rails, producing code that doesn’t do what we want and frequently breaking code that was working. As the code grows, these risks grow with them. On large code bases, they can really struggle.

From experiment and from real-world use, I’ve seen a number of things help reduce those risks and keep the “AI” on the rails.

  • Working in smaller steps
  • Testing after every step
  • Reviewing code after every step
  • Refactoring code as soon as problems appear
  • Clarifying prompts with examples

Smaller Steps

Human programmers have a limited capacity for cognitive load. There’s only so much we can comfortably wrap our heads around with any real focus, and when we overload ourselves, mistakes become much more likely. When we’re trying to spin many plates, the most likely result is broken plates.

LLMs have a similarly-limited capacity for context. While vendors advertise very impressive maximum context sizes of hundreds of thousands of tokens, research – and experience – shows that they have effective context limits that are orders of magnitude smaller.

The more things we ask models to pay attention to, the less able they are to pay attention to any of them. Accuracy drops off a cliff once the context goes beyond these limits.

After thousands of hours working with “AI” coding assistants, I’ve found I get the best results – the fewest broken plates – when I ask the model to solve one problem at a time.

Continuous Testing

If I make one change to the code and test it straight away, and a test fails, I don’t need to be a debugging genius to figure out which change broke the code. It’s either a quick fix, or a very cheap undo.

If I make ten changes and then test it, it’s going to take significantly longer, potentially, to debug. And if I have to revert to the last known working version, it’s 10x the work and the time lost.

An LLM is more likely to generate breaking changes than a skilled programmer, so frequent testing is even more essential to keep us close to working code.

And if the model’s first change breaks the code, that broken code is now in its context and it – and I – don’t know it’s broken yet. So the model is predicting further code changes on top of a polluted context.

Many of us have been finding that a lot less rework is required when we test after every small step rather than saving up testing for the end of a batch of work.

There’s an implication here, though. If we’re testing and re-testing continuously, the tests need to be very fast.

Continuous Inspection

Left to their own devices, LLMs are very good at generating code they’re pretty bad at modifying later.

Some folks rely on rules and guardrails about code quality which are added to the context with every code-generating interaction with the model. This falls foul of the effective context limits of even the hyperscale LLMs. The model may “obey” – remember, they don’t in reality, they match and predict – some of these rules, but anyone who’s spent more than a few minutes attempting this approach will know that they rarely consistently obey all of them.

And filling up the context with rules runs the risk of “distracting” the LLM from the task at hand.

A more effective approach is to keep the context specific to the task – the problem to be solved – and then, when we’ve got something that works, we can turn our attention to maintainability.

After I’ve seen all my tests pass, I then do a code review, checking everything in the diff between the last working version and the latest. Because these diffs are small – one problem at a time – these code reviews are short and very focused, catching “code smells” as soon as they appear.

The longer I let the problems build up, the more the model ends up wading through its own “slop”, making every new change riskier and riskier.

I pay attention to pretty much the same things I would if I was writing all the code myself:

  • Clarity (LLMs really benefit from this, because… language model, duh!)
  • Complexity – the model needs the code likely to be affected in its context. More code, bigger context. Also, the more complex it is, the more likely it is to end up outside of the model’s training data distribution. Monkey no see, monkey can’t do.
  • Duplication – oh boy, do LLMs love duplicating code and concepts! Again, this is a context size issue. If I duplicate the same logic 5x, and need to make a change to the common logic, that’s 5x the code and 5x the tokens. But also, duplication often signposts useful abstractions and a more modular design. Talking of which…
  • Separation of Concerns – this is a big one. If I ask Claude Code to make a change to a 1,000-line class with 25 direct dependencies, that’s a lot of context, and we’re way outside the distribution. Many people have reported how their coding assistant craps out on code that lacks separation of concerns. I find I really have to keep on top of it. Modules should have one reason to change, and be loosely-coupled to other parts of the system.

On top of these, there are all kinds of low-level issues – security vulnerabilities, hanging imports, dead code and so on – that I find I need to look for. Static analysis can help me check diffs for a whole range of issues that would otherwise be easy to miss, whether by me or by an LLM doing the code review. I’m seeing a lot of developers upping their game with linting as they use “AI” more in their work.
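
To make that a little more concrete, here’s the sort of thing a short, focused diff review tends to surface. The file and function names are invented for illustration – the subtotal logic has been quietly duplicated, and there’s an unused import a linter would flag:

    # basket.py – hypothetical code as generated, before review.
    import json  # unused import: the kind of low-level issue a linter catches


    def basket_total(items):
        total = 0.0
        for item in items:
            total += item["price"] * item["quantity"]  # subtotal logic, copy #1
        return round(total, 2)


    def basket_total_with_discount(items, percentage):
        total = 0.0
        for item in items:
            total += item["price"] * item["quantity"]  # same logic, copy #2
        return round(total * (1 - percentage / 100), 2)

On a diff this small, the duplication jumps out immediately – which brings us to what to do about it.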

Continuous Refactoring

Of course, finding code quality issues is only academic if we don’t actually fix them. And, for the reasons I’ve already laid out – we want to give the model the smoothest surface to travel on – we should fix them immediately.

And I don’t fix all the problems at once. I fix one problem at a time, again for reasons already stated.

And after I fix each problem, I run the tests again, in case the fix broke anything.
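
Sticking with the invented basket code from the previous sketch, one such refactoring step might extract the duplicated subtotal logic into a single function – one smell, one small change, with the tests run again straight afterwards:

    # basket.py – after one refactoring step: the duplicated loop is extracted
    # into item_subtotal and both totals are expressed in terms of it.
    # (The unused import flagged earlier was removed in its own, separate step.)


    def item_subtotal(item):
        return item["price"] * item["quantity"]


    def basket_total(items):
        return round(sum(item_subtotal(item) for item in items), 2)


    def basket_total_with_discount(items, percentage):
        total = sum(item_subtotal(item) for item in items)
        return round(total * (1 - percentage / 100), 2)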

This process of fixing one “code smell” at a time, testing throughout, is called refactoring. You may well have heard of it. You may even think you’re doing it. There’s a very high probability that you’re not.

Clarifying With Examples

Here’s an experiment you can try for yourself. Prepare two prompts for a small code project. In one prompt, try to describe what you want as precisely as possible in plain language, without giving any examples.

The total of items in the basket is the sum of the item subtotals, which are the item price multiplied by the item quantity

In the second version, give the exact same requirements, but using examples.

The total of items in a shopping basket is the sum of item subtotals:

item #1: price = 9.99, quantity = 1

item #2: price = 11.99, quantity = 2

shopping basket total = (9.99 * 1) + (11.99 * 2) = 33.97

See what kind of results you get with both approaches. How often does the model misinterpret precisely-described requirements vs. requirements accompanied by examples?

It’s worth knowing that code-generating LLMs are typically trained on code samples that are paired with examples like this. When we include examples, we’re giving the model more to match on, limiting the search space to examples that do what we want.

Examples help prevent LLMs grabbing the wrong end of the prompt, and many users have found them to greatly improve accuracy in generated code.

Harking back to the need for very fast tests, these examples make an ideal basis for fast-running automated “unit” tests (where “units” = units of behaviour). It would make good sense to ask our coding assistant to generate them for us, because we’re going to be needing them soon enough.
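
As a sketch of what I mean, the worked example above translates almost word for word into a test. The basket_total function is the same invented one from the earlier sketches, repeated so the snippet stands alone:

    # test_basket.py – the example from the prompt, turned into a fast-running
    # automated "unit" test of the basket-total behaviour.


    def basket_total(items):
        return round(sum(item["price"] * item["quantity"] for item in items), 2)


    def test_total_is_sum_of_item_subtotals():
        items = [
            {"price": 9.99, "quantity": 1},
            {"price": 11.99, "quantity": 2},
        ]
        assert basket_total(items) == 33.97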

Putting It All Together

If we were to imagine a workflow that incorporates all of these principles – small steps, continuous testing, continuous inspection, continuous refactoring, clarifying with examples – it would look very familiar to the small percentage of developers who practice Test-Driven Development.

TDD has been around for several decades, and builds on practices that have been around even longer. It’s a tried-and-tested approach that’s been enabling the rapid, reliable and sustainable evolution of working software for those in the know. If you look inside the “elite-performing” teams in the DORA data – the ones delivering the most reliable software with the shortest lead times and the lowest cost of change – you’ll find they’re pretty much all doing TDD, or something very like TDD.

TDD specifies what we want software to do using examples, in the form of tests. (Hence, “test-driven”).

It works in micro-iterations where we write a test that fails because it requires something the software doesn’t do yet. Then we write the simplest code – the quickest thing we can think of – to get the tests passing. When all the tests are passing, we review the changes we’ve made, and if necessary refactor the code to fix any quality problems. Once we’re satisfied that the code is good enough – both working and easy to change – we move on to the next failing test case. And rinse and repeat until our feature or our change is complete.
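
To make the micro-cycle concrete, here’s an entirely made-up first iteration for the basket example – not anyone’s canonical TDD demo, just an illustration of the rhythm:

    # One TDD micro-iteration, sketched in Python. All names are illustrative.

    # 1. Red – write a test for something the software doesn't do yet, and watch
    #    it fail (at this point, basket_total doesn't exist at all).
    def test_empty_basket_totals_zero():
        assert basket_total([]) == 0.0


    # 2. Green – the simplest, quickest code we can think of that passes.
    def basket_total(items):
        return 0.0


    # 3. Review the diff, refactor anything that smells, then write the next
    #    failing test – a basket with one item – which forces a real
    #    implementation. Rinse and repeat.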

TDD practitioners work one feature at a time, one usage scenario at a time, one outcome at a time, one example at a time, and one refactoring at a time. Basically, we solve one problem at a time.

And we’re continuously running our tests at every step to ensure the code is always working. While automated tests are a side-effect of driving design using tests, they’re a damned useful one! And because we’re only writing code that’s needed to pass tests, all of our code will end up being tested. It’s a self-fulfilling prophecy.

Embedded in that micro-cycle, many practitioners also use version control to ensure they’re making progress in safe, easily-reverted steps, progressing from one working version of the code to the next.

Some of us have discovered the benefits of a “commit on green, revert on red” approach to version control. If all the tests pass, we commit the changes. If any tests fail, we do a hard reset back to the previous working commit. This means that broken versions of the code don’t end up in the context for the next interaction. (Remember that LLMs can’t distinguish between working code and broken code – it’s all just context.)
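
Here’s a rough sketch of how “commit on green, revert on red” could be scripted. It assumes pytest and git, and it’s deliberately blunt – a hard reset throws away the uncommitted changes, which is exactly the point:

    # commit_on_green.py – a deliberately blunt sketch of "commit on green,
    # revert on red". Assumes the project uses git and pytest.
    import subprocess
    import sys


    def main():
        tests = subprocess.run(["pytest", "-q"])
        if tests.returncode == 0:
            # Green: commit, so the next step builds on a known working version.
            subprocess.run(["git", "add", "-A"], check=True)
            subprocess.run(["git", "commit", "-m", "green"], check=True)
        else:
            # Red: hard reset to the last working commit, so the broken code
            # never ends up in the context of the next interaction.
            # (Untracked files would need a `git clean` as well.)
            subprocess.run(["git", "reset", "--hard", "HEAD"], check=True)
            sys.exit(1)


    if __name__ == "__main__":
        main()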

The beauty of TDD is that the benefits can be yours whether you’re using “AI” or not. Which is why I now teach it both ways.

The key to being effective with “AI” coding assistants is being effective without them.

Shameless Plug

Test-Driven Development is not a skill that you can just switch on, whether you’re doing it with “AI” or without. It takes a lot of practice to get the hang of it, and especially to build the discipline – the habits – of TDD.

An alarming number of TDD tutorials aren’t actually teaching TDD. (And the more people learn from them, the more bad tutorials we’ll no doubt see.)

If your team wants training in Test-Driven Development, including how to do it effectively using tools like Claude Code and Cursor, my 2-day TDD training workshop is half-price if you confirm your booking by January 31st.