A Programmer’s “Breadboard”?

I’m a fan of rapid feedback. When I write software, I prefer to find out if I’m on the right track as soon as possible.

I’ve tried all sorts of techniques for getting customer feedback without going to the effort of delivering production-quality software, which is a time-consuming and expensive way of getting feedback.

I’ve tried UI wire frames and storyboards, which do tend to get the message across, but suffer from two major drawbacks: one is that we can’t run them and see if they work on a real problem, and the other is that we have to commit to a UI design early in order to explore the logic of our software. Once a design’s been “made flesh”, it tends to stick, even if much better designs are possible.

I’ve tried exploring with test cases – examples, basically – but have found them often too abstract for customers to get a real sense of the design.

The thing I’ve tried that gets the most valuable feedback is simply to put working software in front of end users and let them try it for themselves. But, as I said, working software is expensive to create.

In the 1990s, when Rapid Development was at its peak, tools appeared that allowed us to “slap together” working prototypes quickly. They typically had names with “Visual” or “Builder” in them, and they worked by dragging and dropping GUI components onto forms or windows or panels, and we could bind those controls to databases and add a little code or a simple macro to glue it all together into something that kind of sort of does what we need.

Then we would take our basic prototype to the customer (sometimes we actually created it in front of the customer), and let them take it for a spin. In fairly rapid iterations, we’d refine the prototype based on the customer’s feedback until we got it in the ballpark.

Then – and this is where it all went wrong – we’d say “Okay, great. Now can we have £500,000 to build it properly in C++?” And they’d say “No, thanks. I’m good with this version.” And then we’d ship the prototype that was made of twigs and string (and Visual Basic for Applications) and live with the very expensive consequences of allowing a business to rely on software that isn’t production quality. (Go to any bank and ask them how many Excel spreadsheets they rely on for enterprise, mission-critical applications. It’ll boggle your mind.)

The pioneers of Extreme Programming learned from this that we should never put working software in front of our customers that isn’t production-ready, because if they like it they’ll make us ship it.

Sketches don’t suffer this drawback, because we very obviously can’t ship a drawing on a whiteboard or in Visio. (Although we did try briefly in the late 90s and early 00s.)

Now, in electronics, it’s possible to create a working prototype that is very obviously not a finished product and that no customer would tell you to ship. Here’s guitar pedal designer Brian Wampler showing us the breadboard he uses to explore pedal designs.

This is the thing with a breadboard guitar pedal: you can plug a guitar into one end and plug the output into a real guitar amp and hear how the pedal will sound.

It looks nothing like a finished pedal you would buy in a shop, and you certainly couldn’t take it on the road. A production-quality Wampler pedal is much smaller, much more robust and much more user-friendly.


Of course, these days, it’s entirely possible to design a pedal on a computer and simulate how it will sound. Some guitar amp manufacturers design their amps that way. But you still want players to be able to plug in their guitar and see how it sounds (and how it feels to play through it, which is tricky with software simulations because of latency.)

So breadboards in guitar electronics design persist. There’s no substitute for the real thing (until virtual breadboards catch up).

And this all got me to thinking: what do we have that’s like a breadboard?

Pictures won’t cut it because users can’t run a picture and play around with it. They have to use their imaginations to interpret how the software will respond to their actions. It’s like showing someone a set of production design sketches and asking them “So, how did you like the movie?”

High-fidelity prototypes won’t cut it because customers make us ship them, and the business landscape is already drowning in legacy systems from the 90s that were only intended to be for illustration purposes.

I’m wondering: is there a way of throwing together a working app quickly for customer feedback – Microsoft Access-style – that doesn’t bind us to a UI design too early, and that very obviously can’t be shipped? And, very usefully, that could be evolved into a production-quality version without necessarily having to rewrite the whole thing from scratch?

Right now, nothing springs to mind.

(Talking of Scratch…)

Code, Meet World #NoBacklogs

Do we dare to imagine Agile without backlogs? How would that work? How would we know what to build in this iteration? How would we know what’s coming in future iterations?

Since getting into Agile Software Development (and its precursors, like DSDM and Extreme Programming), I’ve gradually become convinced that most of us have been doing it fundamentally wrong.

It’s a question of emphasis. What I see is thousands of teams working their way through a product plan. They deliver increments of the plan every week or three, and the illusion of being driven by feedback instead of by the plan is created by showing each increment to a “customer” and asking “Waddaya think?”

In the mid-90s, I worked as the sole developer on a project where my project managers – two of them! – would make me update their detailed plan every week. It was all about delivering the plan, and every time the plan bumped into reality, the whole 6-month plan had to be updated all over again.

This was most definitely plan-driven software development. We planned, planned, and planned again. And at no point did anybody suggest that maybe spending 1 day a week updating a detailed long-term plan might be wasted effort.

Inspired by that experience, I investigated alternative approaches to planning that could work in the real world. And I discovered one in a book by Tom Gilb called Principles of Software Engineering Management. Tom described an approach to planning that chimed with my own experience of how real projects worked in the real world.

It’s a question of complexity. Weather is very, very complicated. And this makes weather notoriously difficult to predict in detail. The further ahead we look, the less accurate our predictions tend to be. What will the weather be like tomorrow? We can take a pretty good guess with the latest forecasting models. What will the weather be like the day after tomorrow? We can take a guess, but it’s less likely to be accurate. What will the weather be like on a Tuesday six weeks from now? Any detailed prediction is very likely to be wrong. That’s an inherent property of complex systems: they’re unpredictable in the long term.

Software development is also complex. Complex in the code, its thousands of component parts, and the interactions between them. Complex in the teams, which are biological systems. Complex in how the software will interact with the real world. There are almost always things we didn’t think of.

So the idea that we can predict what features a software system will need in detail, looking months – or even just weeks ahead – seemed a nonsense to me.

But, although complex systems can be inherently unpredictable in detail, they tend to be – unless they’re completely chaotic – roughly predictable in general terms.

We can’t tell you what the exact temperature will be outside the Dog & Fox in Wimbledon Village at 11:33am on August 13th 2025, but we can pretty confidently predict that it will not be snowing.

And we can’t be confident that we’ll definitely need a button marked “Sort A-Z” on a web page titled “Contacts” that displays an HTML table of names and addresses, to be delivered 3 months from now in the 12th increment of a React web application. But we can be confident that users will need to find an address to send their Christmas card to.

The further we look into the future, the less detailed our predictions need to become if they are to be useful in providing long-term direction. And they need to be less detailed to avoid the burden of continually updating a detailed plan that we know is going to change anyway.

This was a game-changer for me. I realised that plans are perishable goods. Plans rot. Curating a detailed 6-month plan, to me, was like buying a 6-month supply of tomatoes. You’ll be back at the supermarket within a fortnight.

I also realised that you’ve gotta eat those tomatoes before they go bad. It’s the only way to know if they’re good tomatoes. Features delivered months after they were conceived are likely rotten – full of untested assumptions piled on top of untested assumptions about what the end users really need. In software, details are very highly biodegradable.

So we need to test our features before they go off. And the best place to test them is in the real world, with real end users, doing real work. Until our code meets the real world, it’s all just guesswork.

Of course, in some domains, releasing software into production every week – or every day even – is neither practical nor desirable. I wouldn’t necessarily recommend it for a nuclear power station, for example.

And in these situations where releases create high risk, or high disruption to end users, we can craft simulated release environments where real end users can try the software in an as-real-as-we-can-make-it world.

If detailed plans are only likely to survive until the next release, and if the next release should be at most a couple of weeks away, then arguably we should only plan in detail – i.e., at the level of features – up to the next release.

Beyond that, we should consider general goals instead of features. In each iteration, we ask “What would be the simplest set of features that might achieve this goal?” If the feature set is too large to fit in that iteration, we can break the goal down. We build that feature set, and release for end user testing in the real (or simu-real) world to see if it works.

Chances are, it won’t work. It might be close, but usually there’s no cigar. So we learn from that iteration and feed the lessons back in to the next. Maybe extra features are required. Maybe features need tweaking. Maybe we’re barking up the wrong tree and need to completely rethink.

Each iteration is therefore about achieving a goal (or a sub-goal), not about delivering a set of features. And the output of each release is not features, but what we learn from watching real end users try to achieve their goal using the features. The output of software releases is learning, not features.

This also re-frames the definition of “done”. We’re not done because we delivered the features. We’re only done when we’ve achieved the goal. Maybe we do that in one iteration. Maybe we need more throws of the dice to get there.

So this model of software development sees cross-functional teams working as one to achieve a goal, potentially making multiple passes at it, and working one goal at a time. The goal defines the team. “We’re the team who enables diners to find great Chinese food”. “We’re the team who puts guitar players in touch with drummers in their town.” “We’re the team who makes sure patients don’t forget when their repeat prescriptions need to be ordered.”

Maybe you need a programmer to make that happen. Maybe you need a web designer to make that happen. Maybe you need a database expert to make that happen. The team is whoever you need to achieve that goal in the real world.

Now I look at the current state of Agile, and I see so many teams munching their way through a list of features, and so few teams working together to solve real end users’ problems. Most teams don’t even meet real end users, and never see how the software gets used in the real world. Most teams don’t know what problem they’re setting out to solve. Most teams are releasing rotten tomatoes, and learning little from each release.

And driving all of this, most teams have a dedicated person who manages that backlog of features, tweaking it and “grooming” it every time the plan needs to change. This is no different to my experiences of updating detailed project plans in the 90s. It’s plan-driven development, plain and simple.

Want to get unstuck from working through a detailed long-term plan? Then ditch the backlog, get with your real customer, and start solving real problems together in the real world.

Do We Confuse Work With Value?

Here’s a little thought experiment. Acme Widgets need a new stock control system. They invite software development companies to bid. Big Grey IT Corporation put forward a bid for the system that will take 20 weeks with a team of 50 developers, each developer getting paid £2,000 a week. Their total bid for the system is £2 million.

Acme’s CEO is about to sign off on the work when he gets a call from Whizzo Agile Inc, who have read the tender specification and claim they can deliver the exact same system in the exact same time with a team of just 4 crack developers.

Assuming both teams would deliver the exact same outcome at the exact same time (and, yes, in reality a team of 50 would very probably deliver late or not at all, statistically), how much should Whizzo Agile charge?

It’s a purely hypothetical question, but I’ve seen similar scenarios play out in real life. A government department invites bids for an IT system. The big players put in ludicrously huge bids (e.g., £400 million for a website), and justify them by massive over-staffing of the project. Smaller players – even the ones who can actually afford to tender and who have the government contacts – tend to get shut out, no matter how good their bids look.

And I’ve long wondered if the problem here is how we tend to confuse work with value. Does that calculation at the back of the customer’s mind go something like: “Okay, so Option A is £2M for 5,000 developer days, and Option B is £0.5M for 400 developer days”? With Option A, the customer calculates they’ll get more work for their money.

But the value delivered is identical (in theory – in reality a smaller team would probably do a better job).  They get the same software in the same time frame, and they get it at a quarter of the price. It’s just that the developers on the small team get paid a lot more to deliver it. I’ve watched so many managers turn down Option B over the last 30 years.

A company considering a bid of £2M to build a software system is announcing that the value of that system to their business is significantly more than £2M. Let’s say, for the sake of argument, it’s double. Option A brings them a return of £2M. Option B brings a return of £3.5M. In those terms, is it a good business decision to go with Option A?
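
The arithmetic behind that comparison is worth making explicit. Here’s a quick sketch using the thought experiment’s figures (the £4M value of the system is the “for the sake of argument, it’s double” assumption; all numbers are hypothetical):

```python
# Hypothetical figures from the thought experiment, not real bids.
value_of_system = 4_000_000       # assumed: double the £2M bid

option_a_cost = 50 * 20 * 2_000   # 50 devs x 20 weeks x £2,000/week = £2M
option_b_cost = 500_000           # the smaller team's bid

option_a_return = value_of_system - option_a_cost
option_b_return = value_of_system - option_b_cost

print(option_a_return)  # -> 2000000 (£2M)
print(option_b_return)  # -> 3500000 (£3.5M)
```

Same software, same delivery date, but Option B returns £1.5M more, despite the developers on the small team being paid far more per head.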

In other areas of business, choosing to go with Option A would get you sacked. Why, in software development, is it so often the other way around?

The Value’s Not In Features, It’s In Learning

A small poll I ran on the Codemanship Twitter account seems to confirm what I’ve observed and heard in the field about “agile” development teams still being largely plan-driven.

If you’re genuinely feedback-driven, then your product backlog won’t survive much further out than the next cycle of customer feedback. Maintaining backlogs that look months ahead is a sign that just maybe you’re incrementally working through a feature list instead of iteratively solving a set of business problems.

And this cuts to the core of a major, fundamental malaise in contemporary Agile Software Development. Teams are failing to grasp that the “value” that “flows” in software development is in what we learn with each iteration, not in the features themselves.

Perhaps a better name for “features” might be “guesses” – we’re guessing what might be needed to solve a problem. We won’t know until we’ve tried, though. So each release is a vital opportunity to test our assumptions and feed back what we learn into the next release.

I see teams vigorously defending their product backlogs from significant change, and energetically avoiding feedback that might reveal that we got it wrong this time. Folk have invested a lot in the creation of their backlog – often envisioning a whole product in significant detail – and can take it pretty personally when the end users say “Nope, this isn’t working”.

With a first release – when our code meets the real world for the first time – I expect a lot of change to a product vision. With learning and subsequent iterations of the design, the product vision will usually stabilise. But when we track how much the backlog changes with each release on most teams, we see mostly tweaking. Initial product visions – which, let’s be clear, are just educated guesses at best – tend to remain largely intact. Once folk are invested in a solution, they struggle to let go of it.

Teams with a strong product vision often suffer from confirmation bias when considering end user feedback. (Remember: the “customer” on many products is just as invested in the product vision if they’ve actively participated in its creation.) Feedback that supports their thesis tends to be promoted. Feedback that contradicts their guesswork gets demoted. It’s just human nature, but its skewing effect on the design process usually gets overlooked.

The best way to avoid becoming wedded to a detailed product vision or plan is not to have a detailed product vision or plan. Assume as little as possible to make something simple we can learn from in the next go-round. Focus on achieving long-term goals, not on delivering detailed plans.

In simpler terms: ditch the backlogs.

Codemanship’s Code Craft Road Map

One of the goals behind my training courses is to help developers navigate all the various disciplines of what we these days call code craft.

It helps me to have a mental road map of these disciplines, refined from three decades of developing software professionally.

[Image: Codemanship’s code craft road map]

When I posted this on Twitter, a couple of people got in touch to say that they find it helpful, but also that a few of the disciplines were unfamiliar to them. So I thought it might be useful to go through them and summarise what they mean.

  • Foundations – the core enabling practices of code craft
    • Unit Testing – is writing fast-running automated tests that check the logic of our code, and that we can run many times a day to ensure any changes we’ve made haven’t broken the software. We currently know of no other practical way of achieving this. Slow tests cause major bottlenecks in the development process, and tend to produce less reliable code that’s more expensive to maintain. Some folk say “unit testing” to mean “tests that check a single function, or a single module”. I mean “tests that have no external dependencies (e.g., a database) and run very fast”.
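
To illustrate what “no external dependencies” means in practice, here’s a minimal, hypothetical example of the kind of test meant here – pure logic, no database or network, so it runs in microseconds:

```python
# A "unit test" in the sense used here: no external dependencies, very fast.
def total_price(items, discount_rate):
    """Pure domain logic: no database, no network, no file system."""
    subtotal = sum(price * qty for price, qty in items)
    return subtotal * (1 - discount_rate)

def test_discount_applied_to_subtotal():
    # Fast enough to run thousands of times a day.
    assert total_price([(10.0, 2), (5.0, 1)], discount_rate=0.1) == 22.5

test_discount_applied_to_subtotal()
```
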
    • Version Control – is seat belts for programmers. The ability to go back to a previous working version of the code provides essential safety and frees us to be bolder with our code experiments. Version Control Systems these days also enable more effective collaboration between developers working on the same code base. I still occasionally see teams editing live code together, or even emailing source files to each other. That, my friends, is the hard way.
    • Evolutionary Development – is what fast-running unit tests and version control enable. It is one or more programmers and their customers collectively solving problems together through a series of rapid releases of a working solution, getting it less wrong with each pass based on real-world feedback. It is not teams incrementally munching their way through a feature list or any other kind of detailed plan. It’s all about the feedback, which is where we learn what works and what doesn’t. There are many takes on evolutionary development. Mine starts with a testable business goal, and ends with that goal being achieved. Yours should, too. Every release is an experiment, and experiments can fail. So the ability to revert to a previous version of the code is essential. Fast-running unit tests help keep changes to code safe and affordable. If we can’t change the code easily, evolution stalls. All of the practices of code craft are designed to enable rapid and sustained evolution of working software. In short, code craft means more throws of the dice.
  • Team Craft – how developers work together to deliver software
    • Pair Programming – is two programmers working side-by-side (figuratively speaking, because sometimes they might not even be on the same continent), writing code in real time as a single unit. One types the code – the “driver” – and one provides high-level directions – the “navigator”. When we’re driving, it’s easy to miss the bigger picture. Just like on a car journey, in the days before GPS navigation. The person at the wheel needs to be concentrating on the road, so a passenger reads the map and tells them where to go. The navigator also keeps an eye out for hazards the driver may have missed. In programming terms, that could be code quality problems, missing tests, and so on – things that could make the code harder to change later. In that sense, the navigator in a programming pair acts as a kind of quality gate, catching problems the driver may not have noticed. Studies show that pair programming produces better quality code, when it’s done effectively. It’s also a great way to share knowledge within a team. One pairing partner may know, for example, useful shortcuts in their editor that the other doesn’t. If members of a team pair with each other regularly, soon enough they’ll all know those shortcuts. Teams that pair tend to learn faster. That’s why pairing is an essential component of Codemanship training and coaching. But I appreciate that many teams view pairing as “two programmers doing the work of one”, and pair programming can be a tough sell to management. I see it a different way: for me, pair programming is two programmers avoiding the rework of seven.
    • Mob Programming – sometimes, especially in the early stages of development, we need to get the whole team on the same page. I’ve been using mob programming – where the team, or a section of it, all work together in real-time on the same code (typically around a big TV or projector screen) – for nearly 20 years. I’m a fan of how it can bring forward all those discussions and disagreements about design, about the team’s approach, and about the problem domain, airing all those issues early in the process. More recently, I’ve been encouraging teams to mob instead of having team meetings. There’s only so much we can iron out sitting around a table talking. Eventually, I like to see the code. It’s striking how often debates and misunderstandings evaporate when we actually look at the real code and try our ideas for real as a group. For me, the essence of mob programming is: don’t tell me, show me. And with more brains in the room, we greatly increase the odds that someone knows the answer. It’s telling that when we do team exercises on Codemanship workshops, the teams that mob tend to complete the exercises faster than the teams who work in parallel. And, like pair programming, mobbing accelerates team learning. If you have junior or trainee developers on your team, I seriously recommend regular mobbing as well as pairing.
  • Specification By Example – is using concrete examples to drive out a precise understanding of what the customer needs the software to do. It is practiced usually at two levels of abstraction: the system, and the internal high-level design of the code.
    • Test-Driven Development – is using tests (typically internal unit tests) to evolve the internal design of a system that satisfies an external (“customer”) test. It mandates discovery of internal design in small and very frequent feedback loops, making a few design decisions in each feedback loop. In each feedback loop, we start by writing a test that fails, which describes something we need the code to do that it currently doesn’t. Then we write the simplest solution that will pass that test. Then we review the code and make any necessary improvements – e.g. to remove some duplication, or make the code easier to understand – before moving on to the next failing test. One test at a time, we flesh out a design, discovering the internal logic and useful abstractions like methods/functions, classes/modules, interfaces and so on as we triangulate a working solution. TDD has multiple benefits that tend to make the investment in our tests worthwhile. For a start, if we only write code to pass tests, then at the end we will have all our solution code covered by fast-running tests. TDD produces high test assurance. Also, we’ve found that code that is test-driven tends to be simpler, lower in duplication and more modular. Indeed, TDD forces us to design our solutions in such a way that they are testable. Testable is synonymous with modular. Working in fast feedback loops means we tend to make fewer design decisions before getting feedback, and this tends to bring more focus to each decision. TDD, done well, promotes a form of continuous code review that few other techniques do. TDD also discourages us from writing code we don’t need, since all solution code is written to pass tests. It focuses us on the “what” instead of the “how”. Overly complex or redundant code is reduced. So, TDD tends to produce more reliable code (studies find up to 90% fewer bugs in production), that can be re-tested quickly, and that is simpler and more maintainable. It’s an effective way to achieve the frequent and sustained release cycles demanded by evolutionary development. We’ve yet to find a better way.
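
A condensed sketch of one such feedback loop, using a hypothetical example (in real TDD the red, green and refactor steps are run and verified separately):

```python
# One micro-cycle of TDD, condensed into a single listing.

# 1. RED: write a failing test describing behaviour we don't have yet.
def test_leap_years():
    assert is_leap_year(2020) is True
    assert is_leap_year(2019) is False
    assert is_leap_year(1900) is False  # century years are not leap years...
    assert is_leap_year(2000) is True   # ...unless divisible by 400

# 2. GREEN: the simplest code that passes the test.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 3. REFACTOR: with the test green, improve names and structure, re-running
#    the test after each small change. Then pick the next failing test.
test_leap_years()
```
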
    • Behaviour-Driven Development – is working with the customer at the system level to precisely define not what the functions and modules inside do, but what the system does as a whole. Customer tests – tests we’ve agreed with our customer that describe system behaviour using real examples (e.g., for a £250,000 mortgage paid back over 25 years at 4% interest, the monthly payments should be exactly £1,290) – drive our internal design, telling us what the units in our “unit tests” need to do in order to deliver the system behaviour the customer desires. These tests say nothing about how the required outputs are calculated, and ideally make no mention of the system design itself, leaving the developers and UX folk to figure those design details out. They are purely logical tests, precisely capturing the domain logic involved in interactions with the system. The power of BDD and customer tests (sometimes called “acceptance tests”) is how using concrete examples can help us drive out a shared understanding of what exactly a requirement like “…and then the mortgage repayments are calculated” really means. Automating these tests to pull in the example data provided by our customer forces us to be 100% clear about what the test means, since a computer cannot interpret an ambiguous statement (yet). Customer tests provide an outer “wheel” that drives the inner wheel of unit tests and TDD. We may need to write a bunch of internal units to pass an external customer test, so that outer wheel will turn slower. But it’s important those wheels of BDD and TDD are directly connected. We only write solution code to pass unit tests, and we only write unit tests for logic needed to pass the customer test.
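
A sketch of what automating such a customer test forces us to pin down: the exact expected payment depends entirely on the calculation convention agreed with the customer. The version below assumes standard monthly amortization – a hypothetical choice, and precisely the kind of ambiguity (a different agreed convention gives a different exact figure) that automating the example surfaces:

```python
# Customer ("acceptance") test sketch for the mortgage example. Assumes the
# standard monthly amortization formula -- a hypothetical convention that the
# customer would have to confirm for the test to be meaningful.

def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

def test_customer_example_250k_over_25_years_at_4_percent():
    payment = monthly_payment(250_000, 0.04, 25)
    # Under monthly amortization this works out to roughly £1,319.59; the
    # agreed convention determines the exact figure the test must assert.
    assert abs(payment - 1319.59) < 0.01

test_customer_example_250k_over_25_years_at_4_percent()
```
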
  • Code Quality – refers specifically to the properties of our code that make it easier or harder to change. As teams mature, their focus will often shift away from “making it work” to “making it easier to change, too”. This typically signals a growth in the maturity of the developers as code crafters.
    • Software Design Principles – address the underlying factors in code mechanics that can make code harder to change. On Codemanship courses, we teach two sets of design principles: Simple Design and Modular Design.
      • Simple Design
        • The code must work
        • The code must clearly reveal its intent (i.e., using module names, function names, variable names, constants and so on, to tell the story of what the code does)
        • The code must be low in duplication (unless that makes it harder to understand)
        • The code must be the simplest thing that will work
      • Modular Design (where a “module” could be a class, or component, or a service etc)
        • Modules should do one job
        • Modules should know as little about each other as possible
        • Module dependencies should be easy to swap
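
These three principles can be sketched in a few lines of hypothetical code – the dependency is an abstraction, injected, and therefore trivially swappable:

```python
# Sketch of the modular-design principles (all names are hypothetical).
from typing import Protocol

class PaymentGateway(Protocol):       # the dependency is an abstraction...
    def charge(self, amount: int) -> bool: ...

class Checkout:
    """Does one job: orchestrates a purchase. Knows nothing about which
    gateway implementation it's actually talking to."""
    def __init__(self, gateway: PaymentGateway):
        self.gateway = gateway        # ...and it's injected, so easy to swap

    def purchase(self, amount: int) -> str:
        return "OK" if self.gateway.charge(amount) else "DECLINED"

# Swapping the dependency -- e.g. for a fake in fast unit tests -- requires
# no change to Checkout at all.
class FakeGateway:
    def charge(self, amount: int) -> bool:
        return amount <= 100

print(Checkout(FakeGateway()).purchase(50))   # -> OK
```
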
    • Refactoring – is the discipline of improving the internal design of our software without changing what it does. More bluntly, it’s making the code easier to change without breaking it. Like TDD, refactoring works in small feedback cycles. We perform a single refactoring – like renaming a class – and then we immediately re-run our tests to make sure we didn’t break anything. Then we do another refactoring (e.g., move that class into a different package) and test again. And then another refactoring, and test. And another, and test. And so on. As you can probably imagine, a good suite of fast-running automated tests is essential here. Refactoring and TDD work hand-in-hand: the tests make refactoring safer, and without a significant amount of refactoring, TDD becomes unsustainable. Working in these small, safe steps, a good developer can quite radically restructure the code whilst ensuring all along the way that the software still works. I was very tempted to put refactoring under Foundation, because it really is a foundational discipline for any kind of programming. But it requires a good “nose” for code quality, and it’s also an advanced skill to learn properly. So I’ve grouped it here under Code Quality. Developers need to learn to recognise code quality problems when they see them, and get hundreds of hours of practice at refactoring the code safely to eliminate them.
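
A small, hypothetical example of the kind of micro-step involved – two refactorings (extract a function, name the magic number), with the same test passing before and after each one:

```python
# Refactoring sketch: improve the design, keep behaviour identical,
# re-running the tests after every small step. Names are hypothetical.

# Before: one function doing two jobs, with a magic number.
def invoice_total_before(lines):
    total = sum(qty * price for qty, price in lines)
    return total + total * 0.2

# After two small refactorings -- extract function, name the constant --
# behaviour is unchanged:
VAT_RATE = 0.2

def vat(amount):
    return amount * VAT_RATE

def invoice_total(lines):
    subtotal = sum(qty * price for qty, price in lines)
    return subtotal + vat(subtotal)

# The test that must pass before AND after every step:
def test_invoice_includes_vat():
    assert invoice_total([(2, 10.0)]) == 24.0

test_invoice_includes_vat()
```
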
    • Legacy Code – is code that is in active use, and therefore probably needs to be updated and improved regularly, but is too expensive and risky to change. This is usually because the code lacks fast-running automated tests. To change legacy code safely, we need to get unit tests around the parts of the code we need to change. To achieve that, we usually need to refactor that code to make it easy to unit test – i.e., to remove external dependencies from that code. This takes discipline and care. But if every change to a legacy system started with these steps, over time the unit test coverage would rise and the internal design would become more and more modular, making changes progressively easier. Most developers are afraid to work on legacy code. But with a little extra discipline, they needn’t be. I actually find it very satisfying to rehabilitate software that’s become a millstone around our customers’ necks. Most code in operation today is legacy code.
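
Here’s a hypothetical sketch of that first step – separating the pure logic from its external dependency so it can be unit tested:

```python
# Making legacy code unit-testable by extracting its external dependency.
# The "legacy shape" below is illustrative, not from any real system:
#
#   def overdue_accounts(self):
#       rows = self.db.query("SELECT ...")   # can't test without a database
#       return [r for r in rows if r["days_overdue"] > 30]

# Step 1: extract the pure logic away from the database call.
def filter_overdue(rows, threshold_days=30):
    return [r for r in rows if r["days_overdue"] > threshold_days]

# Step 2: the logic now has a fast unit test, so future changes to it
# are safe -- and coverage of the legacy system has gone up a little.
def test_filter_overdue():
    rows = [{"id": 1, "days_overdue": 45}, {"id": 2, "days_overdue": 10}]
    assert [r["id"] for r in filter_overdue(rows)] == [1]

test_filter_overdue()
```
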
    • Continuous Inspection – is how we catch code quality problems early, when they’re easier to fix. Like anything with the word “continuous” in the title, continuous inspection implies frequent automated checking of the code for code quality “bugs” like functions that are too big or too complicated, modules with too many dependencies and so on. In traditional approaches, teams do code reviews to find these kinds of issues. For example, it’s popular these days to require a code review before a developer’s changes can be merged into the master branch of their repo. This creates bottlenecks in the delivery process, though. Code reviews performed by people looking at the code are a form of manual testing. You have to wait for someone to be available to do it, and it may take them some time to review all the changes you’ve made. More advanced teams have removed this bottleneck by automating some or all of their code reviews. It requires some investment to create an effective suite of code quality gates, but the resulting speed-up in the check-in process usually more than pays for it. Teams doing continuous inspection tend to produce code of a significantly higher quality than teams doing manual code reviews.
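
A minimal, hypothetical example of the kind of automated quality gate meant here – a check that flags functions that are too long. (In practice, teams wire linters and static analysers into their build rather than rolling their own, but the principle is the same.)

```python
# A tiny automated code-quality gate: flag over-long functions.
# MAX_FUNCTION_LINES is an arbitrary threshold chosen for illustration.
import ast

MAX_FUNCTION_LINES = 10

def long_functions(source):
    """Return the names of functions longer than the threshold."""
    tree = ast.parse(source)
    return [node.name
            for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)
            and (node.end_lineno - node.lineno + 1) > MAX_FUNCTION_LINES]

sample = "def tiny():\n    return 1\n"
print(long_functions(sample))  # -> [] (nothing to flag)
```

Run against every check-in, a suite of checks like this gives the team continuous, unattended code review.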
  • Software Delivery – is all about how the code we write gets to the operational environment that requires it. We typically cover it in two stages: how does code get from the developer’s desktop into a shared repository of code that could be built, tested and released at any time? And how does that code get from the repository onto the end user’s smartphone, or the rented cloud servers, or the TV set-top box as a complete usable product?
    • Continuous Integration – is the practice of developers frequently (at least once a day) merging their changes into a shared repository from which the software can be built, tested and potentially deployed. Often seen as purely a technology issue – “we have a build server” – CI is actually a set of disciplines that the technology only enables if the team applies them. First, it implies that developers don’t go too long before merging their changes into the same branch – usually the master branch or “trunk”. Long-lived developer branches – often referred to as “feature branches” – that go unmerged for days prevent frequent merging (and testing of merged code), and are therefore most definitely not CI. The benefit of frequent tested merges is that we catch conflicts much earlier, and more frequent merges typically mean fewer changes in each merge, and therefore fewer merge conflicts overall. Teams working on long-lived branches often report being stuck in “merge hell” where, say, at the end of the week everyone in the team tries to merge large batches of conflicting changes. In CI, once a developer has merged their changes to the master branch, the code in the repo is built and the tests are run to ensure none of those changes has “broken the build”. It also acts as a double-check that the changes work on a different machine (the build server), which reduces the risk of configuration mistakes. Another implication of CI – if our intent is to have a repository of code that can be deployed at any time – is that the code in the master branch must always work. This means that developers need to check before they merge that the resulting merged code will work. Running a suite of good automated tests beforehand helps to ensure this. Teams who lack those tests – or who don’t run them because they take too long – tend to find that the code in their repo is permanently broken to some degree.
In this case, releases will require a “stabilisation” phase to find the bugs and fix them. So the software can’t be released as soon as the customer wants.
    • Continuous Delivery – means ensuring that our software is always shippable. This encompasses a lot of disciplines. If there is code sitting on developers’ desktops or languishing in long-lived branches, we can’t ship it. If the code sitting in our repo is broken, we can’t ship it. If there’s no fast and reliable way to take the code in the repo and deploy it as a working end product to where it needs to go, we can’t ship it. As well as disciplines like TDD and CI, continuous delivery also requires a very significant investment in automating the delivery pipeline – automating builds, automating testing (and making those tests run fast enough), automating code reviews, automating deployments, and so on. And these automated delivery processes need to be fast. If your builds take 3 hours – usually because the tests take so long to run – then that will slow down those all-important customer feedback loops, and slow down the process of learning from our releases and evolving a better design. Build times in particular are like the metabolism of your development process. If development has a slow metabolism, that can lead to all sorts of other problems. You’d be surprised how often I’ve seen teams with myriad difficulties watch those issues magically evaporate after we cut their build+test time down from hours to minutes.

Now, most of this stuff is known to most developers – or, at the very least, they know of these practices. The final two headings caused a few scratched heads. These are more advanced topics that I’ve found teams do need to think about, but usually after they’ve mastered the core disciplines that come before.

  • Managing Code Craft
    • The Case for Code Craft – acknowledges that code craft doesn’t exist in a vacuum, and shouldn’t be seen as an end in itself. We don’t write unit tests because, for example, we’re “professionals”. We write unit tests to make changing code easier and safer. I’ve found it helps enormously both to be clear in my own mind about why I’m doing these things, and to be able to persuade teams that they should try them, too. I hear it from teams all the time: “We want to do TDD, but we’re not allowed”. I’ve never had that problem, and my ability to articulate why I’m doing TDD helps.
    • Code Craft Metrics – once you’ve made your case, you’ll need to back it up with hard data. Do the disciplines of code craft really speed up feedback cycles? Do they really reduce bug counts, and does that really save time and money? Do they really reduce the cost of changing code? Do they really help us to sustain the pace of innovation for longer? I’m amazed how few teams track these things. It’s very handy data to have when the boss comes a’knockin’ with their Micro-Manager hat on, ready to tell you how to do your job.
    • Scaling Code Craft – is all about how code craft on a team and within a development organisation just doesn’t magically happen overnight. There are lots of skills and ideas and tools involved, all of which need to be learned. And these are practical skills, like riding a bicycle. You can’t just read a book and go “Hey, I’m a test-driven developer now”. Nope. You’re just someone who knows in theory what TDD is. You’ve got to do TDD to learn TDD, and lots of it. And all that takes time. Most teams who fail to adopt code craft practices do so because they grossly underestimated how much time would be required to learn them. They approach it with such low “energy” that the code craft learning curve might as well be a wall. So I help organisations structure their learning, with a combination of reading, training and mentoring to get teams on the same page, and peer-based practice and learning. To scale that up, you need to be growing your own internal mentors. Ad hoc, “a bit here when it’s needed”, “a smidgen there when we get a moment” simply doesn’t seem to work. You need to have a plan, and you need to invest. And however much you were thinking of investing, it’s not going to be enough.
  • High-Integrity Code Craft
    • Load-Bearing Code – is that portion of code that we find in almost any non-trivial software that is much more critical than the rest. That might be because it’s on an execution path for a critical feature, or because it’s a heavily reused piece of code that lies on many paths for many features. Most teams are not aware of where their load-bearing code is. Most teams don’t give it any thought. And this is where many of the horror stories attributed to bugs in software begin. Teams can improve at identifying load-bearing code, and at applying more exhaustive and rigorous testing techniques to achieve higher levels of assurance when needed. And before you say “Yeah, but none of our code is critical”, I’ll bet a shiny penny there’s a small percentage of your code that really, really, really needs to work. It’s there, lurking in most software, just waiting to send that embarrassing email to everyone in your address book.
    • Guided Inspection – is a powerful way of testing code by reading it. Many studies have shown that code inspections tend to find more bugs than any other kind of testing. In guided inspections, we step through our code line by line, reasoning about what it will do for a specific test case – effectively executing the code in our heads. This is, of course, labour-intensive, but we would typically only do it for load-bearing code, and only when that code itself has changed. If we discover new bugs in an inspection, we feed that back into an automated test that will catch the bug if it ever re-emerges, adding it to our suite of fast-running regression tests.
    • Design By Contract – is a technique for ensuring the correctness of the interactions between components of our system. Every interaction has a contract: a pre-condition that describes when a function or service can be used (e.g., you can only transfer money if your account has sufficient funds), and a post-condition that describes what that function or service should provide to the client (e.g., the money is deducted from your account and credited to the payee’s account). There are also invariants: things that must always be true if the software is working as required (e.g., your account never goes over its limit). Contracts are useful in two ways: for reasoning about the correct behaviour of functions and services, and for embedding expectations about that behaviour inside the code itself as assertions that will fail during testing if an expectation isn’t satisfied. We can test post-conditions using traditional unit tests, but in load-bearing code, teams have found it helpful to assert pre-conditions to ensure that not only do functions and services do what they’re supposed to, but they’re only ever called when they should be. DBC presents us with some useful conceptual tools, as well as programming techniques when we need them. It also paves the way to a much more exhaustive kind of automated testing, namely…
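A minimal sketch of that idea, using plain assertions and the money-transfer example from above. The `Account` class and the amounts are hypothetical; the pattern of pre-condition, post-condition and invariant checks is the point.

```python
# Design By Contract sketch: pre-conditions, post-conditions and an
# invariant embedded as assertions that fail loudly during testing.
class Account:
    def __init__(self, balance, limit=0):
        self.balance = balance
        self.limit = limit  # lowest balance allowed (e.g. overdraft limit)

    def transfer(self, amount, payee):
        # Pre-condition: you can only transfer if you have sufficient funds.
        assert self.balance - amount >= self.limit, "insufficient funds"
        old_balance = self.balance
        old_payee_balance = payee.balance

        self.balance -= amount
        payee.balance += amount

        # Post-condition: money deducted from this account, credited to payee.
        assert self.balance == old_balance - amount
        assert payee.balance == old_payee_balance + amount
        # Invariant: the account never goes below its limit.
        assert self.balance >= self.limit

a = Account(100)
b = Account(0)
a.transfer(30, b)
print(a.balance, b.balance)  # 70 30
```

In production builds, assertions like these can typically be disabled (in Python, with the `-O` flag), so the contracts cost nothing once testing has done its job.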
    • Property-Based Testing – sometimes referred to as generative testing, is a form of automated testing where the inputs to the tests themselves are programmatically generated. For example, we might test that a numerical algorithm works for a range of inputs from 0…1000, at increments of 0.01. Or we might test that a shipping calculation works for all combinations of inputs of country, weight class and mailing class. This is achieved by generalising the expected results in our tests, so instead of asserting that the square root of 4 is 2, we might assert that the square root of any positive number multiplied by itself is equal to the original number. These properties of correct test results look a lot like the contracts we might write when we practise Design By Contract, and therefore we might find experience in writing contracts helpful in building that kind of declarative style of asserting. The beauty of property-based tests is that they scale easily. Going from 1,000 random inputs to 10,000 random inputs requires a change of a single character in our test. One character, 9,000 extra test cases. Two additional characters (100,000) yield 99,000 more test cases. Property-based tests enable us to achieve quite mind-boggling levels of test assurance with relatively little extra test code, using tools most developers already know.

So there you have it: my code craft road map, in a nutshell. Many of these disciplines are covered in introductory – but practical – detail in the Codemanship TDD course book.

If your team could use a hands-on introduction to code craft, our 3-day hands-on TDD course can give them a head-start.

The Return of Micro-Methods

Many moons ago, I experimented with what I call micro-methods for software development teams. A micro-method is a thought experiment that constrains teams with just three rules. Outside of those rules, teams can do whatever they want.

Here’s an example of a micro-method called “Hard Agile”:

  1. If any changes in the repo go unreleased for more than 7 days, the repo is reverted to the last release with no back-up
  2. All local copies of code and all non-trunk branches are deleted daily with no back-ups
  3. All check-ins are mutation tested. If the mutation coverage is less than 100% (or as near as margins for error might allow, depending on the tool used), the check-in is rejected

The purpose of the thought experiment is to imagine what kind of behaviour might emerge from the team that strictly follows these three rules, but no others.

In the case of Hard Agile, teams can’t go for more than a week without releasing, or they lose all their unreleased changes. Developers can’t go for more than a day without merging to master, or they lose all their un-merged changes. And developers can’t check in code that can be broken without any automated tests catching it.
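For readers who haven't met mutation testing (rule 3), here's a toy sketch of what it checks: deliberately break the code, re-run the tests, and expect at least one test to fail. The function and the single hand-written mutation are hypothetical; real tools generate many mutations automatically.

```python
# Toy mutation test: a mutant that survives the test suite means the
# tests don't really cover that code.
source = """
def add_tax(price, rate):
    return price + price * rate
"""

def run_tests(namespace):
    """Return True if all tests pass against the code in `namespace`."""
    add_tax = namespace["add_tax"]
    try:
        assert add_tax(100, 0.2) == 120
        assert add_tax(0, 0.2) == 0
        return True
    except AssertionError:
        return False

# The original code should pass its tests.
ns = {}
exec(source, ns)
assert run_tests(ns)

# A crude mutation: swap '+' for '-'. If the tests still pass, the
# mutant "survived" and our coverage is weaker than we thought.
mutant = source.replace("price + price", "price - price")
ns = {}
exec(mutant, ns)
killed = not run_tests(ns)
print("mutant killed" if killed else "mutant survived")
```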

There’s nothing in here about requirements, or about code reviews, or about planning, or any of that stuff. The team would be left to figure that out for themselves.

But they would have no choice but to release the software at least once a week, so they’d be getting feedback from real end users frequently and – if they want the product to succeed – they’d need to act on that feedback.

They’d have no choice but to merge to master at least once a day, so more frequent releases would be practical and merge conflicts would be spotted earlier.

And they’d have no choice but to ensure all of their code is meaningfully tested with automated tests, so their releases will be more reliable and easier to change without breaking the software. They’d also probably need to follow design idioms that favour testability, and to keep the software simpler because of the “test automation tax” they’ll have to pay for every line of solution code they want to add – so there’d be a likely side-benefit of the design simplicity and modularity that studies have shown tend to result. We’ve tended to find that the optimum way to ensure all our code is tested is to only write solution code to pass tests. Sound familiar?

And, of course, these theoretical rules can be fitted with sliders. Maybe we revert the repo if changes go unreleased for more than 3 days, or even one day? Maybe we delete local copies and branches twice a day, or every hour?

Such a set-up, I reckon, might be useful for training teams – starting with nice, easy limits and gradually shrinking them to help ingrain these cadences until they become second nature.

Once a team’s “inner egg timer” is developed enough, we could remove the rules and see what behaviour emerges.

Ant Colonies, Agile At Scale & The Illusion Of Control

Just a brief thought this morning about how susceptible we are to the illusion of control over complex systems (including organisations), and how that illusion has given rise to the myth of “scaled” software development processes.

Consider an ant colony: there could be millions of individual ants making up such an organisation. Working together, somehow, the colony appears to act as one – making decisions, performing work (like foraging for food) and building complex structures that look for all the world like they were designed by some overall architect.

But this appearance of central control is an illusion. All of that complex behaviour emerges through the millions of individuals doing their own things, but following simple rules that themselves have emerged through the process of evolution. Ant species who didn’t evolve to follow these simple rules were selected for extinction.

But this illusion of control is very seductive, and we seem to be hardwired to believe it: whether we believe that ant nests are designed, or we believe that the rain is controlled by the rain gods. In reality – and it’s ironic that we really only came to understand this in the age of computers, when we could see complex order emerge as the result of simple rules being applied over and over again in front of our very eyes – in truly complex systems, order can only emerge. It cannot be imposed from outside or above. Any attempt to do so will usually produce a short-term perturbation, but the underlying rules of the system will soon enough return it to its emergent form – a process called homeostasis.

And it’s homeostasis that explains why top-down “transformations” and re-organisations ultimately don’t work. If we don’t change the underlying – usually unwritten – rules that determine how individuals behave and interact, the old order will reestablish itself not long after the consultants and coaches have left. It therefore also explains why these “transformations” never seem to end, because management find they have to apply continuous never-ending effort to try to stop the old order re-establishing itself. It’s expensive, exhausting and ultimately futile.

To change the behaviour of ant colonies, you must change the DNA of the ants.

But how do we change the “DNA” of the individuals in our organisation? Well, how did nature do it? Evolution by natural selection. The ants in the colony have behaviours that are beneficial to the colony because if they weren’t, the colony wouldn’t survive. So when we see ant colonies today, we only see the ones where the ants have the good DNA. The bad DNA was selected for extinction.

Likewise, in businesses, it matters a great deal what behaviours in individuals are rewarded and what behaviours get selected for extinction. Quite often, we see behaviour being rewarded that harms our organisation. A classic example is bloating teams. Managers get rewarded more for managing larger teams. But in software development, larger teams tend to achieve less (and when they reach a certain size, they tend to achieve nothing) at much greater cost to the organisation.

It’s in the managers’ interests to grow the teams, but it’s in the organisation’s interest that teams be small. Incentives are not aligned, and damage to the organisation tends to result. Sometimes fatal damage. I’m thinking of a software company that had 1200 people in engineering, performing the work of maybe 30-40 good engineers. Their HQ is a supermarket now.

Reward the wrong behaviours, promote people for the wrong reasons, and the individuals in your organisation will receive those signals loud and clear and adapt their behaviour accordingly. Many software organisations have a knack for losing their best people and retaining the ones doing all the damage.

In nature, when an ant colony dies, the ants in it die. When a business dies, the ants move to other colonies, taking their dysfunctions with them. Just as people within an organisation are often rewarded for behaviours that harm the organisation, our hiring processes often reward those same behaviours. “How many people in your last team? How many $millions was in the budget you controlled?” And so on. When I read job specs for the software industry, I despair at just how sought-after harmful behaviours tend to be.

And, at the risk of getting a little meta, nowhere is this more visible than in the spreading of the illusion of control itself. When a top-down transformation finally implodes, the armies of certified-up-to-the-eyeballs coaches and consultants the transformation created are scattered in the wind, spreading this illusion to myriad other organisations. Until – let’s face it – the spreading of these behaviours becomes the only purpose of these behaviours. A virus, essentially. It exists to spread, to consume and re-purpose its hosts, and then spread again.

So company extinction doesn’t filter out the bad DNA. It just spreads it, like a supernova spreads radioactive elements. Not only do we need to change the incentives within our organisations to reward beneficial behaviours, we also need to take a long, hard look at our hiring practices and ask ourselves “What is this filtering out? What is it letting in?”

 

 

Codemanship’s Code Craft Road Map

If you check out the suggested reading list at the back of the Codemanship TDD course book, you’ll find what is essentially my road map for mastering code craft. There are 8 books that make up what I have found to be the essential code craft skills, namely:

  • Unit test design
  • Test-Driven Development
  • Software Design Principles
  • Refactoring
  • Specification By Example
  • Changing legacy code
  • Continuous Integration
  • Continuous Delivery

These are the core skills needed to rapidly iterate working software so we can learn our way to a solution that’s fit for purpose and that will be easy to change as the problem changes, so we can keep it fit for purpose for as long as the customer needs.

They are foundational skills that – if it were up to me – we’d instill in all developers.

You cannot learn these skills in a 3-day training workshop with maybe 1.5 days’ hands-on practical experience. Realistically, it’s going to take a smart person 2-3 years to read these books and learn to effectively apply the ideas. My TDD course is your road map for that journey.

But, I’m sad to say, it’s a journey that maybe only 20-25% of developers who come on the course actually take.

I’m painfully aware that, for the majority, a code craft training course is like spending 3 days being shown how to cook healthy meals, and then being sent back to your day job flipping burgers. Some will be inspired to try the recipes themselves, but have to do it in their own kitchens and in their own time. The usual outcome – after they’ve mastered the techniques – is that these developers outgrow their organisations and move on, taking the skills with them.

As an outside observer who stays in touch with hundreds of people who came on courses, I can see this happening. Developers who came on a course, got inspired, and invested their own time, rarely stay put. Their current organisation begins to frustrate them – “We make burgers! That’s just how it’s done here!” – and they’ve made themselves much more bankable in the meantime. On average, developers with good code craft skills earn 27% more.

With very few exceptions, the organisations where developers got inspired and stayed have been the ones who gave them the time and the support to continue learning. Now they mentor junior developers in those same skills – some are even trainers themselves now – and that initial investment in them pays dividends for their employers.

Of course, you might be reading this and thinking “We better not train our developers. They’ll just leave.” But here’s the thing: you need these skills. They’re foundational, remember. If you want reliable software that does what the business needs, when it needs it – and is easy to change as those needs change – then there are no shortcuts. Employers are falling over themselves to hire people with these skills, but where are developers learning them in the first place?

It’s not the training that’s the issue here. Our TDD course just kick-starts the learning process. By giving teams a practical hands-on introduction to the key ideas, and a road map to take with them on their journey, you can save a good deal of time and money on wrong turns and dead ends. (It took me well over a decade, a tonne of trial-and-error, and a library full of books to get a handle on these skills and distill it down.)

But they still have to go on the journey. And if you don’t invest in that, they’ll take the journey by themselves and will very likely be delivering the benefits to some other employer soon enough.

 

Do Our Tools Need A Rethink?

In electronics design – a sector I spent a bit of time in during the 90s – tool developers recognised the need for their software to integrate with other software involved in the design and manufacturing process. Thanks to industry data standards, a PCB design created in one tool could be used to generate a bill of parts in a management tool, to simulate thermal and electromagnetic emissions in another tool, and drive pick-and-place equipment on an assembly line.

I marveled at how seamlessly the tools in this engineering ecosystem worked together, saving businesses eye-boggling amounts of money every year. Software can work wonders.

So it’s been disappointing to see just how disconnected and clunky our own design and development systems have turned out to be in the software industry itself. (Never live in a builder’s house!) Our ecosystem is largely made up of Heath Robinson point solutions – a thing that runs unit tests, a thing that tracks file versions, a thing that builds software from source files, a thing that executes customer tests captured in text files – all held together with twigs and string. There are no industry data interchange standards for these tools. Unit test results come in whatever shape the specific unit test tool developers decided. Customer tests come in whatever shape the specific customer testing tool developers decided. Build scripts come in whatever shape the build tool developers decided. And so on.

When you run the numbers, taking into account just how many different tools there are and therefore how many potential combinations of tools might be used in a team’s delivery pipeline, it’s brain-warping.

I see this writ large in the amount of time and effort it takes teams to get their pipeline up and running, and in the vastly larger investment needed to connect that pipeline to visible outputs like project dashboards and build monitors.

It occurs to me that if the glue between the tools was limited to a handful of industry standards, a lot of that work wouldn’t be necessary. It would be far easier, say, to have burn-down charts automatically refreshed after customer tests have been run in a build.
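To make the idea a little more tangible, here's a toy illustration: if every testing tool published results in one shared, standard shape (the JSON schema below is entirely hypothetical), any dashboard could consume them without tool-specific glue.

```python
# Toy sketch: a hypothetical standard test-results document, and a
# dashboard computing a pass rate from it without knowing which tool
# produced the results.
import json

results_doc = json.dumps({
    "suite": "customer-tests",
    "results": [
        {"name": "transfer funds", "status": "passed"},
        {"name": "overdraft rejected", "status": "passed"},
        {"name": "statement export", "status": "failed"},
    ],
})

def pass_rate(doc):
    """Compute a dashboard-ready pass rate from a standard results doc."""
    results = json.loads(doc)["results"]
    passed = sum(1 for r in results if r["status"] == "passed")
    return passed / len(results)

print(f"{pass_rate(results_doc):.0%}")  # 67%
```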

For this to happen, we’d need to rethink our tools in the context of wider workflows – something we’re notoriously bad at. The bigger picture.

Perhaps this is a classic illustration of what you end up with when you have an exclusively feature/solution or product focus, yes? Unit tests, customer tests, automated builds, static analysis results, commits, deployments – these are all actors in a bigger drama. The current situation is indicative of actors who only read their parts, though.

Code Craft Bootstrapped

I’ll be blogging about this soon, but just wanted to share some initial thoughts on a phenomenon I’ve observed in very many development teams. A lot of teams confuse their tools with associated practices.

“We do TDD” often really means “We’re using JUnit”. “We refactor” often means “We use Resharper”. “We do CI” often means “We’re using Jenkins”. And so on.

As two current polls I’m running strongly suggest, a lot of teams who think they’re doing Continuous Integration appear to develop on long-lived branches (e.g., “feature branches”). But because they’re using the kind of tools we associate with CI, they believe that’s what they’re doing.

This seems to me to be symptomatic of our “solution first” culture in software development. Here’s a solution. Solution to what, exactly? We put the cart before the horse, adopting, say, Jenkins before we think about how often we merge our changes and how we can test those merges frequently to catch conflicts and configuration problems earlier.

Increasingly, I believe that developers should learn the practices first – without the tools. It wasn’t all that long ago when many of these tools didn’t exist, after all. And all the practices predate the tools we know today. You can write automated tests in a main() method, for example, and learn the fundamentals of TDD without a unit testing framework. (Indeed, as you refactor the test code, you may end up discovering a unit testing framework hiding inside the duplication.)
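As a sketch of that point (in Python, with a hypothetical FizzBuzz exercise as the code under test): automated tests without any framework are just functions, assertions and a main entry point. Notice how the duplication in collecting and running the tests is exactly where a tiny framework starts to emerge.

```python
# TDD without a unit testing framework: plain functions, plain asserts,
# run from a main() method.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

def test_multiples_of_three():
    assert fizzbuzz(3) == "Fizz"

def test_multiples_of_five():
    assert fizzbuzz(5) == "Buzz"

def test_multiples_of_both():
    assert fizzbuzz(15) == "FizzBuzz"

def main():
    # The duplication hiding a framework: collect the tests, run each
    # one, report the result.
    tests = [test_multiples_of_three, test_multiples_of_five,
             test_multiples_of_both]
    for test in tests:
        test()
    print(f"{len(tests)} tests passed")

if __name__ == "__main__":
    main()
```

Refactor that collect-and-run duplication a few times and you've rediscovered the skeleton of xUnit.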

Talking of refactoring, once upon a time we had no automated refactoring tools beyond cut, copy and paste and maybe Find/Replace. Maybe developers will grok refactoring better if they start learning to do refactorings the old-school way?

And for many years we automated our builds using shell scripts. Worked just fine. We just got into the habit of running the script on the build machine every time we checked in code.

These tools make it easier to apply these practices, and help us scale them up by taking out a lot of donkey work. But I can’t help wondering if starting without them might help us focus on the practices initially, as well as maybe helping us to develop a real appreciation for how much they can help – when used appropriately.