For Distributed Teams, Code Craft is Critical

Right now, most software teams all around the world are working from home. Many have not done it before, and are on a learning curve that means last week’s productivity won’t be returning for a while.

I’ve worked on distributed teams many times, and – through Codemanship – trained and mentored dozens of teams remotely. One thing I’ve learned from all that remote development experience is that coding discipline becomes super-important.

Just as distributed systems amplify every design flaw, turning what would be a headache in a monolith into a major outbreak in a service-oriented architecture, distributed working amplifies team dysfunctions as the communication pathways take on extra weight.

Here’s how code craft can help:

  • Unit tests – keeping the software working is Distributed Dev Team 101. Open Source projects rely on suites of fast-running tests to protect against check-ins that break the code.
  • Continuous Integration – is how distributed teams communicate their changes to each other. Teams should merge their changes to the master branch often and stay build-aware, keeping one eye on other people’s merges to see what’s changed. It’s much easier on a co-located team to keep everyone in step, because we can see and talk to each other about the changes we’re making. If remote developers do infrequent, large merges instead, integration hell gets amplified tenfold by the extra communication barriers.
  • Test-Driven Development – a lot of the communication between developers, and between developers and their customers, can be hand-wavy and vague. If communication is easy – like on a co-located team – we just go around a few more times until we converge on what’s required. But when communication is harder, as it is on distributed teams, those extra go-arounds get very expensive. Using executable tests as specifications removes the ambiguity: the software should do exactly this (see the executable-specification sketch after this list). And TDD – done well – produces suites of useful, fast-running automated tests. It’s a win-win.
  • Design Principles – Well-factored code is very important to co-located teams, and super-duper-important to distributed teams. Let’s count the ways:
    • Simple Design
      • Code should work – if it doesn’t work, we can’t ship it. Any changes that break the code block the team. It’s a big deal on a co-located team, but it’s a really big deal on a distributed team.
      • Code should clearly communicate its intent – code should speak for itself, and that’s especially true when developers are working remotely and communicating requires extra effort. The easier code is to understand, the fewer teleconferences are required to understand it.
      • Code should be free of duplication – so much duplication in software is duplication of concepts. This often occurs when developers on teams work in isolation, unaware that someone else has already added a module that does what their module also does. Devs need to be aware of duplication in the code – Continuous Integration and merge awareness help here – and clued up about when they should refactor it and when they should leave it alone.
      • Code should be as simple as we can make it – every line of code that has to be maintained is another straw on the camel’s back. When the camel’s back stretches between multiple locations – possibly in multiple time zones – the impact of every additional straw is felt many-fold.
    • Modular Design
      • Modules should do one job – the ability to change the behaviour of a system by just editing one module is critical to a team’s ability to make the changes they need without treading on the toes of other developers. On distributed teams, multiple developers all making changes to one module for multiple reasons can lead to some spectacular merge train wrecks (see the one-job-per-module sketch after this list).
      • Modules should hide their internal workings – the more modules are coupled to each other, the wider the impact of even the smallest change will be felt. Imagine your distributed team is working precariously balanced on high wires that are all interconnected. What you don’t want is for one person to start violently shaking their wire, sending ripples throughout the network. Or it could all come tumbling down. Again, it’s bad on co-located teams, but it’s Double-Plus-Triple-Word-Score-Bad on distributed teams. Every dependency can bring pain.
      • Modules should not depend directly on implementations of other modules – it’s good architecture generally for modules not to bind directly to the implementations of the other modules they use, for a variety of reasons. But it’s especially important when teams aren’t co-located. Taken together, the first three principles of modular design are better known as “Separation of Concerns”. Or, as I like to call it, the Principle of Somebody Else’s Problem. If my module needs to send an email, I shouldn’t need to know how emails are actually sent – all that detail should be hidden from me – and I should be able to work on my code without having to actually send emails when I test it. Sending emails is somebody else’s problem. This matters doubly in a test-driven approach to design: we want to write tests for code that has external dependencies – things it uses that other developers are working on – without binding directly to the implementations of those external components, so that we can swap in a test double that pretends to do the job (see the test-double sketch after this list). That’s how you scale TDD. That’s how you make TDD work in distributed teams, too.
      • Module interfaces should be designed from the client’s point of view – tied together with TDD, we can specify modules very precisely from the outside: this is what it should look like (interface) and this is what it should do (tests). Imagine your distributed team is making a jigsaw: the hard way is to have each person go off and make a piece of the jigsaw and then hope they all fit together at the end. The smart way is to define the shapes of the pieces as parts of the whole puzzle, and then have people implement the pieces based on the interfaces and tests agreed. You do this by designing systems from the outside in, defining modules by how they will be used from the client code’s POV. This also helps to restrict public interfaces to only what clients need to see, hiding internal details, improving encapsulation and reducing coupling. Coupling on distributed teams can be very, very expensive.
    • Refactoring – the still-rather-too-rare discipline of reshaping code without breaking the software is the means by which we achieve good design. Try as we might to never write code that’s hard to understand, or has duplication, or is overly complex, or too tightly coupled, we’ll always need to clean up our code as we go. If the impact of poor design is amplified on distributed teams, the importance of refactoring must be proportionally amplified. The alternative is relying on after-the-fact code reviews (e.g., in GitFlow), which will become multiple times the bottleneck they already were when your team was co-located and you could just pop over to Mary’s desk and ask.
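
To make the “executable specification” point concrete, here’s a minimal sketch in Python. The example is invented for illustration – the shipping_cost function and its £50 free-shipping threshold don’t come from any real project – but it shows how a test pins down exactly what the code should do, so a remote colleague doesn’t need a teleconference to find out where the boundary lies:

```python
# A minimal, hypothetical "executable specification" - the function, the
# threshold and the prices are invented for illustration.
import unittest


def shipping_cost(order_total):
    """Orders of 50.00 or more ship free; everything else costs a flat 4.99."""
    return 0.0 if order_total >= 50.0 else 4.99


class ShippingCostSpec(unittest.TestCase):
    # "It should do exactly this" - the boundary is unambiguous.
    def test_orders_of_50_or_more_ship_free(self):
        self.assertEqual(0.0, shipping_cost(50.0))

    def test_orders_under_50_pay_a_flat_rate(self):
        self.assertEqual(4.99, shipping_cost(49.99))


if __name__ == "__main__":
    unittest.main()
```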
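
Here’s a small, hypothetical before-and-after for the “one job” point, again in Python – the Report, ReportFormatter and ReportEmailer names are invented, not taken from any real codebase. In the “before” version, a developer changing the formatting and a developer changing the delivery both have to edit (and merge) the same class; in the “after” version they don’t:

```python
# A hypothetical illustration of "modules should do one job".
# Before: one class with two reasons to change, so two developers working on
# formatting and on delivery keep editing - and merging - the same file.

class Report:
    def __init__(self, lines):
        self.lines = lines

    def to_html(self):
        return "<ul>" + "".join(f"<li>{line}</li>" for line in self.lines) + "</ul>"

    def email_to(self, address):
        body = self.to_html()
        print(f"Sending report to {address}: {body}")  # stand-in for real email code


# After: each job lives in its own class, so changes to formatting and
# changes to delivery no longer collide in the same place.

class ReportFormatter:
    def to_html(self, lines):
        return "<ul>" + "".join(f"<li>{line}</li>" for line in lines) + "</ul>"


class ReportEmailer:
    def __init__(self, formatter):
        self.formatter = formatter

    def email_to(self, address, lines):
        body = self.formatter.to_html(lines)
        print(f"Sending report to {address}: {body}")  # stand-in for real email code


if __name__ == "__main__":
    ReportEmailer(ReportFormatter()).email_to("team@example.com", ["All tests passing"])
```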
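
And here’s a sketch of the Principle of Somebody Else’s Problem from the email example above, with all the names (Notifier, PasswordReset, FakeNotifier) invented for illustration. The module depends on an interface defined from its own point of view, and the test swaps in a double that only pretends to send anything – so I can write and test my code without the real implementation someone else is working on:

```python
# A sketch of "Somebody Else's Problem": my module depends on an interface it
# owns, and the test swaps in a double. All names here are invented.
import unittest
from typing import Protocol


class Notifier(Protocol):
    """The interface my module needs, defined from the client's point of view."""
    def send(self, to: str, message: str) -> None: ...


class PasswordReset:
    """My module. How messages actually get sent is somebody else's problem."""
    def __init__(self, notifier: Notifier) -> None:
        self._notifier = notifier

    def request_reset(self, email: str) -> None:
        self._notifier.send(email, "Follow this link to reset your password")


class FakeNotifier:
    """A test double that records messages instead of really sending them."""
    def __init__(self) -> None:
        self.sent = []

    def send(self, to: str, message: str) -> None:
        self.sent.append((to, message))


class PasswordResetTest(unittest.TestCase):
    def test_requesting_a_reset_notifies_the_user(self):
        notifier = FakeNotifier()
        PasswordReset(notifier).request_reset("jo@example.com")
        self.assertEqual(
            [("jo@example.com", "Follow this link to reset your password")],
            notifier.sent)


if __name__ == "__main__":
    unittest.main()
```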

Underpinning all of this is a need for levels of delivery process automation – automated testing, automated builds, automated deployments, automated code reviews – that the majority of teams are nowhere near.

And then there’s the interpersonal: the communication, the coordination, the planning and tracking, the collaborative design. It takes a big investment to make a distributed Agile team as productive as a co-located team.

All the Jiras and GitHubs and cloud-based build pipelines and remote whiteboards and shared IDEs and Zoom meetings in the world won’t save you if the code craft isn’t up to snuff, though. It’s foundational to delivering as a distributed team.

If you want to know more about code craft, visit www.codemanship.com

 

The Experience Paradox

One sentiment I hear very often from managers is how very difficult it is to hire experienced developers. This, of course, is a self-fulfilling prophecy. If you won’t hire developers without experience, how will inexperienced developers get the jobs they need to gain that experience?

I simultaneously hear from new developers – recent graduates, boot camp survivors, and so on – that they really struggle to land that first development job because they don’t have the experience.

When you hear both of these at the same time, you’d think there was a conversation to be had. Unfortunately, it’s a conversation conducted from inside soundproof booths, and I’m seeing no signs that this very obvious circle will be squared any time soon.

I guess we have to add it to the list of things that everyone knows is wrong, but everyone does anyway. (Like adding more developers to teams when the schedule’s slipping.)

Organisations should hire for potential as well as experience. People who have the potential to be good developers can learn from people with experience. It’s a match made in heaven, and the end result is a larger pool of experienced developers. This problem could fix itself, if we were of a mind to let it. We all know this. So why don’t we do it?

At the other end of the spectrum, I hear many, many managers say “This person is too experienced to be a developer”. And at the same time, I hear many, many very experienced developers struggle to find work. This, too, is a problem that creates itself. Typically, there are two reasons why managers rule out the most experienced developers:

  • They expect to get paid more (because they usually achieve more)
  • They won’t put up with your bulls**t

Less experienced developers may be more malleable in terms of how much – or how little – you can pay them, how much unpaid overtime they may be willing to tolerate, and how willing they might be to cut corners when ordered to. They may have yet to learn what their work is really worth to you. They may have yet to experience burnout. They may have yet to live with a large amount of technical debt.

Developers with 20+ years’ experience, who’ve been around the block a few times and know the score, don’t fit into the picture of developers as fungible resources.

By freezing out inexperienced developers and very experienced developers, employers create the exact situation they complain endlessly about – lack of experienced developers. If it were my company and my money on the line, I’d hire developers with decades of experience specifically to mentor the inexperienced developers with potential I’d also be hiring.

Many employers, of course, argue that training up developers is too much of a financial risk. Once they’re good enough, won’t they leave for a better job? The clue is in the question, though. They leave for a better job. Better than the one you’re offering them once they qualify. Don’t just be a stepping stone, be a tropical island – a place they would want to stay and work.

If you think training up developers is going to generate teams of cheaper developers who’ll work harder and longer for less, then – yes – they’ll leave at the first opportunity. Finding a better job won’t be hard.

Are Only 1 in 4 Developers Trusted?

Following on from my previous post about Trust & Agility, I ran a straw poll on the @codemanship Twitter account as a sort of litmus test of how much trust organisations put in their developers.

A new monitor is a decision of the same order of magnitude of risk as the proverbial “$100 decision” I’ve used for many years now when assessing the capacity for trust – and therefore for decentralised decision-making – that agility requires.

If I wanted a new monitor for my home office, that would be entirely my decision. There was, of course, a time when it would not have been. That was when I was a child. If 13-year-old me wanted a new monitor, I would have had to refer that decision up to my parents.

When I got jobs, I made my own money and bought my own monitors. That’s called “being a grown-up”.

I’ve worked in organisations where even junior developers were handed corporate credit cards – with monthly limits – so they could “just go get it”, and not bother management with insignificant details. Smaller decisions get made when they need to be made, and they don’t get caught in a bottleneck. It really works.

So why are 3 out of 4 developers still not trusted to be grown-ups?

UPDATE:

A few people have got in touch giving me the impression that they took this to be literally about buying monitors or about company credit cards. These are just simple examples, and they are often indicators of a wider problem. Developers who I’ve seen wait weeks for a monitor, or a laptop, or a software license, have tended to be the ones who “wanted to do TDD but weren’t allowed”, or who “wanted 20% but the boss said ‘no’ “, or who wanted to go to a conference but were denied. These tend to be the teams that can’t self-organise to get things done. In my 20 years’ experience of Agile Software Development, the correlation has been very close between teams of developers who have to ask permission for a new monitor – just as an example (IT’S NOT ABOUT BUYING MONITORS!) – and teams who have to ask for permission to automate deployments or to talk to the end users directly.

Trust & Agility

I find it useful to revisit the Manifesto for Agile Software Development when I think I need a reminder of what it was originally all about. One of the 12 principles of Agile software is:

Build projects around motivated individuals.
Give them the environment and support they need,
and trust them to get the job done.

Now, it’s just one little word, buried in that text, but in my experience it makes all the difference: trust.

In two decades of kind of sort of doing Agile Software Development, I’ve found one simple metric that has tended to be the best predictor of whether an organisation is capable of agility: how much does it cost here to make a $100 decision?

Let me give you an example. I worked on a contract once where the team needed a couple of whiteboards in the room. I’ve bought those kinds of whiteboards. I know what they cost (a few hundred pounds), and I know they can be acquired within 24 hours very easily.

If I want a whiteboard for my home office, I go online to one of the thousands of retailers who sell them, whip out my credit card, and that whiteboard will be in my office and covered with boxes and arrows the next morning.

In this particular client organisation it took multiple meetings and hours of everyone’s time. It got escalated to the IT director, who – presumably – had nothing more important to think about. We spent thousands of pounds deciding to buy a few hundred pounds-worth of whiteboards, and for several weeks we struggled with design discussions at a critically early stage of the work for the want of a few square metres of something cheap we could draw on. And in the end, the IT director didn’t buy us two basic boards. He bought us a £4,000 electronic whiteboard that we never used. Eventually, real whiteboards did appear, but not before a heap more discussions.

Now, it shouldn’t have come as a surprise, because the organisation in question was in the business of selling shrink-wrapped bureaucracy. But the whiteboard debacle turned out to be an omen of what was to come. I’ve seen that many, many times in my career.

These days, I test for that metric when I get the chance. As a small, niche supplier of training and coaching services, it’s in my interest to try and avoid overly bureaucratic clients. The cost of doing business with them often outweighs the benefits. And, let’s face it, their developers are unlikely to be given the time and support they need to learn code craft. Agility is not for them.

What has this got to do with trust? And what does trust have to do with agility?

It’s simple: the reason we couldn’t just go out and buy whiteboards is because the managers didn’t trust us to make that decision. So we had to wait. And wait. And wait.

Agility demands that the people doing the work, whenever possible, make the decisions about how the work is done. Management have to support them in that, and remove any obstacle getting in the team’s way. Lack of whiteboard space in the early iterations of a design process was an obstacle, created by lack of trust, that cost us time and money and stopped us getting stuff done.

Want stuff to get done? Then you need to trust the people doing it to just get on with it. Escalating what are relatively trivial decisions costs time. A whiteboard. A monitor. More memory. A WebStorm license. These are all small, low-risk decisions that hold teams back when they’re delayed.

Making small, low-risk decisions quickly is foundational to agility. Fail fast. If an organisation can’t make a $100 decision quickly and easily, then the more it costs them to decide and to act, the further agility will be out of reach.

Sure, if the team wants to move to Sweden, or buy a minibus, that size of decision will need to be discussed at a higher level of budgetary authority. But a whiteboard? Or a laptop? Or – goddammit – stationery?! The sad fact is that far too many teams have absolutely no budgetary discretion at all (unless they want to have a meeting, in which case – bizarrely – the sky’s the limit).

Of course, some will say “Trust has to be earned”, but for teams this is a Catch 22. They have to deliver to earn the trust to make their own decisions, but they’re less likely to deliver if they can’t make those decisions. It’s a self-fulfilling prophecy. Teams who aren’t trusted will struggle to earn trust.

My solution has always been to trust myself and make the decision anyway. This has not gone down well with some managers, who demand compliance from their teams. This is why I and the corporate world have tended to be incompatible, and why I now run my own business and buy my own whiteboards.

Lack of trust is usually a symptom of fear of failure. It’s a learned behaviour built on historical failures – usually by completely different teams of completely different people. When we refuse to give Team A autonomy because Teams B, C, and D failed in the past, then we’re punishing the children for the sins of their ancestors.

It takes courage – one of the four core Agile values – to press the reset button and say “Okay I’m going to let Team A do their thing and I won’t interfere unless they screw up”. It takes even more courage – and honesty – to recognise that a team that’s not delivering might actually be being held back by our lack of trust, and to relinquish some control.

This is why so many organisations fail to achieve agility. Managers are afraid to take their hands off the wheel and let the team drive. So many “Agile transformations” – especially at scale – push in the exact opposite direction, attempting to impose processes and standards that constrain teams while giving managers nothing more than a greater illusion of control. This is the antithesis of agility.

I’ve only seen it work when managers genuinely empower their teams. As terrifying as it may sound to hand a company credit card to people who cover their laptops in Marvin The Martian stickers and come to work on tiny scooters, the alternative is that you accept the inevitable bureaucracy that will result if you won’t trust your teams.

 

Code, Meet World #NoBacklogs

Do we dare to imagine Agile without backlogs? How would that work? How would we know what to build in this iteration? How would we know what’s coming in future iterations?

Since getting into Agile Software Development (and its precursors, like DSDM and Extreme Programming), I’ve gradually become convinced that most of us have been doing it fundamentally wrong.

It’s a question of emphasis. What I see is thousands of teams working their way through a product plan. They deliver increments of the plan every week or three, and the illusion of being driven by feedback instead of by the plan is created by showing each increment to a “customer” and asking “Waddaya think?”

In the mid-90s, I worked as the sole developer on a project where my project managers – two of them! – would make me update their detailed plan every week. It was all about delivering the plan, and every time the plan bumped into reality, the whole 6-month plan had to be updated all over again.

This was most definitely plan-driven software development. We planned, planned, and planned again. And at no point did anybody suggest that maybe spending 1 day a week updating a detailed long-term plan might be wasted effort.

Inspired by that experience, I investigated alternative approaches to planning that could work in the real world. And I discovered one in a book by Tom Gilb called Principles of Software Engineering Management. Tom described an approach to planning that chimed with my own experience of how real projects worked in the real world.

It’s a question of complexity. Weather is very, very complicated. And this makes weather notoriously difficult to predict in detail. The further ahead we look, the less accurate our predictions tend to be. What will the weather be like tomorrow? We can take a pretty good guess with the latest forecasting models. What will the weather be like the day after tomorrow? We can take a guess, but it’s less likely to be accurate. What will the weather be like 6 weeks next Tuesday? Any detailed prediction is very likely to be wrong. That’s an inherent property of complex systems: they’re unpredictable in the long term.

Software development is also complex. Complex in the code, its thousands of component parts, and the interactions between them. Complex in the teams, which are biological systems. Complex in how the software will interact with the real world. There are almost always things we didn’t think of.

So the idea that we can predict what features a software system will need in detail, looking months – or even just weeks ahead – seemed a nonsense to me.

But, although complex systems can be inherently unpredictable in detail, they tend to be – unless they’re completely chaotic – roughly predictable in general terms.

We can’t tell you what the exact temperature will be outside the Dog & Fox in Wimbledon Village at 11:33am on August 13th 2025, but we can pretty confidently predict that it will not be snowing.

And we can’t be confident that we’ll definitely need a button marked “Sort A-Z” on a web page titled “Contacts” that displays an HTML table of names and addresses, to be delivered 3 months from now in the 12th increment of a React web application. But we can be confident that users will need to find an address to send their Christmas card to.

The further we look into the future, the less detailed our predictions need to become if they are to be useful in providing long-term direction. And they need to be less detailed to avoid the burden of continually updating a detailed plan that we know is going to change anyway.

This was a game-changer for me. I realised that plans are perishable goods. Plans rot. Curating a detailed 6-month plan, to me, was like buying a 6-month supply of tomatoes. You’ll be back at the supermarket within a fortnight.

I also realised that you’ve gotta eat those tomatoes before they go bad. It’s the only way to know if they’re good tomatoes. Features delivered months after they were conceived are likely rotten – full of untested assumptions piled on top of untested assumptions about what the end users really need. In software, details are very highly biodegradable.

So we need to test our features before they go off. And the best place to test them is in the real world, with real end users, doing real work. Until our code meets the real world, it’s all just guesswork.

Of course, in some domains, releasing software into production every week – or every day even – is neither practical nor desirable. I wouldn’t necessarily recommend it for a nuclear power station, for example.

And in these situations where releases create high risk, or high disruption to end users, we can craft simulated release environments where real end users can try the software in an as-real-as-we-can-make-it world.

If detailed plans are only likely to survive until the next release, and if the next release should be at most a couple of weeks away, then arguably we should only plan in detail – i.e., at the level of features – up to the next release.

Beyond that, we should consider general goals instead of features. In each iteration, we ask “What would be the simplest set of features that might achieve this goal?” If the feature set is too large to fit in that iteration, we can break the goal down. We build that feature set, and release for end user testing in the real (or simu-real) world to see if it works.

Chances are, it won’t work. It might be close, but usually there’s no cigar. So we learn from that iteration and feed the lessons back in to the next. Maybe extra features are required. Maybe features need tweaking. Maybe we’re barking up the wrong tree and need to completely rethink.

Each iteration is therefore about achieving a goal (or a sub-goal), not about delivering a set of features. And the output of each release is not features, but what we learn from watching real end users try to achieve their goal using the features. The output of software releases is learning, not features.

This also re-frames the definition of “done”. We’re not done because we delivered the features. We’re only done when we’ve achieved the goal. Maybe we do that in one iteration. Maybe we need more throws of the dice to get there.

So this model of software development sees cross-functional teams working as one to achieve a goal, potentially making multiple passes at it, and working one goal at a time. The goal defines the team. “We’re the team who enables diners to find great Chinese food”. “We’re the team who puts guitar players in touch with drummers in their town.” “We’re the team who makes sure patients don’t forget when their repeat prescriptions need to be ordered.”

Maybe you need a programmer to make that happen. Maybe you need a web designer to make that happen. Maybe you need a database expert to make that happen. The team is whoever you need to achieve that goal in the real world.

Now I look at the current state of Agile, and I see so many teams munching their way through a list of features, and so few teams working together to solve real end users’ problems. Most teams don’t even meet real end users, and never see how the software gets used in the real world. Most teams don’t know what problem they’re setting out to solve. Most teams are releasing rotten tomatoes, and learning little from each release.

And driving all of this, most teams have a dedicated person who manages that backlog of features, tweaking it and “grooming” it every time the plan needs to change. This is no different to my experiences of updating detailed project plans in the 90s. It’s plan-driven development, plain and simple.

Want to get unstuck from working through a detailed long-term plan? Then ditch the backlog, get with your real customer, and start solving real problems together in the real world.

Do We Confuse Work With Value?

Here’s a little thought experiment. Acme Widgets need a new stock control system. They invite software development companies to bid. Big Grey IT Corporation put forward a bid for the system that will take 20 weeks with a team of 50 developers, each developer getting paid £2,000 a week. Their total bid for the system is £2 million.

Acme’s CEO is about to sign off on the work when he gets a call from Whizzo Agile Inc, who have read the tender specification and claim they can deliver the exact same system in the exact same time with a team of just 4 crack developers.

Assuming both teams would deliver the exact same outcome at the exact same time (and, yes, in reality a team of 50 would very probably deliver late or not at all, statistically), how much should Whizzo Agile charge?

It’s a purely hypothetical question, but I’ve seen similar scenarios play out in real life. A government department invites bids for an IT system. The big players put in ludicrously huge bids (e.g., £400 million for a website), and justify them by massive over-staffing of the project. Smaller players – even the ones who can actually afford to tender and who have the government contacts – tend to get shut out, no matter how good their bids look.

And I’ve long wondered if the problem here is how we tend to confuse work with value. Does that calculation at the back of the customer’s mind go something like: “Okay, so Option A is £2M for 5,000 developer days, and Option B is £0.5M for 400 developer days”? With Option A, the customer calculates they’ll get more work for their money.

But the value delivered is identical (in theory – in reality a smaller team would probably do a better job). They get the same software in the same time frame, and they get it at a quarter of the price. It’s just that the developers on the small team get paid a lot more to deliver it. I’ve watched so many managers turn down Option B over the last 30 years.

A company considering a bid of £2M to build a software system is announcing that the value of that system to their business is significantly more than £2M. Let’s say, for the sake of argument, it’s double: £4M. Option A brings them a return of £2M (£4M value minus the £2M fee). Option B brings a return of £3.5M (£4M minus £0.5M). In those terms, is it a good business decision to go with Option A?

In other areas of business, choosing to go with Option A would get you sacked. Why, in software development, is it so often the other way around?

 

The Value’s Not In Features, It’s In Learning

A small poll I ran on the Codemanship Twitter account seems to confirm what I’ve observed and heard in the field about “agile” development teams still being largely plan-driven.

If you’re genuinely feedback-driven, then your product backlog won’t survive much further out than the next cycle of customer feedback. Maintaining backlogs that look months ahead is a sign that just maybe you’re incrementally working through a feature list instead of iteratively solving a set of business problems.

And this cuts to the core of a major, fundamental malaise in contemporary Agile Software Development. Teams are failing to grasp that the “value” that “flows” in software development is in what we learn with each iteration, not in the features themselves.

Perhaps a better name for “features” might be “guesses” – we’re guessing what might be needed to solve a problem. We won’t know until we’ve tried, though. So each release is a vital opportunity to test our assumptions and feed back what we learn into the next release.

I see teams vigorously defending their product backlogs from significant change, and energetically avoiding feedback that might reveal that we got it wrong this time. Folk have invested a lot in the creation of their backlog – often envisioning a whole product in significant detail – and can take it pretty personally when the end users say “Nope, this isn’t working”.

With a first release – when our code meets the real world for the first time – I expect a lot of change to a product vision. With learning and subsequent iterations of the design, the product vision will usually stabilise. But when we track how much the backlog changes with each release on most teams, we see mostly tweaking. Initial product visions – which, let’s be clear, are just educated guesses at best – tend to remain largely intact. Once folk are invested in a solution, they struggle to let go of it.

Teams with a strong product vision often suffer from confirmation bias when considering end user feedback. (Remember: the “customer” on many products is just as invested in the product vision if they’ve actively participated in its creation.) Feedback that supports their thesis tends to be promoted. Feedback that contradicts their guesswork gets demoted. It’s just human nature, but its skewing effect on the design process usually gets overlooked.

The best way to avoid becoming wedded to a detailed product vision or plan is not to have a detailed product vision or plan. Assume as little as possible to make something simple we can learn from in the next go-round. Focus on achieving long-term goals, not on delivering detailed plans.

In simpler terms: ditch the backlogs.