Are Only 1 in 4 Developers Trusted?

Following on from my previous post about Trust & Agility, I ran a straw poll on the @codemanship Twitter account as a sort of litmus test of how much trust organisations put in their developers.

A new monitor is a decision of the same order of magnitude of risk as the proverbial “$100 decision” I’ve used for many years when assessing an organisation’s capacity for trust – and therefore for the decentralised decision-making that agility requires.

If I wanted a new monitor for my home office, that would be entirely my decision. There was, of course, a time when it would not have been. That was when I was a child. If 13-year-old me wanted a new monitor, I would have had to refer that decision up to my parents.

When I got jobs, I made my own money and bought my own monitors. That’s called “being a grown-up”.

I’ve worked in organisations where even junior developers were handed corporate credit cards – with monthly limits – so they could “just go get it”, and not bother management with insignificant details. Smaller decisions get made when they need to be made, and they don’t get caught in a bottleneck. It really works.

So why are 3 out of 4 developers still not trusted to be grown-ups?

UPDATE:

A few people have got in touch giving me the impression that they took this to be literally about buying monitors, or about company credit cards. These are just simple examples, and they often indicate a wider problem. The developers I’ve seen wait weeks for a monitor, or a laptop, or a software license have tended to be the ones who “wanted to do TDD but weren’t allowed”, or who “wanted 20% time but the boss said ‘no’”, or who wanted to go to a conference but were denied. These tend to be the teams that can’t self-organise to get things done. In my 20 years’ experience of Agile Software Development, there’s been a very close correlation between teams of developers who have to ask permission for a new monitor – just as an example (IT’S NOT ABOUT BUYING MONITORS!) – and teams who have to ask permission to automate deployments or to talk to end users directly.

Trust & Agility

I find it useful to revisit the Manifesto for Agile Software Development when I think I need a reminder of what it was originally all about. One of the 12 principles behind the manifesto is:

Build projects around motivated individuals.
Give them the environment and support they need,
and trust them to get the job done.

Now, it’s just one little word, buried in that text, but in my experience it makes all the difference: trust.

In two decades of kind of sort of doing Agile Software Development, I’ve found one simple metric that has tended to be the best predictor of whether an organisation is capable of agility: how much does it cost here to make a $100 decision?

Let me give you an example. I worked on a contract once where the team needed a couple of whiteboards in the room. I’ve bought those kinds of whiteboards. I know what they cost (a few hundred pounds), and I know they can be acquired within 24 hours very easily.

If I want a whiteboard for my home office, I go online to one of the thousands of retailers who sell them, whip out my credit card, and that whiteboard will be in my office and covered with boxes and arrows the next morning.

In this particular client organisation, it took multiple meetings and hours of everyone’s time. It got escalated to the IT director, who – presumably – had nothing more important to think about. We spent thousands of pounds deciding to buy a few hundred pounds’ worth of whiteboards, and for several weeks we struggled with design discussions at a critically early stage of the work for want of a few square metres of something cheap we could draw on. And in the end, the IT director didn’t buy us two basic boards. He bought us a £4,000 electronic whiteboard that we never used. Eventually, real whiteboards did appear, but not before a heap more discussions.

Now, it shouldn’t have come as a surprise, because the organisation in question was in the business of selling shrink-wrapped bureaucracy. But the whiteboard debacle turned out to be an omen of what was to come. I’ve seen that many, many times in my career.

These days, I test for that metric when I get the chance. As a small, niche supplier of training and coaching services, it’s in my interest to try and avoid overly bureaucratic clients. The cost of doing business with them often outweighs the benefits. And, let’s face it, their developers are unlikely to be given the time and support they need to learn code craft. Agility is not for them.

What has this got to do with trust? And what does trust have to do with agility?

It’s simple: the reason we couldn’t just go out and buy whiteboards is that the managers didn’t trust us to make that decision. So we had to wait. And wait. And wait.

Agility demands that the people doing the work, whenever possible, make the decisions about how the work is done. Management have to support them in that, and remove any obstacles in the team’s way. Lack of whiteboard space in the early iterations of a design process was an obstacle, created by lack of trust, that cost us time and money and stopped us getting stuff done.

Want stuff to get done? Then you need to trust the people doing it to just get on with it. Escalating relatively trivial decisions costs time. A whiteboard. A monitor. More memory. A WebStorm license. These are all small, low-risk decisions that hold teams back when they’re delayed.

Making small, low-risk decisions quickly is foundational to agility. Fail fast. If an organisation can’t make a $100 decision quickly and easily, then the more it costs them to decide and to act, the further agility will be out of reach.

Sure, if the team wants to move to Sweden, or buy a minibus, that size of decision will need to be discussed at a higher level of budgetary authority. But a whiteboard? Or a laptop? Or – goddammit – stationery?! The sad fact is that far too many teams have absolutely no budgetary discretion at all (unless they want to have a meeting, in which case – bizarrely – the sky’s the limit).

Of course, some will say “trust has to be earned”, but for teams this is a Catch-22. They have to deliver to earn the trust to make their own decisions, but they’re less likely to deliver if they can’t make those decisions. It’s a self-fulfilling prophecy: teams who aren’t trusted will struggle to earn trust.

My solution has always been to trust myself and make the decision anyway. This has not gone down well with some managers, who demand compliance from their teams. This is why I and the corporate world have tended to be incompatible, and why I now run my own business and buy my own whiteboards.

Lack of trust is usually a symptom of fear of failure. It’s a learned behaviour built on historical failures – usually by completely different teams of completely different people. When we refuse to give Team A autonomy because Teams B, C, and D failed in the past, then we’re punishing the children for the sins of their ancestors.

It takes courage – one of the four core values of Extreme Programming – to press the reset button and say “Okay, I’m going to let Team A do their thing and I won’t interfere unless they screw up”. It takes even more courage – and honesty – to recognise that a team that’s not delivering might actually be being held back by our lack of trust, and to relinquish some control.

This is why so many organisations fail to achieve agility. Managers are afraid to take their hands off the wheel and let the team drive. So many “Agile transformations” – especially at scale – push in the exact opposite direction, attempting to impose processes and standards that constrain teams while giving managers nothing more than a greater illusion of control. This is the antithesis of agility.

I’ve only seen it work when managers genuinely empower their teams. As terrifying as it may sound to hand a company credit card to people who cover their laptops in Marvin The Martian stickers and come to work on tiny scooters, the alternative is accepting the inevitable bureaucracy that results when you won’t trust your teams.


Extreme Programming Is Dead. Long Live Extreme Programming.

So, hey. Back in them olden times, when architects ruled the world and methodology was an actual thing – despite nobody having any idea what that meant, and I can tell you that from first-hand experience – there was a way of delivering working software that end users could actually use.

It went unarticulated for decades, until some folk actually articulated it. They gave it a name, and that name was – disappointingly – “Extreme Programming”. To quote The Hitchhiker’s Guide To The Galaxy, when Deep Thought named the computer that would come after it to work out the question of life, the universe and everything: “Oh”.

But don’t be fooled by the name. XP is how programmers writing software for customers had been winning at it for decades. There’s literally nothing new in it. Its brilliance is in naming the things those programmers were doing. It gave those practices form. It gave those programmers a voice.

There’s nothing extreme about it. Unless you think that delivering software customers need is extreme.

But since the rise of “Agile”, those software development truisms have been lost in a fog of the same illusion of control that gave rise to the Big Methods of the 1990s. SAFe is just SSADM with agile tinsel draped around it. LeSS is even less than that. Scrum has become a monster, spewing out an exponentially growing army of consultants, 70% of whom have never delivered a working software product in their lives.

Cut through all that crap and what’s left is those decades-old insights into what delivers reliable, maintainable working software that customers can change their minds about. And those Big Agile ideas may now dominate the world of Big IT, but teams who dare to deliver working software still cleave to those original insights. There are no shortcuts.


A Programmer’s “Breadboard”?

I’m a fan of rapid feedback. When I write software, I prefer to find out if I’m on the right track as soon as possible.

There are all sorts of techniques I’ve tried for getting customer feedback without having to go to the effort of delivering production-quality software, which is a time-consuming and expensive way of getting feedback.

I’ve tried UI wire frames and storyboards, which do tend to get the message across, but suffer from two major drawbacks: one is that we can’t run them and see if they work on a real problem, and the other is that we have to commit to a UI design early in order to explore the logic of our software. Once a design’s been “made flesh”, it tends to stick, even if much better designs are possible.

I’ve tried exploring with test cases – examples, basically – but have found them often too abstract for customers to get a real sense of the design.
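
To make that concrete, here’s a minimal sketch of the kind of example-based exploration I mean – Python, pytest-style, with a hypothetical contacts-lookup feature invented purely for illustration:

```python
# A hedged sketch of exploring a design through executable examples.
# The Contacts class and its behaviour are hypothetical, for illustration only.

class Contacts:
    def __init__(self, entries):
        # entries is a list of (name, address) pairs
        self.entries = entries

    def find_address(self, name):
        # Return the address for an exact name match, or None if absent
        return next((addr for n, addr in self.entries if n == name), None)


def test_finds_address_for_known_contact():
    contacts = Contacts([("Alice", "1 Acacia Ave"), ("Bob", "2 Brook St")])
    assert contacts.find_address("Bob") == "2 Brook St"


def test_returns_none_for_unknown_contact():
    contacts = Contacts([("Alice", "1 Acacia Ave")])
    assert contacts.find_address("Carol") is None
```

A customer can read examples like these and confirm the rules, but there’s nothing to click on or play with – which is exactly why this often feels too abstract to them.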

The thing I’ve tried that really gets the most valuable feedback is to put working software in front of end users and let them try it for themselves. But, like I said, working software is expensive to create.

In the 1990s, when Rapid Application Development was at its peak, tools appeared that allowed us to “slap together” working prototypes quickly. They typically had names with “Visual” or “Builder” in them, and they worked by dragging and dropping GUI components onto forms or windows or panels. We could bind those controls to databases and add a little code or a simple macro to glue it all together into something that kind of sort of did what we needed.

Then we would take our basic prototype to the customer (sometimes we actually created it in front of the customer), and let them take it for a spin. In fairly rapid iterations, we’d refine the prototype based on the customer’s feedback until we got it in the ballpark.

Then – and this is where it all went wrong – we’d say “Okay, great. Now can we have £500,000 to build it properly in C++?” And they’d say “No, thanks. I’m good with this version.” And then we’d ship the prototype that was made of twigs and string (and Visual Basic for Applications) and live with the very expensive consequences of allowing a business to rely on software that isn’t production quality. (Go to any bank and ask them how many Excel spreadsheets they rely on for enterprise, mission-critical applications. It’ll boggle your mind.)

The pioneers of Extreme Programming learned from this that we should never put working software in front of our customers that isn’t production-ready, because if they like it they’ll make us ship it.

Sketches don’t suffer this drawback, because we very obviously can’t ship a drawing on a whiteboard or in Visio. (Although we did try briefly in the late 90s and early 00s.)

Now, in electronics, it’s possible to create a working prototype that is very obviously not a finished product and that no customer would tell you to ship. Here’s guitar pedal designer Brian Wampler showing us the breadboard he uses to explore pedal designs.

This is the thing with a breadboard guitar pedal: you can plug a guitar into one end, plug the output into a real guitar amp, and hear how the pedal will sound.

It looks nothing like a finished pedal you would buy in a shop, and you certainly couldn’t take it on the road. A production-quality Wampler pedal is much smaller, much more robust and much more user-friendly.


Of course, these days, it’s entirely possible to design a pedal on a computer and simulate how it will sound. Some guitar amp manufacturers design their amps that way. But you still want players to be able to plug in their guitar and see how it sounds (and how it feels to play through it, which is tricky with software simulations because of latency).

So breadboards in guitar electronics design persist. There’s no substitute for the real thing (until virtual breadboards catch up).

And this all got me thinking: what do we in software have that’s like a breadboard?

Pictures won’t cut it because users can’t run a picture and play around with it. They have to use their imaginations to interpret how the software will respond to their actions. It’s like showing someone a set of production design sketches and asking them “So, how did you like the movie?”

High-fidelity prototypes won’t cut it because customers make us ship them, and the business landscape is already drowning in legacy systems from the 90s that were only intended to be for illustration purposes.

What I’m wondering is: is there a way of throwing together a working app quickly for customer feedback – Microsoft Access-style – that doesn’t bind us to a UI design too early, and that very obviously can’t be shipped? And, very usefully, one that could be evolved into a production-quality version without necessarily having to rewrite the whole thing from scratch?

Right now, nothing springs to mind.

(Talking of Scratch…)


Code, Meet World #NoBacklogs

Do we dare to imagine Agile without backlogs? How would that work? How would we know what to build in this iteration? How would we know what’s coming in future iterations?

Since getting into Agile Software Development (and its precursors, like DSDM and Extreme Programming), I’ve gradually become convinced that most of us have been doing it fundamentally wrong.

It’s a question of emphasis. What I see is thousands of teams working their way through a product plan. They deliver increments of the plan every week or three, and the illusion of being driven by feedback instead of by the plan is created by showing each increment to a “customer” and asking “Waddaya think?”

In the mid-90s, I worked as the sole developer on a project where my project managers – two of them! – would make me update their detailed plan every week. It was all about delivering the plan, and every time the plan bumped into reality, the whole 6-month plan had to be updated all over again.

This was most definitely plan-driven software development. We planned, planned, and planned again. And at no point did anybody suggest that maybe spending 1 day a week updating a detailed long-term plan might be wasted effort.

Inspired by that experience, I investigated alternative approaches to planning that could work in the real world. And I discovered one in a book by Tom Gilb called Principles of Software Engineering Management. Tom described an approach to planning that chimed with my own experience of how real projects worked in the real world.

It’s a question of complexity. Weather is very, very complicated, and this makes it notoriously difficult to predict in detail. The further ahead we look, the less accurate our predictions tend to be. What will the weather be like tomorrow? We can take a pretty good guess with the latest forecasting models. What will the weather be like the day after tomorrow? We can take a guess, but it’s less likely to be accurate. What will the weather be like six weeks from next Tuesday? Any detailed prediction is very likely to be wrong. That’s an inherent property of complex systems: they’re unpredictable in the long term.

Software development is also complex. Complex in the code, its thousands of component parts, and the interactions between them. Complex in the teams, which are biological systems. Complex in how the software will interact with the real world. There are almost always things we didn’t think of.

So the idea that we can predict what features a software system will need in detail, looking months – or even just weeks ahead – seemed a nonsense to me.

But, although complex systems can be inherently unpredictable in detail, they tend to be – unless they’re completely chaotic – roughly predictable in general terms.

We can’t tell you what the exact temperature will be outside the Dog & Fox in Wimbledon Village at 11:33am on August 13th 2025, but we can pretty confidently predict that it will not be snowing.

And we can’t be confident that we’ll definitely need a button marked “Sort A-Z” on a web page titled “Contacts” that displays an HTML table of names and addresses, to be delivered 3 months from now in the 12th increment of a React web application. But we can be confident that users will need to find an address to send their Christmas card to.

The further we look into the future, the less detailed our predictions need to become if they are to be useful in providing long-term direction. And they need to be less detailed to avoid the burden of continually updating a detailed plan that we know is going to change anyway.

This was a game-changer for me. I realised that plans are perishable goods. Plans rot. Curating a detailed 6-month plan, to me, was like buying a 6-month supply of tomatoes. You’ll be back at the supermarket within a fortnight.

I also realised that you’ve gotta eat those tomatoes before they go bad. It’s the only way to know if they’re good tomatoes. Features delivered months after they were conceived are likely rotten – full of untested assumptions piled on top of untested assumptions about what the end users really need. In software, details are very highly biodegradable.

So we need to test our features before they go off. And the best place to test them is in the real world, with real end users, doing real work. Until our code meets the real world, it’s all just guesswork.

Of course, in some domains, releasing software into production every week – or every day even – is neither practical nor desirable. I wouldn’t necessarily recommend it for a nuclear power station, for example.

And in these situations where releases create high risk, or high disruption to end users, we can craft simulated release environments where real end users can try the software in an as-real-as-we-can-make-it world.

If detailed plans are only likely to survive until the next release, and if the next release should be at most a couple of weeks away, then arguably we should only plan in detail – i.e., at the level of features – up to the next release.

Beyond that, we should consider general goals instead of features. In each iteration, we ask “What would be the simplest set of features that might achieve this goal?” If the feature set is too large to fit in that iteration, we can break the goal down. We build that feature set, and release for end user testing in the real (or simu-real) world to see if it works.

Chances are, it won’t work. It might be close, but usually there’s no cigar. So we learn from that iteration and feed the lessons back in to the next. Maybe extra features are required. Maybe features need tweaking. Maybe we’re barking up the wrong tree and need to completely rethink.

Each iteration is therefore about achieving a goal (or a sub-goal), not about delivering a set of features. And the output of each release is not features, but what we learn from watching real end users try to achieve their goal using the features. The output of software releases is learning, not features.

This also re-frames the definition of “done”. We’re not done because we delivered the features. We’re only done when we’ve achieved the goal. Maybe we do that in one iteration. Maybe we need more throws of the dice to get there.
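
To picture the shape of that loop, here’s a minimal sketch in Python. Everything in it – the callables, the names – is a hypothetical placeholder to illustrate the process, not a real framework:

```python
# A hedged sketch of goal-driven iteration: guess a feature set, release it,
# learn from real users, repeat. All callables are invented placeholders.

def pursue_goal(goal_achieved, guess_feature_set, release_and_observe,
                max_iterations=10):
    """Iterate on one goal, potentially making multiple passes at it."""
    lessons = None
    for iteration in range(1, max_iterations + 1):
        features = guess_feature_set(lessons)    # our best guess this time round
        lessons = release_and_observe(features)  # the real output: what we learned
        if goal_achieved(lessons):               # "done" = goal achieved...
            return iteration                     # ...not features delivered
    return None                                  # goal still not achieved; rethink
```

Note where “done” lives in the sketch: it’s a test of the goal against what we learned from real use, not a tick against a feature list.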

So this model of software development sees cross-functional teams working as one to achieve a goal, potentially making multiple passes at it, and working one goal at a time. The goal defines the team. “We’re the team who enables diners to find great Chinese food”. “We’re the team who puts guitar players in touch with drummers in their town.” “We’re the team who makes sure patients don’t forget when their repeat prescriptions need to be ordered.”

Maybe you need a programmer to make that happen. Maybe you need a web designer to make that happen. Maybe you need a database expert to make that happen. The team is whoever you need to achieve that goal in the real world.

Now I look at the current state of Agile, and I see so many teams munching their way through a list of features, and so few teams working together to solve real end users’ problems. Most teams don’t even meet real end users, and never see how the software gets used in the real world. Most teams don’t know what problem they’re setting out to solve. Most teams are releasing rotten tomatoes, and learning little from each release.

And driving all of this, most teams have a dedicated person who manages that backlog of features, tweaking it and “grooming” it every time the plan needs to change. This is no different to my experiences of updating detailed project plans in the 90s. It’s plan-driven development, plain and simple.

Want to get unstuck from working through a detailed long-term plan? Then ditch the backlog, get with your real customer, and start solving real problems together in the real world.

Do We Confuse Work With Value?

Here’s a little thought experiment. Acme Widgets need a new stock control system. They invite software development companies to bid. Big Grey IT Corporation put forward a bid for the system that will take 20 weeks with a team of 50 developers, each developer getting paid £2,000 a week. Their total bid for the system is £2 million.

Acme’s CEO is about to sign off on the work when he gets a call from Whizzo Agile Inc, who have read the tender specification and claim they can deliver the exact same system in the exact same time with a team of just 4 crack developers.

Assuming both teams would deliver the exact same outcome at the exact same time (and, yes, in reality a team of 50 would very probably deliver late or not at all, statistically), how much should Whizzo Agile charge?

It’s a purely hypothetical question, but I’ve seen similar scenarios play out in real life. A government department invites bids for an IT system. The big players put in ludicrously huge bids (e.g., £400 million for a website), and justify them by massive over-staffing of the project. Smaller players – the ones who can actually afford to tender and who have the government contacts – tend to get shut out, no matter how good their bids look.

And I’ve long wondered if the problem here is how we tend to confuse work with value. Does that calculation at the back of the customer’s mind go something like: “Okay, so Option A is £2M for 5,000 developer days, and Option B is £0.5M for 400 developer days”? With Option A, the customer calculates they’ll get more work for their money.

But the value delivered is identical (in theory – in reality, a smaller team would probably do a better job). They get the same software in the same time frame, and they get it at a quarter of the price. It’s just that the developers on the small team get paid a lot more to deliver it. I’ve watched so many managers turn down Option B over the last 30 years.

A company considering a bid of £2M to build a software system is announcing that the value of that system to their business is significantly more than £2M. Let’s say, for the sake of argument, it’s double. Option A brings them a return of £2M. Option B brings a return of £3.5M. In those terms, is it a good business decision to go with Option A?
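
Spelled out as a quick calculation (the doubling of value is just the working assumption from this example):

```python
# Back-of-the-envelope ROI comparison, using the figures from the scenario above.
system_value = 4_000_000   # assumed: double Option A's £2M price

option_a_cost = 2_000_000  # 50 devs x 20 weeks x £2,000/week
option_b_cost = 500_000    # 4 devs x 20 weeks, at a much higher rate per dev

print(f"Option A return: £{system_value - option_a_cost:,}")  # £2,000,000
print(f"Option B return: £{system_value - option_b_cost:,}")  # £3,500,000
```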

In other areas of business, choosing to go with Option A would get you sacked. Why, in software development, is it so often the other way around?


The Value’s Not In Features, It’s In Learning

A small poll I ran on the Codemanship Twitter account seems to confirm what I’ve observed and heard in the field about “agile” development teams still being largely plan-driven.

If you’re genuinely feedback-driven, then your product backlog won’t survive much further out than the next cycle of customer feedback. Maintaining backlogs that look months ahead is a sign that just maybe you’re incrementally working through a feature list instead of iteratively solving a set of business problems.

And this cuts to the core of a major, fundamental malaise in contemporary Agile Software Development. Teams are failing to grasp that the “value” that “flows” in software development is in what we learn with each iteration, not in the features themselves.

Perhaps a better name for “features” might be “guesses” – we’re guessing what might be needed to solve a problem. We won’t know until we’ve tried, though. So each release is a vital opportunity to test our assumptions and feed back what we learn into the next release.

I see teams vigorously defending their product backlogs from significant change, and energetically avoiding feedback that might reveal that we got it wrong this time. Folk have invested a lot in the creation of their backlog – often envisioning a whole product in significant detail – and can take it pretty personally when the end users say “Nope, this isn’t working”.

With a first release – when our code meets the real world for the first time – I expect a lot of change to a product vision. With learning and subsequent iterations of the design, the product vision will usually stabilise. But when we track how much the backlog changes with each release on most teams, we see mostly tweaking. Initial product visions – which, let’s be clear, are just educated guesses at best – tend to remain largely intact. Once folk are invested in a solution, they struggle to let go of it.

Teams with a strong product vision often suffer from confirmation bias when considering end user feedback. (Remember: the “customer” on many products is just as invested in the product vision if they’ve actively participated in its creation.) Feedback that supports their thesis tends to be promoted. Feedback that contradicts their guesswork gets demoted. It’s just human nature, but its skewing effect on the design process usually gets overlooked.

The best way to avoid becoming wedded to a detailed product vision or plan is not to have a detailed product vision or plan. Assume as little as possible to make something simple we can learn from in the next go-round. Focus on achieving long-term goals, not on delivering detailed plans.

In simpler terms: ditch the backlogs.