When I teach teams and managers about the feedback loops of software development, I try to stress the two most important loops – the ones that define agility.
Releases are where the value gets delivered, and – more importantly – where the end user feedback starts, so we can learn what works and what doesn’t and adapt for the next release.
The sooner we can release, the sooner we can start learning and adapting. So agile teams release frequently.
But frequency of releases doesn’t really define agility. I see teams who release every day or every week, but feature requests still take months to get into production.
That feature-to-production lead time is our best measure of how responsive to change we’re really being. How soon can we adapt to customer feedback?
For a portfolio of software products, a client of mine plotted average feature-to-production lead times against the average time it took to build and test the product.
We see a correlation between that feature-to-production lead time and the innermost loop of software delivery – build & test time.
Of course, this is a small data set, and all the usual caveats about “lies, damned lies and statistics” apply (I would love to do a bigger study, if anyone’s interested in participating).
But I’ve seen this distribution multiple times, and experienced it – and observed many, many teams experiencing it – in the field.
Products with slow build & test cycles tend to have much older backlogs. Indeed, backlogs themselves are a sign of slow lead times. I explained the causal mechanism for this in a previous post about Inner-Loop Agility.

When we want to optimise nested loops, we get the biggest improvements in overall cycle time by focusing on the innermost loop.
Now, here’s the thing: everything that goes on between releases is really just guesswork. The magic happens when real end users get real working software, and we get to see how good our guesses were, and make more educated guesses in the next release cycle. We learn our way to value.
That’s why Inner-Loop Agility is so important, and why I’ve chosen to focus entirely on it as a trainer, coach and consultant. I can’t guarantee that you’re building the right thing (you almost certainly aren’t, no matter how well you plan), but I can offer you more throws of the dice.
Over the last couple of decades, I’ve witnessed more than my fair share of “Agile transformations”, and seen most of them produce disappointing results. In this post, I’m going to explain why they failed, and propose a way to beat the trend.
First of all, we should probably ask ourselves: what is an Agile transformation? This might seem like an obvious question, but you’d be surprised just how difficult it is to pin down any kind of accepted definition.
For some, it’s a process of adopting certain processes and practices, like the rituals of Scrum. If we do the rituals, then we’re Agile. Right?
Not so fast, buddy!
This is what many call “Cargo Cult Agility”. If we wear the right clothes and make offerings to the right gods, we’ll be Agile.
If we lose the capital “A”, and talk instead about agility, what is the goal of an agile transformation? To enable organisations to change direction quickly, I would argue.
How do we make organisations more responsive to change? The answer lies in that organisation’s feedback loops.
In software development, the most important feedback loop comes from delivering working software and systems to end users. Until our code hits the real world, it’s all guesswork.
So if we can speed up our release cycles so we can get more feedback sooner, and maintain the pace of those releases for as long as the business needs us to – i.e., the lifetime of that software – then we can effectively out-learn our competition.
Given how important the release cycle is, then, it’s no surprise that most Agile (with a capital “A”) transformations tend to focus on that feedback loop. But this is a fundamental mistake. The release cycle contains inner loops – wheels within wheels within wheels. If our goal is to speed up this outer feedback loop, we should be focusing most of our attention on the innermost feedback loops.
To understand why, let’s think about how we go about speeding up nested loops in code.
Here’s some code that loops through a collection of releases. Each release loops through a list of features, and each feature has a list of scenarios that the system has to handle to implement that feature. For each scenario, it runs a build & test cycle multiple times. It’s a little model of a software development process.
Think of the development process as a set of gears. The largest gear turns the slowest, and drives a smaller, faster gear, which drives an even smaller and faster gear and so on.
In each loop, I’ve built in a delay of 10 ms to approximate the overhead of performing that particular loop (e.g., 10 ms to plan a release).
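The original listing isn’t reproduced here, but a minimal sketch in Python captures the idea. The loop counts – ten releases, ten features per release, ten scenarios per feature, ten build & test cycles per scenario – are assumptions, chosen so the baseline run lands close to the timings quoted below:

```python
import time

def run_process(release_delay, feature_delay, scenario_delay, build_test_delay,
                releases=10, features=10, scenarios=10, build_test_cycles=10):
    """Toy model: the nested feedback loops of a development process.

    The loop counts are assumptions - ten of everything - so the innermost
    loop runs 10,000 times while the outermost runs only 10 times.
    """
    for _ in range(releases):
        time.sleep(release_delay)              # e.g. 10 ms to plan a release
        print("RELEASE")
        for _ in range(features):
            time.sleep(feature_delay)          # overhead of specifying a feature
            print("--FEATURE")
            for _ in range(scenarios):
                time.sleep(scenario_delay)     # overhead of exploring a scenario
                print("----SCENARIO")
                for _ in range(build_test_cycles):
                    time.sleep(build_test_delay)   # the innermost loop
                    print("------BUILD & TEST")

# Baseline: 10 ms of overhead in every loop.
run_process(0.01, 0.01, 0.01, 0.01)
```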
When I run this code, it takes 1 m 53 s to execute. Our release cycles are slow.
Now, here’s where most Agile transformations go wrong. They focus most of their attention on those outer loops. This produces very modest improvements in release cycle time.
Let’s “optimise” the three outer loops, reducing the delay by 90%.
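In the sketch above, that’s a one-line change (exactly how the original code was modified is an assumption):

```python
# "Optimise" the three outer loops: 90% less overhead per iteration.
run_process(release_delay=0.001, feature_delay=0.001, scenario_delay=0.001,
            build_test_delay=0.01)
```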
When I run this optimised code, it executes in 1 m 44 s. That’s only a 9% improvement in release cycle time, and we had to work on three loops to get it.
This time, let’s ignore those outer loops and just work on the innermost loop – build & test.
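In the sketch, that means leaving the outer delays alone and shrinking only the innermost one:

```python
# Leave the outer loops alone; cut only the build & test overhead by 90%.
run_process(release_delay=0.01, feature_delay=0.01, scenario_delay=0.01,
            build_test_delay=0.001)
```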
Now it finished in just 22 seconds. That’s an 81% improvement, just from optimising that innermost loop.
When we look at the output from this code, it becomes obvious why.
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
------BUILD & TEST
Of course, this is a very simplistic model of a much more complex reality, but the principle works just as well at any scale, and the results I’ve seen over the years bear it out: to reduce release cycle times, focus your attention on the innermost feedback loops. I call this Inner-Loop Agility.
Think of the micro-iterations of Test-Driven Development, refactoring and Continuous Integration. They all involve one key step – the part where we find out if the software works – which is to build and test it. We test it at every green light in TDD. We test it after every refactoring. We test it before we check in our changes (and afterwards, on a build server to rule out configuration differences with our desktops).
In Agile Software Development, we build and test our code A LOT – many times an hour. And we can only do this if building and testing our code is fast. If it takes an hour, then we can’t have Inner-Loop Agility. And if we can’t have Inner-Loop Agility, we can’t have fast release cycles.
Of course, we could test less often. That always ends well. Here’s the thing: the more changes we make to the code before we test it, the more bugs we introduce, and the later we catch them. The later we catch bugs, the more they cost to fix. When we test less often, we tend to end up spending more and more of our cycle time fixing bugs.
It’s not uncommon for teams to end up doing zero-feature releases, where there’s just a bunch of bug fixes and no value-add for the customer in each release.
A very common end result of a costly Agile transformation is little more than Agility Theatre. Sure, we do the sprints. We have the stand-ups. We estimate the story points. But it ends up being all work and little useful output in each release. The engine’s at maximum revs, but the car’s going nowhere.
Basically, the gears of our development process are the wrong way round.
There’s no real mystery about why Agile transformations tend to focus most of their attention on the outer feedback loops.
Firstly, the people signing the cheques understand those loops, and can actively engage with them – in the mistaken belief that agility is all about them.
Secondly, the $billion industry – the “Agile-Industrial Complex” – that trains and mentors organisations during these transformations is largely made up of coaches and consultants who have either a lapsed programming background, or no programming background at all. In a sample of 100 Agile Coach CVs, I found that 70% had no programming background, and a further 20% hadn’t programmed for at least a decade. 90% of Agile Coaches can’t help you with the innermost feedback loops. Or, to put it more bluntly, 90% of Agile Coaches focus on the feedback loops that deliver the least impressive reductions in release cycle time.
Just to be clear, I’m not suggesting these outer feedback loops don’t matter. There’s usually much work to be done at all levels from senior management down to help organisations speed up their cycle times, and to attempt it without management’s blessing is typically folly. Improving build and test cycles requires a very significant investment – in skills, in time, in resource – and that shouldn’t be underestimated.
But to focus almost exclusively on the outer feedback loops produces very modest results, and it’s arguably where Agile transformations have gained their somewhat dismal reputation among business stakeholders and software professionals alike.
I train and coach developers and teams in the technical practices of Agile Software Development like Test-Driven Development, Refactoring and Continuous Integration. I’m one of a rare few who exclusively does that. Clients really struggle to find Agile technical coaches these days.
There seems to be no shortage of help on the management practices and the process side of Agile, though. That might be a supply-and-demand problem. A lot of “Agile transitions” seem to focus heavily on those aspects, and the Agile coaching industry has risen to meet that demand with armies of certified bods.
I’ve observed, though, that without effective technical practices, agility eludes those organisations. You can have all the stand-ups and planning meetings and burn-down charts and retrospectives you like, but if your teams are unable to rapidly and sustainably evolve your software, it amounts to little more than Agility Theatre.
Agility Theatre is when you have all the ceremony of Agile Software Development, but none of the underlying technical discipline. It’s a city made of chipboard facades, painted to look like the real thing to the untrained eye from a distance.
In Agile Software Development, there’s one metric that matters: how much does it cost to change our minds? That’s kind of the point. In this rapidly changing, constantly evolving world, the ability to adapt matters. It matters more than executing a plan. Because plans don’t last long in the 21st century.
I’ve watched some pretty big, long-established, hugely successful companies brought down ultimately by their inability to change their software and core systems.
And I’ve measured the difference the technical practices can make to that metric.
Teams who write automated tests after the code being tested tend to find that the cost of changing their software rises exponentially over its lifespan (8 years, on average). I know exactly what causes this: test-after tends to produce a surfeit of tests that hit external dependencies like databases and web services, and test suites that run slow.
If your tests run slow, then you’ll test less often, which means bugs will be caught later, when they’re more expensive to fix.
Teams whose test suites run slow end up spending more and more of their time – and your money – fixing bugs. Until, one day, that’s pretty much all they’re doing.
Teams who write their tests first have a tendency to end up with fast-running test suites. It’s a self-fulfilling prophecy – using unit tests as specifications unsurprisingly produces code that is inherently more unit-testable, as we’re forced to stub and mock those expensive external dependencies.
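To illustrate the mechanism with a hypothetical example (the class and names here are invented, not from the original post): designing test-first forces the expensive dependency to be injected, so the test can swap in a mock and run in microseconds instead of hitting a real database.

```python
import unittest
from unittest.mock import Mock

class OrderService:
    """The repository is injected, so tests can substitute a fake for it."""
    def __init__(self, repository):
        self.repository = repository

    def total_spent(self, customer_id):
        orders = self.repository.orders_for(customer_id)
        return sum(order["amount"] for order in orders)

class OrderServiceTest(unittest.TestCase):
    def test_total_spent_sums_order_amounts(self):
        # No database here: a mock stands in for the expensive dependency,
        # so the whole test runs in microseconds.
        repository = Mock()
        repository.orders_for.return_value = [{"amount": 10}, {"amount": 5}]
        service = OrderService(repository)
        self.assertEqual(15, service.total_spent("customer-1"))

if __name__ == "__main__":
    unittest.main()
```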
This means teams that go test-first can test more frequently, catching bugs much sooner, when they’re orders of magnitude cheaper to fix. Teams who go test-first spend a lot less time fixing bugs.
The upshot of all this is that teams who go test-first tend to have a much shallower cost-of-change curve, allowing them to sustain the pace of software evolution for longer. Basically, they outrun the test-after teams.
Now, I’m not going to argue that breaking work down into smaller batch sizes and scheduling deliveries more frequently can’t make a difference. But what I will argue is that if the technical discipline is lacking, all that will do is enable you to observe – in almost real time – the impact of a rising cost of change.
You’ll be in a car, focusing on where to go next, while your fuel consumption rises exponentially. You reach a point where the destination doesn’t matter, because you ain’t going nowhere.
As the cost of change rises, it piles on the risk of building the wrong thing. Trying to get it right first time is antithetical to an evolutionary approach. I’ve worked with analysts and architects who believed they could predict the value of a feature set, and went to great lengths to specify the Right Thing. In the final reckoning, they were usually out by a country mile. No matter how hard we try to predict the market, ultimately it’s all just guesswork until our code hits the real world.
So the ability to change our minds – to learn from the software we deliver and adapt – is crucial. And that all comes down to the cost of change. Over the last 25 years, it’s been the best predictor I’ve personally seen of long-term success or failure of software-dependent businesses. It’s the entropy of tech.
You may be a hugely successful business today – maybe even the leader in your market – but if the cost of changing your code is rising exponentially, all you’re really doing is market research for your more agile competitors.
I’ve thought a lot in recent years about how our profession is kind of fundamentally broken, and how we might be able to fix it.
The more I consider it, the more I think the underlying dysfunction revolves around software development teams, and the way they’re perceived as having only transient value.
Typically, when a business wants some new software, it builds a team specifically to deliver it. This can take many months and cost a lot of money. First, you have to find the people with the skills and experience you need. That in itself usually works out expensive – to the tune of tens of thousands of pounds per developer – before you’ve paid them a penny.
But the work doesn’t end there. Once you’ve formed your team, you then need to go through the “storming and norming” phases of team-building, during which they figure out how to work together. This, too, can work out very expensive.
So a formed, stormed and normed software team represents a very significant investment before you get a line of working code.
And, as we know, some teams never get through those phases at all, stuck permanently in storming and norming and never really finding a satisfactory way to move forward together as they all pull in different directions.
The high-performing teams – the ones who work well together and can deliver good, valuable working software – are relative rarities, then: the truffles of the software industry.
Indeed, I’ve seen on many occasions how the most valuable end product from a software development effort turned out to be the team itself. They work well together, they enjoy working together, and they’re capable of doing great work. It’s just a pity the software itself was such a bad idea in the first place.
It seems somewhat odd then that businesses are usually so eager to break up these teams as soon as they see the work is “done”. It’s a sad fact of tech that the businesses who rely on the people who make it prefer to suffer us for as short a time as possible.
And this is where I think we got it wrong: should it be up to the customer to decide when to break up a high-performing dev team?
I can think of examples where such teams seized the day and, upon receiving their marching orders, set up their own company and bid for projects as a team, and it’s worked well.
This is very different to the standard model of development outsourcing, where a consultancy is effectively just a list of names of developers who might be thrown together for a specific piece of work, and then disbanded just as quickly at the end. Vanishingly few consultancies are selling teams. Most have to go through the hiring and team-building process themselves to fulfil their bids, acting as little more than recruitment agencies – albeit more expensive ones.
But I can’t help thinking that it’s teams that we should be focusing on, and teams our profession should be organising around:
Teams as the primary unit of software delivery
Teams as the primary commercial unit, self-organising and self-managing – possibly with expert help for accounts, HR and so on. Maybe it’s dev teams who should be outsourcing?
Teams as the primary route for training and development in our profession – i.e., through structured long-term apprenticeships
I have a vision of a software development profession restructured around teams. We don’t work for big companies who know nothing about software development. We work in partnerships that are made up of one or more teams, each team member specialised enough for certain kinds of work but also generalised enough to handle a wide range of work.
Each team would take on a small number of apprentices, and guide and mentor them – investing in training and development over a 3-5 year programme of learning and on-the-job experience – to grow the 10% of new developers our industry needs each year.
Each team would manage itself and work directly with customers. This should be part of the skillset of any professional developer.
Each team would make its own hiring decisions when it feels it needs specialised expertise from outside, or needs to grow (although my feelings on team size are well known), or wants to take on apprentices. So much that’s wrong with our industry stems from hiring decisions being taken by unqualified managers – our original sin, if you like.
And, for sure, these teams wouldn’t be immutable forever and all time. There would be an organic process of growth and change – perhaps of splitting into new teams as demand grows, and bringing in new blood to stop the pond from stagnating. But, just as pretty much every cell in my body has been replaced many times and yet I’m still recognisably me, it’s possible to maintain a pattern of team identity and cohesion through ongoing change. There will always be a background level of forming, storming and norming. The trick is to keep that at a manageable level so we can keep delivering in the foreground.
Okay, so this Sunday morning rant’s been a long time coming. And, for sure, I’ve expressed similar sentiments before. But I don’t think I’ve ever dedicated a whole blog post to this, so here goes. You may want to strap in.
20 years ago, a group of prominent software folk gathered at a ski resort in Utah to fix software development. Undoubtedly, it had become broken.
Broken by heavyweight, command-and-control processes. Broken by unrealistic and oftentimes downright dishonest plan-driven management that tried to impose the illusion of predictability on something that’s inherently unpredictable. Broken by huge outsourced teams and $multi-million – sometimes even $multi-billion – contracts that, statistically, were guaranteed to fail, crushed by their own weight. Broken by the loss of basic technical practices and the influx of low-skilled programmers to fuel the first dotcom boom, all in the name of ballooning share prices of start-ups – many of which never made a red cent of profit.
All of this needed fixing. The resulting Manifesto for Agile Software Development attempted to reset the balance towards lightweight, feedback-driven ways of working, towards smaller, self-organising teams, towards continuous and rich face-to-face communication, and towards working software as the primary measure of progress.
Would that someone in that room had been from a marketing and communications background. A fundamental mistake was made at that meeting: they gave it a name.
And so, Agile Software Development became known as “another way of doing software development”. We could choose. We could be more Agile (with a capital “A”). Or, we could stick with our heavyweight, command-and-control, plan-driven, document-driven approach. Like Coke Zero and Original Coke.
The problem is that heavyweight, command-and-control, plan-driven, document-driven approaches tend to fail. Of course, for the outsourcing companies and the managers, they succeed in their underlying intention, which is to burn through a lot of money before the people signing the cheques realise. Which is why that approach still dominates today. I call it Mortgage-Driven Development. You may know it as “Waterfall”.
But if we measure it by tangible results achieved, Mortgage-Driven Development is a bust. We’ve known throughout my entire lifetime that it’s a bust. Winston Royce warned us it was a bust in 1970. No credible, informed commentator on software development has recommended we work that way for more than 50 years.
And yet, still many do. (The main difference in 2021 being that a lot of them call it “Agile”. Oh, the irony.)
How does Mortgage-Driven Development work, then? Well – to cut a long story short – badly, if you measure it by tangible customer outcomes like useful working software and end user problems being solved. If you measure it by the size of developers’ houses, though, it works really, really well.
MDD works from a very simple principle – namely that our customer shouldn’t find out that we’ve failed until a substantial part of our mortgage has been paid off. The longer we can delay the expectation of seeing working software in production, the more of our mortgage we can pay off before they realise there is no working software that can be released into production.
Progress in MDD is evidenced by documentation. The more of it we generate, the more progress is deemed to have been achieved. I’ve had to drag customers kicking and screaming to look at actual working software. But they’re more than happy to look at a 200-page architecture document purporting to describe the software, or a wall-sized Gantt chart with a comforting “You are here” to make the customer think progress has actually been made.
Of course, when I say “more than happy to look at”, they don’t actually read the architecture document – nobody does, and that includes the architects who write them – or give the plan anything more than a cursory glance. They’re like a spare tyre in the boot of your car, or a detailed pandemic response plan sitting on a government server. There’s comfort in knowing it merely exists, even if – when the time comes – it’s of no actual use.
Why customers and managers don’t find comfort in visible, tangible software is anybody’s guess. It could come down to personality types, maybe.
Teams who deliver early and often present the risk of failing fast. I took over a team at a small development shop that had spent a year going around in circles with its large public sector client. No software had been delivered. With me in the chair, and a mostly new team of “Agile” software developers, we delivered working software within three weeks from a standing start (we even had to build our own network, connected to the Internet by my 3G dongle). At which point, the end client decided this wasn’t working out, and canned the contract.
That particular project lives in infamy – recruiters would look at my CV and say “Oh, you worked on that?” It was viewed as failure. I view it as a major success. The end client paid for a year’s worth of nothing, and because nothing had been delivered, they didn’t realise it had already failed. They’d been barking up entirely the wrong tree. It took us just three weeks to make that obvious.
Saving clients millions of pounds by disproving their ideas quickly might seem like a good thing, but it runs counter to the philosophy of Mortgage-Driven Development.
I’ve been taken aside and admonished for “actually trying to succeed” with a software project. Some people view that as risky, because – in their belief system – we’re almost certainly going to fail, and therefore all efforts should be targeted at billing as much as possible and at escaping ultimate blame.
And, to me, this thing called Agile Software Development has always essentially just been “trying to succeed at delivering software”. We’re deliberately setting out to give end users what they need, and to do it in a way that gives them frequent opportunities to change their minds – including about whether they see any value in continuing.
The notion that we can do that without frequent feedback from end users trying working software is palpable nonsense – betting the farm on a proverbial “hole in one”. Nature solved the problem of complex system design, and here’s a heads-up: the solution isn’t a design committee, or a Gantt chart, or a 200-page architecture document. It’s iteration and feedback.
Waterfall doesn’t work and never did. Big teams typically achieve less than small teams. Command-and-control is merely the illusion of control. Documents are not progress. And your project plan is a fiction.
When we choose to go down that road, we’re choosing to live in a lie.
Over the last 10 months, we’ve seen how different governments have handled the COVID-19 pandemic in their own countries, and how nations have been impacted very differently as a result.
While countries like Italy, the United Kingdom and Belgium have more than 100 deaths per 100,000 of the population, places where governments acted much faster and more decisively, like New Zealand, have a far lower mortality rate (in the case of NZ, 0.5 deaths per 100,000).
Our government made the argument that they had to balance saving lives with saving the economy. But this, it transpires, is a false dichotomy. In 2020, the UK saw GDP shrink by an estimated 11.3%. New Zealand’s economy actually grew slightly, by 0.4%.
For sure, during their very stringent measures to tackle the virus, their economy shrank like everyone else’s. But having very effectively made their country COVID-free, it bounced back in a remarkable V-shaped recovery. Life in countries that took the difficult decisions earlier has mostly returned to normal. Shops, bars, restaurants, theatres and sports stadiums are open, and NZ is very much open for business.
The depressing fact is that countries like the UK made a logical error in trying to keep the economy going when they should have been cracking down on the spread of the virus. In March, cases were doubling roughly twice a week, so every week’s delay in acting cost four times as many lives. Delaying for 2 weeks in March meant that infections soared to a level that made the subsequent lockdown much, much longer. Hence there was a far greater impact on the economy.
Eventually, by early July, cases in the UK had almost disappeared. At which point, instead of doubling down on the measures to ensure a COVID-free UK, the government made the same mistake all over again. They opened everything up again because they mistakenly calculated that they had to get the economy moving as soon as possible.
Cases started to rise again – albeit at a slower rate this time, as most people were still taking steps to reduce risks of infection – and around we went a second time.
The next totally predictable – and totally predicted – lockdown again came weeks too late in November.
And again, as soon as they saw that cases were coming down, they reopened the economy.
We’re now in our third lockdown, and this one looks set to last until late Spring at the earliest. This time, we have vaccines on our side, and life will hopefully get back to relative normal in the summer, but the damage has been done. And, yet again, the damage is far larger than it needed to be.
50,000 families have lost their homes since March 2020. Thousands of businesses have folded. Theatres may never reopen, and city centres will probably never recover as home-working becomes the New Normal.
By trying to trade-off saving lives against the economy, countries like the UK have ended up with the worst of both worlds: one of the highest mortality rates in Europe, and one of the worst recessions.
You see, it’s not saving lives or saving the economy. It’s saving lives and saving the economy. The same steps that would have saved more lives would have made the lockdowns shorter, and therefore brought economic recovery faster.
Why am I telling you all this? Well, we have our own false dichotomies in software. The most famous one being the perceived trade-off between quality and time or cost.
An unwillingness to invest in, say, more testing sooner – in the mistaken belief that skipping it will save time – leads many teams into deep water. Over three decades, I’ve seen countless times how this results in software that’s both buggier and more expensive to deliver and maintain – the worst of both worlds.
The steps we can take to improve the quality of our software turn out to be the same steps that help us deliver it sooner, and maintain it for longer for less money. Time “wasted” writing developer tests, for example, is actually an order of magnitude more time saved downstream (where “downstream” could just as easily mean “later today” as “after release”).
But the urge to cut corners and make trade-offs is strong, especially in highly politicised environments where leaders are rarely thinking past the next headline (or, in our case, the next meeting with the boss). It’s a product of timid leadership and of one-dimensional, short-term reasoning.
When we go by the evidence, we see that many trade-offs are nothing of the sort.
In any complex creative endeavour – and, yes, software development is a creative endeavour – feedback is essential to getting it right (or, at least, less wrong).
The best development approaches tend to be built around feedback loops, and the last few decades of innovation in development practices and processes have largely focused on shrinking those feedback loops so we can learn our way to Better faster.
When we test our software, that’s a feedback loop, for example. Although far less common these days, there are still teams out there doing it manually. Their testing feedback loops can last days or even weeks. Many teams, though, write fast-running automated tests, and can test the bulk of their code in minutes or even seconds.
What difference does it make if your tests take days instead of seconds?
To illustrate, I’m going to draw a parallel with movie production. Up until the late 1960s, feedback loops in movie making were at best daily. Footage shot on film during the day was processed by a lab and then watched by directors, producers, editors and so on at the end of the day – hence the movie industry term “dailies”. If a shot didn’t come out right – maybe the performance didn’t match the context set up by a previous scene (the classic “boom microphone in shot” or “character just ran 6 miles but is mysteriously not out of breath” spring to mind) – chances are the production team wouldn’t know until they saw the footage later.
That could well mean going back and reshooting some scenes. That means calling back the actors and the crew, and potentially remounting the whole thing if the sets have already been pulled down. Expensive. Sometimes prohibitively expensive, which is why lower-budget productions had little choice but to keep those shots in their theatrical releases.
In the 1960s, comedy directors like Jerry Lewis and Blake Edwards pioneered the use of Video assist. These were systems that enabled the same footage to be recorded simultaneously on film and on videotape, so actors and directors could watch takes back as soon as they’d been captured, and correct mistakes there and then when the actors, crew, sets and so on were all still there. Way, way cheaper than remounting.
The speed of testing feedback in software development has a similar impact. If I make a change that breaks the code, and my code is tested overnight, say, then I probably won’t know it’s broken until the next day (or the next week, or the next month, or the next year when a user reports the bug).
But I’ve already moved on. The sets have been dismantled, so to speak. To fix a bug long after the fact requires the equivalent of remounting a shoot in movies. Time has to be scheduled, the developer has to wrap their head around that code again, and the bug fix has to go through the whole testing and release process again. Far more expensive. Often orders of magnitude more expensive. Sometimes prohibitively expensive, which is why many teams ship software they know has bugs, but they just don’t have budget to fix them (or, at least, they believe they’re not worth fixing.)
If my code is tested in seconds, that’s like having Video assist. I can make one change and run the tests. If I broke the code, I’ll know there and then, and can easily fix it while I’m still in the zone.
Just as Video assist helps directors make better movies for less money, fast-running automated tests can help us deliver more reliable software with less effort. This is a measurable effect (indeed, it has been measured), so we know it works.
One thing I come across very often is development teams who have adopted processes or practices that they believe are helping them go faster, but that are probably making no difference, or even slowing them down.
The illusion of productivity can be very seductive. When I bash out code without writing tests, or without refactoring, it really feels like I’m getting sh*t done. But when I measure my progress more objectively, it turns out I’m not.
That could be because typing code faster – without all those pesky interruptions – feels like delivering working software faster. But it usually takes longer to get something working when we take less care.
We seem hardwired not to notice how much time we spend fixing stuff later that didn’t need to be broken. We seem hardwired not to notice the team getting bigger and bigger as the bug count and the code smells and the technical debt pile up. We seem hardwired not to notice the merge hell we seem to end up in every week as developers try to get their changes into the trunk.
We just feel like…
Not writing automated tests is one classic example. I mean, of course unit tests slow us down! It’s, like, twice as much code! The reality, though, is that without fast-running regression tests, we usually end up spending most of our time fixing bugs when we could be adding value to the product. The downstream costs typically outweigh the up-front investment in unit tests. Skipping tests is almost always a false economy, even on relatively short projects. I’ve measured myself with and without unit tests on exercises of about an hour, and I’m slightly faster with them. Typing is not the bottleneck.
Another example is when teams mistakenly believe that working on separate branches of the code will reduce bottlenecks in their delivery pipelines. Again, it feels like we’re getting more done as we hack away in our own isolated sandboxes. But this, too, is an illusion. It doesn’t matter how many lanes the motorway has if every vehicle has to drive on to the same ferry at the end of it. No matter how many parallel dev branches you have, there’s only one branch deployments can be made from, and all those parallel changes have to somehow make it into that branch eventually. And the less often developers merge, the more changes in each merge. And the more changes in each merge, the more conflicts. And, hey presto, merge hell.
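To see why this escalates so quickly, here’s a toy model (entirely illustrative – the file count and batch sizes are assumptions): if two changes can conflict when they touch the same file, the number of potentially conflicting pairs grows with the square of the number of changes per merge.

```python
# Toy model: each of k changes in a merge touches one random file out of n.
# The expected number of clashing pairs is C(k, 2) / n, which grows ~ k^2.
def expected_clashes(changes_per_merge, files=200):
    pairs = changes_per_merge * (changes_per_merge - 1) / 2
    return pairs / files

for k in (5, 50):  # merging daily vs. hoarding changes for a couple of weeks
    print(f"{k:>2} changes per merge -> ~{expected_clashes(k):.2f} expected clashes")

# 5 changes per merge  -> ~0.05 expected clashes
# 50 changes per merge -> ~6.12 expected clashes (over 120x more, not 10x)
```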
Closely related is the desire of many developers to work without “interruptions”. It may feel like sh*t’s getting done when the developers go off into their cubicles, stick their noise-cancelling headphones on, and hunker down on a problem. But you’d be surprised just how much communication and coordination’s required to avoid some serious misunderstandings. I recall working on a team where we ended up with three different architectures and four customer tables in the database, because my colleagues felt that standing around a whiteboard drawing pictures – or, as they called it, “meetings” – was a waste of valuable Getting Sh*t Done time. With just a couple of weeks of corrective work, we were able to save ourselves 20 minutes around a whiteboard. Go us!
I guess my message is simple. In software development, productivity doesn’t look like typing faster.
I’m always surprised at how few organisations track some pretty fundamental stats about software development, because if they did then they might notice what’s been killing their business.
It’s a picture I’ve seen many, many times; a software product or system is created, and it goes live. But it has bugs. Many bugs. So, a bigger chunk of the available development time is used up fixing bugs for the second release. Which has even more bugs. Many, many bugs. So an even bigger chunk of the time is used to fix bugs for the third release.
Over the lifetime of the software, the proportion of development time devoted to bug fixing increases until that’s pretty much all the developers are doing. There’s precious little time left for new features.
Naturally, if you can only spare 10% of available dev time for new features, you’re going to need 10 times as many developers. Right? This trend is almost always accompanied by rapid growth of the team.
So the 90% of dev time you’re spending on bug fixing is actually 90% of the time of a team that’s 10x as large – 900% of the cost of your first release, just fixing bugs.
So every new feature ends up in real terms costing 10x in the eighth release what it would have in the first. For most businesses, this rules out change – unless they’re super, super successful (i.e., lucky). It’s just too damned expensive.
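To spell out that arithmetic (a back-of-envelope sketch using the proportions above):

```python
# Back-of-envelope, using the proportions described above.
team_multiplier = 10          # the team has grown 10x by the eighth release
feature_time_fraction = 0.1   # only 10% of dev time goes on new features

# Bug fixing consumes the other 90% of a 10x-sized team:
bug_fixing_cost = team_multiplier * (1 - feature_time_fraction)
print(f"Bug fixing costs {bug_fixing_cost:.0%} of the original team's effort")

# Feature throughput is back where it started (10 x 0.1 = 1 team's worth),
# but at 10x the payroll - so each feature costs ~10x what it did in release 1.
cost_per_feature = team_multiplier / (team_multiplier * feature_time_fraction)
print(f"Each new feature costs ~{cost_per_feature:.0f}x release-1 prices")
```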
And when you can’t change your software and your systems, you can’t change the way you do business at scale. Your business model gets baked in – petrified, if you like. And all you can do is throw an ever-dwindling pot of money at development just to stand still, while you watch your competitors glide past you with innovations you’ll never be able to offer your customers.
What happens to a business like that? Well, in the case I have in mind, they’re no longer in business. Customers defected in greater and greater numbers to competitor products, frustrated by the flakiness of the product and tired of being fobbed off with promises about upgrades and hotly requested features and fixes that never arrived.
Now, this effect is entirely predictable. We’ve known about it for many decades, and we’ve known the causal mechanism, too.
The longer a bug goes undetected, the more it costs to fix – exponentially more. In terms of process, the sooner we test new or changed code, the cheaper the fix is. This effect is so marked that teams actually find that if they speed up testing feedback loops – testing earlier and more often – they deliver working software faster.
This is very simply because they save more time downstream on bug fixes than they invest in earlier and more frequent testing.
The data behind the first example came from a team that took more than 24 hours to build and test their code.
Compare that with the same stats from a team who could build and test their code in less than 2 minutes. (I’ve converted from releases to quarters to roughly match the 12–24 week release cycles of the first team; this second team was actually releasing every week.)
This team nearly doubled in size over the two years, which might sound bad – but it’s a far rosier picture than the first team’s, whose costs spiralled to more than 1000% of their first release, most of it spent fixing bugs and effectively going round and round in circles chasing their own tails while their customers defected in droves.
I’ve seen this effect repeated in business after business – of all shapes and sizes: software companies, banks, retail chains, law firms, broadcasters, you name it. I’ve watched $billion businesses – some more than a century old – brought down by their inability to change their software and their business-critical systems.
And every time I got down to the root cause, there they were – slow tests.
I ran a little poll through the Codemanship twitter account yesterday, and thought I’d share the result with you.
There are two things that strike me about the results. Firstly, it looks like teams who actively involve user experience experts throughout the design process are very much in the minority. To be honest, this comes as no great surprise. My own observation of development teams over the years is that UXD folks tend to get involved early on – often before any developers are involved, or any customer tests have been discussed – in a kind of Waterfall fashion. “We’re agile. But the user interface design must not change.”
To me, this is as nonsensical as those times when I’ve arrived on a project that has no use cases or customer tests, but somehow magically has a very fleshed-out database schema that we are not allowed to change.
Let’s be clear about this: the purpose of the user experience is to enable the user to achieve their goals. That is a discussion for everybody involved in the design process. It’s also something we’re unlikely to get right first time, so iterating the UXD multiple times, with the benefit of end user feedback, will almost certainly be necessary.
The most effective teams do not organise themselves into functional silos of requirements analysis, UXD, architecture, programming, security, data management, testing, release and operations & support and so on, throwing some kind of output (a use case, a wireframe, a UML diagram, source code, etc) over the wall to the next function.
The most effective teams organise themselves around achieving a goal. Whoever’s needed to deliver on that should be in the room – especially when those goals are being discussed and agreed.
I could have worded the question in my poll “User Experience Designers: when you explore user goals, how often are the developers involved?” I suspect the results would have been similar. Because it’s the same discussion.
On a movie production, you have people who write scripts, people who say the lines, people who create sets, people who design costumes, and so on. But, whatever their function, they are all telling the same story.
The realisation of working software requires multiple disciplines, and all of them should be serving the story. The best teams recognise this, and involve all of the disciplines early and throughout the process.
But, sadly, this still seems quite rare. I hear lip service being paid, but see little concrete evidence that it’s actually going on.
The second thing I noticed about this poll is that, despite several retweets, the response is actually pretty low compared to previous polls. This, I suspect, also tells a story. I know from both observation and from polls that teams who actively engage with their customers – let alone UXD professionals etc – in their BDD/ATDD process are a small minority (maybe about 20%). Most teams write the “customer tests” themselves, and mistake using a BDD tool like Cucumber for actually doing BDD.
But I also get a distinct sense, working with many dev teams, that UXD just isn’t on their radar – that it’s somebody else’s problem. This is a major, major miscalculation – every bit as much as believing that quality assurance is somebody else’s problem. Any line of code that doesn’t in some way change the user’s experience – and I use the term “user” in the wider sense that includes, for example, people supporting the software in production, who have their own user experience – is a line of code that should be deleted. Who is it for? Whose story does it serve?
We are all involved in creating the user experience. Bad special effects can ruin a movie, you know.
We may not all be qualified in UXD, of course. And that’s why the experts need to be involved in the ongoing design process, because UX decisions are being taken throughout development. It only ends when the software ends (and even that process – decommissioning – is a user experience).
Likewise, every decision a UI designer takes will have technical implications, and they may not be the experts in that. Which is why the other disciplines need to be involved from the start. It’s very easy to write a throwaway line in your movie script like “Oh look, it’s Bill, and he’s brought 100,000 giant fighting robots with him”, but writing that line and actually making 100,000 giant fighting robots appear on the screen are two very different propositions.
So let’s move on from the days of developers being handed wire-frames and told to “code this up”, and from developers squeezing input validation error messages into random parts of web forms, and bring these – and all the other – disciplines together into what I would call a “development team”.