Do Our Tools Need A Rethink?

In electronics design – a sector I spent a bit of time in during the 90s – tool developers recognised the need for their software to integrate with other software involved in the design and manufacturing process. Thanks to industry data standards, a PCB design created in one tool could be used to generate a bill of materials in a management tool, to simulate thermal and electromagnetic emissions in another tool, and to drive pick-and-place equipment on an assembly line.

I marvelled at how seamlessly the tools in this engineering ecosystem worked together, saving businesses eye-boggling amounts of money every year. Software can work wonders.

So it’s been disappointing to see just how disconnected and clunky our own design and development systems have turned out to be in the software industry itself. (Never live in a builder’s house!) Our ecosystem is largely made up of Heath Robinson point solutions – a thing that runs unit tests, a thing that tracks file versions, a thing that builds software from source files, a thing that executes customer tests captured in text files – all held together with twigs and string. There are no industry data interchange standards for these tools. Unit test results come in whatever shape the specific unit test tool developers decided. Customer tests come in whatever shape the specific customer testing tool developers decided. Build scripts come in whatever shape the build tool developers decided. And so on.

When you run the numbers, taking into account just how many different tools there are and therefore how many potential combinations of tools might be used in a team’s delivery pipeline, it’s brain-warping.

I see this writ large in the amount of time and effort it takes teams to get their pipeline up and running, and in the vastly larger investment needed to connect that pipeline to visible outputs like project dashboards and build monitors.

It occurs to me that if the glue between the tools was limited to a handful of industry standards, a lot of that work wouldn’t be necessary. It would be far easier, say, to have burn-down charts automatically refreshed after customer tests have been run in a build.
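For example – and this is purely hypothetical, since no such standard exists today – imagine customer test results always arrived in one agreed shape. Any dashboard tool could then consume them without bespoke glue code. A minimal sketch in Java, with invented field names:

import java.time.Instant;
import java.util.List;

// Hypothetical standard shape for a customer test result – the fields
// here are invented for illustration, not taken from any real standard.
record CustomerTestResult(String feature, boolean passed, Instant executedAt) {}

class BurndownChart {
    // A feature counts as "remaining" while any of its customer tests fail.
    long remainingFeatures(List<CustomerTestResult> results) {
        return results.stream()
                      .filter(result -> !result.passed())
                      .map(CustomerTestResult::feature)
                      .distinct()
                      .count();
    }
}

Any build tool that emitted results in that shape could drive any charting tool that read it – no twigs and string required.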

For this to happen, we’d need to rethink our tools in the context of wider workflows – something we’re notoriously bad at. The bigger picture.

Perhaps this is a classic illustration of what you end up with when you have an exclusively feature/solution or product focus, yes? Unit tests, customer tests, automated builds, static analysis results, commits, deployments – these are all actors in a bigger drama. The current situation is indicative of actors who only read their parts, though.

4 Out Of 5 Developers Would Choose To Stay Developers. Is It Time We Let Them?

Following on from yesterday’s post about squaring the circle of learning and mentoring in software development, a little poll I ran on Twitter clearly shows that a large majority of software developers would prefer to stay hands-on if they had the choice.

I’ve seen many developers over my 28-year career reluctantly pushed into management roles, and heard so very many talk about how much they miss making software with their own hands. But in too many organisations, the only way to progress in terms of seniority and pay is to move away from code.

Some choose not to progress, making do with the pay and the authority of a developer and biting their tongues when managers who haven’t touched code in years tell them to do silly things. But ageism often kicks in eventually, making it harder and harder to get hired in those hands-on roles. “Why is she still coding?” There’s an assumption among hirers that to still be in these “less senior” roles at, say, 45 is a failure to launch, and not a success in being exactly where you want to be, doing what you love.

A conversation I had recently with a team highlighted what can go wrong when you promote your best developers into non-development positions. They found themselves having to refer technical decisions up to people who no longer had a practical grasp of the technology, and this created a huge communication overhead that wouldn’t have been necessary had the decision-making authority been given to the people responsible for making those decisions work.

I’ve always believed that authority and responsibility go hand-in-hand. Anyone who is given the responsibility for making something happen should also be given the necessary authority to decide how to make it happen.

Not all developers welcome responsibility, of course. In some large organisations, I’ve seen teams grow comfortable with the top-down bureaucracy. They get used to people making the decisions for them, and become institutionalised in much the same way soldiers or prisoners do. What’s for dinner? Whatever they give us. When do we go to bed? Whenever they say. What unit testing tool should we use? Whichever one they tell us to.

But most developers are grown-ups. In their own lives, they make big decisions all the time. They buy houses. They have kids. They choose schools. They vote. It’s pretty wretched, then, seeing teams not being trusted to even purchase laptops for themselves. When teams are trusted, and given both responsibility and authority for getting things done, they tend to rise to that.

And developers should be trusted with their own careers, too. If they were, then I suspect there’d be a lot more active coders with decades of experience to share with the ever-growing number of new developers coming into the industry.

How A Developer’s Career *Should* Work

Apprenticeships are in the news today here in the UK, with the rather shocking revelation that since the government introduced their new apprenticeship levy scheme – where larger employers must pay 0.5% of their annual pay bill into a special government-run “bank” to be used to fund apprenticeships of all kinds – the number of people starting apprenticeships has actually fallen.

I met with the people in charge of the scheme, and was less than hopeful that it would actually work – especially in our industry. Time and again I see decision makers vastly underestimate the time and resources needed to grow a software developer, and their planned software developer apprenticeship looked lacking, too. Later, I heard from multiple employers who’d taken on dev apprentices, and they were really struggling with the lack of practical support. Who will teach these young developers? Who will mentor them? How long is it really going to take before we can leave them to work unsupervised?

The sad fact is that many of these employers just heard “cheap developers” and didn’t stop to think “Ah, but why so cheap?” The brutal answer is: because they’re not developers. Yet. The whole point of the apprenticeship is that you turn them into developers. And training a software developer takes a lot of time and a lot of money.

If you saw that one house on a street was half the price of the other houses, would you not stop to ask “why?” In the case of apprentices, it’s because you only bought the land. You have to build a house on it.

The main sticking point here is that somebody who knows what they’re doing has to make themselves available to help the apprentices learn. And they need to be very available, because that kind of skills transfer requires a big investment of time.

Our industry, though, has structured itself to make this investment unworkable. The most senior developers are either too busy getting shit done, or they’re not active developers any more. With the best will in the world, no amount of transferable skills is going to get transferred if the person who has all that useful knowledge last programmed in COBOL on an IBM mainframe. It would be like being taught economics by someone who only speaks Anglo-Saxon.

In order to square this circle, our industry needs to be restructured to make sustained, in-depth skills transfer possible.

This is how it could work:

  • At the start of a developer’s career, the most productive thing they can do with their time is learn. Careers should start with a few months of nothing but learning. All day. Every day. A coding boot camp might be a model to follow here – provided we all acknowledge that the learning doesn’t end with boot camp graduation. It’s just a kick-start.
  • After graduating boot camp, developers become apprentices. They work on real teams doing real work 3-4 days a week, with the other 1-2 days released for further dedicated, structured learning. This would continue for 2-3 years as they build their skills and their confidence to a point where employers feel happy leaving them to work unsupervised. It might even lead to a degree, to validate their progress.
  • Once they’ve completed their apprenticeship, developers pay their dues and return the investment employers have made in them by delivering working software of real value, while continuing to gain experience and learn more and more. There might be a decade or more of this real-world work. They continue to be mentored by more experienced developers, but in a more hands-off kind of way. A nudge here, a kind word there etc. Enlightened employers will recognise that dedicated learning time is still a wise investment, throughout a developer’s career. They may still devote 10-20% of their time to this, but at this level of achievement, it’s more like doing a PhD. We might expect developers to eventually add their own contributions to the software development landscape in this phase of their career. Maybe write a useful new tool, or invent a new technique. Maybe speak at conferences. Maybe write a book.
  • During this – and I hesitate to use this term – “journeyman” phase, developers may find they’re called upon more and more to mentor less experienced developers, and to share their knowledge freely. I believe this is an important part of a developer’s progress. I’ve found that what really tests my understanding of something is trying to explain it to other people. An increasing emphasis on sharing knowledge, on mentoring, and especially on leading by example, would mark the later stages of this phase.
  • Eventually, developers reach a phase in their career where the most productive use of their time is teaching. This is the “profess” in our profession. And this is where we square the circle. Who is going to do the teaching in the boot camps? Who is going to train and mentor the apprentices? Simple answer: we are.

Now, for sure, not every developer will be cut out for that, and not every developer will want to go down that route. Some will become managers, and that’s fine. We need more developers in technology management positions, frankly. But your average corporation doesn’t need 20 CTOs. It may well need 20 active mentors, though – keeping hands-on with the latest tools and technologies so they can offer practical help to 100 less experienced developers.

At present, in the vast majority of organisations, no such career path exists. We are set up to move away from the code face just at the time when we should be working side-by-side with apprentices. I had to invent that role for myself by starting Codemanship. Had such roles existed for someone with 20 years’ experience, there would have been no need. I didn’t start a business to start a business. I started a business so that – as the boss – I could offer myself my ideal job.

And, as the boss, I understand why this job is important. It’s the most useful thing I can offer at this stage in my career. This is why I believe it’s important that more bosses come from a software development background – so they can see the benefits. As it stands, employers – for the most part – just don’t get it. Yet.

There’s more at stake here than pay and perks for developers who might progress beyond the current career ceiling that too many organisations impose on people who still write code. One factor that strongly determines the way a business invests its money is who is holding the purse strings. I sometimes rail at the infantilisation of software professionals, where we must go cap in hand to “mummy and daddy” for the most insignificant of purchases. If I need a new monitor at home, I go out and buy a new monitor. Easy. In the world of corporate tech, not so easy. I recall once having multiple meetings, escalating all the way up to the Director of IT, about buying a £200 whiteboard.

If the budget holders don’t understand the technical side of things – perhaps they never did, or it’s been so long since they were directly involved in technology – then it can be hard to persuade them of the benefits of an investment in tools, in books, in training, in furniture, etc. As a business owner, I experience it from the other side, watching in dismay the hoops some teams have to jump through to get things they need like training.

Codemanship training does not appeal to CTOs, on the whole. Most don’t see the benefits. They buy it because the developers tugged at their sleeve and whined and pleaded long enough that the boss realised the only way to make them shut up was to buy a course. In that sense, code craft training’s a bit like the candy they display at supermarket checkouts.

A very few more enlightened companies let their developers make those decisions themselves, giving them budgets they can spend without having to get purchases approved. But they’re in the minority. Many more teams have to crawl over broken glass to book, say, a TDD workshop.

On a larger scale, decisions about what developers’ time gets invested in are usually not in the developers’ hands. If it were up to them, I suspect we’d see more time devoted to learning, to teaching, and to mentoring. But, sadly, it’s not. They have to ask for permission – quite probably from someone who isn’t a developer, perhaps even someone who thinks writing software is a low-status job that doesn’t warrant that kind of investment in skills.

When that changes, I believe we will finally square the circle.

It’s About The Loops, Stupid!

2 essential models of software development:

1. Queuing – value “flows”, software is “delivered” etc. This is the “incremental” in “iterative & incremental”

2. Learning – teams converge on a working solution by iterating a design

The 1st model is fundamentally wrong & damaging.

The queue is the pipeline that takes the idea in our heads and turns it into working software users can try for real. But over-emphasis on that queue tends to obscure the fact that it’s really a loop, feeding back into itself with better ideas.

And when we examine it more closely, considering the technical practices of software delivery, we see it’s actually lots of smaller interconnected loops (e.g., customer tests, red-green-refactor, CI etc) – wheels within wheels.

But our obsession with the pipeline tends to disguise this truth, framing all discussions in terms of queues instead of loops. Hence most teams mistake “delivered” for “done” and ignore the most important thing – feedback.

Psychologically, the language we use to describe what we do has a sort of queue/flow bias. And hence most “agile” teams just end up working their way incrementally through a thinly-disguised waterfall plan (called a “backlog”, or a “roadmap”).

They’re the workers in a parcel loading bay. They’re just loading “parcels” (features) on to a loading bay (dev) and then into a truck (ops).

Most dev teams have little or no visibility of what happens after those parcels have been “delivered”. What they don’t see at the other end is the customer trying to make something work with what’s in the parcels. Maybe they’re building a rocket, but we keep sending them toasters.

This mismatch in goals/motivations – “deliver all the toasters” vs “build a rocket” – is how the working relationship between dev teams and customers usually breaks down. We should all be trying to build the rocket.

It took humans decades (and hundreds of thousands of people) to learn how to build a rocket, with a lot of science and engineering (guesswork, basically) but mostly a lot of trial and error. We must learn with our customer what really needs to be in the parcels we deliver.

The 4 Gears of Test-Driven Development

When I explain Test-Driven Development to people who are new to the concept, I try to be clear that TDD is not just about using unit tests to drive design at the internal code level.

Unit tests and the familiar red-green-refactor micro feedback cycle that we most commonly associate with TDD – thanks to 1,001 TDD katas that focus at that level – are actually just the innermost feedback loop of TDD. There are multiple outer feedback loops that drive the choice of unit tests. Otherwise, how would we know what unit tests we needed to write?

Outside the rapid unit test feedback loop, there’s a slower customer test feedback loop that drives our understanding of what our units need to do in a particular software usage scenario.

Outside the customer test feedback loop, there’s a slower-still feature feedback loop, which may require us to pass multiple customer tests to complete.

And, most important of all, there’s an even slower goal feedback loop that drives our understanding of what features might be required to solve a business problem.

On the Codemanship TDD course, pairs experience these feedback loops first hand. They’re asked to think of a real-world problem they believe might be solved with a simple piece of software. For example, “It’s hard to find good vegan takeaway in my local area.” We’re now in the first feedback loop of TDD – goals.

Then they imagine a headline feature – a proverbial button the user clicks that solves this problem: what would that feature do? Perhaps it displays a list of takeaway restaurants with vegan dishes on their menu that will deliver to my address, ordered by customer ratings. We’re now in the next feedback loop of TDD – features.

Next, we need to think about what other features the software might require to make the headline feature possible. For example, we need to gather details of takeaway restaurants in the area, including their vegan menus and their locations, and whether or not they’ll deliver to the customer’s address. Our headline feature might require a number of such supporting features to make it work.

We work with our customer to design a minimum feature set that we believe will solve their problem. It’s important to keep it as simple as we can, because we want a working prototype – one we can test with real end users in the real world – ready as soon as possible.

Next, for each feature – starting with the most important one, which is typically the headline feature – we drive out a precise understanding of exactly what that feature will do, using examples harvested from the real world. We might go online, or grab a phone book, and start checking out takeaway restaurants, collecting their menus and asking what postcode areas they deliver to. Then we would pick addresses in our local area and figure out – for each address – which restaurants would be available according to our criteria. We could search on sites like Google and Trip Advisor for reviews of the restaurants, or – if we can’t find reviews – invent some ratings, so we can describe how the result lists should be ordered.

We capture these examples in a format that’s human readable and machine readable, so we can collaborate directly with the customer on them and also pull the same data into automated executable tests.
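For instance – the data here is invented purely for illustration – the examples might be captured as a simple table that the customer can read and review, and that the automated tests can parse:

| customer address | restaurant      | vegan menu? | delivers? | rating | expected in results? |
| TW9 1AB          | Vegan Villa     | yes         | yes       | 4.5    | yes (1st)            |
| TW9 1AB          | Green Garden    | yes         | yes       | 4.1    | yes (2nd)            |
| TW9 1AB          | Distant Dhansak | yes         | no        | 4.8    | no                   |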

We’re now in the customer test feedback loop. Working one customer test at a time, we automate execution of that test so we can continuously check our progress in passing it.

For each customer test, we then test-drive an implementation that will pass the test, using unit tests to drive out the details of how the software will complete each unit of work required. If the happy path for our headline feature requires that we

  • calculate a delivery map location using the customer’s address
  • identify for each restaurant in our list if they will deliver to that location
  • filter the list to exclude the restaurants that don’t
  • order the filtered list by average customer rating

…then that’s a bunch of unit tests we might need to write. We’re now in the unit test feedback loop.
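As a rough sketch of what one of those unit tests might look like – the Restaurant class and the availableIn method here are invented for illustration – we might test-drive the filtering step along these lines:

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import org.junit.Test;

public class DeliveryFilterTest {

    // Invented for this sketch: a restaurant knows which postcode
    // districts it delivers to.
    static class Restaurant {
        final String name;
        final List<String> deliveryDistricts;

        Restaurant(String name, List<String> deliveryDistricts) {
            this.name = name;
            this.deliveryDistricts = deliveryDistricts;
        }

        boolean deliversTo(String district) {
            return deliveryDistricts.contains(district);
        }
    }

    // The unit under test: filter out restaurants that won't deliver.
    static List<Restaurant> availableIn(String district, List<Restaurant> all) {
        return all.stream()
                  .filter(restaurant -> restaurant.deliversTo(district))
                  .collect(Collectors.toList());
    }

    @Test
    public void excludesRestaurantsThatDontDeliverToTheCustomersDistrict() {
        Restaurant nearby = new Restaurant("Vegan Villa", Arrays.asList("TW9", "TW10"));
        Restaurant farAway = new Restaurant("Distant Dhansak", Arrays.asList("EC1"));

        List<Restaurant> available = availableIn("TW9", Arrays.asList(nearby, farAway));

        assertEquals(Arrays.asList(nearby), available);
    }
}

In classic red-green-refactor style, we’d write a test like this first, watch it fail, then implement just enough to make it pass before refactoring.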

Once we’ve completed our units and seen the customer test pass, we can move on to the next customer test, passing them one at a time until the feature is complete.

Many dev teams make the mistake of thinking that we’re done at this point. This is usually because they have no visibility of the real end goal. We’re rarely invited to participate in that conversation, to be fair. Which is a terrible, terrible mistake.

Once all the features – headline and supporting – are complete, we’re ready to test our minimum solution with real end users. We release our simple software to a representative group of tame vegan takeaway diners, who will attempt to use it to find good food. Heck, we can try using it ourselves, too. I’m all in favour of developers eating their own (vegan) dog food, because there’s no substitute for experiencing it for ourselves.

Our end users may report that some of the restaurants in their search results were actually closed, and that they had to phone many takeaway restaurants to find one open. They may report that when they ordered food, it took over an hour to be delivered to their address because the restaurant had been a little – how shall we say? – optimistic about their reach. They may report that they were specifically interested in a particular kind of cuisine – e.g., Chinese or Indian – and that they had to scroll through pages and pages of results for takeaway that was of no interest to find what they wanted.

We gather this real-world feedback and feed that back into another iteration, where we add and change features so we can test again to see if we’re closer to achieving our goal.

I like to picture these feedback loops as gear wheels. The biggest gear – goals – turns the slowest, and it drives the smaller features gear, which turns faster, driving the smaller and faster customer tests wheel, which drives the smallest and fastest unit tests wheel.

[Figure: the four gears of TDD – goals driving features, driving customer tests, driving unit tests]

It’s important to remember that the outermost wheel – goals – drives all the other wheels. They should not be turning by themselves. I see many teams where it’s actually the features wheel driving the goals wheel, and teams force their customers to change their goals to fit the features they’re delivering. Bad developers! In your beds!

It’s also very, very important to remember that the goals wheel never stops turning, because there’s actually an even bigger wheel making it turn – the real world – and the real world never stops turning. Things change, and there’ll always be new problems to solve, especially since, when we release software into the world, the world changes.

This is why it’s so very important to keep all our wheels well-oiled so they can keep on turning for as long as we need them to. If there’s too much friction in our delivery processes, the gears will grind to a halt – but the real world will keep on turning whether we like it or not.


Iterating Is The Ultimate Requirements Discipline

The title of this blog post is something I’ve been trying to teach teams for many years now. As someone who very much drank the analysis and design Kool Aid of the 1990s, I learned through personal experience on dozens of projects – and from observing hundreds more from a safe distance – that time spent agonising over the system spec is largely time wasted.

A requirements specification is, at best, guesswork. It’s our starter for ten. When that spec – if the team builds what’s been requested, of course – meets the real world, all bets are usually off. This is why teams need more throws of the dice – as many as possible, really – to get it right. Most of the value in our code is added after that first production release, if we can incorporate our users’ feedback.

Probably the best way to illustrate this effect is with some code. Take a look at this simple algorithm for calculating square roots.

public static double sqrt(double number) {
    if(number == 0) return 0;
    double t;

    // Initial guess: half the input
    double squareRoot = number / 2;

    // Newton's (Babylonian) method: average the guess with number/guess,
    // repeating until the estimate stops changing
    do {
        t = squareRoot;
        squareRoot = (t + (number / t)) / 2;
    } while ((t - squareRoot) != 0);

    return squareRoot;
}

When I mutation test this, I get a coverage report that says one line of code in this static method isn’t being tested.

[Figure: PIT mutation testing coverage report]

The mutation testing tool turned number / 2 into number * 2, and all the tests still passed. But it turns out that number * 2 works just as well as the initial input for this iterative algorithm. Indeed, number * number works, and number * 10000000 works, too. It just takes an extra few loops to converge on the correct answer.

It’s in the nature of convergent iterative processes that the initial input matters far less than the iterations. More frequent iterations will find a working solution sooner than any amount of up-front analysis and design.
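Here’s a rough sketch that makes this visible – it reuses the algorithm above, with the initial guess passed in as a parameter and the loop iterations counted (the exact counts printed will vary; it uses the same exact-equality termination as the method above):

public class InitialGuessDemo {

    // Same algorithm as above, but the initial guess is a parameter
    // and we count how many iterations convergence takes.
    static int iterationsToConverge(double number, double initialGuess) {
        double t;
        double squareRoot = initialGuess;
        int iterations = 0;
        do {
            t = squareRoot;
            squareRoot = (t + (number / t)) / 2;
            iterations++;
        } while ((t - squareRoot) != 0);
        return iterations;
    }

    public static void main(String[] args) {
        double number = 612.0; // arbitrary example input
        // Wildly different starting points converge on the same root;
        // only the number of iterations changes.
        System.out.println(iterationsToConverge(number, number / 2));
        System.out.println(iterationsToConverge(number, number * 2));
        System.out.println(iterationsToConverge(number, number * number));
        System.out.println(iterationsToConverge(number, number * 10000000));
    }
}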

This is why I encourage teams to focus on getting working software in front of end users sooner, and on iterating that solution faster. Even if your first release is way off the mark, you converge on something better soon enough. And if you don’t, you find out sooner that the medicine’s not working, and waste a lot less time and money barking up the wrong mixed metaphor.

What I try to impress on teams and managers is that building it right is far from a ‘nice-to-have’. The technical discipline required to rapidly iterate working software and to sustain the pace of releases is absolutely essential to building the right thing, and it just happens to be the same technical discipline that produces reliable, maintainable software. That’s a win-win.

Iterating is the ultimate requirements discipline.


How Agile Works

After 18 years of talk and hype about Agile, I find that it’s easy to lose sight of what Agile means in essence, and – importantly – how it works.

I see it as an inescapable reality of software development – or any sufficiently complex endeavour – that we shouldn’t expect to get it right first time. The odds of our first solution being the best solution are vanishingly small – the proverbial “hole in one”.

So we should expect to need to take multiple passes at a solution, so we can learn with each iteration of the design what works and what doesn’t and progressively get it less wrong.

If Agile is an algorithm, then it’s a search algorithm. It searches an effectively infinite solution space for a design that best fits our problem. The name of this search algorithm is evolution.

Starting with the simplest input, it tests that design against one or more fitness functions. The results of this test are fed back into the next iteration of the design. And around and around we go, adding a little, changing a little, and testing again and again.

In nature, evolution takes tiny steps forward. If a viable organism produces offspring that are too different from itself, chances are that the next generation will be non-viable. Evolution doesn’t take big, risky leaps. Instead, it edges forward one tiny, low-risk change at a time.

The Agile design process doesn’t make 100 changes to a solution and then test for fitness. It makes one or two changes, and sees how they work out before making more.
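To make the analogy concrete, here’s a toy sketch of that kind of search – invented for illustration, with a string standing in for the design and a match-counting fitness function standing in for feedback:

import java.util.Arrays;
import java.util.Random;

public class EvolutionarySearch {

    static final String TARGET = "working software";
    static final Random RANDOM = new Random();

    // Fitness function: how many characters match the target
    // (a stand-in for feedback from testing with real users)
    static int fitness(char[] candidate) {
        int score = 0;
        for (int i = 0; i < candidate.length; i++)
            if (candidate[i] == TARGET.charAt(i)) score++;
        return score;
    }

    // One tiny, low-risk change: mutate a single character
    static char[] mutate(char[] parent) {
        char[] child = parent.clone();
        child[RANDOM.nextInt(child.length)] = (char) (' ' + RANDOM.nextInt(95));
        return child;
    }

    public static void main(String[] args) {
        // Start with the simplest input
        char[] design = new char[TARGET.length()];
        Arrays.fill(design, ' ');

        int iterations = 0;
        while (fitness(design) < TARGET.length()) {
            char[] candidate = mutate(design);
            // Test the small change; keep it only if it scores better
            if (fitness(candidate) > fitness(design)) design = candidate;
            iterations++;
        }
        System.out.println(new String(design) + " (" + iterations + " iterations)");
    }
}

Make each mutation bigger – changing many characters at once – and the search takes far longer to converge, because most big changes make the candidate worse. That’s evolution’s low-risk, tiny-steps property at work.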

The speed of this search algorithm depends on three things:

  • The frequency of iterations
  • The amount of change in each iteration
  • The quality of feedback into the next iteration

If releases of working software are too far apart, we learn too slowly about what works and what doesn’t.

If we change too much in each release, we increase the risk of making the solution non-viable. We also take on a much higher risk and cost if a release has to be rolled back, as we lose a tonne of changes. It’s in the nature of software that it works as a connected whole. It’s easy to roll back 1 of 1 changes. It’s very hard to roll back 1 of 100 changes.

The lessons we learn with each release will depend on how it was tested. We find that feedback gathered from real end users using the software for real is usually the most valuable feedback. Everything else is just guesswork until our code meets the real world.

“Agile” teams who do weekly show-and-tells, but release working software into production less frequently, are missing out on the best feedback. Our code’s just a hypothesis until real people try to use it for real.

This is why our working relationship with our customer is so important – critical, in fact. Far too many teams who call themselves “Agile” don’t get to engage with the customer and end users directly, and the quality of the feedback suffers when we’re only hearing someone’s interpretation of what their feedback was. It works best when the people writing the code get to see and hear first-hand from the people using it.

For me, it’s not Agile if it doesn’t fully embrace those fundamental principles, because they’re the engine that makes it work. Agile teams do small, frequent releases of working software to real customers and end users who they work with directly.

To achieve this, there are some technical considerations. If it takes a long time to check that the software’s fit for release, then you will release less often. If it takes a long time to build and deploy the software, then you’ll release less often. If the changes get harder and harder to make, then you’ll release less often.

And even after we’ve solved the problem, the world doesn’t stand still. The most common effect of releasing software into the world is that – if the software gets used – the world changes. Typically, it changes in ways we weren’t expecting. Western democracies are still struggling with the impact of social media, for example. But on a smaller scale, releasing software into any environment can have unintended consequences.

It’s not enough to get it right once. We have to keep learning and keep changing the software, normally for its entire operational lifetime (which, on average, is about 8 years). So we have to be able to sustain the pace of releases pretty much indefinitely.

All this comes with a bunch of technical challenges that have to be met in order to achieve small, frequent releases at a sustainable pace. Most “Agile” teams fail to master these technical disciplines, and their employers resist making the investment in skills, time and tools required to build a “delivery engine” that’s up to the job.

Most “Agile” teams don’t have the direct working relationship with the people using their software required to gain the most useful feedback.

To put it more bluntly, most “Agile” teams aren’t really Agile at all. They mistake Jira and Jenkins and stand-up meetings and backlogs and burn-down charts for agility. None of those things are, in and of themselves, Agile.

Question is: are you?