Training Is Expensive. But Not As Expensive As Not Training.

However they choose to learn – from books, videos, blogs, online courses, instructor-led training, etc – by far the biggest cost in building a developer’s skills is the time that it takes.

I’ve worked with several thousand developers in more than 100 organisations over the last decade, so I have a perspective on how much time is really required. If you’re a manager, you may want to sit down for this.

Let’s start from scratch – a newly-minted programmer, just starting out in their dev career. They may have been programming for 6-12 months, perhaps at school or in a code club, or at home with a couple of books.

At this point, they’re a long way from being competent enough to be left alone to get on with writing real software for real end users in a real business. Typically, we find that it takes another 2-3 years. Before then, they’re going to need a lot of supervision – almost continuous – from a more experienced developer.

Of course, you could just leave them to their own devices, freeing up that more productive mentor to write their own code. But we know that the cost of maintaining code over its lifetime is an order of magnitude higher than the cost of writing it in the first place. An inexperienced developer’s code is likely to be far less maintainable, and therefore cost far more to live with.

This is the first hidden cost of learning, and it’s a big one.

But it’s not just about maintainability of code, of course. Inexperienced developers are less likely to know how to pin down requirements, and therefore more likely to build the wrong things, requiring larger amounts of rework. And this is rework of code that’s harder to change, so it’s expensive rework.

More mature organisations recognise this, and invest more to get their developers up to speed sooner. (Many developers, sadly, never learn to write maintainable code at any point in their career – it’s pot luck whether you happen to be exposed to good practices.)

Or you could exclusively hire more experienced developers, of course. But that plan has two fatal flaws. Firstly, hiring developers is very expensive and takes months. Secondly, if nobody hires inexperienced developers, where will these experienced developers come from?

So, you end up paying the piper one way or another. You can pay him for training. Or you can pay him for constant supervision. Or you can pay him for bug fixes and rework. Or you can pay him to try and recruit senior developers.

It turns out that training – I mean, really training – your developers is the cheapest option. It’s also the option least chosen.

On paper, it sounds like a huge investment. Some development organisations spend as much as 25% of their entire budget on learning and improving. Most organisations balk at this. It’s too much!

The lion’s share of this manifests in the developers’ time. They might, for example, give developers one day a week dedicated to learning and improving (and, as they become more senior, researching and experimenting). For a team of 6 developers, that adds up to £140,000 a year of developer time.

They might send teams on training courses. A group of 12 – the average Codemanship class size – on a 3-day course represents approximately £16,000 of dev time here in London.

These are some pretty big numbers. But only when you consider them without the context of the total you’re spending on development, and more importantly, the return your organisation gets from that total investment.

I often see organisations – of all sizes and shapes – brought to their knees by legacy products and systems, and their inability to change them, and therefore to change the way they do business.

Think not about the 25%. Think instead about what you’re getting from the other 75%.

I’m A Slacker, And Proud Of It

That probably sounds like an unwise thing to put on your CV, but it’s nevertheless true. I deliberately leave slack in my schedule. I aim not to be busy. And that’s why I get more done.

As counterintuitive as it sounds, that’s the truth. The less I fill my diary, the more I’m able to achieve.

Here’s why.

Flash back to the 1990s, and picture a young and relatively inexperienced lead software developer. Thanks to years of social conditioning from family, from school, from industry, and from the media, I completely drank the Hustle Kool-Aid.

Get up early. Work, work, work. Meetings, meetings, meetings. Hustle, hustle, hustle. That’s how you get things done.

I filled my long work days completely, and then went home and read and practiced and learned and answered emails and planned for the next work-packed day.

A friend and mentor recognised the signs. He recommended I read a book called Slack: Getting Past Burnout, Busywork & The Myth of Total Efficiency by Tom DeMarco. It changed my life.

Around this time, ‘Extreme Programming’ was beginning to buzz on the message boards and around the halls of developer conferences. These two revelations came at roughly the same time. It’s not about how fast you can go in one direction – following a plan. It’s about how easily you can change direction, change the plan. And for change to be easy, you need adaptive capacity – otherwise known as slack.

Here was me as a leader:

“Jason, we need to talk about this urgent thing”

“I can fit you in a week on Thursday”

“Jason, should we look into these things called ‘web services’?”

“No time, sorry”

“Jason, your trousers are on fire”

“Send me a memo and I’ll schedule some time to look into that”

At an organisational level, lack of adaptive capacity can be fatal. The more streamlined and efficient an organisation is at what it currently does, the less able it is to learn to do something else. Try turning a car at its absolute top speed.

At a personal level, the drive to be ever more efficient – to go ever faster – also has serious consequences. Aside from the very real risk of burning out – which ends careers and sometimes lives – it’s actually the dumbest way of getting things done. There are very few jobs left where everything’s known at the start, where nothing changes, and where just sticking to a plan will guarantee a successful outcome. Most outcomes are more complex than that. We need to learn our way towards them, adjusting as we go. And changing direction requires spare capacity: time to review, time to reflect, time to learn, time to adjust.

On a more serious note, highly efficient systems tend to be very brittle. Think of our rail networks. The more we seek to make them efficient, the bigger the impact on the network when something goes wrong. If we have a service that takes 30 minutes to get from, say, London Waterloo to Surbiton, and we run it every hour, then when there’s a delay there’s 30 minutes of slack to recover in. The next train doesn’t have to be late. If we run it every 30 minutes – at maximum “efficiency” – there’s no wiggle room. The next train will be late, and the one after that, and so on.
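The train arithmetic can be sketched as a toy simulation – a hedged illustration in Python with made-up numbers, not a real timetabling model:

```python
# Toy model of delay propagation on a single line.
# Each train is scheduled 'headway' minutes after the previous one;
# the journey takes 'journey' minutes, so the slack per slot is
# headway - journey. A delay eats into that slack; whatever isn't
# absorbed gets passed on to the next train.

def knock_on_delays(initial_delay, headway, journey, trains=5):
    slack = headway - journey
    delays = []
    delay = initial_delay
    for _ in range(trains):
        delays.append(delay)
        delay = max(0, delay - slack)  # next train inherits the unabsorbed delay
    return delays

# Hourly service (30 minutes of slack): a 20-minute delay is absorbed immediately.
print(knock_on_delays(20, headway=60, journey=30))  # [20, 0, 0, 0, 0]

# Every-30-minutes service (zero slack): the same delay hits every following train.
print(knock_on_delays(20, headway=30, journey=30))  # [20, 20, 20, 20, 20]
```

The point of the sketch is the second line of output: with zero slack, one delay never gets recovered.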

My days were kind of like that; if my 9am meeting overran, then I’d be late for my 9:20, and late for my 10am, and so on.

When we stretch ourselves and our systems to breaking point – which is what ‘100% efficiency’ really means – we end up being rigid (hard to change) and brittle (easy to break).

We’re seeing that now in many countries’ handling of the pandemic. After a decade of ideological austerity stripping away more and more resources from public services in the UK, forcing them to become ever more ‘efficient’, the appearance of the unexpected – though we really should have been expecting it at some point – has now broken many of those services, and millions of lives.

Since the late 90s, I’ve deliberately kept my diary loose. For example, I try very hard to avoid running two training courses in the same week. When someone else was managing my diary and my travel arrangements, they’d have me finishing work in one city and jumping on a late train or flight to the next city for another appointment the next morning. This went wrong very, very often. And there was no time to adjust at all. If you’ve ever tried to find a replacement laptop at 7am in a strange city, you’ll know what I’m talking about.

So I highly recommend reading Tom’s book, especially if you’re recognising the symptoms. And then you too can become a more productive slacker.

Codemanship Code Craft Videos

Over the last 6 months, I’ve been recording hands-on tutorials about code craft – TDD, design principles, refactoring, CI/CD and more – for the Codemanship YouTube channel.

I’ve recorded the same tutorials in JavaScript, Java, C# and (still being finished) Python.

As well as serving as a back-up for the Codemanship Code Craft training course, these series of videos form possibly the most comprehensive free learning resource on the practices of code craft available anywhere.

Each series has over 9 hours of video, plus links to example code and other useful resources.

Codemanship Code Craft videos currently available

I’ve heard from individual developers and teams who’ve been using these videos as the basis for their practice road map. What seems to work best is to watch a video, and then straight away try out the ideas on a practical example (e.g., a TDD kata or a small project) to see how they can work on real code.

In the next few weeks, I’ll be announcing Codemanship Code Craft Study Groups, which will bring groups of like-minded learners together online once a week to watch the videos and pair program on carefully designed exercises with coaching from myself.

This will be an alternative way of receiving our popular training, but with more time dedicated to hands-on practice and coaching, and more time between lessons for the ideas to sink in. It should also be significantly less disruptive than taking a whole team out for 3 days for a regular training course, and significantly less exhausting than 3 full days of Zoom meetings! Plus the price per person will be the same as the regular Code Craft course.

Proactive vs Reactive Learning (or “Why Your Company Only Does Easy Things”)

Imagine you lead an orchestra. The word comes down from on high “Tonight, our audience demands you play Rachmaninoff’s Piano Concerto No. 3. The future of the orchestra depends on it. We’re all counting on you.”

But your orchestra has no pianist. Nobody in your orchestra has even touched a piano, let alone taken lessons. You turn to the lead violin: “Quick. Google ‘how to play piano?’ “

Now, of course, there’s absolutely no chance that any human being could learn to play piano to that standard in a day. Or a week. Or a month. It takes a lot of time and a lot of work to get to that level. Years.

The inevitable result is that the orchestra will not be playing Rachmaninoff’s Piano Concerto No. 3 that evening. At least, not with the piano part. And that’s kind of essential to a piano concerto.

I see tech organisations in this situation on a regular basis. They discover a need that they’re simply nowhere near competent enough to deal with – something completely beyond the range of their current capabilities. “The users demand that the software learns from their interactions and anticipates their needs. Quick. Google ‘how to train a machine?'” “The customer demands a custom query language. Quick. Google ‘how to write a compiler?'” And so on.

Have we become so used to looking stuff up on Stack Overflow, I wonder, that we’ve forgotten that some of this stuff is hard? Some of these things take a long time to learn? Not everything is as easy as finding out what that error message means, or how to install a testing framework using NPM?

The latter style of learning is what some people call reactive. “I need to know this thing now, because it is currently impeding my progress.” And software development involves a lot of reactive learning. You do need to be rather good at looking stuff up to get through the typical working day, because there are just so, so many little details to remember.

Here’s the thing, though: reactive learning only really works for little details – things that are easy to understand and can be learned quickly. If the thing that impedes our progress is that we require a road bridge to be built to get us over that canyon, then that’s where we see the limits of reactive learning. It can remove small obstacles. Big problems that take a long time to solve require a different style of learning that’s much more proactive.

If your orchestra only plays the instruments needed for the exact pieces they’ve played up to that point, then there’s an increased likelihood that there’ll be gaps. If a dev team only has the exact skill set for the work they’ve done up to that point, there are likewise very likely to be gaps.

It’s hard, of course, to anticipate every possible future need and prepare months or years in advance for every eventuality. But some orgs have a greater adaptive capacity than others because their people are skilled beyond today’s specific requirements. That is to say, they’re better at solving problems because they have more ways of solving problems – more strings to their bow (or more keys to their piano, if you like).

Compiler design might sound like the kind of esoteric computer-sciency thing that’s unlikely to arise as a business need. But think of it this way: what’s our code built on? Is the structure of programs not the domain model we work in every day? While I’ve never designed a compiler, I have had numerous occasions when – to write a tool that makes my job easier – it’s been very useful to understand that model. Programmers who understand what programs are made of tend to be more effective at reasoning about code, and better at writing code that’s about code. We use those tools every day, but all tooling has gaps. I’ve yet to meet a static analysis tool, for example, that had all the rules I’d be interested in applying to code quality.
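As an illustration of what “code that’s about code” can look like, here’s a minimal sketch of a custom quality rule using Python’s ast module. The rule – flagging classes with too many methods – and its threshold are invented for the example, not taken from any particular tool:

```python
# A minimal custom code-quality check: treat the program itself as data
# by parsing it into an abstract syntax tree, then walk the tree looking
# for classes that exceed a method-count threshold. (This sketch only
# counts plain 'def' methods; async methods and nesting are ignored.)
import ast

def classes_with_too_many_methods(source, limit=10):
    tree = ast.parse(source)
    offenders = []
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            methods = [n for n in node.body if isinstance(n, ast.FunctionDef)]
            if len(methods) > limit:
                offenders.append((node.name, len(methods)))
    return offenders

example = """
class SmallClass:
    def a(self): pass
    def b(self): pass

class GodClass:
    def m1(self): pass
    def m2(self): pass
    def m3(self): pass
"""

print(classes_with_too_many_methods(example, limit=2))  # [('GodClass', 3)]
```

This is exactly the kind of rule a team can bolt onto its build once someone on the team understands what programs are made of.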

The most effective dev teams I’ve come into contact with have invested in custom tooling to automate repetitive donkey work at the code face. Some of them end up being open-sourced, and you may be using them yourself today. How do you think your test runner’s unit test discovery works?

Some books about stuff I had no immediate need to know but read anyway

Now, we could of course hire a pianist for our orchestra – one who already knows Rachmaninoff’s Piano Concerto No. 3. But guess what? It turns out pianists of that calibre are really difficult to find – probably because it takes years and years to get to that standard. (No shortage of people who manage pianists, of course.) And now you remember how you voted against all those “superfluous” music education programmes. If only you could have known that one day you might need a concert pianist. If only someone had warned you!

Well, here I am – warning you. Not all problems are easy. Some things take a long time to learn, and those things may crop up. And while nobody can guarantee that they will, this is essentially a numbers game. What are the odds that we have the capability – or can buy in the capability at short notice (which opens the lid on a can of worms I call “proactive recruitment”) – to solve this problem?

Most of the time, organisations end up walking away from the hard problems. They are restricted to the things most programmers can solve. This is not a good way to build competitive advantage, any more than sticking to works that don’t have a piano part is a good way to run a successful orchestra.

Enlightened organisations actively invest in developing capabilities they don’t see any immediate need for. Yes, they’re speculating. And speculation can be wasteful. Just like all uncertain endeavors can be wasteful. But there are usually signposts in our industry about what might be coming a year from now, a decade from now, and beyond.

And there are trends – the continued increase in available computing power is one good example. Look at what would be really useful but is currently computationally too expensive. In 1995, we saw continuous build and test cycles as highly desirable. But most teams still ran them overnight, because the hardware was about 1000 times slower than today. Now coming into vogue – as I predicted it would over a decade ago – more and more of us are building and testing (and even automatically inspecting) our code continuously in the background as we type it. That was totally foreseeable. As is the rise of Continuous Inspection as a more mainstream discipline off the back of it.

There are countless examples of long-established and hugely successful businesses being caught with their pants down by Moore’s Law.

Although digital photography was by no means a new invention, its sudden commercial viability 20 years ago over chemical photography nearly finished Kodak overnight. They had not speculated. They had not invested in digital photography capability. They’d been too busy being the market leader in film.

And then there was the meteoric rise of guitar amp simulators – a technology long sneered at (but begrudgingly used) by serious players, and less serious players like myself. The early generations of virtual amps didn’t sound great, and didn’t feel like playing through a real amp with real tubes. (Gotta love them tubes!) But – damn – they were convenient.

The nut they couldn’t crack was making it sound like it was being recorded through a real speaker cabinet with a real microphone. There was a potential solution: convolution, a mathematical process that combines two signals. The raw output of a guitar amp (real or virtual) can be convolved with an “impulse response” – a short audio sample, like the brief reverberation in a room after you click your fingers – captured from a cabinet and microphone, giving a strikingly convincing approximation of what that amp would sound like through those speakers, recorded with that microphone. Now, suddenly, virtual guitar amps were convenient and sounded good.
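To make the convolution idea concrete, here’s a minimal sketch in plain Python – no audio libraries, and the numbers are made up purely for illustration:

```python
# Each output sample is a weighted sum of recent input samples, with the
# impulse response (IR) supplying the weights - which is how a short
# recording of a cabinet-and-microphone "colours" the raw amp signal.

def convolve(signal, impulse_response):
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, w in enumerate(impulse_response):
            out[i + j] += s * w   # every input sample is smeared across the IR
    return out

dry = [1.0, 0.0, 0.0, 0.5, 0.0]   # raw amp output: a click, then a quieter click
ir = [0.6, 0.3, 0.1]              # invented "cabinet + mic" impulse response

print(convolve(dry, ir))  # [0.6, 0.3, 0.1, 0.3, 0.15, 0.05, 0.0]
```

Note the nested loop: naive convolution costs one multiplication per sample pair, which is why it was out of reach for real-time audio until hardware caught up. (Real products use FFT-based convolution to bring the cost down.)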

But up to that point, convolution had been too computationally expensive to be viable for playing and recording on commercially available hardware. And then, suddenly, it wasn’t. Cue mad dash by established amp manufacturers to catch up. And, to be fair to them, their virtual amp offerings are pretty spiffy these days. Was this on their radar, I wonder? Did the managers and the engineers see virtual amp technology looming on the horizon and proactively invest in developing that capability in exactly the way Kodak didn’t? Not before virtual amp leaders like Line 6 had taken a chunk of their market share, I suspect. And now convolution is everywhere. So many choices, so many market players old and new.

You see, it’s all well and good making hay while the sun shines. But when the weather turns, don’t end up being the ones who didn’t think to invest in an umbrella.

Introduction to Test-Driven Development Video Series

Over the last month, I’ve been recording screen casts introducing folks to the key ideas in TDD.

Each series covers 6 topics over 7 videos, with practical demonstrations instead of slides – just the way I like it.

They’re available in the four most used programming languages today: JavaScript, Java, C# and Python.

Of course, like riding a bike, you won’t master TDD just by watching videos. You can only learn TDD by doing it.

On our 2-day TDD training workshop, you’ll get practical, hands-on experience applying these ideas with real-time guidance from me.

Scheduled Online TDD Courses, May – July

I’ve got three publicly scheduled 2-day Test-Driven Development courses coming up:

I appreciate that we’re all going through some – how shall we say? – interesting times right now, and that many of you are getting used to working remotely for the first time.

Hopefully what you’re learning is that distributed teams can be as effective – sometimes even more effective – than co-located teams.

For every advantage of co-location lost, we gain another advantage when we remove the distractions of open plan offices, of long life-sapping commutes, and of never-ending meetings because we just happen to be in the same building.

And online training can be just as effective – sometimes even more effective – than onsite training. We are, after all, looking at and working on code. Screen sharing and webcams work just as well as sitting next to each other once you get the hang of it.

It’s pretty much the exact same training I do in person, only you can enjoy it from the comfort of your own home, without the daily commute either side and without the many distractions of being in your office. And the catering can be great – depending on what you’ve got in your fridge.

To find out more about the course, visit http://codemanship.co.uk/tdd.html


Lunchtime Learnings in TDD

Most software developers are now working from home due to the COVID-19 pandemic. If you’re one of them, now might be an opportunity to hone your code craft skills so that when things return to normal your market value as a dev will be higher. (Data from itjobswatch.co.uk suggests that developers earn 20% more on average if they have good TDD experience).

We appreciate that things are a bit up in the air at the moment, so taking 2 days out for our TDD course might be a non-starter. That’s why we’ve split it into six 90-minute weekly workshops, run at lunchtimes.

The first workshop is TDD in JavaScript, which starts at 12:30pm GMT next Tuesday. Details and registration can be found here.

This will be followed by TDD in C# on Thursday, 12:30pm GMT.

Workshops in Python and Java will start the following Monday and Friday lunchtimes, so keep your eye on @codemanship for announcements.

The Experience Paradox

One sentiment I hear very often from managers is how very difficult it is to hire experienced developers. This, of course, is a self-fulfilling prophecy. If you won’t hire developers without experience, how will inexperienced developers get the jobs they need to gain that experience?

I simultaneously hear from new developers – recent graduates, boot camp survivors, and so on – that they really struggle to land that first development job because they don’t have the experience.

When you hear both of these at the same time, it becomes a conversation. Unfortunately, it’s a conversation conducted from inside soundproof booths, and I’m seeing no signs that this very obvious situation will be squared any time soon.

I guess we have to add it to the list of things that everyone knows is wrong, but everyone does anyway. (Like adding more developers to teams when the schedule’s slipping.)

Organisations should hire for potential as well as experience. People who have the potential to be good developers can learn from people with experience. It’s a match made in heaven, and the end result is a larger pool of experienced developers. This problem could fix itself, if we were of a mind to let it. We all know this. So why don’t we do it?

At the other end of the spectrum, I hear many, many managers say “This person is too experienced to be a developer”. And at the same time, I hear many, many very experienced developers struggle to find work. This, too, is a problem that creates itself. Typically, there are two reasons why managers rule out the most experienced developers:

  • They expect to get paid more (because they usually achieve more)
  • They won’t put up with your bulls**t

Less experienced developers may be more malleable in terms of how much – or how little – you can pay them, how much unpaid overtime they may be willing to tolerate, and how willing they might be to cut corners when ordered to. They may have yet to learn what their work is really worth to you. They may have yet to experience burnout. They may have yet to live with a large amount of technical debt.

Developers with 20+ years’ experience, who’ve been around the block a few times and know the score, don’t fit into the picture of developers as fungible resources.

By freezing out inexperienced developers and very experienced developers, employers create the exact situation they complain endlessly about – lack of experienced developers. If it were my company and my money on the line, I’d hire developers with decades of experience specifically to mentor the inexperienced developers with potential I’d also be hiring.

Many employers, of course, argue that training up developers is too much of a financial risk. Once they’re good enough, won’t they leave for a better job? The clue is in the question, though. They leave for a better job. Better than the one you’re offering them once they qualify. Don’t just be a stepping stone, be a tropical island – a place they would want to stay and work.

If you think training up developers is going to generate teams of cheaper developers who’ll work harder and longer for less, then – yes – they’ll leave at the first opportunity. Finding a better job won’t be hard.

Code, Meet World #NoBacklogs

Do we dare to imagine Agile without backlogs? How would that work? How would we know what to build in this iteration? How would we know what’s coming in future iterations?

Since getting into Agile Software Development (and its precursors, like DSDM and Extreme Programming), I’ve gradually become convinced that most of us have been doing it fundamentally wrong.

It’s a question of emphasis. What I see is thousands of teams working their way through a product plan. They deliver increments of the plan every week or three, and the illusion of being driven by feedback instead of by the plan is created by showing each increment to a “customer” and asking “Waddaya think?”

In the mid-90s, I worked as the sole developer on a project where my project managers – two of them! – would make me update their detailed plan every week. It was all about delivering the plan, and every time the plan bumped into reality, the whole 6-month plan had to be updated all over again.

This was most definitely plan-driven software development. We planned, planned, and planned again. And at no point did anybody suggest that maybe spending 1 day a week updating a detailed long-term plan might be wasted effort.

Inspired by that experience, I investigated alternative approaches to planning that could work in the real world. And I discovered one in a book by Tom Gilb called Principles of Software Engineering Management. Tom described an approach to planning that chimed with my own experience of how real projects worked in the real world.

It’s a question of complexity. Weather is very, very complicated. And this makes weather notoriously difficult to predict in detail. The further ahead we look, the less accurate our predictions tend to be. What will the weather be like tomorrow? We can take a pretty good guess with the latest forecasting models. What will the weather be like the day after tomorrow? We can take a guess, but it’s less likely to be accurate. What will the weather be like 6 weeks from now? Any detailed prediction is very likely to be wrong. That’s an inherent property of complex systems: they’re unpredictable in the long term.

Software development is also complex. Complex in the code, its thousands of component parts, and the interactions between them. Complex in the teams, which are biological systems. Complex in how the software will interact with the real world. There are almost always things we didn’t think of.

So the idea that we can predict what features a software system will need in detail, looking months – or even just weeks ahead – seemed a nonsense to me.

But, although complex systems can be inherently unpredictable in detail, they tend to be – unless they’re completely chaotic – roughly predictable in general terms.

We can’t tell you what the exact temperature will be outside the Dog & Fox in Wimbledon Village at 11:33am on August 13th 2025, but we can pretty confidently predict that it will not be snowing.

And we can’t be confident that we’ll definitely need a button marked “Sort A-Z” on a web page titled “Contacts” that displays an HTML table of names and addresses, to be delivered 3 months from now in the 12th increment of a React web application. But we can be confident that users will need to find an address to send their Christmas card to.

The further we look into the future, the less detailed our predictions need to become if they are to be useful in providing long-term direction. And they need to be less detailed to avoid the burden of continually updating a detailed plan that we know is going to change anyway.

This was a game-changer for me. I realised that plans are perishable goods. Plans rot. Curating a detailed 6-month plan, to me, was like buying a 6-month supply of tomatoes. You’ll be back at the supermarket within a fortnight.

I also realised that you’ve gotta eat those tomatoes before they go bad. It’s the only way to know if they’re good tomatoes. Features delivered months after they were conceived are likely rotten – full of untested assumptions piled on top of untested assumptions about what the end users really need. In software, details are very highly biodegradable.

So we need to test our features before they go off. And the best place to test them is in the real world, with real end users, doing real work. Until our code meets the real world, it’s all just guesswork.

Of course, in some domains, releasing software into production every week – or every day even – is neither practical nor desirable. I wouldn’t necessarily recommend it for a nuclear power station, for example.

And in these situations where releases create high risk, or high disruption to end users, we can craft simulated release environments where real end users can try the software in an as-real-as-we-can-make-it world.

If detailed plans are only likely to survive until the next release, and if the next release should be at most a couple of weeks away, then arguably we should only plan in detail – i.e., at the level of features – up to the next release.

Beyond that, we should consider general goals instead of features. In each iteration, we ask “What would be the simplest set of features that might achieve this goal?” If the feature set is too large to fit in that iteration, we can break the goal down. We build that feature set, and release for end user testing in the real (or simu-real) world to see if it works.

Chances are, it won’t work. It might be close, but usually there’s no cigar. So we learn from that iteration and feed the lessons back in to the next. Maybe extra features are required. Maybe features need tweaking. Maybe we’re barking up the wrong tree and need to completely rethink.

Each iteration is therefore about achieving a goal (or a sub-goal), not about delivering a set of features. And the output of each release is not features, but what we learn from watching real end users try to achieve their goal using the features. The output of software releases is learning, not features.

This also re-frames the definition of “done”. We’re not done because we delivered the features. We’re only done when we’ve achieved the goal. Maybe we do that in one iteration. Maybe we need more throws of the dice to get there.

So this model of software development sees cross-functional teams working as one to achieve a goal, potentially making multiple passes at it, and working one goal at a time. The goal defines the team. “We’re the team who enables diners to find great Chinese food”. “We’re the team who puts guitar players in touch with drummers in their town.” “We’re the team who makes sure patients don’t forget when their repeat prescriptions need to be ordered.”

Maybe you need a programmer to make that happen. Maybe you need a web designer to make that happen. Maybe you need a database expert to make that happen. The team is whoever you need to achieve that goal in the real world.

Now I look at the current state of Agile, and I see so many teams munching their way through a list of features, and so few teams working together to solve real end users’ problems. Most teams don’t even meet real end users, and never see how the software gets used in the real world. Most teams don’t know what problem they’re setting out to solve. Most teams are releasing rotten tomatoes, and learning little from each release.

And driving all of this, most teams have a dedicated person who manages that backlog of features, tweaking it and “grooming” it every time the plan needs to change. This is no different to my experiences of updating detailed project plans in the 90s. It’s plan-driven development, plain and simple.

Want to get unstuck from working through a detailed long-term plan? Then ditch the backlog, get with your real customer, and start solving real problems together in the real world.