## Code Craft : Part III – Unit Tests are an Early Warning System for Programmers

Before I was introduced to code craft, my way of checking that the programs I wrote worked was to run them, use them, and see if they did what I expected.

Consider this command line program I wrote that does some simple maths:
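The original listing isn't reproduced here, but based on the commands and output below, maths.py might have looked something like this (a sketch – the function bodies, error messages and argument handling are my assumptions):

```python
# maths.py - a sketch of a simple command-line maths program
import math
import sys


def sqrt(x):
    """Square root; negative inputs are invalid."""
    if x < 0:
        raise ValueError("Cannot calculate the square root of a negative number")
    return math.sqrt(x)


def factorial(n):
    """Factorial; only defined for non-negative whole numbers."""
    if n < 0 or n != int(n):
        raise ValueError("Factorial is only defined for non-negative integers")
    result = 1
    for i in range(2, int(n) + 1):
        result *= i
    return result


def floor(x):
    return float(math.floor(x))


def ceiling(x):
    return float(math.ceil(x))


if __name__ == "__main__" and len(sys.argv) == 3:  # guard so importing is safe
    command, arg = sys.argv[1], sys.argv[2]
    if command == "sqrt":
        print("The square root of", arg, "=", sqrt(float(arg)))
    elif command == "factorial":
        print(arg, "factorial =", factorial(float(arg)))
    elif command == "floor":
        print("The floor of", arg, "=", floor(float(arg)))
    elif command == "ceiling":
        print("The ceiling of", arg, "=", ceiling(float(arg)))
```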

I can run this program with different inputs to check if the results of the calculations are correct.

C:\Users\User\PycharmProjects\pymaths>python maths.py sqrt 4.0
The square root of 4.0 = 2.0

C:\Users\User\PycharmProjects\pymaths>python maths.py factorial 5
5 factorial = 120

C:\Users\User\PycharmProjects\pymaths>python maths.py floor 4.7
The floor of 4.7 = 4.0

C:\Users\User\PycharmProjects\pymaths>python maths.py ceiling 2.3
The ceiling of 2.3 = 3.0

Testing my code by using the program is fine if I want to check that it works first time around.

These four test cases, though, don’t give me a lot of confidence that the code really works for all the inputs my program has to handle. I’d want to cover more examples, perhaps using a list to remind me what tests I should do.

• sqrt 0.0 = 0.0
• sqrt -1.0 -> should raise an exception
• sqrt 1.0 = 1.0
• sqrt 4.0 = 2.0
• sqrt 6.25 = 2.5
• factorial 0 = 1
• factorial 1 = 1
• factorial 5 = 120
• factorial -1 -> should raise an exception
• factorial 0.5 -> should raise an exception
• floor 0.0 = 0.0
• floor 4.7 = 4.0
• floor -4.7 = -5.0
• ceiling 0.0 = 0.0
• ceiling 2.3 = 3.0
• ceiling -2.3 = -2.0

Now, that’s a lot of test cases (and we haven’t even thought about how we handle incorrect command line arguments yet).

To run the program and try all of these test cases once seems like quite a bit of work, but if it’s got to be done, it’s got to be done. (The alternative is not doing all these tests, and then how do we know our program really works?)

But what if I need to change my maths code? (And if we know one thing about code, it’s that it changes). Then I’ll need to perform these tests again. And if I change the code again, I have to do the tests again. And again. And again. And again.

If we don’t re-test the code after we’ve changed it, we risk not knowing if we’ve broken it. I don’t know about you, but I’m not happy with the idea of my end users being lumbered with broken software. So I re-test the software every time it changes.

It took me about 5-6 minutes to perform all of these tests using the command line. That’s 5-6 minutes of testing every time I need to change my code. And maybe 5-6 minutes of testing doesn’t sound like a lot, but this program only has about 40 lines of code. Extrapolate that testing time to 1,000 lines of code. Or 10,000 lines. Or a million.

Testing programs by using them – what we call manual testing – simply doesn’t scale up to large amounts of code. The time it takes to re-test our program when we’ve changed the code becomes an obstacle to making those changes safely. If it takes hours or days or even weeks to re-test it, then change will be slow and difficult. It may even be impractical to change it at all, and far too many programs lots of people rely on end up in this situation. The time taken to test our code has a profound impact on the cost of making changes.

Studies have shown that the effort required to fix a bug rises dramatically the longer that bug goes undiscovered.

If it takes a week to re-test our program, then the cost of fixing the bugs that testing discovers will be much higher than if we’d been alerted a minute after we made that error. The average programmer can introduce a lot of bugs in a week.

Creating good working software depends heavily on our ability to check that the code’s working very frequently – almost continuously, in fact. So we have to be able to perform our tests very, very quickly. And that’s not possible when we perform them manually.

So, how could we speed up testing to make changes quicker and easier? Well we’re computer programmers – so how about we write a computer program to test our code?
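The test script itself isn't shown here, so here's a sketch of what it might have looked like. The helper names assert_equals and assert_raises match the notes that follow; the maths functions are inlined as stand-ins so the sketch runs by itself (in the real project they'd be imported from the maths module):

```python
# maths_test.py - a sketch of a hand-rolled test script
import math


# Stand-ins for the maths module's functions (normally imported from maths.py)
def sqrt(x):
    if x < 0:
        raise ValueError("negative input")
    return math.sqrt(x)


def factorial(n):
    if n < 0 or n != int(n):
        raise ValueError("invalid input")
    result = 1
    for i in range(2, int(n) + 1):
        result *= i
    return result


tests_run = 0
tests_passed = 0


def assert_equals(test_name, expected, actual):
    """Pass if the actual result matches the expected result."""
    global tests_run, tests_passed
    tests_run += 1
    if expected == actual:
        tests_passed += 1
    else:
        print(test_name, "failed - expected", expected, ", actual", actual)


def assert_raises(test_name, action):
    """Pass if invoking the action raises an exception."""
    global tests_run, tests_passed
    tests_run += 1
    try:
        action()
        print(test_name, "failed - expected Exception to be raised")
    except Exception:
        tests_passed += 1


print("Running math tests...")
assert_equals("sqrt of 0.0", 0.0, sqrt(0.0))
assert_raises("sqrt of -1.0", lambda: sqrt(-1.0))
assert_equals("sqrt of 4.0", 2.0, sqrt(4.0))
assert_equals("factorial of 5", 120, factorial(5))
assert_raises("factorial of -1", lambda: factorial(-1))
print("Tests run:", tests_run)
print("Passed:", tests_passed, ", Failed:", tests_run - tests_passed)
```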

A few things to note about my test code:

• Each test case has a unique name to make it easy to identify which test failed
• There are two helper functions that ask if the actual result matches the expected result – either an expected output, or an expected exception that should have been raised
• The script counts the total number of tests run and the number of tests passed, so it can summarise the result of running this suite of tests
• My test code isn’t testing the whole program from the outside, like I was doing at the command line. Some code just tests the sqrt function, some just tests the factorial function, and so on. Tests that only test parts of a program are often referred to as unit tests. A ‘unit’ could be an individual function or a method of a class, or a whole class or module, or a group of these things working together to do a specific job. Opinions vary, but what we mostly all agree is that a unit is a discrete part of a program, and not the whole program.

Testing units directly, rather than the whole program from the outside, has three big advantages:

1. When a test fails, it’s much easier to pinpoint the source of the problem
2. Less code is executed in order to check a specific piece of logic works, so unit tests tend to run much faster
3. By invoking functions directly, there’s usually less code involved in writing a unit test

When I run my test script, if all the tests pass, I get this output:

Running math tests...
Tests run: 16
Passed: 16 , Failed: 0

Phew! All my tests are passing.

This suite of tests ran in a fraction of a second, meaning I can run them as many times as I like, as often as I want. I can change a single line of code, then run my tests to check that change didn’t break anything. If I make a boo-boo, there’s a high chance my tests will alert me straight away. We say that these automated tests give me high assurance that – at any point in time – my code is working.

This ability to re-test our code after just a single change can make a huge difference to how we program. If I break the code, very little has changed since the code was last working, so it’s much easier to pinpoint what’s gone wrong and much easier to fix it. If I’ve made 100 changes before I re-test the code, it could be a lot of work to figure out which change(s) caused the problem. I have found, after 25 years of writing unit tests, that I need to spend very little time in my debugger.

If any tests fail, I get this kind of output:

Running math tests...
sqrt of 0.0 failed - expected 1.0 , actual 0
sqrt of -1.0 failed - expected Exception to be raised
Tests run: 16
Passed: 14 , Failed: 2

It helpfully tells me which tests failed, and what the expected and actual results were, to make it easier for me to pin down the cause of the problem. Since I only made a small change to the code since the tests last all passed, it’s easy for me to fix.

Notice that I’ve grouped my tests by the function that they’re testing. There’s a bunch of tests for the sqrt function, a bunch for factorial, and more for floor and for ceiling. As my maths program grows, I’ll add many more tests. Keeping them all in one big module will get unmanageable, so it makes sense to split them out into their own modules. That makes them easier to manage, and also allows us to run just the tests for, say, sqrt, or just the tests for factorial – if we only changed code in those parts of the program – if we want to.

Here I’ve split the tests for sqrt into their own test module, which we call a test fixture. It can be run by itself, or can be invoked as part of the main test suite along with the other test fixtures.
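That split isn't shown, but it might look roughly like this (a sketch – the run() entry point is an assumption, and the helper and sqrt stand-in are inlined so the fixture runs by itself):

```python
# sqrt_test.py - the sqrt tests split out into their own test fixture (a sketch)
import math


def sqrt(x):  # stand-in for the maths module's sqrt function
    if x < 0:
        raise ValueError("negative input")
    return math.sqrt(x)


results = {"run": 0, "passed": 0}


def assert_equals(test_name, expected, actual):
    results["run"] += 1
    if expected == actual:
        results["passed"] += 1
    else:
        print(test_name, "failed - expected", expected, ", actual", actual)


def run():
    """Run all the sqrt tests; invoked directly, or from the main test suite."""
    results["run"] = results["passed"] = 0
    assert_equals("sqrt of 0.0", 0.0, sqrt(0.0))
    assert_equals("sqrt of 1.0", 1.0, sqrt(1.0))
    assert_equals("sqrt of 4.0", 2.0, sqrt(4.0))
    assert_equals("sqrt of 6.25", 2.5, sqrt(6.25))
    return results


if __name__ == "__main__":
    run()
    print("Tests run:", results["run"])
    print("Passed:", results["passed"], ", Failed:",
          results["run"] - results["passed"])
```

The main test suite can import this module and call run(), or we can run the file on its own to test just sqrt.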

The two helper functions I wrote that check and record the result of each test – assert_equals and assert_raises – could be reused in other suites of tests, since they’re quite generic. What I’ve created here could be the beginnings of a reusable library for writing test scripts in Python.

As my maths program grows, and I add more and more tests, there’ll likely be more helper functions I’ll find useful. But, in computing, before you set out to write a reusable library to help you with something, it’s usually a good idea to check if someone’s already written one.

For a problem as common as automating program tests, you won’t be surprised that such libraries already exist. Python has several, but the most commonly used test automation library actually comes as part of Python’s standard library – unittest (formerly known as PyUnit).

Here are the sqrt tests I wrote, translated into unittest tests.
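The translated fixture might look roughly like this (a sketch – in the real project sqrt would be imported from the maths module, but it's inlined here so the example runs by itself):

```python
# sqrt_test.py - the sqrt tests translated to unittest (a sketch)
import math
import unittest


def sqrt(x):  # stand-in for the maths module's sqrt function
    if x < 0:
        raise ValueError("Cannot calculate the square root of a negative number")
    return math.sqrt(x)


class SqrtTest(unittest.TestCase):

    def test_sqrt_0(self):
        self.assertEqual(0.0, sqrt(0.0))

    def test_sqrt_1(self):
        self.assertEqual(1.0, sqrt(1.0))

    def test_sqrt_4(self):
        self.assertEqual(2.0, sqrt(4.0))

    def test_sqrt_6_25(self):
        self.assertEqual(2.5, sqrt(6.25))

    def test_sqrt_minus1(self):
        # also checks the exception's message, using a regular expression
        self.assertRaisesRegex(ValueError, "negative",
                               lambda: sqrt(-1.0))


if __name__ == "__main__":
    unittest.main(exit=False)
```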

There’s a lot to unittest, but this test fixture uses just some of its basic features.

To create a test fixture, you just need to declare a class that inherits from unittest.TestCase. Individual tests are methods of your fixture class that start with test_ – so that unittest knows it’s a test – and they accept no parameters (other than self) and return no data.

The TestCase class defines many useful helper methods for making assertions about the result of a test. Here, I’ve used assertEqual and assertRaisesRegex.

assertEqual takes an expected result value as the first parameter, followed by the actual result, and compares the two. If they don’t match, the test fails.

assertRaisesRegex is like my own assert_raises, except that it also matches the error message the exception is raised with using regular expressions – so we can check that it was the exact exception we expected.

I don’t need to write a test suite that directly invokes this test fixture’s tests. The unittest test runner will examine the test code, find the test fixtures and test methods, and build the suite out of all the tests it finds. This saves me a fair amount of coding.

I can run the sqrt tests from the command line:

C:\Users\User\PycharmProjects\pymaths\test>python -m unittest sqrt_test.py
.....
----------------------------------------------------------------------
Ran 5 tests in 0.002s

OK

If any tests fail, unittest will tell me which tests failed and provide helpful diagnostic information.

C:\Users\User\PycharmProjects\pymaths\test>python -m unittest sqrt_test.py
F...F
======================================================================
FAIL: test_sqrt_0 (sqrt_test.SqrtTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Users\User\PycharmProjects\pymaths\test\sqrt_test.py", line 8, in test_sqrt_0
    self.assertEqual(1.0, sqrt(0.0))
AssertionError: 1.0 != 0

======================================================================
FAIL: test_sqrt_minus1 (sqrt_test.SqrtTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Users\User\PycharmProjects\pymaths\test\sqrt_test.py", line 13, in test_sqrt_minus1
    lambda: sqrt(1))
AssertionError: Exception not raised by <lambda>

----------------------------------------------------------------------
Ran 5 tests in 0.002s

FAILED (failures=2)

I can run all of the tests in my project folder at the command line using unittest’s test discovery feature.

C:\Users\User\PycharmProjects\pymaths\test>python -m unittest discover -p "*_test.py"
................
----------------------------------------------------------------------
Ran 16 tests in 0.004s

OK

The test runner finds all tests in files matching ‘*_test.py’ in the current folder and runs them for me. Easy as peas!

You may have noticed that my tests are in a subfolder C:\Users\User\PycharmProjects\pymaths\test, too. It’s a very good idea to keep your test code separate from the code it’s testing, so you can easily see which is which.

Note how each test method has a meaningful name that identifies the test case, just like the test names in my hand-rolled unit tests before.

Note also that each test only asks one question – Is the sqrt of four 2? Is the factorial of five 120? And so on. When a test fails, it can only really be for one reason, which makes debugging much, much easier.

When I’m programming, I put in significant effort to make sure that as much of my code is tested by automated unit tests as possible. And, yes, this means I may well end up writing as much unit test code as solution code – if not more.

A common objection inexperienced programmers have to unit testing is that they have to write twice as much code. Surely this takes twice as long? Surely we could add twice as many features if we didn’t waste time writing unit test code?

Well, here’s the funny thing: as our program grows, we tend to find – if we rely on slow manual testing to catch the bugs we’ve introduced – that the proportion of the time we spend fixing bugs grows too. Teams who do testing the hard way often end up spending most of their time bug fixing.

Because bugs can cost exponentially more to fix the longer they go undiscovered, we find that the effort we put in up-front to write fast tests that will catch them more than pays for itself later on in time saved.

Sure, if the program you’re writing is only ever going to be 100 lines long, extensive unit tests might be a waste (although I would still write a few, as I’ve found even on relatively simple programs some unit testing has saved me time). But most programs are much larger, and therefore unit tests are a good idea most of the time. You wouldn’t fit a smoke alarm in a tiny Lego house, but in a real house that people live in, you might be very grateful for one.

One final thought about unit tests. Consider this code that calculates rental prices of movies based on their IMDb ratings:
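The listing isn't reproduced here, but based on the description it might look roughly like this (a sketch – the class name, the URL, the base price of £3.00 and the rating thresholds are all my assumptions):

```python
# pricer.py - a sketch: pricing that talks to the web service directly
import json
import urllib.request


class Pricer:
    BASE_PRICE = 3.0  # assumed base rental price in GBP

    def price(self, imdb_id):
        # Fetch the video's info from an external web service (slow!).
        # The URL here is illustrative, not a real endpoint.
        url = "https://example.com/videos/" + imdb_id
        with urllib.request.urlopen(url) as response:
            video_info = json.loads(response.read())
        rating = video_info["imdb_rating"]
        if rating > 8.0:    # high IMDb rating: charge a £1 premium
            return self.BASE_PRICE + 1.0
        if rating < 4.0:    # low IMDb rating: knock £1 off
            return self.BASE_PRICE - 1.0
        return self.BASE_PRICE
```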

This code fetches information about a video, using its IMDb ID, from a web service. Using that information, it decides whether to charge a premium of £1 because the video has a high IMDb rating or knock off £1 because the video has a low IMDb rating.

If we wrote a unittest test for this, when it runs our code will connect to an external web service to fetch information about the video we’re pricing. Connecting to web services is slow in comparison to things that happen entirely in memory. But we want our unit tests to run as fast as possible.

How could we test that prices are calculated correctly without connecting to this external service?

Our pricing logic requires movie information that comes from someone else’s software. Could we fake that somehow, so a rating is available for us to test with?

What if, instead of the price method connecting directly to the web service itself, we were to provide it with an object that fetches video information for it? i.e., what if we made fetching video information somebody else’s problem? The object is passed in as a parameter of Pricer‘s constructor like this.
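A sketch of that version (the parameter name, method name and prices are my assumptions, guided by the surrounding description):

```python
# pricer.py - a sketch: the video info fetcher injected via the constructor
class Pricer:
    BASE_PRICE = 3.0  # assumed base rental price in GBP

    def __init__(self, video_info):
        # video_info can be any object with a fetch_video_info(imdb_id)
        # method that returns the video's title and IMDb rating
        self.video_info = video_info

    def price(self, imdb_id):
        title, rating = self.video_info.fetch_video_info(imdb_id)
        if rating > 8.0:    # high IMDb rating: charge a £1 premium
            return self.BASE_PRICE + 1.0
        if rating < 4.0:    # low IMDb rating: knock £1 off
            return self.BASE_PRICE - 1.0
        return self.BASE_PRICE
```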

Because videoInfo is passed as a constructor parameter, Pricer only knows what that object looks like from the outside. It knows it has to have a fetch_video_info method that accepts an IMDb ID as a parameter and returns the title and IMDb rating of that video.

Thanks to Python’s duck typing – if it walks like a duck and quacks like a duck etc – any object that has a matching method should work inside Pricer, including one that doesn’t actually connect to the web service.

We could write a class that provides whatever title and IMDb rating we tell it to, and use that in a unit test for Pricer.
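A sketch of such a stub and its unit test (the names are my assumptions; the Pricer class is inlined here so the example runs by itself):

```python
# pricer_test.py - a sketch: unit testing Pricer with a stub
import unittest


class Pricer:  # inlined so this sketch runs stand-alone
    BASE_PRICE = 3.0

    def __init__(self, video_info):
        self.video_info = video_info

    def price(self, imdb_id):
        title, rating = self.video_info.fetch_video_info(imdb_id)
        if rating > 8.0:
            return self.BASE_PRICE + 1.0
        if rating < 4.0:
            return self.BASE_PRICE - 1.0
        return self.BASE_PRICE


class VideoInfoStub:
    """Looks like the real video info service from the outside,
    but returns whatever canned data we give it."""

    def __init__(self, title, rating):
        self.title = title
        self.rating = rating

    def fetch_video_info(self, imdb_id):
        return self.title, self.rating


class PricerTest(unittest.TestCase):

    def test_high_rated_video_costs_1_pound_more(self):
        pricer = Pricer(VideoInfoStub("A Classic", 9.0))
        self.assertEqual(4.0, pricer.price("tt0000001"))

    def test_low_rated_video_costs_1_pound_less(self):
        pricer = Pricer(VideoInfoStub("A Stinker", 2.5))
        self.assertEqual(2.0, pricer.price("tt0000002"))
```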

When I run this test, it checks the pricing logic just as thoroughly as if we’d fetched the video information from the real web service. How video titles and ratings are obtained has nothing to do with how rental prices are calculated. We achieved flexibility in our design by cleanly separating those concerns. (Separation of Concerns is fancy software architecture-speak for “make it someone else’s problem”.)

The object that fetches video information is passed in to the Pricer. We call this dependency injection. Pricer depends on VideoInfo, but because the dependency is passed in as a parameter from the outside, the calling code can decide which implementation to use – the stub, or the real thing.

A stub is a kind of what we call a test double. It’s an object that looks like the real thing from the outside, but has a different implementation inside. The job of a stub is to provide test data that would normally come from some external source – like video titles and IMDb ratings.

Test doubles require us to introduce flexibility into our code, so that objects (or functions) can use each other without knowing exactly which implementation they’re using – just as long as they look the same as the real thing from the outside. This not only helps us to write fast-running unit tests, but is good design generally. What if we need to fetch video information from a different web service? Because we provide video information by dependency injection, we can easily swap in a different web service with no need to rewrite Pricer.

This is what we really mean by ‘separation of concerns’ – we can change one part of the program without having to change any of the other parts. This can make changing code much, much easier.

Let’s look at one final example that involves an external dependency. Consider this code that totals the number of copies of a song sold on a digital download service, then sends that total to a web service that compiles song charts at the end of each day.
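The code isn't shown here, but a sketch consistent with the description might look like this (the class name and the shape of the download records are my assumptions; sales_of and charts.send are the names used in this post):

```python
# sales.py - a sketch: totalling song sales and reporting them to the charts
class SongSales:

    def __init__(self, downloads, charts):
        # downloads: a list of (song, copies) sales records (assumed shape);
        # charts: an object with a send(song, total) method that connects
        # to the external charts web service
        self.downloads = downloads
        self.charts = charts

    def sales_of(self, song):
        """Total the copies sold of a song, then send that total to the charts."""
        total = sum(copies for s, copies in self.downloads if s == song)
        self.charts.send(song, total)  # talks to an external web service
        return total
```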

How can we unit test that song sales are calculated correctly without connecting to the external web service? Again, the trick here is to separate those two concerns – to make sending sales information to the charts somebody else’s problem.

Before we write a unit test for this, notice how this situation is different to the video pricing example. Here, our charts object doesn’t return any data. So we can’t use a stub in this case.

When we want to swap in a test double for an object that’s going to be used, but doesn’t return any data that we need to worry about, we can choose from two other kinds of test double.

A dummy is an object that looks like the real thing from the outside, but does nothing inside.

In this test, we don’t care if the sales total for the song is sent to the charts. It’s all about calculating that total.
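A minimal sketch of a dummy in use (SongSales is inlined so the example runs by itself):

```python
# a sketch: testing the sales total with a dummy charts object
class SongSales:  # inlined so this sketch runs stand-alone
    def __init__(self, downloads, charts):
        self.downloads = downloads
        self.charts = charts

    def sales_of(self, song):
        total = sum(copies for s, copies in self.downloads if s == song)
        self.charts.send(song, total)
        return total


class DummyCharts:
    """Looks like the real charts client from the outside, but does nothing."""

    def send(self, song, total):
        pass  # deliberately empty - no web service connection


sales = SongSales([("Song A", 3), ("Song B", 1), ("Song A", 2)], DummyCharts())
# The test only cares that the total is calculated correctly
assert sales.sales_of("Song A") == 5
```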

But what if we do care if the total is sent to the charts once it’s been calculated? How could we write a test that will fail if charts.send isn’t invoked?

A mock object is a test double that remembers when its methods are called so we can test that call happened. Using the built-in features of the unittest.mock library, we can create a mock charts object and verify that send is invoked with the exact parameter values we want.

In this test, we create an instance of the real Charts class that connects to the web service, but we replace its send method with a MagicMock that records when it’s invoked. We can then assert at the end that when sales_of is executed, charts.send is called with the correct song and sales total.
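That test might be sketched like this (the Charts class body is my assumption, and SongSales is inlined so the example runs by itself; MagicMock and assert_called_once_with are real unittest.mock features):

```python
# a sketch: verifying charts.send is invoked, using unittest.mock
from unittest.mock import MagicMock


class Charts:
    """Stand-in for the real charts client that connects to the web service."""

    def send(self, song, total):
        raise RuntimeError("would connect to the real web service")


class SongSales:  # inlined so this sketch runs stand-alone
    def __init__(self, downloads, charts):
        self.downloads = downloads
        self.charts = charts

    def sales_of(self, song):
        total = sum(copies for s, copies in self.downloads if s == song)
        self.charts.send(song, total)
        return total


charts = Charts()
charts.send = MagicMock()  # replace the real method with a mock that records calls

sales = SongSales([("Song A", 3), ("Song A", 2)], charts)
sales.sales_of("Song A")

# Fails if send wasn't called exactly once with these argument values
charts.send.assert_called_once_with("Song A", 5)
```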

So there you have it. Unit tests – tests that test part of our program, and execute without connecting to any external resources like web services, file systems, databases and so on – are fast-running tests that allow us to test and re-test our program very frequently, ensuring as much as possible that our code’s always working.

As you’ll see in later posts, good, fast-running unit tests are an essential foundation of code craft, enabling many of the techniques we’ll be covering next.

## Code Craft : Part II – Version Control is Seat Belts for Programmers

When I was starting out as a professional programmer, I took the basic precaution of occasionally backing up my code so that if I took a “wrong turn”, I could get back to something that kind of sort of worked. I used to do this the old-fashioned way of creating a daily folder and copying the code into it.

But, it turns out, a day is a lot of work to lose. When things did go wrong – which happened regularly – I’d only go back to the previous day’s code as a last resort. Usually, I’d try and fix the problem, which took up a lot of time and typically had disappointing results.

Also, my hard drive very quickly filled up with back-ups if I didn’t get into the habit of deleting older copies. Maybe I changed 5 lines of code that day; making an entire copy of 500,000 lines of code every time is pretty wasteful. And if I made back-ups more often, the drive would fill up faster. In the 1990s, disk space was still expensive.

The effect of making infrequent back-ups on the way I worked was quite profound. When you risk losing a day’s work when you try something new, you take less risks. Fear tends to stifle creativity and innovation.

Really, I should have been making back-ups far more frequently – at least every hour or so – and the only way for that to be practical on a PC with a 100MB hard drive is to not back-up all the source code every single time, but only the parts that have changed.

I was several days into attempting to write something that enabled this when a more experienced programmer told me that such tools already existed. (That happens a lot in computing.)

His team were using what he called a “version control system” or VCS – in this case a tool called CVS (Concurrent Versions System). CVS was relatively new at the time (it was first released in 1990), but I later learned that version control systems had been around since the early 1970s.

A code project was copied to a central repository for the team to access, and they could “check in” any changes they made to source code files, and CVS stored the changes as a “delta”, keeping a history of all revisions to every file in the repository. Using the original source files and the deltas, CVS could recreate any version of the code from any point in its history.

I very quickly realised that this was super-useful. Not only could you get back to any version of the code with ease, without filling your hard drive up with copies, but you could also see the entire history of the code and analyse how it has evolved. Think of a version history as being a bit like a computer program’s own personal diary, logging every interesting change that’s been made – potentially going back years. Much can be learned by reading diaries.

I’ve been using version control systems ever since. And over the next 25 years, they have become very widespread. Most professional programmers use version control these days. So it’s curious – and a little alarming – that many schools and universities don’t teach students how to use them (or even tell students they exist).

The most popular VCS in use today is Git. Git is what we call a distributed version control system (DVCS). As well as a central repository of source code files, it also allows programmers to keep their own local repository, into which they can track changes they make on their own computer, before “pushing” those changes to the central repository to share with the other programmers on the team.

A simple workflow for version control with Git might go something like this (using the Git command line program in Bash):

• Initialise a folder on your computer to be a local Git code repository

User@DESKTOP-KSHARRN MINGW64 /c/python_projects/maths
$ git init
Initialized empty Git repository in C:/python_projects/maths/.git/

• In my maths folder, I create a Python script called sqrt.py.

• If I want this file to be version-controlled, I need to add it to the Git repository.

User@DESKTOP-KSHARRN MINGW64 /c/python_projects/maths (master)
$ git add sqrt.py

• sqrt.py is put into a “staging area” that contains all of the file changes (files added, files modified, files deleted) for my first commit to my local Git repository. Let’s commit this with a meaningful message that helps identify what version of the code this is.

User@DESKTOP-KSHARRN MINGW64 /c/python_projects/maths (master)
$ git commit -m "This is my first commit"
[master (root-commit) 3e39188] This is my first commit
1 file changed, 14 insertions(+)
create mode 100644 sqrt.py

• If I make a change to sqrt.py…

• …and then commit that change…

User@DESKTOP-KSHARRN MINGW64 /c/python_projects/maths (master)
$ git commit -m 'Changed input to be square rooted' --all
[master 75b5aef] Changed input to be square rooted
1 file changed, 1 deletion(-)

• …we add a new version of the source file to our local repository. We can see the version history of our repository using Git’s log command.

User@DESKTOP-KSHARRN MINGW64 /c/python_projects/maths (master)
$ git log
commit f113f51030eab07943b9e8f9493d17a2209544d2
Author: Jason Gorman <jason.gorman@codemanship.com>
Date: Wed Oct 2 08:39:12 2019 +0100

    Changed input to be square rooted

commit 3e391889c26574357b35f687413f2eb5d9e4f2c1
Author: Jason Gorman <jason.gorman@codemanship.com>
Date: Wed Oct 2 08:17:51 2019 +0100

    This is my first commit

• If I then make a boo-boo in this code…

• …I can get back to either of those versions by using Git’s reset command.

User@DESKTOP-KSHARRN MINGW64 /c/python_projects/maths (master)
$ git reset --hard f113f51030eab07943b9e8f9493d17a2209544d2
HEAD is now at f113f51 Changed input to be square rooted

• And I can go back to any version in the code’s history if I want. I just tell it which version – the long identifier Git assigns to each commit – I want to go back to. Ultimate undo-ability!

User@DESKTOP-KSHARRN MINGW64 /c/python_projects/maths (master)
$ git reset --hard 3e391889c26574357b35f687413f2eb5d9e4f2c1
HEAD is now at 3e39188 This is my first commit

Remember that Git is what we call a distributed version control system (DVCS), so the version history of my sqrt.py file is stored in a local repository on my computer. I can also create a shared remote repository – for example, on github.com – so other programmers can access the files and their histories and contribute to my maths project.

• First, I create a new repository using my GitHub account. I’ve called it pymaths.

• Then I copy the remote repository’s unique URL.

• I can now add this remote repository for use with my local repository.

User@DESKTOP-KSHARRN MINGW64 /c/python_projects/maths (master)
$ git remote add origin https://github.com/jasongorman/pymaths.git

• Now I can push the commits I made to my local repository to the remote repository, where other programmers can access them.

User@DESKTOP-KSHARRN MINGW64 /c/python_projects/maths (master)
$ git push origin master
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Delta compression using up to 4 threads
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 365 bytes | 365.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To https://github.com/jasongorman/pymaths.git
* [new branch] master -> master

Now we can see that our commits are showing in the pymaths GitHub repository. (Bear in mind that I reset the code back to the original commit, so that’s the one showing as current.)

When multiple programmers are contributing to a repository, they need a way to get changes other people have made and merge them into their own working directories. Let’s say someone else on my team adds a function for calculating factorials, and pushes their change to the pymaths repository. To merge their changes into my local copy, I can use the Git pull command.

User@DESKTOP-KSHARRN MINGW64 /c/python_projects/maths (master)
$ git pull origin master
remote: Enumerating objects: 5, done.
remote: Counting objects: 100% (5/5), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 3 (delta 0), reused 3 (delta 0), pack-reused 0
Unpacking objects: 100% (3/3), done.
From https://github.com/jasongorman/pymaths
3e39188..23578b7 master -> origin/master
Updating 3e39188..23578b7
Fast-forward
sqrt.py | 19 ++++++++++++++++++-
1 file changed, 18 insertions(+), 1 deletion(-)

That overwrites my local copy of sqrt.py with the new version from pymaths, which is fine if I haven’t also made pending changes to that file. What if we’ve both changed that file? That can lead to what’s called a merge conflict.

Imagine one of my team mates adds a function for calculating the ceiling of a number, and pushes that change to pymaths.

And at the same time, I add a function to my local copy for calculating the floor of a number and commit that to my local repository.

When I pull the changes from the remote repository, Git will attempt to merge the two versions of the file automatically, but in this case it fails.

User@DESKTOP-KSHARRN MINGW64 /c/python_projects/maths (master)
$ git pull origin master
remote: Enumerating objects: 5, done.
remote: Counting objects: 100% (5/5), done.
remote: Compressing objects: 100% (1/1), done.
remote: Total 3 (delta 1), reused 3 (delta 1), pack-reused 0
Unpacking objects: 100% (3/3), done.
From https://github.com/jasongorman/pymaths
87973fb..421547e master -> origin/master
Auto-merging sqrt.py
CONFLICT (content): Merge conflict in sqrt.py
Automatic merge failed; fix conflicts and then commit the result.

To resolve the conflict, I just need to edit the auto-merged file, and then commit and push the finished version to pymaths.

In a sense, version control is like seat belts for programmers. It gives us a level of safety when we’re creating and evolving our programs that means we can invent and try new things with much greater confidence that – whatever happens – there’s a way back if it goes wrong.

Version control systems like Git make it much easier for programmers to collaborate on the same code projects, even if they are on other sides of the world. They have built-in features that help us to manage conflicting changes, and are an essential ingredient in individual and team efforts of all sizes.

Here are some basic good habits for version control that I’ve been successfully applying for 25 years:

1. Unless it’s something genuinely trivial that you’re going to throw away, always start by putting your code project under version control
2. Check in your changes frequently – at least every hour (I do it many times an hour, usually). The less often you do it, the more work you might lose.
3. When working with others, merge their changes frequently into your local copy so you can keep up to date with what’s in the repository, and spot conflicts early when they’re easier to fix.
4. Use meaningful commit messages to help you and other programmers easily identify what’s changed in that version of the code.
5. Very importantly, don’t check in code that doesn’t work. If you break the program, and check in your changes, and then your team mates merge those changes into their copies, then you’ve just done the programming equivalent of giving everyone in the team your cold.

But how do we know it works before we commit? Well, we test it. Yes. Every single time before we commit. If it fails our tests, then we don’t commit it.

“Gee. Testing the program every time we want to commit our changes? And you say we should be committing frequently? That sounds like we’ll be spending all our time just testing our code!”

Yup. You’ll be testing and re-testing your program many times an hour. And in the next blog post I’ll be showing you how that can be achieved using fast-running automated unit tests.

## Code Craft : Part I – Why We Need Code Craft

I started programming many years ago at the beginning of the home computing boom of the 1980s. Computers back then had hundreds of thousands of times less memory and processing power than your smartphone does today, so the programs we wrote for our ZX Spectrums, Commodore 64s and Acorn Micros were necessarily small.

Under the limitations of having just a few kilobytes of memory for our code, programming was a pretty manageable affair. We could fit the whole program in our heads, so to speak.

When Dad brought home the IBM-compatible PC he’d been using in the office – it had been replaced with a newer model – all that changed. Memory leapt from the 64K of our C64 to 2MB, and disk space added a further 20MB for program files.

Like a plant that gets re-potted in a much larger container, my code had a sudden growth spurt – luxuriating in the seemingly unlimited resources of the PC, and in the vastly superior programming languages and tools that were available for it.

In code, size is everything. A simple Commodore 64 game written in hundreds of lines of BASIC is a very different proposition to a game with thousands or tens of thousands of lines of code. That won’t fit inside your head. As the code grows, you quickly hit your limitations of brain power and human memory.

Debugging large programs is really hard. There’s just so much more that can go wrong, and so many more places to look for the sources of problems. It’s the proverbial “needle in a haystack”. Debugging a C64 program was like looking for a needle in a matchbox.

The time taken to sufficiently test a 300-line program is peanuts compared to the testing you need to do for a 30,000-line program. Minutes turn into days or even weeks of clicking buttons to see if the program does what you expect it to across hundreds of functions, each potentially with dozens of different scenarios to consider.

The time it takes to test our programs is a huge factor in how easy it is to change the code. Studies show that the longer a bug goes undiscovered, the more it will cost to fix it. If I make a boo-boo in the code and spot it straight away, it’s a moment’s work to correct it. If that boo-boo makes it on to end users’ computers, fixing it is a much bigger deal. It’s all about the size of the loop we have to go through to schedule time to do the fix, examine the code and find the cause of the bug, fix the bug, re-test the program and then release it back to the end users with the fix in place. If re-testing takes weeks, then that’s one expensive bug fix!

We’re not just talking about coding errors, either. The most common kind of program bug – and one of the most expensive to fix – is when we wrote the wrong code. That is, we misunderstood – or just guessed at – what the end users wanted to do with the program and built the wrong thing. In my early career as a programmer, that happened a lot. Writing code is a very expensive way to find out what the users want, especially if your code is hard to change afterwards.

Changing large programs without breaking something is super hard. In a computer program, all the pieces are connected – directly or indirectly – so changing just one line of code can accidentally break a whole bunch of stuff. And as the program grows, with more and more interconnected parts, it gets harder and harder.

While I happily made the leap to much more memory, and much faster processors, and much more grown-up programming languages, I can now look back with the benefit of nearly 40 years of coding experience and see that the software I was writing back then was rigid – difficult to change – and brittle – easy to break – and buggy as heck.

I was a kid who’d been building tiny houses with Lego, who’d now progressed to building much bigger houses with real bricks and real cement and real timber – all the while thinking that building real houses was just like building Lego houses. NEWS FLASH: it isn’t.

There’s a lot, lot more that can go wrong building real houses, especially if you expect people to live in them. When you build people-sized houses, you have to think about a bunch of things that you don’t need to think about when you build Lego houses. Steps have to be taken to ensure the structural integrity of a house at all stages of construction and beyond. Otherwise, it can collapse under its own weight, causing expensive damage and even loss of life.

Ditto with large computer programs. There’s a bunch of things we need to think about for a 10,000-line or a 100,000-line or a 10,000,000-line program that just aren’t an issue in a 200-line program.

In particular, we have to take steps to avoid having our big program collapse under its own weight, when making a single change causes it to break in unexpected, and potentially dangerous, ways – depending on who’s using it and what they’re using it for.

It was only a few years into my career as a professional programmer that I learned how to write code that was reliable and easy to change without breaking – code that people can safely live in (both the end users, and other programmers coming to change the code as their users’ needs changed.)

And here’s the thing; while we’re building our software, we are also living in the code. Like plasterers or carpenters or electricians working inside a house while it’s being built, we too are at risk from the thing collapsing on us. This is something it took me quite a while to appreciate. It’s not just about releasing good code that other people can live in. It’s about keeping the code that way while we’re writing it.

Many studies done on computer programming over the last few decades clearly show that code that’s rigid and brittle and buggy takes longer to get working in the first place.

Sure, for those first – easy – few hundred lines of code, we can go fast and don’t need to take a lot of care. But the effect of the size of our growing code hits us sooner than we might think, and soon we’re spending all our time trying to debug it to make it usable enough for a release. Many programming projects end with a “stabilisation” phase, where programmers work long hours debugging thousands and thousands of lines of code, trying to hit a deadline to make the software good enough for people to use.

We call this code-and-fix programming. We write a whole bunch of code as fast as we can. Then we test it and find a tonne of bugs we didn’t realise we’d introduced. Then we spend a whole lot more time trying to remove those bugs. (And very probably introducing all new bugs while we do that. And around we go.)

Code-and-fix may work on small programs, but it’s often a disaster on larger programs. It’s by far the most time-consuming and expensive way to get programs of any significant size working. And, even after all the debugging, it tends to produce programs that still contain many bugs.

After we’ve released our program, we’re not done yet. It’s in the nature of computer programs that when people use them, they see ways they could be improved. That first release is usually just the start of a long learning process, figuring out what users really need. So they’ll want a second release, and a third, and a fourth, and on and on it tends to go. Programming at scale is a marathon, not a sprint.

If our program code is rigid and brittle, changing it without breaking it is going to be very difficult. On the second go round, it takes even more time and effort to produce a working program. On the third, harder still. Far too many programs hit a barrier where the code is so hard and so risky to change that nobody dares try. At this point we face a difficult decision.

Do we leave it as it is, and the users will just have to struggle on without the changes they need? This is something that holds a lot of businesses back. If you’re Acme Supermarket, and you need to change how your tills work – but you can’t change the software – you have a big problem.

Do we write the program again from scratch? This means that all the program functions your end users currently rely on will have to be rebuilt from the ground up just to get one or two new functions. That’s like building a completely new house just so you can add a porch. Very expensive, and the users will have to wait a long time for their changes.

Do we abandon it altogether, and leave the end users to find their own solutions (or pack up and go home)? I’ve seen businesses do this in extreme cases. The part of their business that relied on legacy software that was hopelessly out of date for their needs, but too expensive to change and too expensive to rewrite, was simply shut down. “We can’t keep up with the competition and their whizzy new software, so we give up.”

And you might think that I’m just talking about computer programs that are written for businesses. But the reality is that, in my own personal programming projects, I’ve faced these decisions because I didn’t take enough care over my code. A 10,000-line program I wrote in C to help with a hobby music project at university had to be abandoned because I was spending all my spare time fixing it. It just got too much. So I ditched not just the code, but the whole project. Months of my life wasted.

Over the 37 years I’ve been coding, I shudder to think how much of my time was wasted debugging code that needn’t have been buggy in the first place. How much time did I waste redoing work I had to throw away because I made infrequent back-ups? How much time did I waste rewriting whole sections of programs because I hadn’t understood what the end users were asking me to build? How much time did I waste trying to understand my own code after I’d come back to it weeks or months later? How much time did I waste re-testing programs by hand?

Most importantly, what else could I have done with all that wasted time?

We’re talking thousands and thousands of hours, probably. Thousands of hours of debugging. Thousands of hours redoing stuff I broke because I didn’t have a recent back-up. Thousands of hours staring at code trying to understand what it does. Thousands of hours going round in circles testing programs by running them and clicking lots of buttons, fixing bugs, and then finding a bunch of new bugs when I re-tested it.

All that changed when I learned some basic code craft after 13 years programming the hard way.

I learned to use version control, checking my code in at least once an hour, so if I hit a dead end, I can easily get back to a working version, losing at most an hour’s work.

I learned to write fast-running automated unit tests, so I could re-test large programs with hundreds of functions in minutes or even seconds, alerting me immediately if I break something.
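To make that concrete, here’s a minimal sketch of what automated unit tests for the maths program from earlier might look like, using Python’s built-in unittest module. The factorial function here is a hypothetical stand-in; the real program’s functions may be named and structured differently.

```python
import math
import unittest


def factorial(n):
    # Hypothetical stand-in for the maths program's factorial function.
    if n < 0 or int(n) != n:
        raise ValueError("factorial requires a non-negative integer")
    result = 1
    for i in range(2, int(n) + 1):
        result *= i
    return result


class MathsTests(unittest.TestCase):
    def test_sqrt_of_4_is_2(self):
        self.assertEqual(math.sqrt(4.0), 2.0)

    def test_sqrt_of_6_25_is_2_5(self):
        self.assertEqual(math.sqrt(6.25), 2.5)

    def test_factorial_of_5_is_120(self):
        self.assertEqual(factorial(5), 120)

    def test_factorial_of_negative_raises(self):
        with self.assertRaises(ValueError):
            factorial(-1)
```

Running these with `python -m unittest` re-checks every case in a fraction of a second, which is exactly the kind of fast feedback loop that makes frequent re-testing practical.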

I learned to write code that people can understand, keeping it as simple as possible and carefully choosing names for functions and data that clearly explained what that piece of code does.

I learned to break large programs down into small, manageable and easy-to-understand chunks (modules) that do one distinct job, and how to compose large programs out of these simple pieces so that a change to one doesn’t break a bunch of connected modules.

I learned how to change code safely in tiny micro-steps, running my unit tests after each change, to keep the code working at all times.

I learned to communicate with end users using examples to pin down exactly what they’re asking for, and I translate those examples directly into tests so I can get immediate feedback on whether the program is doing what the customer wants.
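For illustration, suppose the customer gives us examples of a discount rule: orders over 150 get 5% off. Everything here is made up for the sketch (the rule, the names, the numbers); the point is that the customer’s examples become the test data directly.

```python
# Hypothetical examples agreed with the customer, written down as data:
# (order total, expected discount)
CUSTOMER_EXAMPLES = [
    (0.0, 0.0),      # no order, no discount
    (150.0, 0.0),    # at the threshold, still no discount
    (200.0, 10.0),   # over the threshold: 5% of 200
]


def discount(order_total):
    # Assumed rule, derived from the examples above: 5% off orders over 150.
    return order_total * 0.05 if order_total > 150.0 else 0.0


# Each customer example is checked directly against the code.
for total, expected in CUSTOMER_EXAMPLES:
    assert discount(total) == expected, (total, expected)
```

If the customer later adds a new example that the code fails, we know immediately that our understanding of the rule was wrong, before any end user does.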

I learned to continually test that changes I’ve made to the code work with changes any other programmers have made at the same time, and to test that it not only works on my computer, but on other computers, too.

And I learned to use automated scripts to build and deploy programs so that a change the users ask for in the morning can be running on their computers by lunchtime if necessary. Since the code is always working, and since I and other programmers on my team are continuously merging our changes into our version control repository, this means that our code is always ready to be released.

These techniques:

• Version Control
• Unit Testing
• Simple Design
• Modular Design
• Refactoring
• Specification By Example
• Test-Driven Development
• Continuous Integration
• Continuous Delivery

…are the foundations of code craft. Master them, and you’ll waste far less time debugging, less time staring at code trying to understand what it does, less time redoing work because you didn’t make a recent back-up or because you misunderstood the requirements, and less time rewriting entire programs from scratch – leaving far more time for the fun stuff, like inventing, being creative, making your end users happy, and having a life outside programming.

## Code Craft’s Value Proposition: More Throws Of The Dice

Evolutionary design is a term that’s used often, not just in software development. Evolution is a way of solving complex problems, typically with necessarily complex solutions (solutions that have many interconnected/interacting parts).

But that complexity doesn’t arise in a single step. Evolved designs start very simple, and then become complex over many, many iterations. Importantly, each iteration of the design is tested for its “fitness” – does it work in the environment in which it operates? Iterations that don’t work are rejected; the iterations that work best are selected, and become the input to the next iteration.

We can think of evolution as being a search algorithm. It searches the space of all possible solutions for the one that is the best fit to the problem(s) the design has to solve.

It’s explained best perhaps in Richard Dawkins’ book The Blind Watchmaker. Dawkins wrote a computer simulation of a natural process of evolution, where 9 “genes” generated what he called “biomorphs”. The program would generate a family of biomorphs – 9 at a time – with a parent biomorph at the centre surrounded by 8 children whose “DNA” differed from the parent by a single gene. Selecting one of the children made it the parent of a new generation of biomorphs, with 8 children of their own.

You can find a recreation and more detailed explanation of the simulation here.

The 9 genes of the biomorphs define a universe of 118 billion possible unique designs. The evolutionary process is a walk through that universe, moving just one space in any direction with each iteration, because just one gene changes from one generation to the next. From simple beginnings, complex forms can quickly arise.

A brute force search might enumerate all possible solutions, test each one for fitness, and select the best out of that entire universe of designs. With Dawkins’ biomorphs, this would mean testing 118 billion designs to find the best. And the odds of selecting the best design at random are 1:118,000,000,000. There may, of course, be many viable designs in the universe of all possible solutions. But the chances of finding one of them with a single random selection – a guess – are still very small.

For a living organism, which has many orders of magnitude more elements in its genetic code and therefore an effectively infinite solution space to search, brute force simply isn’t viable. And the chances of landing on a viable genetic code in a single step are effectively zero. Evolution solves problems not by brute force or by astronomically improbable chance, but by small, perfectly probable steps.

If we think of the genes as a language, then it’s not a huge leap conceptually to think of a programming language in the same way. A programming language defines the universe of all possible programs that could be written in that language. Again, the chances of landing on a viable working solution to a complex problem in a single step are effectively zero. This is why Big Design Up-Front doesn’t work very well – arguably at all – as a solution search algorithm. There is almost always a need to iterate the design.

Natural evolution has three key components that make it work as a search algorithm:

• Reproduction – the creation of a new generation that has a virtually identical genetic code
• Mutation – tiny variances in the genetic code with each new generation that make it different in some way to the parent (e.g., taller, faster, better vision)
• Selection – a mechanism for selecting the best solutions based on some “fitness” function against which each new generation can be tested
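The three components above can be sketched as a tiny search loop. This is an illustrative toy, not Dawkins’ biomorph program: the “genome” is a list of numbers, the fitness function is closeness to a made-up target design, and each generation reproduces the parent with one small mutation per child, selecting the fittest.

```python
import random

random.seed(1)  # fixed seed so the toy run is repeatable

TARGET = [7, 2, 9, 4, 1, 8, 3, 6, 5]  # hypothetical "ideal" design


def fitness(genome):
    # Higher is better: negative total distance from the target design.
    return -sum(abs(g - t) for g, t in zip(genome, TARGET))


def mutate(parent):
    # Reproduction with mutation: copy the parent, nudge one gene slightly.
    child = list(parent)
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child


genome = [0] * 9  # simple beginnings
for generation in range(500):
    # Selection: breed 8 children, keep the fittest of parent + children.
    brood = [mutate(genome) for _ in range(8)]
    genome = max(brood + [genome], key=fitness)

print(genome, fitness(genome))
```

Because the parent is always in the selection pool, fitness never goes backwards; the design creeps towards the target one small, probable step at a time, which is the whole point of the argument above.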

The mutations from one generation to the next are necessarily small. A fitness function describes a fitness landscape that can be projected onto our theoretical solution space of all possible programs written in a language. Programs that differ in small ways are more likely to have very similar fitness than programs that are very different. Make one change to a working solution and, chances are, you’ve still got a working solution. Make 100 changes, and the risk of breaking things is much higher.

Evolutionary design works best when each iteration is almost identical to the last, with only one or two small changes. Teams practising Continuous Delivery with a One-Feature-Per-Release policy, therefore, tend to arrive at better solutions than teams who schedule many changes in each release.

And within each release, there’s much more scope to test even smaller changes – micro-changes of the kind enacted in, say, refactoring, or in the micro-iterations of Test-Driven Development.

Which brings me neatly to the third component of evolutionary design: selection. In nature, the Big Bad World selects which genetic codes thrive and which are marked out for extinction. In software, we have other mechanisms.

Firstly, there’s our own version of the Big Bad World. This is the operating environment of the solution. A Point Of Sale system is ultimately selected or rejected through real use in real shops. An image manipulation program is selected or rejected by photographers and graphic designers (and computer programmers writing blog posts).

Real-world feedback from real-world use should never be underestimated as a form of testing. It’s the most valuable, most revealing, and most real form of testing.

Evolutionary design works better when we test our software in the real world more frequently. One production release a year is way too little feedback, way too late. One production release a week is far better.

Once we’ve established that the software is fit for purpose through customer testing – ideally in the real world – there are other kinds of testing we can do to help ensure the software stays working as we change it. A test suite can be thought of as a codified set of fitness functions for our solution.

One implication of the evolutionary design process is that, on average, more iterations will produce better solutions. And this means that faster iterations tend to arrive at a working solution sooner. Species with long life cycles – e.g., humans or elephants – evolve much slower than species with short life cycles like fruit flies and bacteria. (Indeed, bacteria evolve so fast that it’s been observed happening in the lab.) This is why health organisations have to guard against new viruses every year, but nobody’s worried about new kinds of shark suddenly emerging.

For this reason, anything in our development process that slows down the iterations impedes our search for a working solution. One key factor in this is how long it takes to build and re-test the software as we make changes to it. Teams whose build + test process takes seconds tend to arrive at better solutions sooner than teams whose builds take hours.

More generally, the faster and more frictionless the delivery pipeline of a development team, the faster they can iterate and the sooner a viable solution evolves. Some teams invest heavily in Continuous Delivery, and get changes from a programmer’s mind into production in minutes. Many teams under-invest, and changes can take weeks or months to reach the real world where the most useful feedback is to be had.

Other factors that create delivery friction include the maintainability of the code itself. Although a system may be complex, it can still be built from simple, single-purpose, modular parts that can be changed much faster and more cheaply than complex spaghetti code.

And while many BDUF teams focus on “getting it right first time”, the reality we observe is that the odds of getting it right first time are vanishingly small, no matter how hard we try. I’ll take more iterations over a more detailed requirements specification any day.

When people exclaim of code craft “What’s the point of building it right if we’re building the wrong thing?”, they fail to grasp the real purpose of the technical practices that underpin Continuous Delivery like unit testing, TDD, refactoring and Continuous Integration. We do these things precisely because we want to increase the chances of building the right thing. The real requirements analysis happens when we observe how users get on with our solutions in the real world, and feed back those lessons into a new iteration. The sooner we get our code out there, the sooner we can get that feedback. The faster we can iterate solutions, the sooner a viable solution can evolve. The longer we can sustain the iterations, the more throws of the dice we can give the customer.

That, ultimately, is the promise of good code craft: more throws of the dice.

## “Stateless” – You Keep Using That Word…

One of the requirements of pure functions is that they are stateless. To many developers, this means simply that the data upon which the function acts is immutable. When dealing with objects, we mean that the object of an action has immutable fields, set at instantiation and then never changing throughout the instance’s life cycle.

In actual fact, this is not what ‘stateless’ means. Stateless means that the result of an action – e.g. a method call or a function call – is always the same given the same inputs, no matter how many times it’s invoked.

The classic stateless function is one that calculates square roots. sqrt(4) is always 2. sqrt(6.25) is always 2.5, and so on.

The classic stateful function is a light switch. The result of flicking the switch depends on whether the light is on or off at the time. If it’s off, it’s switched on. If it’s on, it’s switched off.

function Light() {
    this.on = false;

    this.flickSwitch = function () {
        this.on = !this.on;
    };
}

let light = new Light();

light.flickSwitch();
console.log(light);

light.flickSwitch();
console.log(light);

light.flickSwitch();
console.log(light);

light.flickSwitch();
console.log(light);

This code produces the output:

{ on: true }
{ on: false }
{ on: true }
{ on: false }

Most domain concepts in the real world are stateful, like our light switch. That is to say, they have a life cycle during which their behaviour changes depending on what has happened to them previously.

This is why finite state machines form a theoretical foundation for all program behaviour. Or, more simply, all program behaviour can be modeled as a finite state machine – a logical map of an object’s life cycle.
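The light’s life cycle can be spelled out as an explicit finite state machine: named states and a transition table, rather than an implicit boolean flag. (A Python sketch for brevity, though the surrounding examples are JavaScript.)

```python
# The light's life cycle as an explicit finite state machine:
# each (state, action) pair maps to exactly one next state.
TRANSITIONS = {
    ("off", "flickSwitch"): "on",
    ("on", "flickSwitch"): "off",
}


def next_state(state, action):
    return TRANSITIONS[(state, action)]


state = "off"
for _ in range(3):
    state = next_state(state, "flickSwitch")

print(state)  # three flicks from "off" leave the light "on"
```

Drawing the machine out like this makes the history-dependence explicit: you cannot know the current state without knowing the sequence of actions that led to it.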

Now, a lot of developers would argue that flickSwitch() is stateful because it acts on an object with a mutable field. They would then reason that making on immutable, and producing a copy of the light with its state changed, would make it stateless.

const light = {
    on: false
};

function flickSwitch(light) {
    return {...light, on: !light.on};
}

const copy1 = flickSwitch(light);
console.log(copy1);

const copy2 = flickSwitch(copy1);
console.log(copy2);

const copy3 = flickSwitch(copy2);
console.log(copy3);

const copy4 = flickSwitch(copy3);
console.log(copy4);

Technically, this is a pure functional implementation of our light switch. No state changes, and the result of each call to flickSwitch() is entirely determined by its input.

But, is it stateless? I mean, is it really? Technically, yes it is. But conceptually, no it certainly isn’t.

If this code was controlling a real light in the real world, then there’s only one light, its state changes, and the result of each invocation of flickSwitch() depends on the light’s history.

This is functional programming’s dirty little secret. In memory, it’s stateless and pure functional. Hooray for FP! But at the system level, it’s stateful.

While making it stateless can certainly help us to reason about the logic when considered in isolation – at the unit, or component or service level – when the identity of the object being acted upon is persistent, we lose those benefits at the system level.

Imagine we have two switches controlling a single light (e.g., one at the top of a flight of stairs and one at the bottom.)

In this situation, where a shared object is accessed in two different places, it’s harder to reason about the state of the light without knowing its history.

If I have to replace the bulb, I’d like to know if the light is on or off. With a single switch, I just need to look to see if it’s in the up (off) or down (on) position. With two switches, I need to understand the history. Was it last switched on, or switched off?

Copying immutable objects, when they have persistent identity – it’s the same light – does not make functions that act on those objects stateless. It makes them pure functional, sure. But we still need to consider their history. And in situations of multiple access (concurrency), it’s no less complicated than reasoning about mutable state, and just as prone to errors.

When I was knocking up my little code example, my first implementation of the FP version was:

const light = {
    on: false
};

function flickSwitch(light) {
    return {...light, on: !light.on};
}

const copy1 = flickSwitch(light);
console.log(copy1);

const copy2 = flickSwitch(copy1);
console.log(copy2);

const copy3 = flickSwitch(copy2);
console.log(copy3);

const copy4 = flickSwitch(copy3);
console.log(copy3);

Do you see the error? When I ran it, it produced this output.

{ on: true }
{ on: false }
{ on: true }
{ on: true }

This is a class of bug I’ve seen many times in functional code. The last console.log uses the wrong copy.

The order – in this case, the order of copies – matters. And when the order matters, our logic isn’t stateless. It has history.

The most common manifestation of this class of bug I come across is in FP programs that have databases where object state is stored and shared across multiple client threads or processes.

Another workaround is to push the versioning model of our logical design into the database itself, in the form of event sourcing. This again, though, is far from history-agnostic and therefore far from stateless. Each object’s state – rather than being a single record in a single table that changes over time – is now the aggregate of the history of events that mutated it.

Going back to our finite state machine, each object is represented as the sequence of actions that brought it to its current state (e.g., flickSwitch() -> flickSwitch() -> flickSwitch() would produce a light that’s turned on.)
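That idea can be sketched as a minimal event store. Rather than holding the light’s current state, we append events to a history and fold over it to rebuild the state on demand. (An illustrative toy, not a production event-sourcing framework.)

```python
# A minimal event-sourcing sketch: the events are the record,
# and the current state is derived by replaying them.
events = []  # the append-only history for one light


def record(event):
    events.append(event)


def current_state(history):
    # Fold over the history to rebuild the state: each flick toggles it.
    on = False
    for event in history:
        if event == "flickSwitch":
            on = not on
    return on


record("flickSwitch")
record("flickSwitch")
record("flickSwitch")

print(current_state(events))  # True: three flicks from off leave it on
```

Notice that nothing is ever mutated except the append-only log, yet the light still conceptually has state: the answer to “is it on?” depends entirely on its history.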

In reasoning about our logic, despite all the spiffy technological workarounds of FP, event sourcing and so on, if objects conceptually have history then they conceptually have state. And at the system level, we have to get that logic conceptually right.

Yet again, technology – including programming paradigm – is no substitute for thinking.

## Standards & Gatekeepers & Fitted Bathrooms

One thing I’ve learned from 10 years on Twitter is that whenever you dare to suggest that the software development profession should have minimum basic standards of competence, people will descend on you from a great height accusing you of being “elitist” and a “gatekeeper”.

Evil Jason wants to keep people out of software development. BAD JASON!

Well, okay: sure. I admit it. I want to keep people out of software development. Specifically, I want to keep people who can’t do the job out of software development. Mwuhahahahahaha etc.

That’s a very different proposition from suggesting that I want to stop people from becoming good, competent software developers, though. If you know me, then you know I’ve long advocated proper, long-term, in-depth paid software developer apprenticeships. I’ve advocated proper on-the-job training and mentoring. (Heck, it’s my entire business these days.) I’ve advocated schools and colleges and code clubs encouraging enthusiasts to build basic software development skills – because fundamentals are the building blocks of fun (or something pithy like that.)

I advocate every entry avenue into this profession except one – turning up claiming to be a software developer, without the basic competencies, and expecting to get paid a high salary for messing up someone’s IT.

If you can’t do the basic job yet, then you’re a trainee – an apprentice, if you prefer – software developer. And yes, that is gatekeeping. The gates to training should be wide open to anyone with aptitude. Money, social background, ethnicity, gender, sexual orientation, age or disabilities should be no barrier.

But…

I don’t believe the gates should be wide open to practicing as a software developer – unsupervised by experienced and competent mentors – on real software and systems with real end users and real consequences for the kinds of salaries we can earn – just for anyone who fancies that job title. I think we should have to earn it. I think I should have had to earn it when I started out. Crikey, the damage I probably did before I accidentally fell into a nest of experienced software engineers who fixed me…

Here’s the thing; when I was 23, I didn’t know that I wasn’t a competent software developer. I thought I was aces. Even though I’d never used version control, never written a unit test, never refactored code – not once – and thought that a 300-line function with nested IFs running 10 deep was super spiffy and jolly clever. I needed people to show me. I was lucky to find them, though I certainly didn’t seek them out.

And who the heck am I to say our profession should have gates, anyway? Nobody. I have no power over hiring anywhere. And, for sure, when I’ve been involved in the hiring process, bosses have ignored my advice many times. And many times, they’ve paid the price for letting someone who lacked basic dev skills loose on their production code. And a few times they’ve even admitted it afterwards.

But I’ve rarely said “Don’t hire that person”. Usually, I say “Train that person”. Most employers choose not to, of course. They want them ready-made and fully-formed. And, ideally, cheap. Someone else can train them. Hell, they can train themselves. And many of us do.

In that landscape, insisting on basic standards is difficult – because where do would-be professional developers go to get real-world experience, high-quality training and long-term mentoring? Would-be plumbers and would-be veterinarians and would-be hairdressers have well-defined routes from aspiration to profession. We’re still very much at the “If You Say You’re A Software Developer Then You’re A Software Developer” stage.

So that’s where we are right now. We can stay at that level, and things will never improve. Or we can do something about it. I maintain that long-term paid apprenticeships – leading to recognised qualifications – are the way to go. I maintain that on-the-job training and mentoring are essential. You can’t learn this job from books. You’ve got to see it and do it for real, and you need people around you who’ve done lots of it to guide you and set an example.

I maintain that apprenticeships and training and mentoring should be the norm for people entering the profession – be it straight out of high school or after a degree or after decades of experience working in other industries or after raising children. This route should be open to all. But there should be a bar they need to jump at the end before being allowed to work unsupervised on production code. I wish I’d had that from the start. I should have had that.

And, yes, how unfair it is for someone who blundered into software development largely self-taught to look back and say “Young folk today must qualify first!” But there must have been a generation of self-taught physicians who one day declared “Okay, from now on, doctors have to qualify.” If not my generation, or your generation, then whose generation? We can’t keep kicking this can down the road forever.

As software “eats the world”, more and more people are going to enter the profession. More and more of our daily lives will be run by software, and the consequences of system failures and high costs of changing code will hurt society more and more. This problem isn’t going away.

I hope to Bod that the people coming to fit my bathroom next week don’t just say they’re builders and plumbers and electricians. I hope to Bod they did proper apprenticeships and had plenty of good training and mentoring. I hope to Bod that their professions have basic standards of competence.

And I hope to Bod that those standards are enforced by… gatekeepers.

## What’s The Point of Code Craft?

A conversation I seem to have over and over again – my own personal Groundhog Day – is “What’s the point in code craft if we’re building the wrong thing?”

The implication is that there’s a zero-sum trade-off between customer collaboration – the means by which we figure out what’s needed – and technical discipline. Time spent writing unit tests or refactoring duplication or automating builds is time not spent talking with our customers.

This is predicated on two falsehoods:

• It takes longer to deliver working code when we apply more technical discipline
• The fastest way to solve a problem is to talk about it more

All the evidence we have strongly suggests that, in the majority of cases, better quality working software doesn’t take significantly longer to deliver. In fact, studies using large amounts of industry data repeatedly show the inverse. It – on average – takes longer to deliver software when we apply less technical discipline.

Teams who code-and-fix their software tend to end up with less time for their customers, because they’re too busy fixing bugs and because their code is very expensive to change.

Then there’s the question of how to solve our customers’ problems. We can spend endless hours in meetings discussing it, or we can spend a bit of time coming up with a simple idea for a solution, build it quickly and release it straight away for end users to try for real. The feedback we get from people using our software tends to tell us much more, much sooner, about what’s really needed.

I’ll take a series of rapid software releases over the equivalent series of requirements meetings any day of the week. I’ve seen this many times in the last three decades. Evolution vs Big Design Up-Front. Rapid iteration vs. Analysis Paralysis.

The real customer collaboration happens out in the field (or in the staging environment), where developers and end users learn from each small, frequent release and feed those lessons back into the next iteration. The map is not the territory.

Code craft enables high-value customer collaboration by enabling rapid, working releases and by delivering code that’s much easier to change. Far from getting in the way of building the right thing, it is the way.

But…

That’s only if your design process is truly iterative. Teams that are just working through a backlog may see things differently, because they’re not setting out to solve the customer’s problem. They’re setting out to deliver a list of features that some people sitting in a room – these days very probably not the end users and not the developers themselves – guessed might solve the problem (if, indeed, the problem was ever discussed).

In that situation, technical discipline won’t help you deliver the right thing. But it could help you deliver the wrong thing sooner for less expense. #FailFast