I’ve long thought of software development as a process of removing uncertainty.
The customer asks us for “Instagram, but for cats” – which could have infinite possible interpretations – and our job is to whittle those possibilities down to a single interpretation. Computers kind of insist on that.
How could such a complex process be represented more essentially, so I can see the wood for the trees?
Let’s play a game.
There are two teams, A and B, both tasked with guessing a random 4-digit number. Guesses must be submitted as numbers carved on to stone tablets.
Team A guess all 4 digits at a time. They painstakingly carve “0000” on to a tablet and submit that to learn whether it’s right or wrong. In this case, “0000” is wrong. So they painstakingly carve another 4-digit number, “0001”, which is also wrong.
If the guess is wrong, the tablet is destroyed, and they have to start all over again. Let’s say that the time to carve one digit is 1 hour. So it takes team A 4 hours to make one guess.
Team B take a different approach. They guess one digit at a time, still carving them into stone tablets. They start by guessing that the first digit is “0”, which is wrong.
When their guess is wrong, the tablet is also destroyed, and they must start a new one.
Which team – A or B – would you bet on to guess the 4-digit number first?
Worst case, team A could take 40,000 hours. Worst case for team B is 40 hours. The odds of team A guessing right in the first 40 hours are 1,000:1 against. I’d bet on team B.
Now, let’s 10x team A’s “productivity” by giving them a machine that can carve one digit in just 6 minutes. Each guess now takes them 24 minutes instead of 4 hours.
Which team would you bet on now?
The odds of team A guessing right in the first 40 hours using the 10x machine are 100:1 against. I’d still bet on team B.
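The quoted odds fall out of simple arithmetic: count how many guesses fit into 40 hours, then compare that with the 10,000 possibilities. A quick sanity check in Python (the variable names are mine):

```python
possibilities = 10_000                 # all 4-digit numbers, 0000 to 9999

# Without the machine: 4 hours per guess.
slow_guesses = 40 // 4                 # guesses team A fit into 40 hours
print(possibilities // slow_guesses)   # 1000, i.e. roughly 1,000:1 against

# With the 10x machine: 24 minutes per guess.
fast_guesses = (40 * 60) // 24         # guesses in 40 hours at 24 min each
print(possibilities // fast_guesses)   # 100, i.e. roughly 100:1 against
```

Even a 10x carving machine only shaves one order of magnitude off odds that started four orders of magnitude against.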
We’d be mistaken to confuse “numbers guessed” with “numbers guessed correctly” as our measure of productivity here.
What’s giving team B such a massive advantage is not the speed at which they produce tablets, but the speed at which they reduce uncertainty.
When team A make their first guess, the odds of it being right are 1:10,000. On their second guess, having ruled out one 4-digit number, they are 1:9,999.
When team B make their first guess, their odds of guessing all 4 digits correctly are also 1:10,000. But on their second guess, having ruled out all 4-digit numbers beginning with “0”, they are 1:9,000.
Basically, with each guess, team B reduce the uncertainty by roughly 10%. Team A reduce it by a tiny fraction of that with each of their guesses. To put it another way, team B outlearn team A, even as team A out-deliver them by a factor of 10.
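To make the comparison concrete, here’s a short Python sketch of both strategies under the rules above (the function names and the 1-hour-per-digit constant are mine, and I’ve modelled team B as keeping each confirmed digit once it’s right):

```python
import random

HOURS_PER_DIGIT = 1  # one hour to carve each digit

def team_a_hours(secret):
    """Team A: carve a full 4-digit tablet per guess, counting up from 0000."""
    hours = 0
    for n in range(10_000):
        hours += 4 * HOURS_PER_DIGIT       # every guess costs a whole tablet
        if f"{n:04d}" == secret:
            return hours

def team_b_hours(secret):
    """Team B: confirm one digit at a time; each wrong digit costs one carving."""
    hours = 0
    for target in secret:                  # one position at a time
        for d in "0123456789":
            hours += HOURS_PER_DIGIT       # carve this digit
            if d == target:
                break                      # digit confirmed, move to the next
    return hours

random.seed(0)
secrets = [f"{random.randrange(10_000):04d}" for _ in range(1_000)]
avg_a = sum(team_a_hours(s) for s in secrets) / len(secrets)
avg_b = sum(team_b_hours(s) for s in secrets) / len(secrets)
print(f"Team A average: {avg_a:,.0f} hours")   # on the order of 20,000
print(f"Team B average: {avg_b:,.0f} hours")   # on the order of 22
```

On average, team A need about 20,000 carving hours to team B’s 22 – a gap no carving machine closes, because it comes from how fast each wrong guess shrinks the space of possibilities, not how fast the tablets are made.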
The takeaway for me is that – in software development, considered as a process of reducing uncertainty – it’s small batch sizes and fast feedback loops doing the heavy lifting, not raw output.
If your team wants to build the skills they’ll need to outlearn the competition, solving one problem at a time in tight feedback loops, visit my training site for details of courses and on-the-job coaching.