The Jason’s Guitar Shack kata – Part I (Core Logic)

This week, I’ve been coaching developers for an automotive client in Specification By Example (or, as I call it these days, “customer-driven TDD”).

The Codemanship approach to software design and development has always been about solving problems, as opposed to building products or delivering features.

So I cooked up an exercise that starts with a customer with a business problem, and tasked pairs to work with that customer to design a simple system that might solve the problem.

It seems to have gone well, so I thought I’d share the exercise with you for you to try for yourselves.

Jason’s Guitar Shack

I’m a retired international rock legend who has invested his money in a guitar shop. My ex-drummer is my business partner, and he runs the shop, while I’ve been a kind of silent partner. My accountant has drawn my attention to a problem in the business. We have mysterious gaps in the sales of some of our most popular products.

I can illustrate it with some data I pulled off our sales system:

Date       | Time  | Product ID | Quantity | Price Charged
-----------|-------|------------|----------|--------------
13/07/2019 | 10:47 | 757        | 1        | 549
13/07/2019 | 12:15 | 757        | 1        | 549
13/07/2019 | 17:23 | 811        | 1        | 399
14/07/2019 | 11:45 | 449        | 1        | 769
14/07/2019 | 13:37 | 811        | 1        | 399
14/07/2019 | 15:01 | 811        | 1        | 399
15/07/2019 | 09:26 | 757        | 1        | 549
15/07/2019 | 11:55 | 811        | 1        | 399
16/07/2019 | 10:33 | 374        | 1        | 1199
20/07/2019 | 14:07 | 449        | 1        | 769
22/07/2019 | 11:28 | 449        | 1        | 769
24/07/2019 | 10:17 | 811        | 2        | 798
24/07/2019 | 15:31 | 811        | 1        | 399
Product sales for 4 selected guitar models

Product 811 – the Epiphone Les Paul Classic in worn Cherry Sunburst – is one of our biggest sellers.


We sell one or two of these a day, usually. But if you check out the sales data, you’ll notice that between July 15th and July 24th, we didn’t sell any at all. These gaps appear across many product lines, throughout the year. We could be losing hundreds of thousands of pounds in sales.

After some investigation, I discovered the cause, and it’s very simple: we keep running out of stock.

When we reorder stock from the manufacturer or other supplier, it takes time for them to fulfil our order. Every product has a lead time on delivery to our warehouse, which is recorded in our warehouse system.

Description                                                      | Price (£) | Stock | Rack Space | Manufacturer Delivery Lead Time (days) | Min Order
-----------------------------------------------------------------|-----------|-------|------------|----------------------------------------|----------
Fender Player Stratocaster w/ Maple Fretboard in Buttercream     | 549       | 12    | 20         | 14                                     | 10
Fender Deluxe Nashville Telecaster MN in 2 Colour Sunburst       | 769       | 5     | 10         | 21                                     | 5
Ibanez RG652AHMFX-NGB RG Prestige Nebula Green Burst (inc. case) | 1199      | 2     | 5          | 60                                     | 1
Epiphone Les Paul Classic In Worn Heritage Cherry Sunburst       | 399       | 22    | 30         | 14                                     | 20
Product supply lead times for 4 selected guitars

My business partner – the store manager – typically only reorders stock when he realises we’ve run out (usually when a customer asks for it, and he checks to see if we have any). Then we have no stock at all while we wait for the manufacturer to supply more, and during that time we lose a bunch of sales. In this age of the Electric Internet, if we don’t have what the customer wants, they just take out their smartphone and buy it from someone else.

This is the business problem you are tasked with solving: minimise lost sales due to lack of stock.

There are some wrinkles to this problem, of course. We could solve it by cramming our warehouse full of reserve stock. But that would create a cash flow problem for the business, as we have bills to pay while products are gathering dust on our shelves. So the constraint here is, while we don’t want to run out of products, we actually want as few in stock as possible, too.

The second wrinkle we need to deal with is that sales are seasonal. We sell three times as much of some products in December as we do in August, for example. So any solution would need to take that into account to reduce the risk of under- or over-stocking.
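For illustration only – this is a hypothetical sketch of the kind of core logic a solution might involve, not the kata's answer, and every name in it is invented – a seasonal reorder level could look something like this:

```javascript
// Hypothetical sketch: reorder when stock falls to what we'd expect to
// sell while waiting for delivery, scaled by a seasonal factor.
// All names (averageDailySales, seasonalFactor, etc.) are assumptions.
function reorderLevel(averageDailySales, leadTimeDays, seasonalFactor = 1.0) {
  // Expected sales during the supplier's delivery lead time
  return Math.ceil(averageDailySales * leadTimeDays * seasonalFactor);
}

function needsRestocking(stock, averageDailySales, leadTimeDays, seasonalFactor = 1.0) {
  return stock <= reorderLevel(averageDailySales, leadTimeDays, seasonalFactor);
}

// e.g. product 811: roughly 1.5 sales a day, 14-day lead time
needsRestocking(22, 1.5, 14); // → false (22 in stock covers the 21 expected sales)
needsRestocking(20, 1.5, 14); // → true
```

How the seasonal factor gets estimated (from last year's sales data, say) is exactly the kind of question your customer should help you answer.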

So here’s the exercise for a group of 2 or more:

  • Nominate someone in your group as the “customer”. They will decide what works and what doesn’t as far as a solution is concerned.
  • Working with your customer, describe in a single sentence a headline feature – this is a simple feature that solves the problem. (Don’t worry about how it works yet, just describe what it does.)
  • Now, think about how your headline feature would work. Describe up to a maximum of 5 supporting features that would make the headline feature possible. These could be user-facing features, or internal features used by the headline feature. Remember, we’re trying to design the simplest solution possible.
  • For each feature, starting with the headline feature, imagine the scenarios the system would need to handle. Describe each scenario as a simple headline (e.g., “product needs restocking”). Build a high-level test list for each feature.
  • The design and development process now works one feature at a time, starting with the headline feature.
    • For each feature’s scenario, describe in more detail how that scenario will work. Capture the set-up for that scenario, the action or event that triggers the scenario, and the outcomes the customer will expect to happen as a result. Feel free to use the Given…When…Then style. (But remember: it’s not compulsory, and won’t make any difference to the end result.)
    • For each scenario, capture at least one example with test data for every input (every variable in the set-up and every parameter of the action or event), and for every expected output or outcome. Be specific. Use the sample data from our warehouse and sales systems as a starting point, then choose values that fit your scenario.
    • Working one scenario at a time, test-drive the code for its core logic using the examples, writing one unit test for each output or outcome. Organise and name your tests and test fixture so it’s obvious which feature, which scenario and which output or outcome they are talking about. Try as much as possible to choose names that appear in the text you’ve written with your customer. You’re aiming for unit tests that effectively explain the customer’s tests.
    • Use test doubles – stubs and mocks – to abstract external dependencies like the sales and warehouse systems, as well as to Fake It Until You Make it for any supporting logic covered by features you’ll work on later.

And that’s Part I of the exercise. At the end, you should have the core logic of your solution implemented and ready to incorporate into a complete working system.

Here’s a copy of the sample data I’ve been using with my coachees – stick close to it when discussing examples, because this is the data that your system will be consuming in Part II of this kata, which I’ll hopefully write up soon.

Good luck!

Readable Parameterized Tests

Parameterized tests (sometimes called “data-driven tests”) can be a useful technique for removing duplication from test code, as well as potentially buying teams much greater test assurance with surprisingly little extra code.

But they can come at the price of readability. So if we’re going to use them, we need to invest some care in making sure it’s easy to understand what the parameter data means, and to ensure that the messages we get when tests fail are meaningful.

Some testing frameworks make it harder than others, but I’m going to illustrate using some mocha tests in JavaScript.

Consider this test code for a Mars Rover:

it("turns right from N to E", () => {
  let rover = {facing: "N"};
  rover = go(rover, "R");
  assert.equal(rover.facing, "E");
})

it("turns right from E to S", () => {
  let rover = {facing: "E"};
  rover = go(rover, "R");
  assert.equal(rover.facing, "S");
})

it("turns right from S to W", () => {
  let rover = {facing: "S"};
  rover = go(rover, "R");
  assert.equal(rover.facing, "W");
})

it("turns right from W to N", () => {
  let rover = {facing: "W"};
  rover = go(rover, "R");
  assert.equal(rover.facing, "N");
})
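(The go function under test isn't shown in the post; a minimal sketch, assuming only the right-turn behaviour these tests exercise, might look like this:)

```javascript
// Minimal sketch of the go() function these tests exercise - not the
// post's actual implementation, just enough to make the examples concrete.
const compass = ["N", "E", "S", "W"];

function go(rover, instruction) {
  if (instruction === "R") {
    // Turning right steps one place clockwise around the compass
    const index = compass.indexOf(rover.facing);
    return { ...rover, facing: compass[(index + 1) % compass.length] };
  }
  return rover;
}

go({facing: "N"}, "R"); // → {facing: "E"}
```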

These four tests are different examples of the same behaviour, and there’s a lot of duplication (I should know – I copied and pasted them myself!).

We can consolidate them into a single parameterised test:

[{input: "N", expected: "E"}, {input: "E", expected: "S"}, {input: "S", expected: "W"},
 {input: "W", expected: "N"}].forEach(
  function (testCase) {
    it("turns right", () => {
      let rover = {facing: testCase.input};
      rover = go(rover, "R");
      assert.equal(rover.facing, testCase.expected);
    })
  })

While we’ve removed a fair amount of duplicate test code, arguably this single parameterized test is harder to follow – both at read-time, and at run-time.

Let’s start with the parameter names. Can we make it more obvious what roles these data items play in the test, instead of just using generic names like “input” and “expected”?

[{startsFacing: "N", endsFacing: "E"}, {startsFacing: "E", endsFacing: "S"}, {startsFacing: "S", endsFacing: "W"},
 {startsFacing: "W", endsFacing: "N"}].forEach(
  function (testCase) {
    it("turns right", () => {
      let rover = {facing: testCase.startsFacing};
      rover = go(rover, "R");
      assert.equal(rover.facing, testCase.endsFacing);
    })
  })

And how about we format the list of test cases so they’re easier to distinguish?

[
  {startsFacing: "N", endsFacing: "E"},
  {startsFacing: "E", endsFacing: "S"},
  {startsFacing: "S", endsFacing: "W"},
  {startsFacing: "W", endsFacing: "N"}
].forEach(
  function (testCase) {
    it("turns right", () => {
      let rover = {facing: testCase.startsFacing};
      rover = go(rover, "R");
      assert.equal(rover.facing, testCase.endsFacing);
    })
  })

And how about we declutter the body of the test a little by destructuring the testCase object?

[
  {startsFacing: "N", endsFacing: "E"},
  {startsFacing: "E", endsFacing: "S"},
  {startsFacing: "S", endsFacing: "W"},
  {startsFacing: "W", endsFacing: "N"}
].forEach(
  function ({startsFacing, endsFacing}) {
    it("turns right", () => {
      let rover = {facing: startsFacing};
      rover = go(rover, "R");
      assert.equal(rover.facing, endsFacing);
    })
  })

Okay, hopefully this is much easier to follow. But what happens when we run these tests?

It’s not at all clear which test case is which. So let’s embed some identifying data inside the test name.

[
  {startsFacing: "N", endsFacing: "E"},
  {startsFacing: "E", endsFacing: "S"},
  {startsFacing: "S", endsFacing: "W"},
  {startsFacing: "W", endsFacing: "N"}
].forEach(
  function ({startsFacing, endsFacing}) {
    it(`turns right from ${startsFacing} to ${endsFacing}`, () => {
      let rover = {facing: startsFacing};
      rover = go(rover, "R");
      assert.equal(rover.facing, endsFacing);
    })
  })

Now when we run the tests, we can easily identify which test case is which.

With a bit of extra care, it’s possible with most unit testing tools – not all, sadly – to have our cake and eat it with readable parameterized tests.

Pull Requests & Defensive Programming – It’s All About Trust

A very common thing I see on development teams is reliance on code reviews for every check-in (in this age of Git-Everything, often referred to as “Pull Requests”). This can create bottlenecks in the delivery process, as our peers are pulled away from their own work and we have to wait for their feedback. And, often, the end result is that these reviews are superficial at best, missing a tonne of problems while still holding up delivery.

Pull Request code reviews on a busy team

But why do we do these reviews in the first place?

I think of it in programming terms. Imagine a web service. It has a number of external clients that send it requests via the Web.

Some clients can be trusted, others not

These client apps were not written by us. We have no control over their code, and therefore can’t guarantee that the requests they’re sending will be valid. There’s a need, therefore, to validate these requests before acting on them.

This is what we call defensive programming, and in these situations where we cannot trust the client to call us right, it’s advisable.

Inside our firewall, our web service calls a microservice. Both services are controlled by us – that is, we’re writing both client and server code in this interaction.

Does the microservice need to validate those requests? Not if we can be trusted to write client code that obeys the contract.

In that case, a more appropriate style of programming might be Design By Contract. Clients are trusted to satisfy the preconditions of service calls before they call them: in short, if it ain’t valid, they don’t call, and the supplier doesn’t need to waste time – and code – checking the requests. That’s the client’s job.
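To make the contrast concrete, here's a small hypothetical sketch – the function names and the precondition are invented for illustration:

```javascript
// Defensive style: the supplier validates every request, because it
// can't trust its clients.
function defensiveFindProduct(productId) {
  if (!Number.isInteger(productId) || productId <= 0) {
    throw new Error("Invalid product ID");
  }
  return { id: productId };
}

// Design By Contract style: the precondition is documented, not re-checked.
// PRECONDITION: productId is a positive integer - the client guarantees it.
function findProduct(productId) {
  return { id: productId };
}

// A trusted client satisfies the precondition before calling
const requestedId = 811;
if (Number.isInteger(requestedId) && requestedId > 0) {
  findProduct(requestedId);
}
```

The validation code hasn't disappeared – it's moved to where the knowledge to do it correctly lives: the client.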

Now let’s project these ideas on to code reviews. If a precondition of merging to the main branch is that your code satisfies certain code quality preconditions – test coverage, naming, simplicity etc – then we have two distinct potential situations:

  • The developer checking in can be trusted not to break those preconditions (e.g., they never check in code that doesn’t have passing tests)
  • The developer can’t be trusted not to break them

In an open source code base, we have a situation where potentially anyone can attempt to contribute. The originators of that code base – e.g., Linus – have no control over who tries to push changes to the main branch. So he defends the code base – somewhat over-enthusiastically, perhaps – from bad inputs like our web service defends our system from bad requests.

In a closed-source situation, where the contributors have been chosen and we can exercise some control over who can attempt to check in changes, a different situation may arise. Theoretically, we hired these developers because we believe we can trust them.

I personally do not check in code that doesn’t have good, fast-running, passing automated tests. I personally do not check in spaghetti code (unless it’s for a workshop on refactoring spaghetti code). If we agree what the standards are for our code, I will endeavour not to break them. I may also use tools to help me keep my code clean pre-check-in. I’m the web service talking to the microservice in that diagram. I’m a trusted client.

But not all developers can be trusted not to break the team’s coding standards. And that’s the problem we need to be addressing here. Ask not so much “How do we do Pull Requests?”, but rather “Why do we need to do Pull Requests?” There are some underlying issues about skills and about trust.

Pull Requests are a symptom, not a solution.

Codemanship Code Craft Videos

Over the last 6 months, I’ve been recording hands-on tutorials about code craft – TDD, design principles, refactoring, CI/CD and more – for the Codemanship YouTube channel.

I’ve recorded the same tutorials in JavaScript, Java, C# and (still being finished) Python.

As well as serving as a back-up for the Codemanship Code Craft training course, these video series form possibly the most comprehensive free learning resource on the practices of code craft available anywhere.

Each series has over 9 hours of video, plus links to example code and other useful resources.

Codemanship Code Craft videos currently available

I’ve heard from individual developers and teams who’ve been using these videos as the basis for their practice road map. What seems to work best is to watch a video, and then straight away try out the ideas on a practical example (e.g., a TDD kata or a small project) to see how they can work on real code.

In the next few weeks, I’ll be announcing Codemanship Code Craft Study Groups, which will bring groups of like-minded learners together online once a week to watch the videos and pair program on carefully designed exercises with coaching from myself.

This will be an alternative way of receiving our popular training, but with more time dedicated to hands-on practice and coaching, and more time between lessons for the ideas to sink in. It should also be significantly less disruptive than taking a whole team out for 3 days for a regular training course, and significantly less exhausting than 3 full days of Zoom meetings! Plus the price per person will be the same as the regular Code Craft course.

Leading By Example

We’re used to the idea of leaders saying “Do as I say, not as I do”. Politicians, for example, are notorious for their double standards.

But the long-term effect of leaders not being seen to “eat their own dog food” is that it undermines the faith of those being led in their leaders and in their policies.

When we see public servants avoiding tax, we assume “Well, if it’s good enough for them…”, and before you know it you’ve got industrial-scale tax avoidance going on.

When we see government advisors breaking their own lockdown rules, we think “Actually, I quite fancy a trip to Barnard’s Castle”, and before you know it, lockdown has broken down.

When we see self-proclaimed socialists sending their children to private schools, we think “Obviously, state schools must be a bit crap” and start ordering prospectuses for Eton and Harrow.

And when we see lead developers who don’t follow their own advice, we naturally assume that the advice doesn’t apply to any of us.

If you want your team to write tests first, write your tests first. If you want your team to merge to trunk frequently, merge to trunk frequently. If you want your team to be kind in code reviews, be kind in your code reviews.

As Gandhi once put it: be the change you want to see in the world. Be the developer you want to see on your team.

This means that, among the many qualities that make a good lead developer, the willingness to roll up your sleeves and lead from the front is essential. Teams can see when the rules you impose on them don’t seem to apply to you, and it undermines those rules and your authority. That authority has to be earned.

This is why I’ve made damn sure that every single idea people learn on a Codemanship course – even the more “out there” ideas – is something I’ve applied successfully on real teams, and why I demonstrate rather than just present those ideas whenever possible. You can make any old nonsense seem viable on a PowerPoint slide.

As a trainer and mentor – and mentoring is a large part of leading a development team – I choose to lead by example because, after 3 decades working with developers, I’ve found that to be most effective. Don’t tell them. Show them.

This puts an onus on lead developers to do the legwork when new ideas and unfamiliar technologies need to be explored. If you need to get your legacy COBOL programmers writing unit tests, then it’s time to learn some COBOL and write some COBOL unit tests. This is another kind of leading by example. Getting out of your comfort zone can serve as an example for teams who are maybe just a little too comfortable in theirs.

And this extends beyond programming languages and technical practices. If you believe your team need a better work-life balance, don’t mandate a better work-life balance and then stay at your desk until 8pm every day. Go home at 5:30pm. Show them it’s fine to do that. Show them it’s fine to learn new skills during office hours. Show them it’s fine to switch your phone off and not check your emails when you’re on holiday. Show them that you don’t marginalise people on your team because of, say, their gender or ethnic background. Show them that you don’t act inappropriately at the Christmas party. Show them that you actively consider questions of ethics in your work.

Whatever it is you want the team to be, that’s what you need to be, because there are far too many people saying and not doing in this world.

Slow Tests Kill Businesses

I’m always surprised at how few organisations track some pretty fundamental stats about software development, because if they did then they might notice what’s been killing their business.

It’s a picture I’ve seen many, many times; a software product or system is created, and it goes live. But it has bugs. Many bugs. So, a bigger chunk of the available development time is used up fixing bugs for the second release. Which has even more bugs. Many, many bugs. So an even bigger chunk of the time is used to fix bugs for the third release.

It looks a little like this:

Over the lifetime of the software, the proportion of development time devoted to bug fixing increases until that’s pretty much all the developers are doing. There’s precious little time left for new features.

Naturally, if you can only spare 10% of available dev time for new features, you’re going to need 10 times as many developers. Right? This trend is almost always accompanied by rapid growth of the team.

So the 90% of dev time you’re spending on bug fixing is actually 90% of the time of a team that’s 10x as large – 900% of the cost of your first release, just fixing bugs.

So every new feature ends up in real terms costing 10x in the eighth release what it would have in the first. For most businesses, this rules out change – unless they’re super, super successful (i.e., lucky). It’s just too damned expensive.

And when you can’t change your software and your systems, you can’t change the way you do business at scale. Your business model gets baked in – petrified, if you like. And all you can do is throw an ever-dwindling pot of money at development just to stand still, while you watch your competitors glide past you with innovations you’ll never be able to offer your customers.

What happens to a business like that? Well, they’re no longer in business. Customers defected in greater and greater numbers to competitor products, frustrated by the flakiness of the product and tired of being fobbed off with promises about upgrades and hotly requested features and fixes that never arrived.

Now, this effect is entirely predictable. We’ve known about it for many decades, and we’ve known the causal mechanism, too.

Source: IBM System Science Institute

The longer a bug goes undetected, the more it costs to fix – and that cost grows exponentially. In terms of process, the sooner we test new or changed code, the cheaper the fix is. This effect is so marked that teams actually find that if they speed up testing feedback loops – testing earlier and more often – they deliver working software faster.

This is very simply because they save more time downstream on bug fixes than they invest in earlier and more frequent testing.

The data used in the first two graphs was taken from a team that took more than 24 hours to build and test their code.

Here’s the same stats from a team who could build and test their code in less than 2 minutes (I’ve converted from releases to quarters to roughly match the 12-24 week release cycles of the first team – this second team was actually releasing every week):

This team has nearly doubled in size over the two years, which might sound bad – but it’s more of a rosy picture than the first team, whose costs spiralled to more than 1000% of their first release, most of which was being spent fixing bugs and effectively going round and round in circles chasing their own tails while their customers defected in droves.

I’ve seen this effect repeated in business after business – of all shapes and sizes: software companies, banks, retail chains, law firms, broadcasters, you name it. I’ve watched $billion businesses – some more than a century old – brought down by their inability to change their software and their business-critical systems.

And every time I got down to the root cause, there they were – slow tests.

Every. Single. Time.

Where’s User Experience In Your Development Process?

I ran a little poll through the Codemanship twitter account yesterday, and thought I’d share the result with you.

There are two things that strike me about the results. Firstly, it looks like teams who actively involve user experience experts throughout the design process are very much in the minority. To be honest, this comes as no great surprise. My own observations of development teams over the years suggest that UXD folks tend to get involved early on – often before any developers are involved, or any customer tests have been discussed – in a kind of Waterfall fashion. “We’re agile. But the user interface design must not change.”

To me, this is as nonsensical as those times when I’ve arrived on a project that has no use cases or customer tests, but somehow magically has a very fleshed-out database schema that we are not allowed to change.

Let’s be clear about this: the purpose of the user experience is to enable the user to achieve their goals. That is a discussion for everybody involved in the design process. It’s also something we’re unlikely to get right first time, so iterating the UXD multiple times with the benefit of end user feedback will almost certainly be necessary.

The most effective teams do not organise themselves into functional silos of requirements analysis, UXD, architecture, programming, security, data management, testing, release and operations & support and so on, throwing some kind of output (a use case, a wireframe, a UML diagram, source code, etc) over the wall to the next function.

The most effective teams organise themselves around achieving a goal. Whoever’s needed to deliver on that should be in the room – especially when those goals are being discussed and agreed.

I could have worded the question in my poll “User Experience Designers: when you explore user goals, how often are the developers involved?” I suspect the results would have been similar. Because it’s the same discussion.

On a movie production, you have people who write scripts, people who say the lines, people who create sets, people who design costumes, and so on. But, whatever their function, they are all telling the same story.

The realisation of working software requires multiple disciplines, and all of them should be serving the story. The best teams recognise this, and involve all of the disciplines early and throughout the process.

But, sadly, this still seems quite rare. I hear lip service being paid, but see little concrete evidence that it’s actually going on.

The second thing I noticed about this poll is that, despite several retweets, the response is actually pretty low compared to previous polls. This, I suspect, also tells a story. I know from both observation and from polls that teams who actively engage with their customers – let alone UXD professionals etc – in their BDD/ATDD process are a small minority (maybe about 20%). Most teams write the “customer tests” themselves, and mistake using a BDD tool like Cucumber for actually doing BDD.

But I also get a distinct sense, working with many dev teams, that UXD just isn’t on their radar. That is somebody else’s problem. This is a major, major miscalculation – every bit as much as believing that quality assurance is somebody else’s problem. Any line of code that doesn’t in some way change the user’s experience – and I use the term “user” in the wider sense that includes, for example, people supporting the software in production, who will have their own user experience – is a line of code that should be deleted. Who is it for? Whose story does it serve?

We are all involved in creating the user experience. Bad special effects can ruin a movie, you know.

We may not all be qualified in UXD, of course. And that’s why the experts need to be involved in the ongoing design process, because UX decisions are being taken throughout development. It only ends when the software ends (and even that process – decommissioning – is a user experience).

Likewise, every decision a UI designer takes will have technical implications, and they may not be the experts in that. Which is why the other disciplines need to be involved from the start. It’s very easy to write a throwaway line in your movie script like “Oh look, it’s Bill, and he’s brought 100,000 giant fighting robots with him”, but writing 100,000 giant fighting robots and making 100,000 giant fighting robots actually appear on the screen are two very different propositions.

So let’s move on from the days of developers being handed wire-frames and told to “code this up”, and from developers squeezing input validation error messages into random parts of web forms, and bring these – and all the other – disciplines together into what I would call a “development team”.

Proactive vs Reactive Learning (or “Why Your Company Only Does Easy Things”)

Imagine you lead an orchestra. The word comes down from on high “Tonight, our audience demands you play Rachmaninoff’s Piano Concerto No. 3. The future of the orchestra depends on it. We’re all counting on you.”

But your orchestra has no pianist. Nobody in your orchestra has even touched a piano, let alone taken lessons. You turn to the lead violin: “Quick. Google ‘how to play piano?’ “

Now, of course, there’s absolutely no chance that any human being could learn to play piano to that standard in a day. Or a week. Or a month. It takes a lot of time and a lot of work to get to that level. Years.

The inevitable result is that the orchestra will not be playing Rachmaninoff’s Piano Concerto No. 3 that evening. At least, not with the piano part. And that’s kind of essential to a piano concerto.

I see tech organisations in this situation on a regular basis. They discover a need that they’re simply nowhere near competent enough to deal with – something completely beyond the range of their current capabilities. “The users demand that the software learns from their interactions and anticipates their needs. Quick. Google ‘how to train a machine?'” “The customer demands a custom query language. Quick. Google ‘how to write a compiler?'” And so on.

Have we become so used to looking stuff up on Stack Overflow, I wonder, that we’ve forgotten that some of this stuff is hard? Some of these things take a long time to learn? Not everything is as easy as finding out what that error message means, or how to install a testing framework using NPM?

The latter style of learning is what some people call reactive. “I need to know this thing now, because it is currently impeding my progress.” And software development involves a lot of reactive learning. You do need to be rather good at looking stuff up to get through the typical working day, because there are just so, so many little details to remember.

Here’s the thing, though: reactive learning only really works for little details – things that are easy to understand and can be learned quickly. If the thing that impedes our progress is that we require a road bridge to be built to get us over that canyon, then that’s where we see the limits of reactive learning. It can remove small obstacles. Big problems that take a long time to solve require a different style of learning that’s much more proactive.

If your orchestra only plays the instruments needed for the exact pieces they’ve played up to that point, then there’s an increased likelihood that there’ll be gaps. If a dev team only has the exact skill set for the work they’ve done up to that point, there are likewise very likely to be gaps.

It’s hard, of course, to anticipate every possible future need and prepare months or years in advance for every eventuality. But some orgs have a greater adaptive capacity than others because their people are skilled beyond today’s specific requirements. That is to say, they’re better at solving problems because they have more ways of solving problems – more strings to their bow (or more keys to their piano, if you like).

Compiler design might sound like the kind of esoteric computer-sciency thing that’s unlikely to arise as a business need. But think of it this way: what’s our code built on? Is the structure of programs not the domain model we work in every day? While I’ve never designed a compiler, I have had numerous occasions when – to write a tool that makes my job easier – it’s been very useful to understand that model. Programmers who understand what programs are made of tend to be more effective at reasoning about code, and better at writing code that’s about code. We use those tools every day, but all tooling has gaps. I’ve yet to meet a static analysis tool, for example, that had all the rules I’d be interested in applying to code quality.

The most effective dev teams I’ve come into contact with have invested in custom tooling to automate repetitive donkey work at the code face. Some of those tools end up being open-sourced, and you may be using them yourself today. How do you think our test runner’s unit test discovery works?

Some books about stuff I had no immediate need to know but read anyway

Now, we could of course hire a pianist for our orchestra – one who already knows Rachmaninoff’s Piano Concerto No. 3. But guess what? It turns out pianists of that calibre are really difficult to find – probably because it takes years and years to get to that standard. (No shortage of people who manage pianists, of course.) And now you remember how you voted against all those “superfluous” music education programmes. If only you could have known that one day you might need a concert pianist. If only someone had warned you!

Well, here I am – warning you. Not all problems are easy. Some things take a long time to learn, and those things may crop up. And while nobody can guarantee that they will, this is essentially a numbers game. What are the odds that we have the capability – or can buy in the capability at short notice (which opens the lid on a can of worms I call “proactive recruitment”) – to solve this problem?

Most of the time, organisations end up walking away from the hard problems. They are restricted to the things most programmers can solve. This is not a good way to build competitive advantage, any more than sticking to works that don’t have a piano part is a good way to run a successful orchestra.

Enlightened organisations actively invest in developing capabilities they don’t see any immediate need for. Yes, they’re speculating. And speculation can be wasteful – just as all uncertain endeavours can be wasteful. But there are usually signposts in our industry about what might be coming a year from now, a decade from now, and beyond.

And there are trends – the continued increase in available computing power is one good example. Look at what would be really useful but is currently too computationally expensive. In 1995, we saw continuous build and test cycles as highly desirable, but most teams still ran them overnight, because the hardware was about 1,000 times slower than today’s. Now – as I predicted over a decade ago – more and more of us are building and testing (and even automatically inspecting) our code continuously in the background as we type it. That was totally foreseeable. As is the rise of Continuous Inspection as a more mainstream discipline off the back of it.

There are countless examples of long-established and hugely successful businesses being caught with their pants down by Moore’s Law.

Although digital photography was by no means a new invention, its sudden commercial viability 20 years ago over chemical photography nearly finished Kodak overnight. They had not speculated. They had not invested in digital photography capability. They’d been too busy being the market leader in film.

And then there was the meteoric rise of guitar amp simulators – a technology long sneered at (but begrudgingly used) by serious players, and less serious players like myself. The early generations of virtual amps didn’t sound great, and didn’t feel like playing through a real amp with real tubes. (Gotta love them tubes!) But – damn – they were convenient.

The nut they couldn’t crack was making it sound like it was being recorded through a real speaker cabinet with a real microphone. There was a potential solution – convolution, a mathematical process that can combine two signals, so the raw output of a guitar amp (real or virtual) can be combined with an “impulse response” (a short audio sample, like the short reverberation in a room after you click your fingers) of a cabinet and microphone to give a strikingly convincing approximation of what that signal would sound like in the space – or what that guitar amp output would sound like through those speakers recorded with that microphone. Now, suddenly, virtual guitar amps were convenient and sounded good.
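The maths behind that trick is easy to sketch. Here’s a minimal, purely illustrative discrete convolution in plain Python – every name here is my own, and real cabinet impulse responses are thousands of samples long (and done with FFTs for speed), but the principle is exactly this:

```python
def convolve(signal, impulse_response):
    """Discrete convolution: each input sample 'triggers' a scaled copy of
    the impulse response, and the copies are summed into the output."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

# Toy example: a two-sample "impulse response" smears each input sample
# across two output samples - the same way a real cab IR smears the raw
# amp signal across the room's reverberation.
dry = [1.0, 0.0, -1.0]   # raw (virtual) amp output
ir = [0.5, 0.25]         # pretend cabinet/microphone impulse response
wet = convolve(dry, ir)
```

An impulse response of `[1.0]` leaves the signal untouched; anything longer colours it – which is why capturing one short “click” in a room is enough to make any signal sound like it was played there.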

But up to that point, convolution had been too computationally expensive to be viable for playing and recording on commercially available hardware. And then, suddenly, it wasn’t. Cue a mad dash by established amp manufacturers to catch up. And, to be fair to them, their virtual amp offerings are pretty spiffy these days. Was this on their radar, I wonder? Did the managers and the engineers see virtual amp technology looming on the horizon and proactively invest in developing that capability in exactly the way Kodak didn’t? Not before virtual amp leaders like Line 6 had taken a chunk of their market share, I suspect. And now convolution is everywhere. So many choices, so many market players old and new.

You see, it’s all well and good making hay while the sun shines. But when the weather turns, don’t end up being the ones who didn’t think to invest in an umbrella.

The Software Design Process

One thing that sadly rarely gets discussed these days is how we design software. That is, how we get from a concept to working code.

As a student (and teacher) of software design and architecture of many years, experiencing first-hand many different methodologies from rigorous to ad hoc, heavyweight to agile, I can see similarities between all effective approaches.

Whether you’re UML-ing or BDD-ing or Event Storming-ing your designs, when it works, the thought process is the same.

It starts with a goal.

This – more often than not – is a problem that our customer needs solving.

This, of course, is where most teams get the design thinking wrong. They don’t start with a goal – or if they do, most of the team aren’t involved at that point, and subsequently are not made aware of what the original goal or problem was. They’re just handed a list of features and told “build that”, with no real idea what it’s for.

But they should start with a goal.

In design workshops, I encourage teams to articulate the goal as a single, simple problem statement. e.g.,

It’s really hard to find good vegan takeaway in my area.

Jason Gorman, just now

Our goal is to make it easier to order vegan takeaway food. This, naturally, raises the question: how hard is it to order vegan takeaway today?

If our target customer area is Greater London, then at this point we need to hit the proverbial streets and collect data to help us answer that question. Perhaps we could pick some random locations – N, E, S and W London – and try to order vegan takeaway using existing solutions, like Google Maps, Deliveroo and even the Yellow Pages.

Our data set gives us some numbers. On average, it took 47 minutes to find a takeaway restaurant with decent vegan options. They were, on average, 5.2 miles from the random delivery address. The orders took a further 52 minutes to be delivered. In 19% of selected delivery addresses, we were unable to order vegan takeaway at all.

What I’ve just done there is apply a simple thought process known as Goal-Question-Metric.
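Goal-Question-Metric boils down to very simple arithmetic once you have the field data. Here’s a tiny sketch – the numbers are hypothetical stand-ins for our street-test observations, not real data:

```python
# Hypothetical baseline observations from our Greater London field test:
# minutes taken to find a decent vegan option at each random delivery
# address, or None where no option could be found at all.
search_times = [47, 52, 38, None, 61, 35, None, 44]

# Metric 1: average search time, over the addresses where we succeeded.
found = [t for t in search_times if t is not None]
avg_search_minutes = sum(found) / len(found)

# Metric 2: proportion of addresses where no vegan takeaway was available.
failure_rate = search_times.count(None) / len(search_times)
```

These two numbers become the baseline we test our “theory” against later: if the searchable list doesn’t beat them, it didn’t solve the problem.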

We ask ourselves, which of these do we think we could improve on with a software solution? I’m not at all convinced software would make the restaurants cook the food faster. Nor will it make the traffic in London less of an obstacle, so delivery times are unlikely to speed up much.

But if our data suggested that to find a vegan menu from a restaurant that will deliver to our address we had to search a bunch of different sources – including telephone directories – then I think that’s something we could improve on. It hints strongly that lack of vegan options isn’t the problem, just the ease of finding them.

A single searchable list of all takeaway restaurants with decent vegan options in Greater London might speed up our search. Note that word: MIGHT.

I’ve long advocated that software specifications be called “theories”, not “solutions”. We believe that if we had a searchable list of all those restaurants we had to look in multiple directories for, that would make the search much quicker, and potentially reduce the incidences when no option was found.

Importantly, we can compare the before and the after – using the examples we pulled from the real world – to see if our solution actually does improve search times and hit rates.

Yes. Tests. We like tests.

Think about it; we describe our modern development processes as iterative. But what does that really mean? To me – a physics graduate – it implies a goal-seeking process: the same steps applied over and over to an input, each cycle’s output fed into the next, converging on a stable working solution.
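That physics-graduate reading of “iterative” is worth making concrete. Newton’s method for square roots is the textbook goal-seeking loop: feed each output back in as the next input, and stop when the goal test says you’ve converged. (A maths example rather than a software one, obviously – but the shape of the process is the point.)

```python
def sqrt_newton(n, tolerance=1e-10):
    """Goal-seeking iteration: refine a guess until it stops changing."""
    guess = n / 2.0
    while True:
        better = (guess + n / guess) / 2.0   # this cycle's output...
        if abs(better - guess) < tolerance:  # ...tested against the goal
            return better
        guess = better                       # ...becomes the next input
```

Note that the loop is meaningless without the goal test. Delete the `if` and the wheels turn forever without arriving anywhere – which is precisely the point about feature factories below.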

Importantly, if there’s no goal, and/or no way of knowing if the goal’s been achieved, then the process doesn’t work. The wheels are turning, the engine’s revving, but we ain’t going anywhere in particular.

Now, be honest, when have you ever been involved in a design process that started like that? But this is where good design starts: with a goal.

So, we have a goal – articulated in a testable way, importantly. What next?

Next, we imaginate (or is it visionize? I can never keep up with the management-speak) a feature – a proverbial button the user clicks – that solves their problem. What does it do?

Don’t think about how it works. Just focus on visualifying (I’m getting the hang of this now) what happens when the user clicks that magical button.

In our case, we imagine that when the user clicks the Big Magic Button of Destiny, they’re shown a list of takeaway restaurants with a decent vegan menu who can deliver to their address within a specified time (e.g., 45 minutes).

That’s our headline feature. A headline feature is the feature that solves the customer’s problem, and – therefore – is the reason for the system to exist. No, “Login” is never a headline feature. Nobody uses software because they want to log in.

Now we have a testable goal and a headline feature that solves the customer’s problem. It’s time to think about how that headline feature could work.

We would need a complete list of takeaway restaurants with decent vegan menus within delivery range of any potential delivery address in our target area of Greater London.

We would need to know how long it might take to deliver from each restaurant to the customer’s address.

This would include knowing if the restaurant is still taking orders at that time.

Our headline feature will require other features to make it work. I call these supporting features. They exist only because of the headline feature – the one that solves the problem. The customer doesn’t want a database. They want vegan takeaway, damn it!

Our simple system will need a way to add restaurants to the list. It will need a way to estimate delivery times (including food preparation) between restaurant and customer addresses – and this may change (e.g., during busy times). It will need a way for restaurants to indicate if they’re accepting orders in real time.

At this point, you may be envisaging some fancypants Uber Eats style of solution with whizzy maps showing delivery drivers aimlessly circling your street for 10 minutes because nobody reads the damn instructions these days. Grrr.

But it ain’t necessarily so. This early on in the design process is no time for whizzy. Whizzy comes later. If ever. Remember, we’re setting out here to solve a problem, not build a whizzy solution.

I’ve seen some very high-profile applications go live with data entry interfaces knocked together in MS Access for that first simple release, for example. Remember, this isn’t a system for adding restaurant listings. This is a system for finding vegan takeaway. The headline feature’s always front-and-centre – our highest priority.

Also remember, we don’t know if this solution is actually going to solve the problem. The sooner we can test that, the sooner we can start iterating towards something better. And the simpler the solution, the sooner we can put it in the hands of end users. Let’s face it, there’s a bit of smoke and mirrors to even the most mature software solutions. We should know; we’ve looked behind the curtain and we know there’s no actual Wizard.

Once we’re talking about features like “Search for takeaway”, we should be in familiar territory. But even here, far too many teams don’t really grok how to get from a feature to working code.

But this thought process should be ingrained in every developer. Sing along if you know the words:

  • Who is the user and what do they want to do?
  • What jobs does the software need to do to give them that?
  • What data is required to do those jobs?
  • How can the work and the data be packaged together (e.g., in classes)?
  • How will those modules talk to each other to coordinate the work end-to-end?

This is the essence of high-level modular software design. The syntax may vary (classes, modules, components, services, microservices, lambdas), but the thinking is the same. The user has needs (find vegan takeaway nearby). The software does work to satisfy those needs (e.g., estimate travel time). That work involves data (e.g., the addresses of restaurant and customer). Work and data can be packaged into discrete modules (e.g., DeliveryTimeEstimator). Those modules will need to call other modules to do related work (e.g., address.asLatLong()), and will therefore need “line of sight” – otherwise known as a dependency – to send that message.
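The paragraph above can be sketched in a few lines of code. Everything here – `Address`, `DeliveryTimeEstimator`, `as_lat_long`, the assumed courier speed – is illustrative, invented for this example rather than taken from any real system:

```python
from dataclasses import dataclass

@dataclass
class Address:
    """Data the delivery-estimation job needs, packaged with related work."""
    street: str
    lat: float
    lng: float

    def as_lat_long(self):
        # The message other modules need "line of sight" to send
        return (self.lat, self.lng)

class DeliveryTimeEstimator:
    """Packages the 'estimate delivery time' job; depends on Address."""
    AVG_SPEED_MPH = 12.0  # assumed average courier speed in city traffic

    def estimate_minutes(self, restaurant: Address, customer: Address) -> float:
        # Crude straight-line estimate; a real system would use routing data
        r_lat, r_lng = restaurant.as_lat_long()
        c_lat, c_lng = customer.as_lat_long()
        miles = ((r_lat - c_lat) ** 2 + (r_lng - c_lng) ** 2) ** 0.5 * 69.0
        return miles / self.AVG_SPEED_MPH * 60.0
```

The arrow from `DeliveryTimeEstimator` to `Address.as_lat_long()` is the dependency – the “line of sight” – and it’s exactly what a CRC card or a sequence diagram would capture on paper.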

You can capture this in a multitude of different ways – Class-Responsibility-Collaboration (CRC) cards, UML sequence diagrams… heck, embroider it on a tapestry for all I care. The thought process is the same.

This birds-eye view of the modules, their responsibilities and their dependencies needs to be translated into whichever technology you’ve selected to build this with. Maybe the modules are Java classes. Maybe they’re AWS lambdas. Maybe they’re COBOL programs.

Here we should be in writing code mode. I’ve found that if your on-paper (or on tapestry, if you chose that route) design thinking goes into detail, then it’s adding no value. Code is for details.

Start writing automated tests. Now that really should be familiar territory for every dev team.

/ sigh /

The design thinking never stops, though. For one, remember that everything so far is a theory. As we get our hands dirty in the details, our high-level design is likely to change. The best laid plans of mice and architects…

And, as the code emerges one test at a time, there’s more we need to think about. Our primary goal is to build something that solves the customer’s problem. But there are secondary goals – for example, how easy it will be to change this code when we inevitably learn that it didn’t solve the problem (or when the problem changes).

You can cater a dinner party in most kitchens. But not every kitchen is easy to change.

It’s vital to remember that this is an iterative process. It only works if we can go around again. And again. And again. So organising our code in a way that makes it easy to change is super-important.

Enter stage left: refactoring.

Half the design decisions we make will be made after we’ve written the code that does the job. We may realise that a function or method is too big or too complicated and break it down. We may realise that names we’ve chosen make the code hard to understand, and rename. We may see duplication that could be generalised into a single, reusable abstraction.
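That last kind of refactoring – duplication generalised into a single, reusable abstraction – is worth a toy illustration. These functions and field names are invented for the example, not taken from any real codebase:

```python
# Before: two near-identical totalling functions, written one test at a time.
def total_delivery_fees(orders):
    return sum(o["delivery_fee"] for o in orders)

def total_food_cost(orders):
    return sum(o["food_cost"] for o in orders)

# After: the duplication is refactored into one generalised abstraction.
# Totalling a new field now needs no new code at all.
def total_of(orders, field):
    return sum(o[field] for o in orders)
```

Crucially, nobody planned `total_of` up front. The need for it emerged when the second duplicate appeared in the code – exactly the point of the rule of thumb below.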

Rule of thumb: if your high-level design includes abstractions (interfaces, design patterns, etc.), you’ve detailed too early.

Jason Gorman, probably on a Thursday

The need for abstractions emerges organically as the code grows, through the process of reviewing and refactoring that code. We don’t plan to use factories or the strategy pattern, or to have a Vendor interface, in our solution. We discover the need for them to solve problems of software maintainability.

By applying organising principles like Simple Design, D.R.Y., Tell, Don’t Ask, Single Responsibility and the rest to the code as it grows, good, maintainable modular designs will emerge – often in unexpected ways. Let go of your planned architecture, and let the code guide you. Face it, it was going to be wrong anyway. Trust me: I know.

Here’s another place that far too many teams go wrong. As your code grows and an architecture emerges, it’s very, very helpful to maintain a birds-eye view of what that emerging architecture is becoming. Ongoing visualisation of the software – its modules, patterns, dependencies and so on – is something surprisingly few teams do these days. Working on agile teams, I’ve invested some of my time in creating and maintaining these maps of the actual terrain and displaying them prominently in the team’s area – domain models, UX storyboards, key patterns we’ve applied (e.g., how have we done MVC?). You’d be amazed what gets missed when everyone’s buried in code, neck-deep in details, and nobody’s keeping an eye on the bigger picture. This, regrettably, is becoming a lost skill – the baby Agile threw out with the bathwater.

So we build our theoretical solution, and deliver it to end users to try. And this is where the design process really starts.

Until working code meets the real world, it’s all guesswork at best. We may learn that some of the restaurants are actually using dairy products in the preparation of their “vegan” dishes. Those naughty people! We may discover that different customers have very different ideas about what a “decent vegan menu” looks like. We may learn that our estimated delivery times are wildly inaccurate because restaurants tell fibs to get more orders. We may get hundreds of spoof orders from teenagers messing with the app from the other side of the world.

Here’s my point: once the system hits the real world, whatever we thought was going to happen almost certainly won’t. There are always lessons that can only be learned by trying it for real.

So we go again. And that is the true essence of software design.

When are we done? When we’ve solved the problem.

And then we move on to the next problem. (e.g., “Yeah, vegan food’s great, but what about vegan booze?”)

Will There Be A Post-Pandemic IT Boom?

For billions of people around the world, things are pretty uncertain right now. Hundreds of millions have lost their jobs. Businesses of all sizes – but especially smaller and newer businesses, many of them start-ups – are in trouble. Many have already folded.

The experts predict a recession the likes of which we haven’t seen in anyone’s lifetime. But there may be one sector that – as the dust settles – might even grow faster as a result of the pandemic.

Information and communications technology has come to the fore as country after country locked down, compelling those businesses that could to let their employees work from home. This would not have been possible a generation ago for the vast majority. Most homes did not have computers, and almost no homes had Internet. Now, it’s the reverse.

While some household name brands have run into serious difficulties, new brands have become household names in the last 3 months – companies like Zoom, for example. “Zooming” is now as much a thing as “hoovering”.

Meanwhile, tens of thousands of established businesses have had their digital transformations stress-tested for the first time, and have found them wanting. From extreme cases like UK retailer Primark, who effectively had no online capability, to old hands in every sector who’ve invested billions in digital over the last 30 years, it seems most were not quite as “digital” as it turns out they needed to be.

From customer-facing transactions to internal business processes, the pandemic has revealed gaps that were being filled by people necessarily co-located in offices and shops and factories and so on. A client of mine, for example, still doesn’t have the ability to sign up new suppliers without a human being in the accounts department to access the mainframe via one of the dedicated terminals. They are rushing now to close that gap, but their mainframe skills base has dwindled to the point where nobody knows how. So they have to hire someone with COBOL skills to make the changes on that side, and C# skills to write the web front end for it. Good luck with that!

I’m noticing these digital gaps everywhere. Most organisations have them. They were missed because the processes still worked, thanks to the magic of People Going To Offices™. But now those gaps have been laid bare for everyone to see (and for customers and suppliers to experience).

Here’s the thing: things aren’t going back to normal. The virus is going to be with us for some time, and even after we’ve tamed it with a vaccine or new treatments, everyone will be thinking about the next new virus. Just as COVID-19 leaves its mark on the people it infects, the pandemic will leave its mark on our civilisation. We will adapt to a new normal. And a big component of that new normal will be digital technology. As wars accelerate science and technology, so too will COVID-19 accelerate digital innovation.

And it’s a match made in heaven, because this innovation can largely be done from our homes, thanks to… digital technology! It’s a self-accelerating evolution.

So, I have an inkling we’re going to be very busy in the near future.