The Jason’s Guitar Shack kata – Part I (Core Logic)

This week, I’ve been coaching developers for an automotive client in Specification By Example (or, as I call it these days, “customer-driven TDD”).

The Codemanship approach to software design and development has always been about solving problems, as opposed to building products or delivering features.

So I cooked up an exercise that starts with a customer with a business problem, and tasked pairs to work with that customer to design a simple system that might solve the problem.

It seems to have gone well, so I thought I’d share the exercise with you for you to try for yourselves.

Jason’s Guitar Shack

I’m a retired international rock legend who has invested his money in a guitar shop. My ex-drummer is my business partner, and he runs the shop, while I’ve been a kind of silent partner. My accountant has drawn my attention to a problem in the business. We have mysterious gaps in the sales of some of our most popular products.

I can illustrate it with some data I pulled off our sales system:

Date        Time   Product ID  Quantity  Price Charged (£)
13/07/2019  10:47  757         1         549
13/07/2019  12:15  757         1         549
13/07/2019  17:23  811         1         399
14/07/2019  11:45  449         1         769
14/07/2019  13:37  811         1         399
14/07/2019  15:01  811         1         399
15/07/2019  09:26  757         1         549
15/07/2019  11:55  811         1         399
16/07/2019  10:33  374         1         1199
20/07/2019  14:07  449         1         769
22/07/2019  11:28  449         1         769
24/07/2019  10:17  811         2         798
24/07/2019  15:31  811         1         399
Product sales for 4 selected guitar models

Product 811 – the Epiphone Les Paul Classic in worn Cherry Sunburst – is one of our biggest sellers.


We sell one or two of these a day, usually. But if you check out the sales data, you’ll notice that between July 15th and July 24th, we didn’t sell any at all. These gaps appear across many product lines, throughout the year. We could be losing hundreds of thousands of pounds in sales.

After some investigation, I discovered the cause, and it’s very simple: we keep running out of stock.

When we reorder stock from the manufacturer or other supplier, it takes time for them to fulfil our order. Every product has a lead time on delivery to our warehouse, which is recorded in our warehouse system.

Description                                                       Price (£)  Stock  Rack Space  Manufacturer Delivery Lead Time (days)  Min Order
Fender Player Stratocaster w/ Maple Fretboard in Buttercream      549        12     20          14                                       10
Fender Deluxe Nashville Telecaster MN in 2 Colour Sunburst        769        5      10          21                                       5
Ibanez RG652AHMFX-NGB RG Prestige Nebula Green Burst (inc. case)  1199       2      5           60                                       1
Epiphone Les Paul Classic In Worn Heritage Cherry Sunburst        399        22     30          14                                       20
Product supply lead times for 4 selected guitars

My business partner – the store manager – typically only reorders stock when he realises we’ve run out (usually when a customer asks for it, and he checks to see if we have any). Then we have no stock at all while we wait for the manufacturer to supply more, and during that time we lose a bunch of sales. In this age of the Electric Internet, if we don’t have what the customer wants, they just take out their smartphone and buy it from someone else.

This is the business problem you are tasked with solving: minimise lost sales due to lack of stock.

There are some wrinkles to this problem, of course. We could solve it by cramming our warehouse full of reserve stock. But that would create a cash flow problem for the business, as we have bills to pay while products are gathering dust on our shelves. So here’s the constraint: while we don’t want to run out of products, we also want as little stock sitting in the warehouse as possible.

The second wrinkle we need to deal with is that sales are seasonal. We sell three times as much of some products in December as we do in August, for example. So any solution would need to take that into account to reduce the risk of under- or over-stocking.
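
Purely to illustrate the idea – this is not part of the kata, and the numbers are invented – one simple way to model seasonality is a monthly index that scales average daily sales up or down:

// Illustrative sketch only: scale average daily sales by a seasonal index.
// If December sells three times what August does, the indices might look
// like this, relative to a yearly average of 1.0.
const seasonalIndex = { Aug: 0.5, Dec: 1.5 /* ...other months... */ };

function expectedDailySales(averageDailySales, month) {
    return averageDailySales * (seasonalIndex[month] ?? 1.0);
}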

So here’s the exercise for a group of 2 or more:

  • Nominate someone in your group as the “customer”. They will decide what works and what doesn’t as far as a solution is concerned.
  • Working with your customer, describe in a single sentence a headline feature – this is a simple feature that solves the problem. (Don’t worry about how it works yet, just describe what it does.)
  • Now, think about how your headline feature would work. Describe up to a maximum of 5 supporting features that would make the headline feature possible. These could be user-facing features, or internal features used by the headline feature. Remember, we’re trying to design the simplest solution possible.
  • For each feature, starting with the headline feature, imagine the scenarios the system would need to handle. Describe each scenario as a simple headline (e.g., “product needs restocking”). Build a high-level test list for each feature.
  • The design and development process now works one feature at a time, starting with the headline feature.
    • For each feature’s scenario, describe in more detail how that scenario will work. Capture the set-up for that scenario, the action or event that triggers the scenario, and the outcomes the customer will expect to happen as a result. Feel free to use the Given…When…Then style. (But remember: it’s not compulsory, and won’t make any difference to the end result.)
    • For each scenario, capture at least one example with test data for every input (every variable in the set-up and every parameter of the action or event), and for every expected output or outcome. Be specific. Use the sample data from our warehouse and sales systems as a starting point, then choose values that fit your scenario.
    • Working one scenario at a time, test-drive the code for its core logic using the examples, writing one unit test for each output or outcome. Organise and name your tests and test fixtures so it’s obvious which feature, which scenario and which output or outcome they relate to. Try as much as possible to choose names that appear in the text you’ve written with your customer. You’re aiming for unit tests that effectively explain the customer’s tests. (There’s a sketch of what one such test might look like just after this list.)
    • Use test doubles – stubs and mocks – to abstract external dependencies like the sales and warehouse systems, as well as to Fake It Until You Make it for any supporting logic covered by features you’ll work on later.
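
To give you a feel for those last couple of steps, here’s a minimal sketch of what one such unit test might look like in JavaScript with Mocha. Everything in it – the reorderQuantity function, the scenario, the numbers – is hypothetical, just to illustrate the shape, not a model answer:

const assert = require("assert");

// Hypothetical core logic: order enough stock to cover expected sales
// over the delivery lead time, respecting the minimum order quantity.
function reorderQuantity(product, averageDailySales) {
    const neededDuringLead = averageDailySales * product.leadTimeDays;
    const shortfall = Math.ceil(neededDuringLead - product.stock);
    return Math.max(product.minOrder, shortfall);
}

describe("Feature: automatic restocking", () => {
    describe("Scenario: product needs restocking", () => {
        it("orders enough to cover expected sales during the lead time", () => {
            // Given a guitar with 2 left in stock, a 14-day delivery lead
            // time, a minimum order of 20, and sales averaging 1.5 a day
            const lesPaul = { stock: 2, leadTimeDays: 14, minOrder: 20 };
            // When the reorder quantity is calculated
            const quantity = reorderQuantity(lesPaul, 1.5);
            // Then the shortfall (21 - 2 = 19) is rounded up to the min order
            assert.equal(quantity, 20);
        });
    });
});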

And that’s Part I of the exercise. At the end, you should have the core logic of your solution implemented and ready to incorporate into a complete working system.

Here’s a copy of the sample data I’ve been using with my coachees – stick close to it when discussing examples, because this is the data that your system will be consuming in Part II of this kata, which I’ll hopefully write up soon.

Good luck!

Readable Parameterized Tests

Parameterized tests (sometimes called “data-driven tests”) can be a useful technique for removing duplication from test code, as well as potentially buying teams much greater test assurance with surprisingly little extra code.

But they can come at the price of readability. So if we’re going to use them, we need to invest some care in making sure it’s easy to understand what the parameter data means, and to ensure that the messages we get when tests fail are meaningful.

Some testing frameworks make it harder than others, but I’m going to illustrate using some mocha tests in JavaScript.

Consider this test code for a Mars Rover:

it("turns right from N to E", () => {
let rover = {facing: "N"};
rover = go(rover, "R");
assert.equal(rover.facing, "E");
})
it("turns right from E to S", () => {
let rover = {facing: "E"};
rover = go(rover, "R");
assert.equal(rover.facing, "S");
})
it("turns right from S to W", () => {
let rover = {facing: "S"};
rover = go(rover, "R");
assert.equal(rover.facing, "W");
})
it("turns right from W to N", () => {
let rover = {facing: "W"};
rover = go(rover, "R");
assert.equal(rover.facing, "N");
})
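These snippets assume a go function is in scope, along with Mocha and Node’s assert. Purely for context, here’s a minimal sketch of the kind of implementation the tests might be exercising – an assumption on my part, not code from the original example:

// Assumed for context only – a minimal implementation the tests could drive.
const compass = ["N", "E", "S", "W"];

function go(rover, instruction) {
    const index = compass.indexOf(rover.facing);
    const step = instruction === "R" ? 1 : 3; // turning left = three rights
    return { ...rover, facing: compass[(index + step) % 4] };
}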

These four tests are different examples of the same behaviour, and there’s a lot of duplication (I should know – I copied and pasted them myself!).

We can consolidate them into a single parameterized test:

[{input: "N", expected: "E"}, {input: "E", expected: "S"}, {input: "S", expected: "W"},
{input: "W", expected: "N"}].forEach(
function (testCase) {
it("turns right", () => {
let rover = {facing: testCase.input};
rover = go(rover, "R");
assert.equal(rover.facing, testCase.expected);
})
})

While we’ve removed a fair amount of duplicate test code, arguably this single parameterized test is harder to follow – both at read-time, and at run-time.

Let’s start with the parameter names. Can we make it more obvious what roles these data items play in the test, instead of just using generic names like “input” and “expected”?

[{startsFacing: "N", endsFacing: "E"}, {startsFacing: "E", endsFacing: "S"}, {startsFacing: "S", endsFacing: "W"},
{startsFacing: "W", endsFacing: "N"}].forEach(
function (testCase) {
it("turns right", () => {
let rover = {facing: testCase.startsFacing};
rover = go(rover, "R");
assert.equal(rover.facing, testCase.endsFacing);
})
})

And how about we format the list of test cases so they’re easier to distinguish?

[
    {startsFacing: "N", endsFacing: "E"},
    {startsFacing: "E", endsFacing: "S"},
    {startsFacing: "S", endsFacing: "W"},
    {startsFacing: "W", endsFacing: "N"}
].forEach(
    function (testCase) {
        it("turns right", () => {
            let rover = {facing: testCase.startsFacing};
            rover = go(rover, "R");
            assert.equal(rover.facing, testCase.endsFacing);
        })
    })

And how about we declutter the body of the test a little by destructuring the testCase object?

[
    {startsFacing: "N", endsFacing: "E"},
    {startsFacing: "E", endsFacing: "S"},
    {startsFacing: "S", endsFacing: "W"},
    {startsFacing: "W", endsFacing: "N"}
].forEach(
    function ({startsFacing, endsFacing}) {
        it("turns right", () => {
            let rover = {facing: startsFacing};
            rover = go(rover, "R");
            assert.equal(rover.facing, endsFacing);
        })
    })

Okay, hopefully this is much easier to follow. But what happens when we run these tests? The test report just says “turns right” four times.

It’s not at all clear which test case is which. So let’s embed some identifying data inside the test name.

[
    {startsFacing: "N", endsFacing: "E"},
    {startsFacing: "E", endsFacing: "S"},
    {startsFacing: "S", endsFacing: "W"},
    {startsFacing: "W", endsFacing: "N"}
].forEach(
    function ({startsFacing, endsFacing}) {
        it(`turns right from ${startsFacing} to ${endsFacing}`, () => {
            let rover = {facing: startsFacing};
            rover = go(rover, "R");
            assert.equal(rover.facing, endsFacing);
        })
    })

Now when we run the tests, we can easily identify which test case is which.

With a bit of extra care, it’s possible with most unit testing tools – not all, sadly – to have our cake and eat it with readable parameterized tests.

Codemanship Code Craft Videos

Over the last 6 months, I’ve been recording hands-on tutorials about code craft – TDD, design principles, refactoring, CI/CD and more – for the Codemanship YouTube channel.

I’ve recorded the same tutorials in JavaScript, Java, C# and (still being finished) Python.

As well as serving as a back-up for the Codemanship Code Craft training course, these video series form possibly the most comprehensive free learning resource on the practices of code craft available anywhere.

Each series has over 9 hours of video, plus links to example code and other useful resources.

Codemanship Code Craft videos currently available

I’ve heard from individual developers and teams who’ve been using these videos as the basis for their practice road map. What seems to work best is to watch a video, and then straight away try out the ideas on a practical example (e.g., a TDD kata or a small project) to see how they can work on real code.

In the next few weeks, I’ll be announcing Codemanship Code Craft Study Groups, which will bring groups of like-minded learners together online once a week to watch the videos and pair program on carefully designed exercises with coaching from myself.

This will be an alternative way of receiving our popular training, but with more time dedicated to hands-on practice and coaching, and more time between lessons for the ideas to sink in. It should also be significantly less disruptive than taking a whole team out for 3 days for a regular training course, and significantly less exhausting than 3 full days of Zoom meetings! Plus the price per person will be the same as the regular Code Craft course.

Slow Tests Kill Businesses

I’m always surprised at how few organisations track some pretty fundamental stats about software development, because if they did then they might notice what’s been killing their business.

It’s a picture I’ve seen many, many times; a software product or system is created, and it goes live. But it has bugs. Many bugs. So, a bigger chunk of the available development time is used up fixing bugs for the second release. Which has even more bugs. Many, many bugs. So an even bigger chunk of the time is used to fix bugs for the third release.

It looks a little like this:

Over the lifetime of the software, the proportion of development time devoted to bug fixing increases until that’s pretty much all the developers are doing. There’s precious little time left for new features.

Naturally, if you can only spare 10% of available dev time for new features, you’re going to need 10 times as many developers. Right? This trend is almost always accompanied by rapid growth of the team.

So the 90% of dev time you’re spending on bug fixing is actually 90% of the time of a team that’s 10x as large – 900% of the cost of your first release, just fixing bugs.

So, in real terms, every new feature in the eighth release ends up costing 10x what it would have cost in the first. For most businesses, this rules out change – unless they’re super, super successful (i.e., lucky). It’s just too damned expensive.

And when you can’t change your software and your systems, you can’t change the way you do business at scale. Your business model gets baked in – petrified, if you like. And all you can do is throw an ever-dwindling pot of money at development just to stand still, while you watch your competitors glide past you with innovations you’ll never be able to offer your customers.

What happens to a business like that? Well, they’re no longer in business. Customers defected in greater and greater numbers to competitor products, frustrated by the flakiness of the product and tired of being fobbed off with promises about upgrades and hotly requested features and fixes that never arrived.

Now, this effect is entirely predictable. We’ve known about it for many decades, and we’ve known the causal mechanism, too.

Source: IBM System Science Institute

The longer a bug goes undetected, the more it costs to fix – and that cost grows exponentially with time. In terms of process, the sooner we test new or changed code, the cheaper the fix is. This effect is so marked that teams actually find that if they speed up testing feedback loops – testing earlier and more often – they deliver working software faster.

This is very simply because they save more time downstream on bug fixes than they invest in earlier and more frequent testing.

The data used in the first two graphs was taken from a team that took more than 24 hours to build and test their code.

Here are the same stats from a team who could build and test their code in less than 2 minutes (I’ve converted from releases to quarters to roughly match the 12-24 week release cycles of the first team – this second team was actually releasing every week):

This team has nearly doubled in size over the two years, which might sound bad – but it’s a rosier picture than the first team, whose costs spiralled to more than 1000% of their first release, most of which was being spent fixing bugs and effectively going round and round in circles chasing their own tails while their customers defected in droves.

I’ve seen this effect repeated in business after business – of all shapes and sizes: software companies, banks, retail chains, law firms, broadcasters, you name it. I’ve watched $billion businesses – some more than a century old – brought down by their inability to change their software and their business-critical systems.

And every time I got down to the root cause, there they were – slow tests.

Every. Single. Time.

Is Your Agile Transformation Just ‘Agility Theatre’?

I’ve talked before about what I consider to be the two most important feedback loops in software development.

When I explain the feedback loops – the “gears” – of Test-Driven Development, I go to great pains to highlight which of those gears matter most, in terms of affecting our odds of success.

(Diagram: the gears – the nested feedback loops – of Test-Driven Development)

Customer or business goals drive the whole machine of delivery – or at least, they should. We are not done because we passed some acceptance tests, or because a feature is in production. We’re only done when we’ve solved the customer’s problem.

That’s very likely going to require more than one go-around. Which is why the second most important feedback loop is the one that establishes if we’re good to go for the next release.

The ability to establish quickly and effectively if the changes we made to the software have broken it is critical to our ability to release it. Teams who rely on manual regression testing can take weeks to establish this, and their release cycles are inevitably very slow. Teams who rely mostly on automated system and integration tests have faster release cycles, but still usually far too slow for them to claim to be “agile”. Teams who can re-test most of the code in under a minute are able to release as often as the customer wants – many times a day, if need be.

The speed of regression testing – of establishing if our software still works – dictates whether our release cycles span months, weeks, or hours. It determines the metabolism of our delivery cycle and ultimately how many throws of the dice we get at solving the customer’s problem.

It’s as simple as that: faster tests = more throws of the dice.

If the essence of agility is responding to change, then I conclude that fast-running automated tests lie at the heart of that.

What’s odd is how so many “Agile transformations” seem to focus on everything but that. User stories don’t make you responsive to change. Daily stand-ups don’t make you responsive to change. Burn-down charts don’t make you responsive to change. Kanban boards don’t make you responsive to change. Pair programming doesn’t make you responsive to change.

It’s all just Agility Theatre if you’re not addressing the two most fundamental feedback loops, which the majority of organisations simply don’t. Their definition of done is “It’s in production”, as they work their way through a list of features instead of trying to solve a real business problem. And they all too often under-invest in the skills and the time needed to wrap software in good fast-running tests, seeing that as less important than the index cards and the Post-It notes and the Jira tickets.

I talk often with managers tasked with “Agilifying” legacy IT (e.g., mainframe COBOL systems). This means speeding up feedback cycles, which means speeding up delivery cycles, which means speeding up build pipelines, which – 99.9% of the time – means speeding up testing.

After version control, it’s #2 on my list of How To Be More Agile. And, very importantly, it works. But then, we shouldn’t be surprised that it does. Maths and nature teach us that it should. How fast do bacteria or fruit flies evolve – with very rapid “release cycles” of new generations – vs elephants or whales, whose evolutionary feedback cycles take decades?

There are two kinds of Agile consultant: those who’ll teach you Agility Theatre, and those who’ll shrink your feedback cycles. Non-programmers can’t help you with the latter, because the speed of the delivery cycle is largely determined by test execution time. Speeding up tests requires programming, as well as knowledge and experience of designing software for testability.

70% of Agile coaches are non-programmers. A further 20% are ex-programmers who haven’t touched code for over a decade. (According to the hundreds of CVs I’ve seen.) That suggests that 90% of Agile coaches are teaching Agility Theatre, and maybe 10% are actually helping teams speed up their feedback cycles in any practical sense.

It also strongly suggests that most Agile transformations have a major imbalance: investing heavily in the theatre, but little if anything in speeding up delivery cycles.

Introduction to Test-Driven Development Video Series

Over the last month, I’ve been recording screencasts introducing folks to the key ideas in TDD.

Each series covers 6 topics over 7 videos, with practical demonstrations instead of slides – just the way I like it.

They’re available in the four most used programming languages today – JavaScript, Java, C# and Python.

Of course, like riding a bike, you won’t master TDD just by watching videos. You can only learn TDD by doing it.

On our 2-day TDD training workshop, you’ll get practical, hands-on experience applying these ideas with real-time guidance from me.

Scheduled Online TDD Courses, May – July

I’ve got three publicly scheduled 2-day Test-Driven Development courses coming up between May and July.

I appreciate that we’re all going through some – how shall we say? – interesting times right now, and that many of you are getting used to working remotely for the first time.

Hopefully what you’re learning is that distributed teams can be as effective – sometimes even more effective – than co-located teams.

For every advantage of co-location lost, we gain another when we remove the distractions of open plan offices, of long life-sapping commutes, and of never-ending meetings because we just happen to be in the same building.

And online training can be just as effective – sometimes even more effective – than onsite training. We are, after all, looking at and working on code. Screen sharing and webcams work just as well as sitting next to each other once you get the hang of it.

It’s pretty much the exact same training I do in person, only you can enjoy it from the comfort of your own home, without the daily commute either side and without the many distractions of being in your office. And the catering can be great – depending on what you’ve got in your fridge.

To find out more about the course, visit http://codemanship.co.uk/tdd.html


For Distributed Teams, Code Craft is Critical

Right now, most software teams all around the world are working from home. Many have not done it before, and are on a learning curve that means last week’s productivity won’t be returning for a while.

I’ve worked on distributed teams many times, and – through Codemanship – trained and mentored dozens of teams remotely. One thing I’ve learned from all that remote development experience is that coding discipline becomes super-important.

Just as distributed systems amplify every design flaw, turning what would be a headache in a monolith into a major outbreak in a service-oriented architecture, distributed working amplifies team dysfunctions as the communication pathways take on extra weight.

Here’s how code craft can help:

  • Unit tests – keeping the software working is Distributed Dev Team 101. Open Source projects rely on suites of fast-running tests to protect against check-ins that break the code.
  • Continuous Integration – is how distributed teams communicate their changes to each other. Co-located teams should merge their changes to the master branch often and stay build-aware, keeping one eye on other people’s merges to see what’s changed. But it’s much easier on co-located teams to keep everyone in step, because we can see and talk to each other about the changes we’re making. If remote developers do infrequent large merges, integration hell gets amplified tenfold by the extra communication barriers.
  • Test-Driven Development – a lot of the communication between developers, and between developers and their customers, can be handwavy and vague. And if communication is easy – like on a co-located team – we just go around a few more times until we converge on what’s required. But when communication is harder, like in distributed teams, a few more goes around gets very expensive. Using executable tests as specifications removes the ambiguity. It should do exactly this. Also, TDD – done well – produces suites of useful, fast-running automated tests. It’s a win-win.
  • Design Principles – Well-factored code is very important to co-located teams, and super-duper-important to distributed teams. Let’s count the ways:
    • Simple Design
      • Code should work – if it doesn’t work, we can’t ship it. Any changes that break the code block the team. It’s a big deal on a co-located team, but it’s a really big deal on a distributed team.
      • Code should clearly communicate its intent – code should speak for itself; when developers are working remotely and communicating requires extra effort, this is especially true. The easier code is to understand, the fewer teleconferences are required to understand it.
      • Code should be free of duplication – so much duplication in software is duplication of concepts. This often occurs when developers on teams work in isolation, unaware that someone else has already added a module that does what their module also does. Devs need to be aware of duplication in the code – Continuous Integration and merge awareness helps – and clued up about when they should refactor it and when they should leave it alone.
      • Code should be as simple as we can make it – every line of code that has to be maintained is another straw on the camel’s back. When the camel’s back stretches between multiple locations – possibly in multiple time zones – the impact of every additional straw is felt many-fold.
    • Modular Design
      • Modules should do one job – the ability to change the behaviour of a system by just editing one module is critical to a team’s ability to make the changes they need without treading on the toes of other developers. On distributed teams, multiple developers all making changes to one module for multiple reasons can lead to some spectacular merge train wrecks.
      • Modules should hide their internal workings – the more modules are coupled to each other, the wider the impact of even the smallest changes will be felt. Imagine your distributed team is working precariously balanced on high wires that are all interconnected. What you don’t want is for one person to start violently shaking their wire, sending ripples throughout the network. Or it could all come tumbling down. Again, it’s bad on co-located teams, but it’s Double-Plus-Triple-Word-Score-Bad on distributed teams. Every dependency can bring pain.
      • Modules should not depend directly on implementations of other modules – it’s good architecture generally for modules not to bind directly to implementations of the other modules they use, for a variety of reasons. But it’s especially important when teams aren’t co-located. Taken together, the first three principles of modular design are better known as “Separation of Concerns”. Or, as I like to call it, the Principle of Somebody Else’s Problem. If my module needs to send an email, I shouldn’t need to know how emails are actually sent – all that detail should be hidden from me – and I should be able to work on my code without having to actually send emails when I test it. Sending emails is somebody else’s problem. It’s particularly useful in a test-driven approach to design to be able to write a test for code that has external dependencies – things it uses that other developers are working on – without binding directly to the implementation of that external component, so that we can swap in a test double that pretends to do that job. That’s how you scale TDD. That’s how you make TDD work in distributed teams, too. (There’s a small sketch of this idea just after this list.)
      • Module interfaces should be designed from the client’s point of view – tied together with TDD, we can specify modules very precisely from the outside: this is what it should look like (interface) and this is what it should do (tests). Imagine your distributed team is making a jigsaw: the hard way to do it is to have each person go off and make a piece of the jigsaw and then hope that they all fit together at the end. The smart way to do it is to define the shapes of the pieces as parts of the whole puzzle, and then have people implement the pieces based on the interfaces and tests agreed. You do this by designing systems from the outside in, defining modules by how they will be used from the client code’s POV. This also helps to restrict public interfaces to only what clients need to see, hiding internal details, improving encapsulation and reducing coupling. Coupling on distributed teams can be very, very expensive.
    • Refactoring – the still-rather-too-rare discipline of reshaping code without breaking the software is the means by which we achieve good design. Try as we might to never write code that’s hard to understand, or has duplication, or is overly complex, or too tightly coupled, we’ll always need to clean up our code as we go. If the impact of poor design is amplified on distributed teams, the importance of refactoring must be proportionally amplified. The alternative is relying on after-the-fact code reviews (e.g., in GitFlow), which will become multiple times the bottleneck they already were when your team was co-located and you could just pop over to Mary’s desk and ask.
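
To make the Somebody Else’s Problem idea concrete, here’s a minimal sketch in JavaScript. The OrderNotifier and MockEmailSender names and the test itself are all invented for illustration – the point is only that the notifier binds to an abstraction, so a test double can stand in for the real email module another developer might still be building:

const assert = require("assert");

// Depends on an abstraction – anything with a send(to, message) method –
// not on a concrete email implementation.
class OrderNotifier {
    constructor(emailSender) {
        this.emailSender = emailSender; // injected: real sender or test double
    }

    notify(order) {
        this.emailSender.send(order.customerEmail, `Your order #${order.id} has shipped`);
    }
}

// A hand-rolled mock standing in for the real email module.
class MockEmailSender {
    constructor() {
        this.sent = [];
    }

    send(to, message) {
        this.sent.push({ to, message });
    }
}

// Exercise the notifier without sending a single real email.
const mock = new MockEmailSender();
new OrderNotifier(mock).notify({ id: 42, customerEmail: "jimi@example.com" });
assert.deepEqual(mock.sent, [
    { to: "jimi@example.com", message: "Your order #42 has shipped" }
]);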

Underpinning all of this is a need for levels of delivery process automation – automated testing, automated builds, automated deployments, automated code reviews – that the majority of teams are nowhere near.

And then there’s the interpersonal: the communication, the coordination, the planning and tracking, the collaborative design. It takes a big investment to make a distributed Agile team as productive as a co-located team.

All the Jiras and GitHubs and cloud-based build pipelines and remote whiteboards and shared IDEs and Zoom meetings in the world won’t save you if the code craft isn’t up to snuff, though. It’s foundational to delivering as a distributed team.

If you want to know more about code craft, visit www.codemanship.com


Lunchtime Learnings in TDD

Most software developers are now working from home due to the COVID-19 pandemic. If you’re one of them, now might be an opportunity to hone your code craft skills so that when things return to normal your market value as a dev will be higher. (Data from itjobswatch.co.uk suggests that developers earn 20% more on average if they have good TDD experience).

We appreciate that things are a bit up in the air at the moment, so taking 2 days out for our TDD course might be a non-starter. That’s why we’ve split it into six 90-minute weekly workshops, run at lunchtimes.

The first workshop is TDD in JavaScript, which starts at 12:30pm GMT next Tuesday. Details and registration can be found here.

This will be followed by TDD in C# on Thursday, 12:30pm GMT.

Workshops in Python and Java will start the following Monday and Friday lunchtimes, so keep your eye on @codemanship for announcements.