Is Your Agile Transformation Just ‘Agility Theatre’?

I’ve talked before about what I consider to be the two most important feedback loops in software development.

When I explain the feedback loops – the “gears” – of Test-Driven Development, I go to great pains to highlight which of those gears matter most, in terms of affecting our odds of success.

[Figure: tdd_gears]

Customer or business goals drive the whole machine of delivery – or at least, they should. We are not done because we passed some acceptance tests, or because a feature is in production. We’re only done when we’ve solved the customer’s problem.

That’s very likely going to require more than one go-around. Which is why the second most important feedback loop is the one that establishes if we’re good to go for the next release.

The ability to establish quickly and effectively if the changes we made to the software have broken it is critical to our ability to release it. Teams who rely on manual regression testing can take weeks to establish this, and their release cycles are inevitably very slow. Teams who rely mostly on automated system and integration tests have faster release cycles, but still usually far too slow for them to claim to be “agile”. Teams who can re-test most of the code in under a minute are able to release as often as the customer wants – many times a day, if need be.

The speed of regression testing – of establishing if our software still works – dictates whether our release cycles span months, weeks, or hours. It determines the metabolism of our delivery cycle and ultimately how many throws of the dice we get at solving the customer’s problem.

It’s as simple as that: faster tests = more throws of the dice.

If the essence of agility is responding to change, then I conclude that fast-running automated tests lie at the heart of that.

What’s odd is how so many “Agile transformations” seem to focus on everything but that. User stories don’t make you responsive to change. Daily stand-ups don’t make you responsive to change. Burn-down charts don’t make you responsive to change. Kanban boards don’t make you responsive to change. Pair programming doesn’t make you responsive to change.

It’s all just Agility Theatre if you’re not addressing the two most fundamental feedback loops – and the majority of organisations simply don’t. Their definition of done is “It’s in production”, as they work their way through a list of features instead of trying to solve a real business problem. And they all too often under-invest in the skills and the time needed to wrap software in good fast-running tests, seeing that as less important than the index cards and the Post-It notes and the Jira tickets.

I talk often with managers tasked with “Agilifying” legacy IT (e.g., mainframe COBOL systems). This means speeding up feedback cycles, which means speeding up delivery cycles, which means speeding up build pipelines, which – 99.9% of the time – means speeding up testing.

After version control, it’s #2 on my list of How To Be More Agile. And, very importantly, it works. But then, we shouldn’t be surprised that it does. Maths and nature teach us that it should. How fast do bacteria or fruit flies evolve – with very rapid “release cycles” of new generations – vs elephants or whales, whose evolutionary feedback cycles take decades?

There are two kinds of Agile consultant: those who’ll teach you Agility Theatre, and those who’ll shrink your feedback cycles. Non-programmers can’t help you with the latter, because the speed of the delivery cycle is largely determined by test execution time. Speeding up tests requires programming, as well as knowledge and experience of designing software for testability.

70% of Agile coaches are non-programmers. A further 20% are ex-programmers who haven’t touched code for over a decade. (According to the hundreds of CVs I’ve seen.) That suggests that 90% of Agile coaches are teaching Agility Theatre, and maybe 10% are actually helping teams speed up their feedback cycles in any practical sense.

It also strongly suggests that most Agile transformations have a major imbalance: investing heavily in the theatre, but little if anything in speeding up delivery cycles.

Changing Legacy Code Safely

One of the topics we cover on the Codemanship TDD course is one that developers raise often: how can we write fast-running unit tests for code that’s not easily unit-testable? Most developers are working on legacy code – code that’s difficult and risky to change – most of the time. So it’s odd there’s only one book about it.

I highly recommend Michael Feathers’ book for any developer working in any technology applying any approach to development. On the TDD course, I summarise what we mean by “legacy code” – code that doesn’t have fast-running automated tests, making it risky to change – and briefly demonstrate Michael’s process for changing legacy code safely.

The example I use is a simple Python program for pricing movie rentals based on their IMDb ratings. Average-rated movie rentals cost £3.95. High-rated movies cost an extra pound; low-rated movies cost a pound less.

My program has no automated tests, so I’ve been testing it manually using the command line.
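The actual program isn’t reproduced here, but a minimal sketch of what such hard-to-test code might look like follows. The class name Pricer comes from the post; the URL, API key placeholder, and the rating thresholds (7.5 and 4.0) are my assumptions, purely for illustration:

```python
import json
import urllib.request

class Pricer:
    def price(self, imdb_id):
        # Hard-wired call to the OMDB API -- the external dependency
        # that makes this class impossible to unit test offline
        url = "http://www.omdbapi.com/?apikey=YOUR_KEY&i=" + imdb_id
        with urllib.request.urlopen(url) as response:
            rating = float(json.loads(response.read())["imdbRating"])
        if rating >= 7.5:
            return 4.95   # high-rated: a pound extra
        if rating <= 4.0:
            return 2.95   # low-rated: a pound less
        return 3.95       # average-rated
```

With the API call buried inside price(), the only way to test this pricing logic is end-to-end, against the real service – hence the manual command-line testing.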

Suppose the business asked us to change the pricing logic; how could we do this safely if we lack automated tests to guard against breaking the code?

Michael’s process goes like this:

  • Identify what code you will need to change
  • Identify where around that code you’d want unit tests
  • Break any dependencies that are stopping you from writing unit tests
  • Write the unit tests that would satisfy you that the changes you’re about to make didn’t break the code
  • Make the change
  • While you’re there, refactor to improve the code that’s now covered by unit tests to make life easier for the next person who changes it (which could be you)

My Python program has a class called Pricer which we’ll need to change to update the pricing logic.

I’ve been testing this logic one level above by testing the Rental class that uses Pricer.

The script I’ve been testing with manually allows me to create Rental objects for different movies – using their IMDb IDs – and write their data to the command line.

I use three example movies – one high-rated, one low-rated and one medium-rated – to test the code. For example, the output for the high-rated movie looks like this:

C:\Users\User\Desktop\tdd 2.0\python_legacy>python program.py jgorman tt0096754
Video Rental – customer: jgorman. Video => title: The Abyss, price: £4.95

I’d like to reproduce these manual tests as unit tests, so I’ll be writing unittest tests for the Rental class for each kind of movie.

But before I can do that, there’s an external dependency we have to deal with. The Pricer class connects directly to the OMDB API that provides movie information. I want to stub that so I can provide test IMDB ratings without connecting.

Here’s where we have to get disciplined. I want to refactor the code to make it unit-testable, but it’s risky to do that because… there are no unit tests! Opinions differ on approach, but personally – learned through bitter experience – I’ve found that it’s still important to re-test the code after every refactoring, manually if need be. Manually re-testing will feel like a drag, but we all tend to overlook how much time we waste downstream fixing avoidable bugs. It will seem slower, but it’s often actually faster in the final reckoning.

Okay, let’s do a refactoring. First, let’s get that external dependency in its own method.
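A sketch of that extract-method step, continuing the hypothetical version of the code above (thresholds and the API details remain my assumptions, not the actual code):

```python
import json
import urllib.request

class Pricer:
    def price(self, imdb_id):
        # The external dependency now sits behind one method...
        rating = self.fetch_rating(imdb_id)
        if rating >= 7.5:
            return 4.95
        if rating <= 4.0:
            return 2.95
        return 3.95

    def fetch_rating(self, imdb_id):
        # ...so this is the only place that touches the OMDB API
        url = "http://www.omdbapi.com/?apikey=YOUR_KEY&i=" + imdb_id
        with urllib.request.urlopen(url) as response:
            return float(json.loads(response.read())["imdbRating"])
```

The behaviour is unchanged, but the dependency is now isolated in a method we can later override or move.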

I re-run my manual tests. Still passing. So far, so good.

Next, let’s move that new method into its own class.

And re-test. All passing.

To make the dependency on VideoInfo swappable, the instance needs to be injected into the constructor of Pricer from Rental.

And re-test. All passing.

Next, we need to inject the Pricer into Rental, so we can stub VideoInfo in our planned unit tests.
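Putting those steps together, the refactored design might look something like this – a hedged sketch, with defaults on the injected parameters so existing callers keep working (class names follow the post; everything else is illustrative):

```python
import json
import urllib.request

class VideoInfo:
    """The external dependency, now in a class of its own."""
    def fetch_video_info(self, imdb_id):
        url = "http://www.omdbapi.com/?apikey=YOUR_KEY&i=" + imdb_id
        with urllib.request.urlopen(url) as response:
            data = json.loads(response.read())
        return data["Title"], float(data["imdbRating"])

class Pricer:
    def __init__(self, video_info=None):
        # Injected, defaulted -- so it can be swapped for a stub in tests
        self.video_info = video_info if video_info is not None else VideoInfo()

    def price(self, imdb_id):
        _, rating = self.video_info.fetch_video_info(imdb_id)
        if rating >= 7.5:
            return 4.95
        if rating <= 4.0:
            return 2.95
        return 3.95

class Rental:
    def __init__(self, customer, imdb_id, pricer=None):
        self.customer = customer
        pricer = pricer if pricer is not None else Pricer()
        self.price = pricer.price(imdb_id)
```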

And re-test. All passing.

Now we can write unit tests to replicate our command line tests.
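Those tests might look something like this – a sketch using Python’s unittest, with minimal stand-ins for the refactored Pricer and Rental so the example is self-contained (names and thresholds are illustrative, not the original code):

```python
import unittest

# Minimal stand-ins for the refactored classes described above
class Pricer:
    def __init__(self, video_info):
        self.video_info = video_info
    def price(self, imdb_id):
        _, rating = self.video_info.fetch_video_info(imdb_id)
        if rating >= 7.5:
            return 4.95
        if rating <= 4.0:
            return 2.95
        return 3.95

class Rental:
    def __init__(self, customer, imdb_id, pricer):
        self.customer = customer
        self.price = pricer.price(imdb_id)

class StubVideoInfo:
    """Provides test IMDb ratings without connecting to the OMDB API."""
    def __init__(self, rating): self.rating = rating
    def fetch_video_info(self, imdb_id): return ("A Movie", self.rating)

class RentalTests(unittest.TestCase):
    def test_high_rated_movie_costs_an_extra_pound(self):
        rental = Rental("jgorman", "tt0096754", Pricer(StubVideoInfo(9.5)))
        self.assertEqual(4.95, rental.price)

    def test_average_rated_movie_costs_3_95(self):
        rental = Rental("jgorman", "tt0000002", Pricer(StubVideoInfo(6.0)))
        self.assertEqual(3.95, rental.price)

    def test_low_rated_movie_costs_a_pound_less(self):
        rental = Rental("jgorman", "tt0000003", Pricer(StubVideoInfo(2.5)))
        self.assertEqual(2.95, rental.price)
```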

These unit tests reproduce all the checks I was doing visually at the command line, but they run in a fraction of a second. The going gets much easier from here.

Now I can make the change to the pricing logic the business requested.

[Figure: user_story]

We can tackle this in a test-driven way now. Let’s update the relevant unit test so that it now fails.

Now let’s make it pass.

(And, yes – obviously in a real product, the change would likely be more complex than this.)

Okay, so we’ve made the change, and we can be confident we haven’t broken the software. We’ve also added some test coverage and dealt with a problematic dependency in our architecture. If we wanted to get movie ratings from somewhere else (e.g., Rotten Tomatoes), or even aggregate sources, it would be quite straightforward now that we’ve cleanly separated that concern from our business logic.

One last thing while we’re here: there are a couple of things in this code that have been bugging me. Firstly, we’ve been mixing our terminology: the customer says “movie”, but our code says “video”. Let’s make our code speak the customer’s language.

Secondly, I’m not happy with clients accessing objects’ fields directly. Let’s encapsulate.
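In Python, one simple way to sketch that encapsulation is with read-only properties (the class name here follows the movie/video rename; the fields are illustrative):

```python
class Movie:
    """Fields hidden behind read-only properties, so clients
    can't reach in and modify state directly."""
    def __init__(self, title, rating):
        self._title = title
        self._rating = rating

    @property
    def title(self):
        return self._title

    @property
    def rating(self):
        return self._rating
```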

With our added unit tests, these extra refactorings were much easier to do, and hopefully that means that changing this code in the future will be much easier, too.

Over time, one change at a time, the unit test coverage will build up and the code will get easier to change. Applying this process over weeks, months and years, I’ve seen some horrifically rigid and brittle software products – so expensive and risky to change that the business had stopped asking – be rehabilitated and become going concerns again.

By focusing our efforts on changes our customer wants, we’re less likely to run into a situation where writing unit tests and refactoring gets vetoed by our managers. The results of highly visible “refactoring sprints”, or even long refactoring phases – I’ve known clients freeze requirements for up to a year to “refactor” legacy code – are typically disappointing, and run the risk of making refactoring and adding unit tests forbidden by disgruntled bosses.

One final piece of advice: never, ever discuss this process with non-technical stakeholders. If you’re asked to break down an estimate to change legacy code, resist. My experience has been that it often doesn’t take any longer to make the change safely, and the longer-term benefits are obvious. Don’t give your manager or your customer the opportunity to shoot themselves in the foot by offering up unit tests and refactoring as a line item. Chances are, they’ll say “no, thanks”. And that’s in nobody’s interests.

Adventures In Multi-Threading

I’ve been spending my early mornings buried in Java threading recently. Although we talk often of concurrency and “thread safety” in this line of work, there’s surprisingly little actual multi-threaded code being written. Normally, when developers talk about multi-threading, we’re referring to how we write code to handle asynchronous operations in other people’s code (e.g., promises in JavaScript).

My advice to developers has always been to avoid writing multi-threaded code wherever possible. Concurrency is notoriously difficult to get right, and the safest multi-threaded code is single-threaded.

I’ve been eating my own dog food on that, and it occurred to me a couple of weeks back that I’ve written very little multi-threaded code myself in recent years.

But there is still some multi-threaded code being written in languages like Java, C# and Python for high-performance solutions that are targeted at multi-CPU platforms. And over the last few months I’ve been helping a client with just such a solution for scaling up property-based tests to run on multi-core Cloud platforms.

One of the issues we faced was how to test our multi-threaded code.

There’s a practical issue of executing multiple threads in a single-threaded unit test – particularly synchronizing so that we can assert an outcome after all threads have completed their work.

And also, thread scheduling is out of our control and – on Windows and similar platforms – unpredictable and non-repeatable. A race condition or a deadlock might not show up every time we run a test.

Over the last couple of weeks, I’ve been playing with a rough prototype to try and answer these questions. It uses a simple producer-consumer example – loading parcels into a loading bay and then taking them off the loading bay and loading them into a truck – to illustrate the challenges of both safe multi-threading and multi-threaded testing.

When I test multi-threaded code, I’m interested in two properties:

  • Safety – what should always be true while the code is executing?
  • Liveness – what should eventually be achieved?

To test safety, an assertion needs to be checked throughout execution. To test liveness, an assertion needs to be checked after execution.

After writing code to do this, I refactored the useful parts into custom assertion methods, always() and eventually().

always() takes a list of Runnables (Java’s equivalent of functions that accept no parameters and have no return value) that will concurrently perform the work we want to test. It will submit each Runnable to a fixed thread pool a specified number of times (thread count) and then wait for all the threads in the pool to terminate.

On a single separate thread, a boolean function (in Java, Supplier<Boolean>) is evaluated multiple times throughout execution of the threads under test. This terminates after the worker threads have terminated or timed out. If, at any point in execution, the assertion evaluates to false, the test will fail.
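The original implementation is Java and isn’t shown here; as a rough illustration of the same idea, here’s a minimal Python analogue of such an always() helper (names, signatures and the polling interval are my own, not the original API):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def always(invariant, workers, thread_count=2, timeout=1.0):
    """Submit each worker to a thread pool `thread_count` times, checking
    the invariant repeatedly on a watcher thread while they run.
    Returns False if the invariant was ever observed to be violated."""
    ok = True
    done = threading.Event()

    def watch():
        nonlocal ok
        # Event.wait returns True once `done` is set, ending the loop
        while not done.wait(0.0001):
            if not invariant():
                ok = False
                return

    watcher = threading.Thread(target=watch)
    watcher.start()
    with ThreadPoolExecutor(max_workers=len(workers) * thread_count) as pool:
        futures = [pool.submit(w) for w in workers for _ in range(thread_count)]
        for future in futures:
            future.result(timeout=timeout)  # propagate failures / time out
    done.set()
    watcher.join()
    return ok and invariant()  # one final check after the workers finish
```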

In use, it looks like this:

bayLoader and truckLoader are objects that implement the Runnable interface. They will be submitted to the thread pool 2x each (because we’ve specified a thread count of 2 as our third parameter), so there will be 4 worker threads in total, accessing the same data defined in our set-up.

The bayLoader threads will load parcels on to the loading bay, which holds a maximum of 50 parcels, until all the parcels have been loaded.

The truckLoader threads will unload parcels from the loading bay and load them on to the truck, until the entire manifest of parcels has been loaded.

A safety property of this concurrent logic is that there should never be more than 50 parcels in the loading bay at any time, and that’s what our always assertion checks multiple times during execution:

() -> bay.getParcelCount() <= 50

When I run this test once, it passes. Running it multiple times, it still passes. But just because a test’s passing, that doesn’t mean our code really works. Let’s deliberately introduce an error into our test assertion to make sure it fails.

() -> bay.getParcelCount() <= 49

The first time I run this, the test fails. And the second and third times. But on the fourth run, the test passes. This is the thread determinism problem; we have no control over when our assertion is checked during execution. Sometimes it catches a safety error. Sometimes the error slips through the gaps and the test misses it.

The good news is that if it catches an error just once, that proves we have an error in our concurrent logic. Of course, if we catch no errors, that doesn’t prove they’re not there. (Absence of evidence isn’t evidence of absence.)

What if we run the test 100 times? Rather than sit there clicking the “run” button over and over, I can rig this test up as a JUnitParams parameterised test and feed it 100 test cases. (If you don’t have a parameterised testing feature, you can just loop 100 times).

When I run this, it fails 91/100 times. Changing the assertion back, it passes 100/100. So I can have 100% confidence the code satisfies this safety property? Not so fast. 100 test runs leaves plenty of gaps. Maybe I can be 99% confident with 100 test runs. How about we do 1000 test runs? Again, they all pass. So that gives me maybe 99.9% confidence. 10,000 could give me 99.99% confidence. And so on.

Thankfully, after a little performance engineering, 10,000 tests run in less than 30 seconds. All green.

The eventually() assertion method works along similar lines, except that it only evaluates its assertion once at the end (and therefore runs significantly faster):
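Again, the original is Java; a hedged Python analogue of eventually() might look like this (treating a timeout as a failure, on the assumption that the likely cause is deadlock):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def eventually(outcome, workers, thread_count=2, timeout=1.0):
    """Run the workers to completion, then evaluate the outcome just once."""
    pool = ThreadPoolExecutor(max_workers=len(workers) * thread_count)
    futures = [pool.submit(w) for w in workers for _ in range(thread_count)]
    try:
        for future in futures:
            future.result(timeout=timeout)
    except FutureTimeout:
        pool.shutdown(wait=False)  # give up on the stuck threads
        return False               # probable deadlock
    pool.shutdown()
    return outcome()
```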

If my code encounters a deadlock, the worker threads will time out after 1000 milliseconds. If a race condition occurs and our data becomes corrupted, the assertion will fail. Running this 10,000 times shows all the tests are green. I’m 99.99% confident my concurrent logic works.

Finally, speaking of deadlocks and race conditions, how might we avoid those?

A race condition can occur when two or more threads attempt to access the same data at the same time. In particular, we run the risk of a pre-condition paradox when bay loaders attempt to load parcels on to the loading bay, and truck loaders attempt to unload parcels from the bay.

The bay loader can only load a parcel if the bay is not full. A truck loader can only unload a parcel if the bay is not empty.

When I run my tests with this implementation of LoadingBay, 12% of them fail their liveness and safety checks, because there’s a non-zero chance of, say, one bay loader checking that the bay isn’t full and another bay loader loading the 50th parcel in between that check and the load. Similarly, a truck loader might check that the bay isn’t empty, but before it unloads the last parcel, another truck loader thread takes it.

To avoid this situation, we need to ensure that pre-condition checks and actions are executed in a single, atomic sequence with no chance of other threads interfering.
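A sketch of that atomic check-and-act in Python (the original example is Java; names and capacity handling are illustrative). Note that this version still gives up when its pre-condition doesn’t hold – which is exactly the flaw the next paragraph describes:

```python
import threading
from collections import deque

class LoadingBay:
    """Check-and-act made atomic with a lock. No other thread can
    interleave between the pre-condition check and the action."""
    CAPACITY = 50

    def __init__(self):
        self._parcels = deque()
        self._lock = threading.Lock()

    def load(self, parcel):
        with self._lock:
            if len(self._parcels) >= self.CAPACITY:
                return False          # bay full: the parcel is skipped!
            self._parcels.append(parcel)
            return True

    def unload(self):
        with self._lock:
            if not self._parcels:
                return None           # bay empty: nothing unloaded
            return self._parcels.popleft()
```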

When I test this implementation, tests still fail. The problem is that some parcels aren’t getting loaded on to the bay (though the bay loader thinks they have been), and some parcels aren’t getting unloaded, either. Our truck loader may be putting null parcels on the truck.

When loading, the bay must not be full. When unloading, it must not be empty. So our worker threads need to wait until their pre-conditions are satisfied. Now, Java threading gives us wait() and notify(), but coordinating those by hand is fiddly and error-prone. What we really need is a way to wait until a condition becomes true.
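In Python terms, the same idea can be sketched with threading.Condition and its wait_for method, which blocks until a predicate holds (class and method names are my own, not the original Java):

```python
import threading
from collections import deque

class BlockingLoadingBay:
    """Waits until its pre-condition holds instead of giving up."""
    CAPACITY = 50

    def __init__(self):
        self._parcels = deque()
        self._changed = threading.Condition()

    def load(self, parcel):
        with self._changed:
            # Block until the bay is not full
            self._changed.wait_for(lambda: len(self._parcels) < self.CAPACITY)
            self._parcels.append(parcel)
            self._changed.notify_all()

    def unload(self):
        with self._changed:
            # Block until the bay is not empty
            self._changed.wait_for(lambda: len(self._parcels) > 0)
            parcel = self._parcels.popleft()
            self._changed.notify_all()
            return parcel
```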

This passes all 10,000 safety and liveness test runs, so I have 99.99% confidence we don’t have a race condition. But…

What happens when all the parcels have been loaded on to the truck? There’s a risk of deadlock if the bay remains permanently empty.

So we also need a way to stop the loading and unloading process once all the manifest has been loaded.

I’ve dealt with this in a similar way to waiting for pre-conditions to be satisfied, except this time we repeat loading and unloading until the parcels are all on the truck.

You may have already spotted the patterns in these two forms of loops:

  • Execute this action when this condition is true
  • Execute this action until this condition is true

Let’s refactor to encapsulate those nasty while loops.
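One possible shape for those helpers, sketched in Python (hypothetical names; polling is just one simple way to implement the “when” form):

```python
import time

def when(condition, action, poll=0.001):
    """Execute the action once the condition becomes true."""
    while not condition():
        time.sleep(poll)
    return action()

def until(condition, action):
    """Repeat the action until the condition becomes true."""
    while not condition():
        action()
```

A bay loader’s loop, for example, could then read as `until(manifest_empty, load_next_parcel)`, with the waiting hidden inside the helpers.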

There. That looks a lot better, doesn’t it? All nice and functional.

I tend to find conditional synchronisation easier to wrap my head around than all the wait() and notify() and callbacks malarkey, and my experience so far with this approach suggests I tend to produce more reliable multi-threaded code.

My explorations continue, but I thought there might be folk out there who’d find it useful to see where I’ve got so far with this.

You can see the current source code at https://github.com/jasongorman/syncloop (it’s just a proof of concept, so provided with no warranty or support, of course.)

The Test Pyramid – The Key To True Agility

On the Codemanship TDD course, before we discuss Continuous Delivery and how essential it is to achieving real agility, we talk about the Test Pyramid.

It has various interpretations, in terms of exactly how many layers there are and exactly what kinds of testing each layer is made of (unit, integration, service, controller, component, UI etc.), but the overall sentiment is straightforward:

The longer tests take to run, the fewer of those kinds of tests you should aim to have

[Figure: test_pyramid]

The idea is that the tests we run most often need to be as fast as possible (otherwise we run them less often). These are typically described as “unit tests”, but that means different things to different people, so I’ll qualify: tests that do not involve any external dependencies. They don’t read from or write to databases, they don’t read or write files, they don’t connect with web services, and so on. Everything that happens in these tests happens inside the same memory address space. Call them In-Process Tests, if you like.

Tests that necessarily check our code works with external dependencies have to cross process boundaries when they’re executed. As our In-Process tests have already checked the logic of our code, these Cross-Process Tests check that our code – the client – and the external code – the suppliers – obey the contracts of their interactions. I call these “integration tests”, but some folk have a different definition of integration test. So, again, I qualify it as: tests that involve external dependencies.

These typically take considerably longer to execute than “unit tests”, and we should aim to have proportionally fewer of them and to run them proportionally less often. We might have thousands of unit tests, and maybe hundreds of integration tests.

If the unit tests cover the majority of our code – say, 90% of it – and maybe 10% of our code has direct external dependencies that have to be tested, on average we’ll make about 9 changes that need unit testing compared to 1 change that needs integration testing. In other words, we’d need to run our unit tests 9x as often as our integration tests, which is a good thing if each integration test is about 9 times slower than a unit test.

At the top of our test pyramid are the slowest tests of all. Typically these are tests that exercise the entire system stack, through the user interface (or API) all the way down to the external dependencies. These tests check that it all works when we plug everything together and deploy it into a specific environment. If we’ve already tested the logic of our code with unit tests, and tested the interactions with external suppliers, what’s left to test?

Some developers mistakenly believe that these system-level tests are for checking the logic of the user experience – user “journeys”, if you like. This is a mistake. There are usually a lot of user journeys, so we’d end up with a lot of these very slow-running tests and an upside-down pyramid. The trick here is to make the logic of the user experience unit-testable. View models are a simple architectural pattern for logically representing what users see and what users do at that level. At the highest level they may be looking at an HTML table and clicking a button to submit a form, but at the logical level, maybe they’re looking at a movie and renting it.

A view model can help us encapsulate the logic of user experience in a way that can be tested quickly, pushing most of our UI/UX tests down to the base of the pyramid where they belong. What’s left – the code that must directly reference physical UI elements like HTML tables and buttons – can be wafer thin. At that level, all we’re testing is that views are rendered correctly and that user actions trigger the correct internal logic (which can easily be done using mock objects). These are integration tests, and belong in the middle layer of our pyramid, not the top.
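To make the pattern concrete, here’s a hypothetical Python sketch of a view model for the movie-rental example (class and method names are invented for illustration; the real pattern is UI-framework-agnostic):

```python
class RentMovieViewModel:
    """Logical representation of what the user sees and does on the
    rental page -- no HTML anywhere, so it unit-tests in-process."""
    def __init__(self, catalogue, rentals):
        self._catalogue = catalogue   # e.g. {"The Abyss": 4.95}
        self._rentals = rentals       # collects (title, price) pairs

    def listings(self):
        # What the HTML table would render
        return [(title, "£%.2f" % price)
                for title, price in sorted(self._catalogue.items())]

    def rent(self, title):
        # What clicking the 'Rent' button would trigger
        self._rentals.append((title, self._catalogue[title]))
```

The thin view code then only has to render listings() and wire the button to rent() – both checkable with mocks in the middle layer of the pyramid.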

Another classic error is to check core logic through the GUI. For example, checking that insurance premiums are calculated correctly by looking at what number is rendered on that web page. Some module somewhere does that calculation. That should be unit-testable.

So, if they’re not testing user journeys, and they’re not testing core logic, what do our system tests test? What’s left?

Well, have you ever found yourself saying “It worked on my machine”? The saying goes “There’s many a slip ‘twixt cup and lip.” Just because all the pieces work, and just because they all play nicely together, there’s no guarantee that when we deploy the whole system into, say, our EC2 instances, nothing will be different from the environments we tested it in. I’ve seen roll-outs go wrong because the servers handled dates differently, or had the wrong locale, or a different file system, or security restrictions that weren’t in place on dev machines.

The last piece of the jigsaw is the system configuration, where our code meets the real production environment – or a simulation of it – and we find out if it really works where it’s intended to work, as a whole.

We may need dozens of those kinds of tests, and perhaps only need to run them on, say, every CI build by deploying the outputs to a staging environment that mirrors the production environment (and only if all our unit and integration tests pass first, of course.) These are our “good to go?” tests.

The shape of our test pyramid is critical to achieving feedback loops that are fast enough to allow us to sustain the pace of development. Ideally, after we make any change, we should want to get feedback straight away about the impact of that change. If 90% of our code can be re-tested in under 30 seconds, we can re-test 90% of our changes many times an hour and be alerted within 30 seconds if we broke something. If it takes an hour to re-test our code, then we have a problem.

Continuous Delivery means that our code is always shippable. That means it must always be working, or as near as possible always. If re-testing takes an hour, that means that we’re an hour away from finding out if changes we made broke the code. It means we’re an hour away from knowing if our code is shippable. And, after an hour’s worth of changes without re-testing, chances are high that it is broken and we just don’t know it yet.

An upside-down test pyramid puts Continuous Delivery out of your reach. Your confidence that the code’s shippable at any point in time will be low. And the odds that it’s not shippable will be high.

The impact of slow-running test suites on development is profound. I’ve found many times that when a team invested in speeding up their tests, many other problems magically disappeared. Slow tests – which mean slow builds, which mean slow release cycles – are like a development team’s metabolism. Many health problems can be caused by a slow metabolism. It really is that fundamental.

Slow tests are pennies to the pound of the wider feedback loops of release cycles. You’d be surprised how much of your release cycles are, at the lowest level, made up of re-testing cycles. The outer feedback loops of delivery are made of the inner feedback loops of testing. Fast-running automated tests – as an enabler of fast release cycles and sustained innovation – are therefore highly desirable.

A right-way-up test pyramid doesn’t happen by accident, and doesn’t come for free, though. Many organisations, sadly, aren’t prepared to make that investment, and limp on with upside-down pyramids and slow test feedback until the going gets too tough to continue.

As well as writing automated tests, there’s also an investment needed in your software’s architecture. In particular, the way teams apply basic design principles tends to determine the shape of their test pyramid.

I see a lot of duplicated code that contains duplicated external dependencies, for example. It’s not uncommon to find systems with multiple modules that connect to the same database, or that connect to the same web service. If those connections happened in one place only, that part of the code could be integration tested just once. D.R.Y. helps us achieve a right-way-up pyramid.

I see a lot of code where a module or function that does a business calculation also connects to an external dependency, or where a GUI module also contains business logic, so that the only way to test that core logic is with an integration test. Single Responsibility helps us achieve a right-way-up pyramid.

I see a lot of code where a module in one web service interacts with multiple features of another web service – Feature Envy, but on a larger scale – so there are multiple points of integration that require testing. Encapsulation helps us achieve a right-way-up pyramid.

I see a lot of code where a module containing core logic references an external dependency, like a database connection, directly by its implementation, instead of through an abstraction that could be easily swapped by dependency injection. Dependency Inversion helps us achieve a right-way-up pyramid.
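As a hedged illustration of that last point, a minimal Python sketch of the abstraction-plus-injection shape (names, URL, and rating thresholds are my assumptions, not code from any particular system):

```python
from abc import ABC, abstractmethod
import json
import urllib.request

class MovieRatings(ABC):
    """The abstraction the core logic depends on."""
    @abstractmethod
    def rating(self, imdb_id): ...

class OmdbRatings(MovieRatings):
    """The real implementation -- the only place that touches the network."""
    def __init__(self, api_key):
        self._api_key = api_key
    def rating(self, imdb_id):
        url = "http://www.omdbapi.com/?apikey=%s&i=%s" % (self._api_key, imdb_id)
        with urllib.request.urlopen(url) as response:
            return float(json.loads(response.read())["imdbRating"])

class StubRatings(MovieRatings):
    """A swap-in for fast, in-process unit tests."""
    def __init__(self, fixed):
        self._fixed = fixed
    def rating(self, imdb_id):
        return self._fixed

class Pricer:
    def __init__(self, ratings):
        self._ratings = ratings   # depends on the abstraction, injected
    def price(self, imdb_id):
        rating = self._ratings.rating(imdb_id)
        if rating >= 7.5:
            return 4.95
        if rating <= 4.0:
            return 2.95
        return 3.95
```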

Achieving a design with less duplication, where modules do one job, where components and services know as little as possible about each other, and where external dependencies can be easily stubbed or mocked by dependency injection, is essential if you want your test pyramid to be the right way up. But code doesn’t get that way by accident. There’s significant ongoing effort required to keep the code clean by refactoring. And that gets easier the faster your tests run. Chicken, meet egg.

If we’re lucky enough to be starting from scratch, the best way we know of to ensure a right-way-up test pyramid is to write the tests first. This compels us to design our code in such a way that it’s inherently unit-testable. I’ve yet to come across a team genuinely doing Continuous Delivery who wasn’t doing some kind of TDD.

If you’re working on legacy code, where maybe you’re relying on browser-based tests, or might have no automated tests at all, there’s usually a mountain to climb to get a test pyramid that’s the right way up. You need to write fast-running tests, but you will probably need to refactor the code to make that possible. Egg, meet chicken.

Like all mountains, though, it can be climbed. One small, careful step at a time. Michael Feathers’ book Working Effectively With Legacy Code describes a process for making changes safely to code that lacks fast-running automated tests. It goes something like this:

  • Identify what code you need to change
  • Identify where around that code you’d want unit tests to make the change safely
  • Break any dependencies in that code getting in the way of unit testing
  • Write the unit tests
  • Make the change
  • While you’re there, make other improvements that will help the next developer who needs to change that code (the “boy scout rule” – leave the camp site tidier than you found it)

Change after change, made safely in this way, will – over time – build up a suite of fast-running unit tests that will make future changes easier. I’ve worked on legacy code bases that went from upside-down test pyramids of mostly GUI-based system tests, that took hours or even days to run, to right-side-up pyramids where most of the code could be tested in under a minute. The impact on the cost and the speed of delivery is always staggering. It can be done.

But be patient. A code base might take a year or two to turn around, and at first the going will be tough. I find I have to be super-disciplined in those early stages. I manually re-test as I refactor, and resist the temptation to make a whole bunch of changes at a time before I re-test. Slow and steady, adding value and clearing paths for future changes at the same time.

The 2 Most Critical Feedback Loops in Software Development

When I’m explaining the inner and outer feedback loops of Test-Driven Development – the “wheels within wheels”, if you like – I make the point that the two most important feedback loops are the outermost and the innermost.

[Figure: feedbackloops]

The outermost because the most important question of all is “Did we solve the problem?” The innermost because the answer is usually “No”, so we have to go round again. This means that the code we delivered will need to change, which raises the second most important question; “Did we break the code?”

The sooner we can deliver something so we can answer “Did we solve the problem?”, the sooner we can feed back the lessons learned on the next go round. The sooner we can re-test the code, the sooner we can know if our changes broke it, and the sooner we can fix it ready for the next release.

I realised nearly two decades ago that everything in between – requirements analysis, customer tests, software design, and so on – is, at best, guesswork. A far more effective way of building the right thing is to build something, get folk to use it, and feed back what needs to change in the next iteration. Fast iterations accelerate this learning process, which is why I firmly believe these days that fast iterations – with all that entails – are the true key to building the right thing.

Continuous Delivery – done right, with meaningful customer feedback drawn from real use in the real world (or as close as we dare bring our evolving software to the real world) – is the ultimate requirements discipline.

Fast-running automated tests that provide good assurance that our code’s always working are essential to this. How long it takes to build, test and deploy our software will determine the likely length of those outer feedback loops. Typically, the lion’s share of that build time is regression testing.

About a decade ago, many teams told me “We don’t need unit tests because we have integration tests”, or “We have <insert name of trendy new BDD tool here> tests”. Then, a few years later, their managers were crying “Help! Our tests take 4 hours to run!” A 4-hour build-and-test cycle creates a serious bottleneck, leading to code that’s almost continuously broken without teams knowing. In other words, not shippable.

Turn a 4-hour build-and-test cycle into a 40-second build-and-test cycle, and a lot of problems magically disappear. You might be surprised how many other bottlenecks in software development have slow-running tests as their underlying cause – analysis paralysis, for example. That’s usually a symptom of high stakes in getting it wrong, and that’s usually a symptom of infrequent releases. “We better deliver the right thing this time, because the next go round could be 6 months later.” (Those among us old enough to remember might recall just how much more care we had to take over our code because of how long it took to compile. It’s a similar effect, but on a much larger scale with much higher stakes than a syntax error.)

Where developers usually get involved in this process – user stories and backlogs – is somewhere short of where they need to be involved. User stories – and prioritised queues of user stories – are just guesses at what an analyst or customer or product owner believes might solve the problem. To obsess over them is to completely overestimate their value. The best teams don’t guess their way to solving a problem; they learn their way.

Like pennies to the pound, the outer feedback loop of “Does it actually work in the real world?” is made up of all the inner feedback loops, and especially the innermost loop of regression testing after code is changed.

Teams who invest in fast-running automated regression tests have a tendency to out-learn teams who don’t, and their products have a tendency to outlive the competition.

In-Process, Cross-Process & Full-Stack Tests

Time for a quick clarification. (If you’ve been on a Codemanship course, you may have already heard this.)

Ask twelve developers for their definitions of “unit test”, “integration test” and “system test” and you’ll likely get twelve different answers. I feel – especially for training purposes – that I need to clarify what I mean by them.

Unit Test – when I say “unit test”, what I mean is a test that executes without any external dependencies. I can go further to qualify what I mean by an “external dependency”: that’s when code is executed in a separate memory address space – a separate process – to the test code. This is typically for speed, so we can test our logic quickly without hitting databases or file systems or web services and so on. It also helps separate concerns more cleanly, as “unit testable” code usually has to be designed in such a way that external dependencies are easily swappable (e.g., by dependency injection).
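
For example – a hypothetical sketch, not a prescription – a unit test of report-formatting logic shouldn’t need the real web service behind it:

```python
# In-process test: nothing here leaves the test's own memory address space.
# WeatherReport and FakeForecastService are hypothetical, for illustration.

class FakeForecastService:
    """In-memory stand-in for a web service client - no network call is made."""
    def tomorrow(self, city):
        return {"city": city, "temp_c": 21}

class WeatherReport:
    def __init__(self, forecasts):
        # The external dependency is injected, so it's easily swappable.
        self.forecasts = forecasts

    def headline(self, city):
        forecast = self.forecasts.tomorrow(city)
        return f"{forecast['city']}: {forecast['temp_c']}C tomorrow"

def test_headline_formats_forecast():
    report = WeatherReport(FakeForecastService())
    assert report.headline("London") == "London: 21C tomorrow"
```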

Integration Test – a test that executes code running in separate memory address spaces (e.g., separate Windows services, or SQL running on a DBMS). It’s increasingly common to find developers reusing their unit tests with different set-ups (replace a database stub with a real database connection, for example). The logic of the test is the same, but the set-up involves external dependencies. This allows us to test that our core logic still works when it’s interacting with external processes (i.e., it tests the contracts on both sides).
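
The reuse pattern looks something like this sketch (names are hypothetical, and sqlite’s in-memory database stands in for what would really be an out-of-process DBMS):

```python
import sqlite3

def check_rate_lookup(rates):
    """Shared test logic, reused with different set-ups."""
    assert rates.percent_for("UK") == 20

class StubRates:
    """Unit test set-up: no external dependency."""
    def percent_for(self, region):
        return 20

class DbRates:
    """Integration test set-up: backed by a real database connection.
    (sqlite actually runs in-process; a true cross-process test would
    target an out-of-process DBMS - but the pattern is the same.)"""
    def __init__(self, conn):
        self.conn = conn

    def percent_for(self, region):
        row = self.conn.execute(
            "SELECT percent FROM tax_rates WHERE region = ?", (region,)
        ).fetchone()
        return row[0]

def test_unit():
    check_rate_lookup(StubRates())

def test_integration():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE tax_rates (region TEXT, percent INTEGER)")
    conn.execute("INSERT INTO tax_rates VALUES ('UK', 20)")
    check_rate_lookup(DbRates(conn))
```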

System Test – executes code end-to-end, across the entire tech stack, including all external dependencies like databases, files, web services, the OS and even the hardware. (I’ve seen more than one C++ app blow a fuse because it was deployed on hardware that the code wasn’t compiled to run on, for example.) This allows us to test our system’s configuration, and ideally should be done in an environment as close to the real thing as possible.

It might be clearer if I called them In-Process, Cross-Process and Full-Stack tests.


The Gaps Between The Gaps – The Future of Software Testing

If you recall your high school maths (yes, with an “s”!), think back to calculus. This hugely important idea is built on something surprisingly simple: smaller and smaller slices.

If we want to roughly estimate the area under a curve, we can add up the areas of rectangular slices underneath it. If we want to improve the estimate, we make the slices thinner. Make them thinner still, and the estimate gets even better. In the limit – as the slices become infinitely thin – the estimate converges on the exact area.
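
In code, that’s just a Riemann sum – each extra order of magnitude of slices buys more digits of accuracy:

```python
def area_under(f, a, b, slices):
    """Approximate the area under f between a and b with rectangular slices."""
    width = (b - a) / slices
    # Midpoint rule: each rectangle's height is f at the middle of its slice.
    return sum(f(a + (i + 0.5) * width) * width for i in range(slices))

# Area under y = x^2 between 0 and 1; the exact answer is 1/3.
square = lambda x: x * x
rough = area_under(square, 0.0, 1.0, 10)      # thin-ish slices
fine = area_under(square, 0.0, 1.0, 10_000)   # much thinner slices
assert abs(fine - 1 / 3) < abs(rough - 1 / 3) # thinner slices, better estimate
```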

In computing, I’ve lived through several revolutions where increasing computing power has meant more and more samples can be taken, until the gaps between them are so small that – to all intents and purposes – the end result is analog. Digital Signal Processing, for example, has reached a level of maturity where digital guitar amplifiers and digital synthesizers and digital tape recorders are indistinguishable from the real thing to the human ear. As sample rates and bit depths increased, and number-crunching power skyrocketed while the cost per FLOP plummeted, we eventually arrived at a point where the question of, say, whether to buy a real tube amplifier or use a digitally modeled tube amplifier is largely a matter of personal preference rather than practical difference.

Software testing’s been quietly undergoing the same revolution. When I started out, automated test suites ran overnight on machines that were thousands of times less powerful than my laptop. Today, I see large unit test suites running in minutes or fractions of minutes on hardware that’s way faster and often cheaper.

Factor in the Cloud, and teams now can chuck what would relatively recently have been classed as “supercomputing” power at their test suites for a few extra dollars each time. While Moore’s Law seems to have stalled at the CPU level, the scaling out of computing power shows no signs of slowing down – more and more cores in more and more nodes for less and less money.

I have a client I worked with to re-engineer a portion of their JUnit test suite for a mission-critical application, adding a staggering 2.5 billion additional property-based test cases (with only an additional 1,000 lines of code, I might add). This extended suite – which reuses, but doesn’t replace, their day-to-day suite of tests – runs overnight in about 5 1/2 hours on Cloud-based hardware. (They call it “draining the swamp”.)
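
Property-based testing is the trick that makes billions of cases cheap to express: instead of hand-picking examples, you assert properties that must hold for any generated input. Their actual suite is JUnit-based; here’s a hand-rolled sketch of the idea in Python (the discount function and all names are hypothetical):

```python
import random

def apply_discount(price_pence, percent):
    """Hypothetical function under test: discounted price, rounded down."""
    return price_pence * (100 - percent) // 100

def test_discount_properties(cases=10_000, seed=42):
    rng = random.Random(seed)  # seeded, so any failure is reproducible
    for _ in range(cases):
        price = rng.randrange(0, 1_000_000)
        percent = rng.randrange(0, 101)
        result = apply_discount(price, percent)
        # Properties that must hold for *any* generated inputs:
        assert 0 <= result <= price               # never negative, never a mark-up
        assert apply_discount(price, 0) == price  # a 0% discount changes nothing
        assert apply_discount(price, 100) == 0    # a 100% discount is free
```

Scaling the number of cases from there is purely a matter of compute – which is where the Cloud comes in.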

I can easily imagine that suite running in 5 1/2 minutes in a decade’s time. Or running 250 billion tests overnight.

And it occurred to me that, as the gaps between tests get smaller and smaller, we’re tending towards what is – to all intents and purposes – a kind of proof of correctness for that code. Imagine writing software to guide a probe to the moons of Jupiter. A margin of error of 0.001% in calculations could throw it hundreds of thousands of kilometres off course. How small would the gaps need to be to ensure an accuracy of, say, 1km, or 100m, or 10m? (And yes, I know they can course correct as they get closer, but hopefully you catch my drift.)

When the gaps between the tests are significantly smaller than the allowable margin for error, I think that would constitute an effective proof of correctness – in the same way that when the gaps between audio samples fall far outside what human hearing can resolve, you have effectively analog audio, at least in the perceived quality of the end result.

And the good news is that this testing revolution is already well underway. I’ve been working with clients for quite some time, achieving very high integrity software using little more than the same testing tools we’re almost all using, and off-the-shelf hardware solutions available to almost everyone.