In-Process, Cross-Process & Full-Stack Tests

Time for a quick clarification. (If you’ve been on a Codemanship course, you may have already heard this.)

Ask twelve developers for their definitions of “unit test”, “integration test” and “system test” and you’ll likely get twelve different answers. I feel – especially for training purposes – that I need to clarify what I mean by them.

Unit Test – when I say “unit test”, what I mean is a test that executes without any external dependencies. I can go further and qualify what I mean by an “external dependency”: it’s when code is executed in a separate memory address space – a separate process – from the test code. This is typically for speed, so we can test our logic quickly without hitting databases or file systems or web services and so on. It also helps separate concerns more cleanly, as “unit testable” code usually has to be designed in such a way that external dependencies are easily swappable (e.g., by dependency injection).
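To make this concrete, here’s a minimal sketch (the domain and the names are invented, not taken from any real code base, and it assumes JUnit 5 is on the classpath): the external dependency hides behind an interface, the test injects a hand-rolled stub, and nothing leaves the test runner’s process.

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

// The external dependency (a customer database, say) hidden behind an interface...
interface CustomerDirectory {
    fun find(id: String): String?
}

// ...so the core logic has it injected, rather than reaching out to a DBMS itself.
class GreetingService(private val customers: CustomerDirectory) {
    fun greet(id: String): String =
        customers.find(id)?.let { "Hello, $it" } ?: "Hello, stranger"
}

class GreetingServiceTest {
    @Test
    fun `greets a known customer by name`() {
        // In-process: a hand-rolled stub, no database, no network, no separate process.
        val stub = object : CustomerDirectory {
            override fun find(id: String) = if (id == "42") "Alice" else null
        }
        assertEquals("Hello, Alice", GreetingService(stub).greet("42"))
    }
}
```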

Integration Test – a test that executes code running in separate memory address spaces (e.g., separate Windows services, or SQL running on a DBMS). It’s increasingly common to find developers reusing their unit tests with different set-ups (replacing a database stub with the real database connection, for example). The logic of the test is the same, but the set-up involves external dependencies. This allows us to test that our core logic still works when it’s interacting with external processes (i.e., it tests the contracts on both sides).
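Building on the sketch above (again, with invented names), one way to express the “same test logic, different set-up” idea is an abstract test class: the test is written once against the interface, and each subclass decides whether the set-up is an in-process stub or a real, cross-process dependency.

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

// The test logic, written once against the CustomerDirectory interface.
abstract class GreetingServiceContract {

    // Each subclass supplies a directory that already contains the given customer.
    abstract fun directoryContaining(id: String, name: String): CustomerDirectory

    @Test
    fun `greets a known customer by name`() {
        val directory = directoryContaining("42", "Alice")
        assertEquals("Hello, Alice", GreetingService(directory).greet("42"))
    }
}

// In-process set-up: a stub, so this runs as a unit test.
class InProcessGreetingTest : GreetingServiceContract() {
    override fun directoryContaining(id: String, name: String): CustomerDirectory =
        object : CustomerDirectory {
            override fun find(lookupId: String) = if (lookupId == id) name else null
        }
}

// A cross-process version would plug in a real adapter instead, e.g. (hypothetical):
// class CrossProcessGreetingTest : GreetingServiceContract() {
//     override fun directoryContaining(id: String, name: String): CustomerDirectory =
//         JdbcCustomerDirectory(testDataSource).also { it.add(id, name) }
// }
```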

System Test – executes code end-to-end, across the entire tech stack, including all external dependencies like databases, files, web services, the OS and even the hardware. (I’ve seen more than one C++ app blow a fuse because it was deployed on hardware that the code wasn’t compiled to run on, for example.) This allows us to test our system’s configuration, and ideally should be done in an environment as close to the real thing as possible.

It might be clearer if I called them In-Process, Cross-Process and Full-Stack tests.

 

Standards & Gatekeepers & Fitted Bathrooms

One thing I’ve learned from 10 years on Twitter is that whenever you dare to suggest that the software development profession should have minimum basic standards of competence, people will descend on you from a great height accusing you of being “elitist” and a “gatekeeper”.

Evil Jason wants to keep people out of software development. BAD JASON!

Well, okay: sure. I admit it. I want to keep people out of software development. Specifically, I want to keep people who can’t do the job out of software development. Mwuhahahahahaha etc.

That’s a very different proposition from suggesting that I want to stop people from becoming good, competent software developers, though. If you know me, then you know I’ve long advocated proper, long-term, in-depth paid software developer apprenticeships. I’ve advocated proper on-the-job training and mentoring. (Heck, it’s my entire business these days.) I’ve advocated schools and colleges and code clubs encouraging enthusiasts to build basic software development skills – because fundamentals are the building blocks of fun (or something pithy like that).

I advocate every entry avenue into this profession except one – turning up claiming to be a software developer, without the basic competencies, and expecting to get paid a high salary for messing up someone’s IT.

If you can’t do the basic job yet, then you’re a trainee – an apprentice, if you prefer – software developer. And yes, that is gatekeeping. The gates to training should be wide open to anyone with aptitude. Money, social background, ethnicity, gender, sexual orientation, age or disabilities should be no barrier.

But…

I don’t believe the gates should be wide open to practicing as a software developer – unsupervised by experienced and competent mentors – on real software and systems with real end users and real consequences for the kinds of salaries we can earn – just for anyone who fancies that job title. I think we should have to earn it. I think I should have had to earn it when I started out. Crikey, the damage I probably did before I accidentally fell into a nest of experienced software engineers who fixed me…

Here’s the thing: when I was 23, I didn’t know that I wasn’t a competent software developer. I thought I was aces. Even though I’d never used version control, never written a unit test, never refactored code – not once – and thought that a 300-line function with nested IFs running 10 deep was super spiffy and jolly clever. I needed people to show me. I was lucky to find them, though I certainly didn’t seek them out.

And who the heck am I to say our profession should have gates, anyway? Nobody. I have no power over hiring anywhere. And, for sure, when I’ve been involved in the hiring process, bosses have ignored my advice many times. And many times, they’ve paid the price for letting someone who lacked basic dev skills loose on their production code. And a few times they’ve even admitted it afterwards.

But I’ve rarely said “Don’t hire that person”. Usually, I say “Train that person”. Most employers choose not to, of course. They want them ready-made and fully-formed. And, ideally, cheap. Someone else can train them. Hell, they can train themselves. And many of us do.

In that landscape, insisting on basic standards is difficult – because where do would-be professional developers go to get real-world experience, high-quality training and long-term mentoring? Would-be plumbers and would-be veterinarians and would-be hairdressers have well-defined routes from aspiration to profession. We’re still very much at the “If You Say You’re A Software Developer Then You’re A Software Developer” stage.

So that’s where we are right now. We can stay at that level, and things will never improve. Or we can do something about it. I maintain that long-term paid apprenticeships – leading to recognised qualifications – are the way to go. I maintain that on-the-job training and mentoring are essential. You can’t learn this job from books. You’ve got to see it and do it for real, and you need people around you who’ve done lots of it to guide you and set an example.

I maintain that apprenticeships and training and mentoring should be the norm for people entering the profession – be it straight out of high school or after a degree or after decades of experience working in other industries or after raising children. This route should be open to all. But there should be a bar they need to clear at the end before being allowed to work unsupervised on production code. I wish I’d had that from the start. I should have had that.

And, yes, how unfair it is for someone who blundered into software development largely self-taught to look back and say “Young folk today must qualify first!” But there must have been a generation of self-taught physicians who one day declared “Okay, from now on, doctors have to qualify.” If not my generation, or your generation, then whose generation? We can’t keep kicking this can down the road forever.

As software “eats the world”, more and more people are going to enter the profession. More and more of our daily lives will be run by software, and the consequences of system failures and high costs of changing code will hurt society more and more. This problem isn’t going away.

I hope to Bod that the people coming to fit my bathroom next week don’t just say they’re builders and plumbers and electricians. I hope to Bod they did proper apprenticeships and had plenty of good training and mentoring. I hope to Bod that their professions have basic standards of competence.

And I hope to Bod that those standards are enforced by… gatekeepers.

Digital Is A Process, Not A Project

One sentiment I’m increasingly hearing on social media is how phrases like #NoEstimates and #NoProjects scare executives who require predictability to budget for digital investments.

I think this is telling. How do executives budget for HR or finance or facilities teams? These are typically viewed as core functions within a business – an ongoing cost to keep the lights on, so to speak.

Software and systems development, on the other hand, is usually seen as a capital investment, like building new offices or installing new plant. It’s presumed that at some point these “projects” will be “done”, and the teams who do them aren’t perceived as being core to the running of the business. After your new offices are completed, you don’t keep the builders on for more office building. They are “done”.

But modern software development just isn’t like that. We’re never really done. We’re continually learning and systems are continually evolving as we do. It’s an ongoing process of innovation and adaptation, not a one-off investment. And the teams doing that work are most certainly core to your business, every bit as much as the accountants and the sales people and anyone else keeping the lights on.

I can’t help wondering if what executives really fear is acknowledging that reality and embracing digital as a core part of their business that is never going to go away.

The Gaps Between The Gaps – The Future of Software Testing

If you recall your high school maths (yes, with an “s”!), think back to calculus. This hugely important idea is built on something surprisingly simple: smaller and smaller slices.

If we want a rough estimate of the area under a curve, we can add up the areas of rectangular slices underneath it. If we want to improve the estimate, we make the slices thinner. Make them thinner still, and the estimate gets even better. Make them infinitely thin, and we get a completely accurate result: in the limit of infinitely many, infinitely thin slices, the estimate becomes the exact area.
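Here’s that idea as a minimal sketch (the function and the interval are arbitrary choices for illustration): estimating the area under f(x) = x² between 0 and 1, whose exact value is 1/3, with ever thinner rectangular slices.

```kotlin
// Estimate the area under f between 0 and 1 using midpoint rectangles.
fun area(slices: Int, f: (Double) -> Double = { x -> x * x }): Double {
    val width = 1.0 / slices
    return (0 until slices).sumOf { i -> f((i + 0.5) * width) * width }
}

fun main() {
    for (n in listOf(10, 100, 10_000, 1_000_000)) {
        println("$n slices -> ${area(n)}")   // converges on 0.33333... as the slices get thinner
    }
}
```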

In computing, I’ve lived through several revolutions where increasing computing power has meant more and more samples can be taken, until the gaps between them are so small that – to all intents and purposes – the end result is analog. Digital Signal Processing, for example, has reached a level of maturity where digital guitar amplifiers and digital synthesizers and digital tape recorders are indistinguishable from the real thing to the human ear. As sample rates and bit depths increased, and number-crunching power skyrocketed while the cost per FLOP plummeted, we eventually arrived at a point where the question of, say, whether to buy a real tube amplifier or use a digitally modeled tube amplifier is largely a matter of personal preference rather than practical difference.

Software testing’s been quietly undergoing the same revolution. When I started out, automated test suites ran overnight on machines that were thousands of times less powerful than my laptop. Today, I see large unit test suites running in minutes or fractions of minutes on hardware that’s way faster and often cheaper.

Factor in the Cloud, and teams can now chuck what would until relatively recently have been classed as “supercomputing” power at their test suites for a few extra dollars each time. While Moore’s Law seems to have stalled at the CPU level, the scaling out of computing power shows no signs of slowing down – more and more cores in more and more nodes for less and less money.

I have a client I worked with to re-engineer a portion of their JUnit test suite for a mission-critical application, adding a staggering 2.5 billion additional property-based test cases (with only an additional 1,000 lines of code, I might add). This extended suite (which reuses, but doesn’t replace, their day-to-day suite of tests) runs overnight in about 5 1/2 hours on Cloud-based hardware. (They call it “draining the swamp”.)
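For a sense of what that looks like in practice (this is an invented example, not my client’s code; the function and its properties are made up for illustration), a property-based test at its simplest is a loop that samples the input space at random and checks invariants that must hold for every sample. Crank up the number of samples, and the hardware, and the gaps between the tests shrink.

```kotlin
import kotlin.random.Random

// An invented function under test, standing in for real business logic.
fun applyDiscount(price: Double, percent: Double): Double =
    price - (price * percent / 100.0)

fun main() {
    val rng = Random(42)        // fixed seed, so any failing case is reproducible
    val samples = 1_000_000     // scale this up as hardware (or the Cloud budget) allows
    repeat(samples) {
        val price = rng.nextDouble(0.0, 1_000_000.0)
        val percent = rng.nextDouble(0.0, 100.0)
        val discounted = applyDiscount(price, percent)
        // Properties that must hold for every sampled input: never negative, never a mark-up.
        check(discounted in 0.0..price) {
            "Property violated for price=$price, percent=$percent: got $discounted"
        }
    }
    println("$samples sampled cases passed")
}
```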

I can easily imagine that suite running in 5 1/2 minutes in a decade’s time. Or running 250 billion tests overnight.

And it occurred to me that, as the gaps between tests get smaller and smaller, we’re tending towards what is – to all intents and purposes – a kind of proof of correctness for that code. Imagine writing software to guide a probe to the moons of Jupiter. A margin of error of 0.001% in calculations could throw it hundreds of thousands of kilometres off course. How small would the gaps need to be to ensure an accuracy of, say, 1km, or 100m, or 10m? (And yes, I know they can course correct as they get closer, but you catch my drift hopefully.)

When the gaps between the tests are significantly smaller than the allowable margin for error, I think that would constitute an effective proof of correctness. In the same way that when the audio samples fall way outside of human hearing, you have effectively analog audio – at least in the perceived quality of the end result.

And the good news is that this testing revolution is already well underway. I’ve been working with clients for quite some time, achieving very high integrity software using little more than the same testing tools we’re almost all using, and off-the-shelf hardware solutions available to almost everyone.

 

 

Overcoming Solution Bias

Just a short post this morning about a phenomenon I’ve seen many times in software development – which, for want of a better name, I’m calling solution bias.

It’s the tendency of developers, once they’ve settled on a solution to a problem, to refuse to let go of it – regardless of what facts may come to light that suggest it’s the wrong solution.

I’ve even watched teams argue with their customer to try to get them to change their requirements to fit a solution design the team have come up with. It seems once we have a solution in our heads (or in a Git repository) we can become so invested in it that – to borrow a metaphor – everything looks like a nail.

The damage this can do is obvious. Remember your backlog? That’s a solution design. And once a backlog’s been established, it has a kind of inertia that makes it unlikely to change much. We may fiddle at the edges, but once the blueprints have been drawn up, they don’t change significantly. It’s vanishingly rare to see teams throw their designs away and start afresh, even when it’s screamingly obvious that what they’re building isn’t going to work.

I think this is just human nature: when the facts don’t fit the theory, our inclination is to change the facts and not the theory. That’s why we have the scientific method: because humans are deeply flawed in this kind of way.

In software development, it’s important – if we want to avoid solution bias – to first accept that it exists, and that our approach must actively take steps to counteract it.

Here’s what I’ve seen work:

  • Testable Goals – sounds obvious, but it still amazes me how many teams have no goals they’re working towards other than “deliver on the plan”. A much more objective picture of whether the plan actually works can help enormously, especially when it’s put front-and-centre in all the team’s activities. Try something. Test it against the goal. See if it really works. Adapt if it doesn’t.
  • Multiple Designs – teams get especially invested in a solution design when it’s the only one they’ve got. Early development of candidate solutions should explore multiple design avenues, tested against the customer’s goals, and selected for extinction if they don’t measure up. Evolutionary design requires sufficiently diverse populations of possible solutions.
  • Small, Frequent Releases – a team that’s invested a year in a solution is going to resist that solution being rejected with far more energy than a team who invested a week in it. If we accept that an evolutionary design process is going to have failed experiments, we should seek to keep those experiments short and cheap.
  • Discourage Over-Specialisation – solution architectures can define professional territory. If the best solution is a browser-based application, that can be good news for JavaScript folks, but bad news for C++ developers. I often see teams try to steer the solution in a direction that favours their skill sets over others. This is understandable, of course. But when the solution to sorting a list of surnames is to write them into a database and use SQL because that’s what the developers know how to do, it can lead to some pretty inappropriate architectures. Much better, I’ve found, to invest in bringing teams up to speed on whatever technology will work best. If it needs to be done in JavaScript, give the Java folks a couple of weeks to learn enough JavaScript to make them productive. Don’t put developers in a position where the choice of solution architecture threatens their job.
  • Provide Safety – I can’t help feeling that a good deal of solution bias is the result of fear. Fear of failure.  Fear of blame. Fear of being sidelined. Fear of losing your job. If we accept that the design process is going to involve failed experiments, and engineer the process so that teams fail fast and fail cheaply – with no personal or professional ramifications when they do – then we can get on with the business of trying shit and seeing if it works. I’ve long felt that confidence isn’t being sure you’ll succeed, it’s not being afraid to fail. Reassure teams that failure is part of the process. We expect it. We know that – especially early on in the process of exploring the solution space – candidate solutions will get rejected. Importantly: the solutions get rejected, not the people who designed them.

As we learn from each experiment, we’ll hopefully converge on the likeliest candidate solution, and the whole team will be drawn in to building on that, picking up whatever technical skills are required as they do. At the end, we may deliver not just a good working solution, but also a stronger team of people who have grown through the process.

 

What’s The Point of Code Craft?

A conversation I seem to have over and over again – my own personal Groundhog Day – is “What’s the point in code craft if we’re building the wrong thing?”

The implication is that there’s a zero-sum trade-off between customer collaboration – the means by which we figure out what’s needed – and technical discipline. Time spent writing unit tests or refactoring duplication or automating builds is time not spent talking with our customers.

This is predicated on two falsehoods:

  • It takes longer to deliver working code when we apply more technical discipline
  • The fastest way to solve a problem is to talk about it more

All the evidence we have strongly suggests that, in the majority of cases, better quality working software doesn’t take significantly longer to deliver. In fact, studies using large amounts of industry data repeatedly show the inverse. It – on average – takes longer to deliver software when we apply less technical discipline.

Teams who code and fix their software tend to end up having less time for their customers because they’re too busy fixing bugs and because their code is very expensive to change.

Then there’s the question of how to solve our customers’ problems. We can spend endless hours in meetings discussing it, or we could spend a bit of time coming up with a simple idea for a solution, build it quickly and release it straight away for end users to try for real. The feedback we get from people using our software tends to tell us much more, much sooner about what’s really needed.

I’ll take a series of rapid software releases over the equivalent series of requirements meetings any day of the week. I’ve seen this many times in the last three decades. Evolution vs Big Design Up-Front. Rapid iteration vs. Analysis Paralysis.

The real customer collaboration happens out in the field (or in the staging environment), where developers and end users learn from each small, frequent release and feed those lessons back into the next iteration. The map is not the terrain.

Code craft enables high-value customer collaboration by enabling rapid, working releases and by delivering code that’s much easier to change. Far from getting in the way of building the right thing, it is the way.

But…

That’s only if your design process is truly iterative. Teams that are just working through a backlog may see things differently, because they’re not setting out to solve the customer’s problem. They’re setting out to deliver a list of features that some people sitting in a room – these days very probably not the end users and not the developers themselves – guessed might solve the problem (if indeed, the problem was ever discussed).

In that situation, technical discipline won’t help you deliver the right thing. But it could help you deliver the wrong thing sooner for less expense. #FailFast

 

 

Refactoring To Closures in Kotlin & IntelliJ

I spent yesterday morning practicing a refactoring in Kotlin that I wanted to potentially demonstrate for a workshop, and after half a dozen unsuccessful attempts, I found a way that seems relatively safe. I thought it might be useful to document it here, both for my future self and for anyone else who might be interested.

My goal here is to encapsulate the data used in this function for calculating quotes for fitted carpets. The solution I’m thinking of is closures.
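Since the original listing isn’t reproduced here, this is my rough reconstruction of the starting point (the real code lives in the GitHub repo linked at the end; the function names match the steps below, but the bodies are guesses): quote() takes all the raw data and delegates to two helper functions.

```kotlin
import kotlin.math.ceil

fun roomArea(width: Double, length: Double): Double = width * length

fun carpetPrice(area: Double, pricePerSqrMtr: Double, roundUp: Boolean): Double {
    val price = area * pricePerSqrMtr
    return if (roundUp) ceil(price) else price
}

// e.g. quote(4.0, 5.0, 6.0, roundUp = false) == 120.0
fun quote(width: Double, length: Double, pricePerSqrMtr: Double, roundUp: Boolean): Double =
    carpetPrice(roomArea(width, length), pricePerSqrMtr, roundUp)
```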

How do I get from this to closures being injected into quote() safely? Here’s how I did it in IntelliJ.

1. Use the Function to Scope… refactoring to extract the body of, say, the roomArea() function into an internal function.

2. As a single step, change the return type of roomArea() to a function signature that matches area(), return a reference to ::area instead of the return value from area(), and change quote() to invoke the returned area() function. (Phew!)

3. Rename roomArea() to room() so it makes more sense.

4. In quote(), highlight the expression room(width, length) and use the Extract Parameter refactoring to have that passed into quote() from the tests.

5. Now we’re going to do something similar for carpetPrice(), with one small difference. As with roomArea(), use the Function to Scope refactoring to extract the body of carpetPrice() into an internal function.

6. Then swap the return value with a reference to the ::price function.

7. Now, this time we want the area to be passed in as a parameter to the price() function. Extract Parameter area from price(), change the signature of the returned function and update quote() to pass it in using the area() function. Again, this must be a single step.

8. Change the Signature of carpetPrice() to remove the redundant area parameter.

9. Rename carpetPrice() to carpet() so it makes more sense.

10. Finally, use Extract Parameter on the expression carpet(pricePerSqrMtr, roundUp) in quote(), naming the new parameter price. The end result should look something like the sketch below.
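Roughly, this is where those steps land (again, my sketch rather than the repo’s exact code): room() and carpet() capture the data and return closures, and quote() simply composes whatever closures it’s given.

```kotlin
import kotlin.math.ceil

fun room(width: Double, length: Double): () -> Double {
    fun area(): Double = width * length
    return ::area
}

fun carpet(pricePerSqrMtr: Double, roundUp: Boolean): (Double) -> Double {
    fun price(area: Double): Double {
        val total = area * pricePerSqrMtr
        return if (roundUp) ceil(total) else total
    }
    return ::price
}

// The closures are now injected into quote(), e.g. from a test:
// quote(room(4.0, 5.0), carpet(6.0, roundUp = false)) == 120.0
fun quote(area: () -> Double, price: (Double) -> Double): Double = price(area())
```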

 

If you want to have a crack at this yourself, the source code is at https://github.com/jasongorman/kotlin_simple_design/tree/master/fp_modular_design , along with two more examples (an OO/Java version of this, plus another example that breaks all the rules of both Simple Design and modular design in Kotlin.