Foundations of Correct Code

When I introduce developers to practices like unit testing and TDD, I often find it useful to include a little background on the foundations of testing for software correctness.

You may well have heard the terms before: contract, pre-condition, post-condition. But you might not be aware of their origins. It’s good to know, because this is what forms the basis for describing what a program is expected to do, and therefore the basis for testing that it does.

Let’s jump in our time machine and vworp ourselves back to 1969. Computing pioneer Charles Antony Richard Hoare (Tony, for short – and now Sir Tony) proposed a simple logical formula for expressing the correct operation of a program.

{ P } C { Q }

If a pre-condition, P, holds when a command C is invoked, then if C executes correctly, a post-condition Q will be satisfied on completion. P and Q are predicates (assertions).

For example, consider a function that calculates square roots: sqrt(x). The pre-condition for calling sqrt would be that x must not be negative, otherwise the function won’t work correctly. Provided x isn’t negative, if sqrt executes correctly, the return value multiplied by itself will be equal to x.

Expressed as a Hoare triple:

{ x >= 0 }  sqrt(x)  { sqrt(x) * sqrt(x) = x }

We see Hoare triples all the time in testing. Indeed, every test of logic must have a pre-condition, an action or command, and a post-condition. You may have seen them described using various conventions, such as:

“Given that x is a positive number,

When I square root x,

Then the result multiplied by itself is equal to x.”

Or:

Arrange: x = 4

Act: y = sqrt(x)

Assert: y * y = x

Even when we write it all as one line of code in a unit test, it still has the three components of a Hoare triple.

assertEquals(4, sqrt(4) * sqrt(4))

Unit tests can check that, in specific scenarios, programs do what they’re supposed to – i.e., they can test post-conditions. The set-up for each test determines whether the pre-condition is satisfied (e.g., x = 4 satisfies x >= 0).
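To make the mapping explicit, here’s a minimal sketch of a unit test – assuming JUnit 5, using Math.sqrt as a stand-in for the function under test, and allowing a small tolerance for floating-point comparison – with the three parts of the Hoare triple labelled:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class SqrtTest {

        @Test
        void squareRootMultipliedByItselfEqualsInput() {
                // Arrange: satisfy the pre-condition { x >= 0 }
                double x = 4;

                // Act: invoke the command C
                double y = Math.sqrt(x);

                // Assert: check the post-condition { sqrt(x) * sqrt(x) = x }
                assertEquals(x, y * y, 0.0001);
        }
}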

But what unit tests can’t check is that programs aren’t invoked when their pre-conditions aren’t satisfied.

We have two choices here: defensive programming, and Design by Contract.

Defensive programming guards the body of a function or method, checking that the pre-condition holds and throwing an error if it doesn’t.

double sqrt(double x) {

        if( x < 0 ) throw new IllegalArgumentException();

        // main body of method is never reached when x < 0

        .....

}

In this implementation, throwing an exception when the pre-condition is broken is part of the correct operation of sqrt. Therefore, we could test that it does. But it complicates our logic. If client code can call sqrt with a negative number, then we will also need to write code to handle that exception when it arises. We’ve made handling negative inputs part of the logic of our program.
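To see why, here’s a sketch of a hypothetical client that can’t guarantee its input is non-negative – the name rootOrZero and the decision to return zero are purely for illustration:

double rootOrZero(double x) {
        try {
                return sqrt(x);
        } catch (IllegalArgumentException e) {
                // handling negative inputs is now part of our program's logic,
                // and we had to decide what the "right" answer even is
                return 0;
        }
}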

A simpler approach would be to ensure that sqrt is never called when x < 0 – that is, to make it the caller’s responsibility to guarantee that x isn’t negative before calling sqrt.

Design by Contract works on the principle that for software to be correct, all of the interactions between its components must be correct. A Hoare triple describes a contract between a client and a supplier of a service (e.g., a method of a class), such that the supplier ensures that the post-condition is satisfied, and the client ensures that it never calls the method or function unless the pre-condition is satisfied. So there’s no need for defensive programming, and no need to handle those edge cases in our internal logic.

Of course, if there’s an error in our code, a function might still be invoked when a pre-condition doesn’t hold. So fans of DbC will put pre-condition checks in their code, typically for use during testing, to flag up any such errors.

double sqrt(double x) {

        assert( x >= 0 );

        // execution is halted if x < 0 and 
        // assertion checking is switched on

        .....

}

Most build systems these days enable us to switch assertion checking on or off (e.g., on for testing, off for production).
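In Java, for example, the assert statement is switched off by default and enabled with the -ea JVM flag, so the same class can be run with pre-condition checks on for testing and off in production. A minimal sketch (the class name and main method are purely illustrative):

// Run with:  java -ea SqrtDemo   - assertions checked, as during testing
// Run with:  java SqrtDemo       - assertions skipped, as in production
public class SqrtDemo {

        static double sqrt(double x) {
                assert x >= 0 : "pre-condition broken: x must be >= 0";
                return Math.sqrt(x);
        }

        public static void main(String[] args) {
                System.out.println(sqrt(4.0));    // 2.0
                System.out.println(sqrt(-1.0));   // AssertionError with -ea; NaN without
        }
}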

Personally, I tend to use a combination of unit testing (sometimes quite exhaustive unit testing) and inline assertions to check pre-conditions during testing. If a pre-condition fails, that means there’s an error in my client code that I need to fix before it goes any further down the delivery pipeline. I’m not in the business of throwing “Oops. My Code Is Wrong” exceptions in production systems. Nor should you be.

Hoare triples can also be used as a basis for more meaningful and rigorous code inspections. If you think about it, it’s not just programs or functions or methods that have constraints on their correct operation: every piece of executable code – statements, blocks, expressions – has an implied contract.

int x = 100;

int y = 10;

int z = x / y;      // pre: y != 0

return sqrt(z);     // pre: z >= 0

As we read through the code, line by line, we can ask questions like “What should be true after this line has executed? When would this line of code fail?” This can point to test cases we didn’t think of, and guide us to trace back the origins of our input data to ensure it can never harm that code.

One final thought on defensive programming vs. Design by Contract, relating to the design of the system as a whole: when it’s us writing the client code, we have control over whether the inputs are correct. When someone else is providing the inputs – e.g., a web service client, or an end user – we don’t have that control. So it’s usually necessary to code defensively at system/service boundaries.

The best option where UX/UI design is concerned is usually to not give end users the ability to make invalid choices. For example, the UI of an ATM offers end users a choice of withdrawal amounts.


Offering users choices that aren’t actually available and then displaying an error message when they select them not only annoys the user, but also complicates the application’s internal logic as it now has to handle more edge cases.

I’ve found the best approach to delivering correct software that’s simple in its internal logic is to design the UX carefully to simplify the inputs, do some defensive programming just below the UI to validate the inputs, and then apply Design by Contract for all the internal logic, with test-time assertion checking in case we made a boo-boo in some client code. Systems, components and services should have a hard outer shell – zero trust in any inputs – and a soft inner centre where inputs are trusted to be (almost) certainly valid.
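As a rough sketch of that shape – the method names here are hypothetical, not from any particular framework – the hard outer shell validates the raw input defensively, and the soft inner centre relies on the contract, with an assertion as a test-time safety net:

// hard outer shell: zero trust in the raw input from the UI
double requestSquareRoot(String userInput) {
        double x;
        try {
                x = Double.parseDouble(userInput);
        } catch (NumberFormatException e) {
                throw new IllegalArgumentException("Please enter a number");
        }
        if (x < 0) throw new IllegalArgumentException("Please enter a number that isn't negative");
        return sqrt(x);    // pre-condition is now guaranteed
}

// soft inner centre: trusts its callers; checked only when assertions are on
double sqrt(double x) {
        assert x >= 0;
        return Math.sqrt(x);
}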

Whether it’s defensive programming or DbC, though, there’s one approach I would never condone, and that’s deploying systems that don’t handle every allowable input or user action meaningfully. If the software lets you do it, the software must have a good answer to it. Understanding the contracts between the user (be it a human being or another system) and our software is the key to making sure that it does.

 

The Hidden Cost of “Dependency Drag”

 

The mysterious Sailing Stones of Death Valley are moved by some unseen natural force.

When I demonstrate mutation testing, I try to do it in the programming language my audience uses day-to-day. In most of the popular programming languages, there’s a usable, current mutation testing tool available. But for a long time, the .NET platform had none. That’s not to say there were never any decent mutation testing tools for .NET programs. There have been several. But they had all fallen by the wayside.

Here’s the thing: some community-spirited developer kindly creates a mutation testing tool we can all use. That’s a sizable effort for no financial reward. But still they write it. It works. Folk are using it. And there’s no real need to add to it. Job done.

Then, one day, you try to use it with the new version of the unit testing tool you’ve been working with, and – mysteriously – it stops working. Like the Sailing Stones of Death Valley, the mutation testing tool is inexplicably 100 metres from where you left it, and to get it working again it has to be dragged back to its original position.

This is the hidden cost of a force I might call Dependency Drag. I see it all the time: developers forced to maintain software products that aren’t changing, but that are getting out of step with the environment in which they run, which is constantly changing under their feet.

GitHub – and older OSS repositories – is littered with the sun-bleached skeletons of code bases that got so out of step they simply stopped working, and maintainers didn’t want to waste any more time keeping them operational. Too much effort just to stand still.

Most of us don’t see Dependency Drag, because it’s usually hidden within an overall maintenance effort on a changing product. And the effect is usually slow enough that it looks like the stones aren’t actually moving.

But try and use some code that was written 5 years ago, 10 years ago, 20 years ago, if it hasn’t been maintained, and you’ll see it. The stones are a long way from where you left them.

This effect can include hardware, of course. I hang on to my old 3D TV so that I can play my 3D Blu-rays. One day, that TV will break down. Maybe I’ll be able to find another one on eBay. But 10 years from now? 20 years from now? My non-biodegradable discs may last centuries if kept safe. But it’s unlikely there’ll be anything to play them on 300 years from now.

This is why it will become increasingly necessary to preserve the execution environments of programs as well as the programs themselves. It’s no use preserving the 1960s Fortran compiler if you don’t have the 1960s computer and operating system and punch card reader it needs to work.

And as execution environments get exponentially more complex, the cost of Dependency Drag will multiply.

 

Timeless Design Principles at JAX – Slide Deck

If you were at my keynote at JAX this morning – or if you weren’t and would like to see the slides – you can get them here.

Timeless Design Principles

I took the audience back in time – in my special non-copyrighted, royalty-free time machine – over 5 decades, looking at how basic principles of simple and modular design could have been applied in programming languages of the time.

Whether you’re working in Java or C#, Ruby or JavaScript, Visual Basic or C… these 8 timeless design principles can be applied.

So, no excuses! 😉

C and the Interface Segregation Principle

In a previous post I illustrated how S.O.L.I.D. principles could be applied in C, showing how function pointers can be used to achieve polymorphism. I made the point that interface segregation – the “I” in SOLID – was difficult in C. A few folk got in touch and asked me to expand on that.

Using header files, C gives us limited support for hiding functions that a client doesn’t need to depend on. Going back to the basic carpet quote example, we can define a room_area.h header that declares a function for calculating the area of a room, and a floor_level.h header that declares a function for calculating how many flights of stairs will be involved, based on which floor the room’s on (B, G, 1, 2, 3, etc.).

We can implement both of these functions in room.c.

A client that needs to know how many flights of stairs are involved doesn’t need to include room_area.h.

And a client that needs the room’s area doesn’t need to include floor_level.h.

So we have limited support for presenting client-specific interfaces for the same module. Specifically, we can do this if area() and flightsOfStairs() only have one implementation. If we need to support multiple implementations – polymorphism – it gets much more complicated, involving convoluted logic around vtables, and it impacts the readability of the code.

 

UPDATE: So, hey – I remembered how you can do interface segregation using vtables. Here’s a slide deck about my solid_c adventure. And here’s the final source code.

 

 

Are You A Full Full-Stack Developer?

This tweet from a conference talk by Kevlin Henney reminded me of a discussion I had with a development team last week about the meaning of “full-stack developer”.

I think Kevlin’s absolutely right. It doesn’t just pertain to the technology stack. And I would go further: I believe a full-stack developer can be involved throughout the entire product lifecycle.

We can be there right at the start, when the business is envisioning strategic solutions to real business problems. Indeed, in some organisations, it’s developers who often put forward the ideas. And why not, after all? We probably have a wider toolbox to draw from when we consider how technology might help to solve a business problem. And we probably have a better handle on what might be easy and what might be hard to do.

It’s also vitally important that dev teams have a good understanding of the problem they’re setting out to solve. Too often, when devs are brought in later in the process, they lack that understanding, and the business pays the price in a lack of clear direction and an inability to prioritise the work.

Likewise with the early stages of the design process: teams that get handed, say, wireframes and told to “code this” often run into difficulties as they realise that UI mock-ups aren’t enough. Exactly what should happen when the user clicks that button? If they weren’t in the discussion, then they’ll need to have the discussion again. Or take a guess.

And at the other end, instead of throwing software over the wall into testing and then production and then waiting for the bug reports to start flooding in, developers can get involved there. Certainly, there’s much we can do to help as developers-in-test in automating and scaling testing so we can test more, and test faster. And by getting involved with software operations – monitoring, testing and observing our software in real use in the real world – we tend to learn a tonne of useful stuff that can feed back into the all-important next iteration of the product.

Kevlin hits the nail on the head: software development should start and end in the real world, with real end users, solving real problems. And that, to me, is best achieved when developers are involved throughout. The most effective devs wear multiple hats: strategy, business analysis, requirements engineering, UX, architecture, database design and administration, information security, test design and automation, and operations and support.

We don’t need to be experts in all of them – as long as we have experts to drive those key activities – but we do need to be generalising specialists who can contribute effectively to all of those processes.

In other words, not just coders.

No, But Seriously…

My blog’s been going for 14 years (including the blog at the old location), and it seems I’ve posted many times on the topic of developers engaging directly with their end users. I’ve strongly recommended it many, many times.

I’m not talking about sitting in meeting rooms asking “What software would you like us to build?” That’s the wrong question. If our goal is to build effective solutions, we need to build a good understanding of the problems we’re setting out to solve.

My whole approach to software development is driven by problems – understanding them, designing solutions for them, testing those solutions in the real (or as real as possible) world, and feeding back lessons learned into the next design iteration.

That, to me, is software development.

And I’ve long held that to really understand our end users, we must become them. Even if it’s just for a short time. We need to walk a mile in their shoes, eat our own dog food, and any other metaphor for experiencing what it’s like to do their job using our software.

Traditional business and requirements analysis techniques – with which I’m very familiar – are completely inadequate to the task. No number of meetings, boxes and arrows, glossaries and other analysis paraphernalia will come close to seeing it and experiencing it for yourself.

And every time I say this, developers nod their heads and agree that this is sage advice indeed. And then they don’t do it. Ever.

In fact, many developers – at the suggestion of spending time actually embedded in the business, seeing how the business works and the problems the business faces – run a mile in the opposite direction. Which is a real shame, because this really is – hands down – the most useful thing we could do. Trying to solve problems we don’t understand is a road to nowhere.

So, I’ll say it again – and keep saying it.

Developers – that’s you, probably – need to spend real, quality time embedded with their end users, seeing how they work, seeing how they use our software, and experiencing all of that for ourselves. It should be a regular thing. It should be the norm. Don’t let a business analyst take your place. Experiencing it second- or third-hand is no substitute for the real thing. If the goal is to get the problem into your head, then your head really needs to be there.

If your software is used internally within the business, embed in those teams. If your software’s used by external customers, become one of them. And spend time in the sales team. Spend time in the support team. Spend time in the marketing team. Find out for yourself what it’s really like to live with the software you create. I’m always amazed at how many dev teams literally have no idea.

Likely as not, it will radically transform the way you think about your product and your development processes.

 

The Most Popular Programming Language in 2019? You’re Not Going To Like It…

I threw a curveball on Twitter yesterday.

I’m not at all surprised to see SQL scoring so low, with many folk asking “Why is SQL on this list and not Java?”

It depends, of course, on what we mean by “popular” (and by “programming language”). If we mean “liked by developers”, then I’m frankly surprised SQL scored as high as it did. I’m certainly no fan – always looking for ways to write no SQL at all if I can help it – and I know many devs are none too keen, either.

But if you ask employers, it’s a different story. In the UK, for example, SQL is the most in-demand programming language recruiters ask for. According to itjobswatch.co.uk, more jobs advertised over the last 6 months mentioned SQL than any other language. (When I searched some of the top job sites in other countries, the trend was the same: more results returned for “SQL” than any other language.)

And this makes sense, when you think about it. While, these days, most jobs don’t specifically ask for a “SQL developer”, many developer jobs do ask for some proficiency in using relational databases and in SQL. It’s a forgotten language, but still very much alive and in current use.

Some question whether SQL’s really a programming language at all – GitHub certainly don’t seem to think so. I guess it depends on the dialect of SQL we’re talking about: Transact-SQL, PL/SQL and the procedural dialects of PostgreSQL and MySQL have all of the features we’d expect from a programming language – variables, I/O, functions, control flow, etc. And I still see applications where more than half the code is written in stored procedures – though I certainly don’t condone that. But, yes, in those cases I think we have to concede that they are programming languages – every bit as much as Fortran and BASIC.

So, there you have it – the ugly truth. SQL is the most in-demand programming language. It might not be the one most developers want on their CV, but it’s one very many developers need on their CV.