The Power of Backwards

Last week, I posed a challenge on the @codemanship Twitter account to tell the story of Humpty Dumpty purely through code.

I tried this exercise two different ways: first, I had a go at designing a fluent API that I could use to tell the story, which turned out to be very hard. I abandoned that attempt and just started coding the story in a JUnit test.

This code, of course, won’t compile. If I paste this into a Java file, my IDE has many complaints – lots of red text.


But, thanks to the design of my editor, I can work backwards from that red text, declaring elements of code – methods, classes, variables, fields, constants etc – and start fleshing out a set of abstractions with which this nursery rhyme can be told.

humpty could be a local variable of type Humpty.


dumpty could be a field of Humpty, of type Dumpty.


sat() could be a method of Dumpty, with parameters on, a and wall.


And so on.
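Put together, the finished exercise might look something like this. This is a sketch, not my original source – all of the names here are illustrative guesses at the kind of API that falls out of the exercise:

```java
// A sketch of the exercise: start from the story we want to tell, then
// declare the abstractions by working backwards from the red text.
// All names here are illustrative, not the original source.
public class HumptyDumptyStory {

    // the usage code we start from: the story, told almost in plain English
    public static void main(String[] args) {
        Humpty humpty = new Humpty();
        Wall wall = new Wall();
        humpty.sat(On.A, wall);                     // Humpty Dumpty sat on a wall
        humpty.had(new GreatFall());                // Humpty Dumpty had a great fall
        King.allThe(Horses.class, Men.class)
            .couldntPut(humpty, Together.AGAIN);    // ...couldn't put Humpty together again
    }
}

// ...the abstractions declared by working backwards:
class Humpty {
    private boolean onWall;
    private boolean broken;

    void sat(On on, Wall wall) { onWall = true; }
    void had(GreatFall fall) { onWall = false; broken = true; }
    boolean isBroken() { return broken; }
}

class Wall {}
class GreatFall {}
class Horses {}
class Men {}
enum On { A }
enum Together { AGAIN }

class King {
    static King allThe(Class<?>... helpers) { return new King(); }
    void couldntPut(Humpty humpty, Together together) { /* alas, no effect */ }
}
```

Every declaration exists only because the story demanded it – the language emerges from the telling, not the other way around.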

Working backwards from the usage code like this, I was able to much more easily construct the set of abstractions I needed to tell the story. That is to say, I started with the story I wanted to tell, and worked my way back to the language I needed to tell it.

(You can see the source code for my second attempt online.)

As well as being an interesting exercise in using code to tell a story – something we could probably all use some practice at – this could also be an exercise in working backwards from examples using your particular editor or IDE.

In TDD training and coaching, I encourage developers to write their test assertion first and work backwards to the set-up. This neat little kata has reminded me of why.

But don’t take my word for it. Try this exercise both ways – forwards (design the API, then tell the story) and backwards (tell the story and reverse-engineer the API) – and see how you get on.

If I’ve spoiled Humpty Dumpty for you, here are some other nursery rhymes you could use.





Design Discovery In TDD Through Refactoring – An Example

There are many ways we can introduce new types of objects in a test-driven approach to design. Commonly, developers reference classes that don’t yet exist in their test, and then declare them so they can continue writing the test.

But if you favour back-loading the bulk of your design decisions until after you’ve passed the test, there’s a simple pattern I often use to help me discover objects through refactoring. The advantage of this see-the-code-then-extract-the-objects approach is that we’re more likely to end up with good abstractions, since the design of our objects is based purely on what’s needed to pass the test.

Let’s take a shopping basket example. If I write a test for adding an item to a shopping basket that references no implementation, with all the code contained inside the test:
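The original listing was an image; a rough reconstruction (with illustrative names and values) of what that all-in-the-test starting point might look like:

```java
import java.util.ArrayList;
import java.util.List;

public class ShoppingCartTest {

    public static void main(String[] args) {
        assert testAddItemToBasket() == 10.0;
    }

    // the whole "feature" lives inside the test: raw object arrays,
    // inline arithmetic, no classes encapsulating any of it
    static double testAddItemToBasket() {
        List<Object[]> basket = new ArrayList<>();

        // an "item" is just { product code, description, unit price, quantity }
        basket.add(new Object[] { "P111", "Widget", 2.5, 4 });

        // the total is calculated inline, against raw, exposed data
        double total = 0;
        for (Object[] item : basket) {
            total += (double) item[2] * (int) item[3];
        }
        return total;
    }
}
```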

This design currently suffers from a code smell called “Primitive Obsession” – there’s just raw, exposed data. And that’s to be expected, since none of the logic has been encapsulated inside classes (or closures, if you’re that way inclined).

The test passes, though. So, as icky as our design might be, it works.

Time to start refactoring. First, let’s isolate the action we’re testing into its own method.

What I’m interested in when I see a stateless method like addItem() is whether there’s an object identity lurking among its parameters – a potential this pointer, if you like. Right now, this method does action->object. To make it OO, we want to flip that around to object.action. I reckon the object in this case – the thing to which an item is being added – is the basket collection. Let’s introduce a parameter object for it.

This method is about adding an item to the shopping basket – as evidenced by the line:


We can eliminate this Feature Envy by moving addItem() to shoppingBasket, making it the target of the invocation.
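The shape of that flip – sketched here with illustrative names, since the original listings were images – is roughly:

```java
import java.util.ArrayList;
import java.util.List;

// Before: a stateless helper doing action->object, whose parameters
// hint at a hidden object identity
class BasketActions {
    static void addItem(ShoppingBasket shoppingBasket, Object[] item) {
        // Feature Envy: every line here is about shoppingBasket's data
        shoppingBasket.items().add(item);
    }
}

// After: addItem() moved onto ShoppingBasket, which becomes 'this'
class ShoppingBasket {
    private final List<Object[]> items = new ArrayList<>();

    List<Object[]> items() { return items; }

    void addItem(Object[] item) {
        items.add(item);
    }
}
```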

Now we can see that the code for calculating the total really belongs in the new ShoppingBasket class, putting that work where the data is. First we extract a total() method.

Then we can move total() to the object of its Feature Envy.

Looking inside the ShoppingBasket class, we see we have a bit of cleaning up to do.

Let’s properly encapsulate the list of items.

Our test code is much, much simpler now (a sign of improved encapsulation).

But we’re not done yet. Next, let’s turn our attention to the Long Parameter List code smell in addItem().

We can introduce a parameter object that holds all of the item data.

And now that we have a BasketItem class that holds the item data, we can add that to the list instead of the ugly and not-very-type-safe object array.

Then we clean up the test code a little more, with various inlinings.

Almost there.

The total() method has Feature Envy for data of BasketItem.

Let’s fix that.

This helps us to improve encapsulation in BasketItem.

Two last little notes: firstly, it turns out that for this particular action, product code and product description aren’t needed. I see developers do this often – modeling data based on their understanding of the domain rather than thinking specifically “what data is needed here?” This can lead to redundancy and unnecessary complexity. Until we have a feature that requires it, leave unused data out of the design. Always be led by function, not by data. We might get a requirement later to, say, report how many units of P111 we sold each day. We’d add the product code then.

Let’s fix that.
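Pulling those steps together, the end state looks roughly like this. Again, it’s a reconstruction with illustrative names (the original listings were images), with the unused product code and description already dropped:

```java
import java.util.ArrayList;
import java.util.List;

// Each item knows its own subtotal - fixing total()'s Feature Envy
class BasketItem {
    private final double unitPrice;
    private final int quantity;

    BasketItem(double unitPrice, int quantity) {
        this.unitPrice = unitPrice;
        this.quantity = quantity;
    }

    double subtotal() {
        return unitPrice * quantity;
    }
}

// The basket encapsulates its list of items completely
class ShoppingBasket {
    private final List<BasketItem> items = new ArrayList<>();

    void addItem(BasketItem item) {
        items.add(item);
    }

    double total() {
        double total = 0;
        for (BasketItem item : items) {
            total += item.subtotal();
        }
        return total;
    }
}
```

The test then shrinks to constructing a basket, adding a BasketItem and asserting on total() – a sign of the improved encapsulation.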

And finally, I started out with a test fixture for a “Shopping Cart”, but as the design emerged, that concept changed. It’s important to keep your tests – as living documentation for your code – in step with the emerging design, or things will get confusing.

Let’s fix that.

Now, you might have come up with this design up front. But then again, you might not. The benefit of this approach to discovering the design is that we start only with what we need, and end with an equivalent version with the code smells removed and nothing we don’t need.

The price is that you spend more time refactoring. But as the model emerges, we tend to find that this overhead decreases, and the emphasis shifts to reusing the design instead of discovering it every time. In other words, it gets easier as we go.


Foundations of Correct Code

When I introduce developers to practices like unit testing and TDD, I often find it useful to include a little background on the foundations of testing for software correctness.

You may well have heard the terms before: contract, pre-condition, post-condition. But you might not be aware of their origins. It’s good to know, because this is what forms the basis for describing what a program is expected to do, and therefore the basis for testing that it does.

Let’s jump in our time machine and vworp ourselves back to 1969. Computing pioneer Charles Antony Richard Hoare (Tony, for short – and now Sir Tony) proposed a simple logical formula for expressing the correct operation of a program.

{ P } C { Q }

If a pre-condition, P, holds when a command C is invoked, then if C executes correctly, a post-condition Q will be satisfied on completion. P and Q are predicates (assertions).

For example, consider a function that calculates square roots: sqrt(x). The pre-condition for calling sqrt is that x must be non-negative; otherwise the function won’t work correctly. Provided x is non-negative, if sqrt executes correctly, the return value multiplied by itself will be equal to x.

Expressed as a Hoare triple:

{ x >= 0 }  sqrt(x)  { sqrt(x) * sqrt(x) = x }

We see Hoare triples all the time in testing. Indeed, every test of logic must have a pre-condition, an action or command, and a post-condition. You may have seen them described using various conventions, such as:

“Given that x is a positive number,

When I square root x,

Then the result multiplied by itself is equal to x.”


Arrange: x = 4

Act: y = sqrt(x)

Assert: y * y = x

Even when we write it all as one line of code in a unit test, it still has the three components of a Hoare triple.

assertEquals(4, sqrt(4) * sqrt(4))

Unit tests can check that, in specific scenarios, programs do what they’re supposed to. i.e., they can test post-conditions. The set-up for each test determines whether the pre-condition is satisfied. (e.g., x = 4 satisfies x >= 0 ).

But what unit tests can’t check is that programs aren’t invoked when their pre-conditions aren’t satisfied.

We have two choices here: defensive programming, and Design by Contract.

Defensive programming guards the body of a function or method, checking that the pre-condition holds and throwing an error if it doesn’t.

double sqrt(double x) {

        if( x < 0 ) throw new IllegalArgumentException();

        // main body of method is never reached when x < 0
        // ...
}


In this implementation, throwing an exception when the pre-condition is broken is part of the correct operation of sqrt. Therefore, we can test that it does. But it complicates our logic: if client code can call sqrt with a negative number, then we will also need to write code to handle that exception when it arises. We’ve made handling negative inputs part of the logic of our program.
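To make that concrete, here’s a minimal sketch showing both sides of that coin – the post-condition test and the now-testable exception path. (Not the post’s original code: the main body is delegated to Math.sqrt purely for illustration.)

```java
public class SqrtContract {

    // defensive version: the guard clause is part of sqrt's correct behaviour
    static double sqrt(double x) {
        if (x < 0) throw new IllegalArgumentException("x must be >= 0");
        return Math.sqrt(x);   // stand-in for the real implementation
    }

    public static void main(String[] args) {
        // the post-condition holds for valid input
        double y = sqrt(4);
        assert y * y == 4;

        // a broken pre-condition now has a defined, testable outcome...
        boolean thrown = false;
        try {
            sqrt(-1);
        } catch (IllegalArgumentException e) {
            thrown = true;   // ...which the client must write code to handle
        }
        assert thrown;
    }
}
```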

A simpler approach would be to ensure that sqrt is never called when x < 0. For example, by ensuring that x is never less than zero.

Design by Contract works on the principle that for software to be correct, all of the interactions between its components must be correct. A Hoare triple describes a contract between a client and a supplier of a service (e.g., a method of a class), such that the supplier ensures that the post-condition is satisfied, and the client ensures that it never calls a method or function unless the pre-condition is satisfied. So there’s no need for defensive programming, and no need for handling those edge cases in our internal logic.

Of course, if there’s an error in our code, a function might still be invoked when a pre-condition doesn’t hold. So fans of DbC will put pre-condition checks in their code, typically for use during testing, to flag up any such errors.

double sqrt(double x) {

        assert( x >= 0 );

        // execution is halted if x < 0 and
        // assertion checking is switched on
        // ...
}


Most build systems and runtimes these days enable us to switch assertion checking on or off (e.g., on for testing, off for production – in Java, via the JVM’s -ea switch).

Personally, I tend to use a combination of unit testing (sometimes quite exhaustive unit testing) and inline assertions to check pre-conditions during testing. If a pre-condition fails, that means there’s an error in my client code that I need to fix before it goes any further down the delivery pipeline. I’m not in the business of throwing “Oops. My Code Is Wrong” exceptions in production systems. Nor should you be.

Hoare triples can also be used as a basis for more meaningful and rigorous code inspections. If you think about it, it’s not just programs or functions or methods that have constraints on their correct operation: every piece of executable code – statements, blocks, expressions – has an implied contract.

int x = 100;

int y = 10;

int z = x / y;      // pre: y != 0

return sqrt(z);     // pre: z >= 0

As we read through the code, line by line, we can ask questions like “What should be true after this line has executed? When would this line of code fail?” This can point to test cases we didn’t think of, and guide us to trace back the origins of our input data to ensure it can never harm that code.

One final thought on defensive programming vs. Design by Contract, relating to the design of the system as a whole: when it’s us writing the client code, we have control over whether the inputs are correct. When someone else is providing the inputs – e.g., a web service client, or an end user – we don’t have that control. So it’s usually necessary to code defensively at system/service boundaries.

The best option where UX/UI design is concerned is usually to not give end users the ability to make invalid choices. For example, the UI of an ATM offers end users a choice of withdrawal amounts.


Offering users choices that aren’t actually available and then displaying an error message when they select them not only annoys the user, but also complicates the application’s internal logic as it now has to handle more edge cases.

I’ve found the best approach to delivering correct software that’s simple in its internal logic is to design the UX carefully to simplify the inputs, do some defensive programming just below the UI to validate the inputs, and then apply Design by Contract for all the internal logic, with test-time assertion checking in case we made a boo-boo in some client code. Systems, components and services should have a hard outer shell – zero trust in any inputs – and a soft inner centre where inputs are trusted to be (almost) certainly valid.

Whether it’s defensive programming or DbC, though, there’s one approach I would never condone, and that’s deploying systems that don’t handle every allowable input or user action meaningfully. If the software lets you do it, the software must have a good answer to it. Understanding the contracts between the user (be it a human being or another system) and our software is the key to making sure that it does.


The Hidden Cost of “Dependency Drag”


The mysterious Sailing Stones of Death Valley are moved by some unseen natural force.

When I demonstrate mutation testing, I try to do it in the programming language my audience uses day-to-day. In most of the popular programming languages, there’s a usable, current mutation testing tool available. But for a long time, the .NET platform had none. That’s not to say there were never any decent mutation testing tools for .NET programs – there have been several. But they had all fallen by the wayside.

Here’s the thing: some community-spirited developer kindly creates a mutation testing tool we can all use. That’s a sizable effort for no financial reward. But still they write it. It works. Folk are using it. And there’s no real need to add to it. Job done.

Then, one day, you try to use it with the new version of the unit testing tool you’ve been working with, and – mysteriously – it stops working. Like the Sailing Stones of Death Valley, the mutation testing tool is inexplicably 100 metres from where you left it, and to get it working again it has to be dragged back to its original position.

This is the hidden cost of a force I might call Dependency Drag. I see it all the time: developers forced to maintain software products that aren’t changing, but that are getting out of step with the environment in which they run, which is constantly changing under their feet.

GitHub – and older OSS repositories – are littered with the sun-bleached skeletons of code bases that got so out of step they simply stopped working, and whose maintainers didn’t want to waste any more time keeping them operational. Too much effort just to stand still.

Most of us don’t see Dependency Drag, because it’s usually hidden within an overall maintenance effort on a changing product. And the effect is usually slow enough that it looks like the stones aren’t actually moving.

But try and use some code that was written 5 years ago, 10 years ago, 20 years ago, if it hasn’t been maintained, and you’ll see it. The stones are a long way from where you left them.

This effect can include hardware, of course. I hang on to my old 3D TV so that I can play my 3D Blu-rays. One day, that TV will break down. Maybe I’ll be able to find another one on eBay. But 10 years from now? 20 years from now? My non-biodegradable discs may last centuries if kept safe. But it’s unlikely there’ll be anything to play them on 300 years from now.

This is why it will become increasingly necessary to preserve the execution environments of programs as well as the programs themselves. It’s no use preserving the 1960s Fortran compiler if you don’t have the 1960s computer and operating system and punch card reader it needs to work.

And as execution environments get exponentially more complex, the cost of Dependency Drag will multiply.


Timeless Design Principles at JAX – Slide Deck

If you were at my keynote at JAX this morning – or if you weren’t and would like to see the slides – you can get them here.

Timeless Design Principles

I took the audience back in time – in my special non-copyrighted, royalty-free time machine – over 5 decades, looking at how basic principles of simple and modular design could have been applied in programming languages of the time.

Whether you’re working in Java or C#, Ruby or JavaScript, Visual Basic or C… these 8 timeless design principles can be applied.

So, no excuses! 😉

C and the Interface Segregation Principle

In a previous post I illustrated how S.O.L.I.D. principles could be applied in C, showing how function pointers can be used to achieve polymorphism. I made the point that interface segregation – the “I” in SOLID – was difficult in C. A few folk got in touch and asked me to expand on that.

In C, we have limited support for hiding functions a client doesn’t need to depend on using header files. Going back to the basic carpet quote example, we can define a .h with a function for calculating the area of a room:

…and a .h file for calculating how many flights of stairs will be involved, based on which floor the room’s on (B, G, 1, 2, 3 etc.).

We can implement both of these functions in room.c.

A client that needs to know how many flights of stairs are involved doesn’t need to include room_area.h.

And a client that needs the room’s area doesn’t need to include floor_level.h.

So we have limited support for presenting client-specific interfaces for the same module. Specifically, we can do this if area() and flightsOfStairs() only have one implementation each. If we need to support multiple implementations – polymorphism – it gets much more complicated, involving convoluted logic around vtables, and impacts the readability of the code.


UPDATE: So, hey. I remembered how you can do interface segregation using vtables. Here’s a slide deck about my solid_c adventure. And here’s the final source code.



Are You A Full Full-Stack Developer?

This tweet from a conference talk by Kevlin Henney reminded me of a discussion I had with a development team last week about the meaning of “full-stack developer”.

I think Kevlin’s absolutely right. It doesn’t just pertain to the technology stack. And I would go further: I believe a full-stack developer can be involved throughout the entire product lifecycle.

We can be there right at the start, when the business is envisioning strategic solutions to real business problems. Indeed, in some organisations, it’s developers who often put forward the ideas. And why not, after all? We probably have a wider toolbox to draw from when we consider how technology might help to solve a business problem. And we probably have a better handle on what might be easy and what might be hard to do.

It’s also vitally important that dev teams have a good understanding of the problem they’re setting out to solve. Too often, when devs are brought in later in the process, they lack that understanding, and the business pays the price in a lack of clear direction and an inability to prioritise the work.

Likewise with the early stages of the design process: teams that get handed, say, wireframes and told to “code this” often run into difficulties as they realise that UI mock-ups aren’t enough. Exactly what should happen when the user clicks that button? If they weren’t in the discussion, then they’ll need to have the discussion again. Or take a guess.

And at the other end, instead of throwing software over the wall into testing and then production and then waiting for the bug reports to start flooding in, developers can get involved there. Certainly, there’s much we can do to help as developers-in-test in automating and scaling testing so we can test more, and test faster. And by getting involved with software operations – monitoring, testing and observing our software in real use in the real world, we tend to learn a tonne of useful stuff that can feed back into the all-important next iteration of the product.

Kevlin hits the nail on the head: software development should start and end in the real world, with real end users, solving real problems. And that, to me, is best achieved when developers are involved throughout. The most effective devs wear multiple hats: strategy, business analysis, requirements engineering, UX, architecture, database design and administration, information security, test design and automation, and operations and support.

We don’t need to be experts in all of them – as long as we have experts to drive those key activities – but we do need to be generalising specialists who can contribute effectively to all those processes.

In other words, not just coders.

No, But Seriously…

My blog’s been going for 14 years (including the blog at the old location), and it seems I’ve posted many times on the topic of developers engaging directly with their end users. I’ve strongly recommended it many, many times.

I’m not talking about sitting in meeting rooms asking “What software would you like us to build?” That’s the wrong question. If your goal is to build effective solutions, we need to build a good understanding of the problems we’re setting out to solve.

My whole approach to software development is driven by problems – understanding them, designing solutions for them, testing those solutions in the real (or as real as possible) world, and feeding back lessons learned into the next design iteration.

That, to me, is software development.

And I’ve long held that to really understand our end users, we must become them. Even if it’s just for a short time. We need to walk a mile in their shoes, eat our own dog food, and any other metaphor for experiencing what it’s like to do their job using our software.

Traditional business and requirements analysis techniques – with which I’m very familiar – are completely inadequate to the task. No number of meetings, boxes and arrows, glossaries and other analysis paraphernalia will come close to seeing it and experiencing it for yourself.

And every time I say this, developers nod their heads and agree that this is sage advice indeed. And then they don’t do it. Ever.

In fact, many developers – at the suggestion of spending time actually embedded in the business, seeing how the business works and the problems the business faces – run a mile in the opposite direction. Which is a real shame, because this really is – hands down – the most useful thing we could do. Trying to solve problems we don’t understand is a road to nowhere.

So, I’ll say it again – and keep saying it.

Developers – that’s you, probably – need to spend real, quality time embedded with your end users, seeing how they work, seeing how they use your software, and experiencing all of that for yourselves. It should be a regular thing. It should be the norm. Don’t let a business analyst take your place. Experiencing it second- or third-hand is no substitute for the real thing. If the goal is to get the problem into your head, then your head really needs to be there.

If your software is used internally within the business, embed in those teams. If your software’s used by external customers, become one of them. And spend time in the sales team. Spend time in the support team. Spend time in the marketing team. Find out for yourself what it’s really like to live with the software you create. I’m always amazed at how many dev teams literally have no idea.

Likely as not, it will radically transform the way you think about your product and your development processes.


The Most Popular Programming Language in 2019? You’re Not Going To Like It…

I threw a curveball on Twitter yesterday.

I’m not at all surprised to see SQL scoring so low, with many folk asking “Why is SQL on this list and not Java?”

It depends, of course, on what we mean by “popular” (and by “programming language”). If we mean “liked by developers”, then I’m frankly surprised SQL scored as high as it did. I’m certainly no fan – always looking for ways to write no SQL at all if I can help it – and I know many devs are none too keen, either.

But if you ask employers, it’s a different story. In the UK, for example, SQL is the most in-demand programming language recruiters ask for. According to job-site data, more jobs advertised over the last 6 months mentioned SQL than any other language. (When I searched some of the top job sites in other countries, the trend was the same: more results returned for “SQL” than any other language.)

And this makes sense, when you think about it. While, these days, most jobs don’t specifically ask for a “SQL developer”, many developer jobs do ask for some proficiency in using relational databases and in SQL. It’s a forgotten language, but still very much alive and in current use.

Some question whether SQL’s really a programming language at all – GitHub certainly don’t seem to think so. I guess it depends on the dialect of SQL we’re talking about: Transact-SQL, PL/pgSQL, MySQL’s stored procedure language and PL/SQL have all of the features we’d expect from a programming language – variables, I/O, functions, control flow, etc. And I still see applications where more than half the code is written in stored procedures – though I certainly don’t condone that. But, yes, in those cases I think we have to concede that they are programming languages – every bit as much as Fortran and BASIC.

So, there you have it – the ugly truth. SQL is the most in-demand programming language. It might not be the one most developers want on their CV, but it’s one very many developers need on their CV.

Frameworks or Patterns? – Going Old School

Pairing with JavaScript developers this week, one of the things that struck me is how much many of us now rely on heavyweight frameworks to do sometimes quite simple things we used to do the old-fashioned way.

The example we were looking at is how Model-View-Controller is implemented in the web browser. MVC is a pretty simple design pattern, which typically builds on the even simpler Observer pattern. The goal is to have our user interface automatically refresh when changes are made to the state of our model.

We use observers to make it so that views can be called back when a model object’s state changes, without binding the model directly to the user interface.

So we split our logic into three distinct responsibilities: the model represents the internal data and logic of the application, independent of the user interface. Views represent that internal data and logic to the end user. And controllers respond to user actions and events and forward requests to the model. (In event-driven programming, we call these “event handlers”.)

In 2019, it’s customary for JavaScript developers to use a framework like React or Angular to wire MVC implementations together. But is it really necessary a lot of the time? Can we do MVC without them?

Well, yes we can. Quite easily.

Let’s look at a very simple example: a clock. Here’s our Clock model:

Aside from the core logic, you’ll notice a little extra code to add observers and notify them after the clock’s state has changed. The Observable class takes care of how this is handled.

The flexibility here is that we can add as many observers as we like, and Clock doesn’t need to know who is being notified. So we can have multiple views being updated on every state change.

In this example, we have two. One displays the clock’s state in hours, minutes and seconds.

And another displays the total number of seconds elapsed.

The views inject the inner HTML into a named placeholder in the web page.

And when it’s displayed, it looks like this:


Every time the clock ticks, both views are updated. This can allow us to build very reactive user experiences.

When the user clicks the Reset button, the clock is set back to 0:0:0 and starts ticking again. This is handled by a ClockController that is wired as a listener (another name for an observer) to the Reset button’s click event.

Note that all the controller does is forward the user’s request to the Clock object. It’s important that controllers (and services) don’t include any core logic. Their job is purely to forward the request to wherever that core logic is implemented.

It’s all wired together from the outside using dependency injection.

Notice that all the implementation dependencies are in this module. This is called Inversion of Control – the runtime order of implementation method calls in our MVC flow is determined by dependency injection, from above. This offers a lot of flexibility, as do the implicit abstracted invocations of the Observer pattern.

(Notice, too, how the document elements into which the view’s content will be injected are passed in as references to each view, allowing us to have multiple instances on the same web page. By this mechanism, we could also pass views into views, enabling us to create composite views.)
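Since the original code images haven’t survived, here’s a compact sketch of the whole wiring – model, two views, controller, all injected from the outside. It’s rendered in Java for illustration (the post’s example was vanilla JavaScript), and every name is illustrative:

```java
import java.util.ArrayList;
import java.util.List;

interface Observer {
    void update();
}

class Observable {
    private final List<Observer> observers = new ArrayList<>();
    void addObserver(Observer observer) { observers.add(observer); }
    // notify every registered observer - the model never knows who they are
    protected void notifyObservers() {
        for (Observer o : observers) o.update();
    }
}

// the model: core logic, no knowledge of any user interface
class Clock extends Observable {
    private int totalSeconds;
    int totalSeconds() { return totalSeconds; }
    int hours()   { return totalSeconds / 3600; }
    int minutes() { return (totalSeconds % 3600) / 60; }
    int seconds() { return totalSeconds % 60; }
    void tick()  { totalSeconds++; notifyObservers(); }
    void reset() { totalSeconds = 0; notifyObservers(); }
}

// first view: hours, minutes and seconds
// (in the JavaScript original, html is injected into a placeholder element)
class TimeView implements Observer {
    private final Clock clock;
    String html = "";
    TimeView(Clock clock) { this.clock = clock; clock.addObserver(this); }
    public void update() {
        html = clock.hours() + ":" + clock.minutes() + ":" + clock.seconds();
    }
}

// second view: total seconds elapsed
class ElapsedView implements Observer {
    private final Clock clock;
    String html = "";
    ElapsedView(Clock clock) { this.clock = clock; clock.addObserver(this); }
    public void update() { html = clock.totalSeconds() + "s elapsed"; }
}

// the controller forwards the Reset click to the model - no core logic here
class ClockController {
    private final Clock clock;
    ClockController(Clock clock) { this.clock = clock; }
    void onResetClicked() { clock.reset(); }
}

public class ClockApp {
    public static void main(String[] args) {
        // dependency injection: everything wired together from the outside
        Clock clock = new Clock();
        TimeView time = new TimeView(clock);
        ElapsedView elapsed = new ElapsedView(clock);
        ClockController controller = new ClockController(clock);

        clock.tick();
        controller.onResetClicked();
    }
}
```

Both views refresh on every tick, and the model remains oblivious to how many of them there are.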

For additional flexibility, some developers use a Publish-Subscribe pattern instead of Observer. This can enable multiple threads, and even multiple networked machines, to receive notifications. Being asynchronous, it can scale MVC for high-volume distributed architectures. Observer has the limitation of being synchronous – typically (though it doesn’t have to be) – and of requiring observers to be in the same process as the observed. Having said all that, in 95% of applications, Observer is perfectly adequate (and considerably simpler).

And model observers don’t have to be views. I’ve used the same pattern in object persistence. If you think about it, a row in a database table is just another view of a model object. With a bit of adaptation, a Unit of Work – an object that captures such events – could be notified of a state change in a model object.

To sum up, then: MVC is pretty easy to do in vanilla JavaScript. Arguably it’s no easier when using the big front-end frameworks, and they can add a lot of extra code to your web page. You don’t need to buy the whole Mercedes if you just want the cigarette lighter.

(Okay, so if you look at the source code for my example, I have added Bootstrap, for prettifying my web page. Mea culpa!)