Visualising Architecture – London, Sept 4-5

Just wanted to quickly announce the first of a new series of hands-on 2-day workshops on software architecture aimed at Agile teams.

Visualising Architecture teaches diagramming techniques that developers can use to help them better understand and communicate key architectural ideas – essential to help quickly and effectively build a shared understanding among team members.

The public course will be in London W1A on September 4-5. If you’re looking to build your Architect Fu, this is a great investment in your own or your dev team’s software design skills.

You can find out more and book your place by visiting the Eventbrite page.

Software Architecture (for Architectless Teams)

Since the rise of Agile Software Development post-2001, it’s become more and more common for development teams to work without a dedicated software architect. We, quite rightly, eschewed the evils of Big Design Up-Front (BDUF) after the Big Architecture excesses of the 1990s.

As someone who worked in senior architecture roles, I can attest that much of the work I was asked to do – e.g., writing architecture documents – added very little value to the end product. Teams rarely read them, let alone followed their blueprints. The resulting software turned out the way the developers wrote it, regardless of the architect’s input. Which is why I returned to hands-on development, realising it was the most direct way to have an influence on the actual software design.

But I can’t help feeling we threw the baby out with the bathwater. Too many teams seem to lack a vision of the bigger picture. They don’t see how the pieces fit together, or how the system fits in the real world of businesses or homes or cars or hospitals etc. The design emerges through the uncoordinated actions of individual developers, pairs and teams (and teams of teams) with no real oversight and nobody at the steering wheel.

Big decisions – decisions that are hard to reverse – get made on the fly with little thought given to the long-term consequences. For example, thanks to the rise of build and package management solutions like Maven, NuGet and NPM, developers these days can take on dependencies on third-party code without giving it much thought. We’ve seen where unmanaged dependencies can lead.

Most damaging, though, is the inability of many Agile teams to effectively collaborate on the design, making the bigger decisions together in response to a shared understanding of the architectural requirements. I’m reminded of a team I worked on who went off in their pairs after each planning meeting and came up with their own architectures for their specific user stories. The software quickly grew out of control, with no real conceptual integrity to the design. It became three architectures in one code base. We had three distinct MVC styles in the same software at the same time, and even three customer tables in the database. A little time around a whiteboard – or a bit of mob programming every now and again – could have minimised this divergence.

I firmly believe that software architecture is due for a comeback. I doubt it would return in the Big Architecture form we rejected after the 90s – and I’ll work hard to make sure it doesn’t. I doubt that such things as “software architects” will make a grand return, either. Most Agile teams will still work without a dedicated architect. Which is fine by me.

But teams without architects still need architecture. They need to be discussing, planning, visualising, evaluating and evolving the Big Picture design of their software as a cohesive unit, not as a bunch of people all doing their own thing.

Through Codemanship, I’m going to be doing my bit to bring back architecture through a new training offering aimed at Agile teams who don’t have a dedicated person in that role.

Rolling out later this summer, it will focus on key areas of software architecture:

  • Visualising Architecture (yes, the boxes and arrows are back!)
  • Architectural Requirements (runtime and development time requirements of design)
  • Collaborative Design Processes (how do we go from a user story to a software design, making key decisions as a team?)
  • Contextual Architecture (how does our software fit into the real world?)
  • Strategic Architecture (how does our software solve real business problems?)
  • Evaluating Architecture (how do we know if it’s working?)
  • Evolving Architecture (sure, it works now, but what about tomorrow?)
  • Architectural Principles (of maintainability, scalability, security etc)
  • Architectural Patterns (common styles of software architecture – SOA, Monolith, Event-Driven, Pipes-and-filters, etc)

That’s a lot of ground to cover. Certainly a single 2-3 day workshop won’t be anywhere near enough time to do it justice, so I’m devising a series of 2-day workshops that will each focus on a specific area:

  1. Visualising
  2. Planning
  3. Evolving
  4. Principles & Patterns

All will place the emphasis on how architectural activities can work as part of an Agile development process, and all will assume that the team shares the responsibility for architecture.

Keep your eyes on the @codemanship Twitter account and this blog for news.


Wheels Within Wheels Within Wheels

Much is made of the cycles-within-cycles of Test-Driven Development.

At the core, we do micro-iterations with small, single-question unit tests to drive out the details of our internal design.

Surrounding those micro-cycles are the feedback loops provided by customer tests, which may require us to pass multiple unit tests to complete end-to-end.

User stories typically come with multiple customer tests – happy paths and edge cases – providing us with bigger cycles around our customer test feedback loops.

Orbiting those are release loops, where we bundle a set of user stories and await feedback from end users in the real world (or a simulated approximation of it for test purposes).

What’s not discussed, though, are the test criteria for those release loops. If we already established through customer testing that we delivered what we agreed we would in that release, what’s left to test for?

The minority of us who practice development driven by business goals may know the answer: we test to see if what we released achieves the goal(s) of that release.

[Diagram: the nested feedback loops of development, from unit tests out to strategic goals]

This is the outer feedback loop – the strategic feedback loop – that most dev teams are missing. If we’re creating software with a purpose, it stands to reason that at some point we must test for its fitness for that purpose. Does it do the job it was designed to do?

When explaining strategic feedback loops, I often use the example of a business start-up that delivers parcels throughout the London area. They have a fleet of delivery vans that go out every day across the city, delivering parcels – received into their depot overnight – to a list of addresses.

Delivery costs form the bulk of their overheads. They rent the vans. They charge them up with electrical power (it’s an all-electric fleet – green FTW!). They pay the drivers. And so on. It all adds up.

Business is good, and their customer base is growing rapidly. Do they rent more vans? Do they hire more drivers? Do they do longer routes, with longer driver hours, more recharging return-to-base trips, and higher energy bills? Or could the same number of drivers, in the same number of vans, deliver more parcels with the same mileage as before? Could their deliveries be better optimised?

Someone analyses the routes drivers have been taking, and theorises that they could have delivered the same parcels in less time, driving fewer miles. They believe it could be done 35% more efficiently just by optimising the routes.

Importantly, using historical delivery and route data, they show on paper that an algorithm they have in mind would have saved 37% on miles and driver-hours. I, for one, would think twice about setting out to build a software system that implements unproven logic.

But the on-paper execution of it takes far too long. So they hatch a plan for a software system that selects the optimum delivery routes every day using this algorithm.

Taking route optimisation as the headline goal, the developers produce a first release in 2 weeks that takes in delivery addresses from an existing data source and – as command line utility initially – produces optimised routes in simple text files to be emailed to the drivers’ smartphones. It’s not pretty, and not a long-term solution by any means. But the core logic is in there, it’s been thoroughly unit and customer tested, and it seems to work.
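To give a flavour, one of those customer tests might replay a day of historical deliveries through the new logic and check the promised saving. This is purely a sketch of my own – RouteOptimiser, HistoricalRoutes and Delivery are invented names, not the team’s actual code:

    @Test
    public void optimisedRoutesBeatHistoricalMileageByAtLeast35Percent() {
        // one real day of deliveries, and the miles the vans actually drove
        List<Delivery> deliveries = HistoricalRoutes.deliveriesFor("2019-03-01");
        double actualMiles = HistoricalRoutes.milesDrivenOn("2019-03-01");

        double optimisedMiles = new RouteOptimiser().planRoutes(deliveries).totalMiles();

        // the algorithm promised at least a 35% saving on mileage
        assertTrue(optimisedMiles <= actualMiles * 0.65);
    }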

While the software developers move on to thinking about how the system could be made more user-friendly with a graphical UI (e.g., a smartphone app), the team – which includes the customer – monitor deliveries for the next couple of weeks very closely. How long are the routes taking? How many miles are vans driving? How much energy is being used on each route? How many recharging pit-stops are drivers making each day?

This is the strategic feedback loop: have we solved the problem? If we haven’t, we need to go around again and tweak the solution (or maybe even scrap it and try something else, if we’re so far off the target, we see no value in continuing down that avenue).

This is my definition of “done”; we keep iterating until we hit the target, learning lessons with each release and getting it progressively less wrong.

Then we move on to the next business goal.

When Should We Do Code Reviews?

One question that I get asked often is “When is the best time to do code reviews?” My pithy answer is: now. And now. And now. Yep, and now.

Typically, teams batch up a whole bunch of design decisions for a review – for example, in a pull request. If we’ve learned anything about writing good software, it’s that the bigger the batch, the more slips through the quality control net.

Releasing 50 features at a time, every 12 months, means we tend to bring less focus to testing each feature to see if it’s what the customer really needs. Releasing one feature at a time allows us to really focus in on that feature, see how it gets used, see how users respond to it.

Reviewing 50 code changes at a time gives similarly woolly results. A tonne of code smells tend to make it into production. Reviewing a handful of code changes – or, ideally, just one – at a time brings much more focus to each change.

Unsurprisingly, teams who review code continuously, working in rapid feedback cycles (e.g., doing TDD) tend to produce cleaner code – code that’s easier to understand, simpler, has less duplication and more loosely-coupled modules. (We’ve measured this – for example in this BBC TDD case study.)

One theory about why TDD tends to produce cleaner code is that the short feedback loops – “micro-cycles” – bring much more focus to every design decision. TDD deliberately has a step built in to each micro-cycle to stop, look at the code we just wrote or changed, and refactor if necessary. I strongly encourage developers not to waste this opportunity. The Green Light is our signal to do a mini code-review on the work we just did.

I’ve found, through working with many teams, that the most effective code reviews are rigorous and methodical. Check all the code that changed, and check for a list of potential code quality issues every single time. Don’t just look at the code to see if it “looks okay” to you.

In the Codemanship TDD course, I ask developers to run through a check list on every green light:

  • Is the code easy to understand? (Not sure? Ask someone else.)
  • Is there obvious duplication?
  • Is each method or function and class or module as simple as it could be?
  • Do any methods/functions or classes/modules have more than one responsibility?
  • Can you see any Feature Envy – where a method/function (or part of a method/function) of one class/module depends on multiple features of another class/module? (See the sketch after this list.)
  • Are a class’s/module’s dependencies easily swappable?
  • Is the class/module exposed to things it isn’t using (e.g., methods of a C++ interface it doesn’t call, or unused imports from other modules)?
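To illustrate the Feature Envy check, here’s a small example of my own (not from the course materials):

    class Order {
        String getCustomerName() { return "..."; }
        int getItemCount() { return 0; }
        double getTotal() { return 0.0; }
    }

    class OrderReport {
        // Feature Envy: summary() uses several features of Order and none of
        // OrderReport's own - a strong hint that it belongs on Order instead
        String summary(Order order) {
            return order.getCustomerName() + ": " + order.getItemCount()
                    + " items, total " + order.getTotal();
        }
    }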

You may, according to your needs and your team’s coding standards, have a different checklist. What seems to make the difference is that your team has a checklist, and that you are in the habit of applying it whenever you have the luxury of working code.

This is where the relationship exists between code review and Continuous Delivery. If our code isn’t working, it isn’t shippable. If you go for hours at a time with failing automated tests (or no testing at all), code review is a luxury. Your top priority is to get it working – that’s the most important quality of any software design. If it doesn’t work, and you can’t deploy it, then whether or not there are any, say, long parameter lists in it is rather academic.

Now, I appreciate that stopping on every passing test and going through a checklist for all the code you changed may sound like a real drag. But, once upon a time, writing a unit test, writing the test assertion first and working backwards, remembering to see the test fail, and all the other habits of effective TDD felt like a bit of a chore. Until I’d done them 10,000 times. And then I stopped noticing that I was doing them.

The same goes for code review checklists. The more we apply them, the more it becomes “muscle memory”. After a year or two, you’ll develop an intuitive sense of code quality – problems will tend to leap out at you when you look at code, just as one bum note in an entire orchestra might leap out at a conductor with years of listening experience and ear training. You can train your eyes to notice code smells like long methods, large classes, divergent change, feature envy, primitive obsession, data clumps and all the other things that can make code harder to change.

This is another reason why I encourage very frequent code reviews. If you were training your musical ear, one practice session every couple of weeks is going to be far less effective than 20 smaller practice sessions a day. And if each practice session is much more focused – i.e., we don’t ear-train musicians with whole symphonies – then that, too, will speed up the learning curve.

The other very important reason I encourage continuous code review is that when we batch them up, we also tend to end up with batches of remedial actions to rectify any problems. If I add a branch to a method, review that, and decide that method is now too logically complex, fixing it there and then is a doddle.
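For example – my own illustration, not code from a client – suppose a new loyalty discount has just added a branch that makes a method noticeably harder to follow:

    // before: the new branch buries the pricing rules in nested ifs
    double priceWithDiscount(double total, boolean loyaltyMember) {
        if (loyaltyMember) {
            if (total > 100) {
                return total * 0.90;
            }
            return total * 0.95;
        }
        return total;
    }

    // after the on-the-spot fix: the special case is extracted
    double price(double total, boolean loyaltyMember) {
        return loyaltyMember ? loyaltyDiscount(total) : total;
    }

    private double loyaltyDiscount(double total) {
        return total > 100 ? total * 0.90 : total * 0.95;
    }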

If I make 50 boo-boos like that, not only will an after-the-horse-has-bolted code review probably miss many of those 50 issues, but the resulting TO-DO list is likely to require an amount of time and effort that will make it a task that has to be scheduled – very possibly by someone who doesn’t understand the need to do them. In the zero-sum game of software development scheduling, the most common result is that the work never gets done.


The Power of Backwards

Last week, I posed a challenge on the @codemanship Twitter account to tell the story of Humpty Dumpty purely through code.

I tried this exercise two different ways: first, I had a go at designing a fluent API that I could use to tell the story, which turned out to be very hard. I abandoned that attempt and just started coding the story in a JUnit test.

This code, of course, won’t compile. If I paste this into a Java file, my IDE has many complaints – lots of red text.

[Screenshot: the rhyme coded as a JUnit test – nothing is declared yet, so the IDE flags every name in red]
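The gist of that starting point – a sketch of my own, the actual code being in the GitHub repo linked below – is pure story-telling code, written as if the language for telling it already existed:

    @Test
    public void humptyDumpty() {
        humpty.dumpty.sat(on, a, wall);   // every name here is still undeclared
        // ... and so on for the rest of the rhyme
    }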

But, thanks to the design of my editor, I can work backwards from that red text, declaring elements of code – methods, classes, variables, fields, constants etc – and start fleshing out a set of abstractions with which this nursery rhyme can be told.

humpty could be a local variable of type Humpty.

[Screenshot: the IDE offering to declare humpty as a local variable of type Humpty]

dumpty could be a field of Humpty, of type Dumpty.

[Screenshot: the IDE offering to declare dumpty as a field of Humpty, of type Dumpty]

sat() could be a method of Dumpty, with parameters on, a and wall.

[Screenshot: the IDE offering to declare sat() as a method of Dumpty]

And so on.
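Roughly speaking, those declarations leave behind scaffolding like this – again a sketch, with the parameter types (Preposition, Article, Wall) being my guesses rather than the repo’s:

    class Preposition {}
    class Article {}
    class Wall {}

    class Humpty {
        Dumpty dumpty = new Dumpty();
    }

    class Dumpty {
        void sat(Preposition on, Article a, Wall wall) {
            // behaviour fleshed out later, test-first
        }
    }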

Working backwards from the usage code like this, I was able to much more easily construct the set of abstractions I needed to tell the story. That is to say, I started with the story I wanted to tell, and worked my way back to the language I needed to tell it.

(You can see the source code for my second attempt at https://github.com/jasongorman/HumptyDumpty)

As well as being an interesting exercise in using code to tell a story – something we could probably all use some practice at – this could also be an exercise in working backwards from examples using your particular editor or IDE.

In TDD training and coaching, I encourage developers to write their test assertion first and work backwards to the set-up. This neat little kata has reminded me of why.

But don’t take my word for it. Try this exercise both ways – forwards (design the API then tell the story) and backwards (tell the story and reverse-engineer the API) – and see how you get on.

If I’ve spoiled Humpty Dumpty for you, here are some other nursery rhymes you could use.


Design Discovery In TDD Through Refactoring – An Example

There are many ways we can introduce new types of objects in a test-driven approach to design. Commonly, developers reference classes that don’t yet exist in their test, and then declare them so they can continue writing the test.

But if you favour back-loading the bulk of your design decisions until after you’ve passed the test, there’s a simple pattern I often use to help me discover objects through refactoring. The advantage of this see-the-code-then-extract-the-objects approach is that we’re more likely to end up with good abstractions, since the design of our objects is based purely on what’s needed to pass the test.

Let’s take a shopping basket example. If I write a test for adding an item to a shopping basket that references no implementation, with all the code contained inside the test:
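The code screenshots from the original post are missing, so throughout this walkthrough I’ll sketch what each step might have looked like, with the details (product code, description, unit price, quantity) inferred from the rest of the post. The starting point, then, might be something like:

    import static org.junit.Assert.assertEquals;

    import java.util.ArrayList;
    import java.util.List;

    import org.junit.Test;

    public class ShoppingCartTest {

        @Test
        public void addsItemToBasket() {
            List<Object[]> basket = new ArrayList<>();

            // item = { product code, description, unit price, quantity }
            Object[] item = new Object[] { "P111", "Widget", 2.50, 4 };
            basket.add(item);

            double total = 0;
            for (Object[] lineItem : basket) {
                total += (double) lineItem[2] * (int) lineItem[3];
            }

            assertEquals(10.0, total, 0);
        }
    }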

This design currently suffers from a code smell called “Primitive Obsession”. There’s just raw, exposed data. And we’d expect this if we hadn’t encapsulated any of the logic inside classes (or closures, if you’re that way inclined.)

The test passes, though. So, as icky as our design might be, it works.

Time to start refactoring. First, let’s isolate the action we’re testing into its own method.
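Sketched, that gives us a stateless helper inside the test class:

    @Test
    public void addsItemToBasket() {
        List<Object[]> basket = new ArrayList<>();
        addItem(basket, "P111", "Widget", 2.50, 4);
        // ... total calculation and assertion as before ...
    }

    private void addItem(List<Object[]> basket, String productCode,
                         String description, double unitPrice, int quantity) {
        basket.add(new Object[] { productCode, description, unitPrice, quantity });
    }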

What I’m interested in when I see a stateless method like addItem() is whether there’s an object identity lurking among its parameters – a potential this pointer, if you like. Right now, this method does action->object. To make it OO, we want to flip that around to object.action. I reckon the object in this case – the thing to which an item is being added – is the basket collection. Let’s introduce a parameter object for it.
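In sketch form, ShoppingBasket starts life as a thin wrapper around the list, and addItem() takes it as a parameter:

    public class ShoppingBasket {
        private final List<Object[]> basket = new ArrayList<>();

        public List<Object[]> getBasket() {
            return basket;
        }
    }

    // still in the test class:
    private void addItem(ShoppingBasket shoppingBasket, String productCode,
                         String description, double unitPrice, int quantity) {
        Object[] item = new Object[] { productCode, description, unitPrice, quantity };
        shoppingBasket.getBasket().add(item);
    }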

This method is about adding an item to the shopping basket – as evidenced by the line:

shoppingBasket.getBasket().add(item);

We can eliminate this Feature Envy by moving addItem() to shoppingBasket, making it the target of the invocation.
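After the move, something like:

    public class ShoppingBasket {
        private final List<Object[]> basket = new ArrayList<>();

        public void addItem(String productCode, String description,
                            double unitPrice, int quantity) {
            basket.add(new Object[] { productCode, description, unitPrice, quantity });
        }

        public List<Object[]> getBasket() {
            return basket;
        }
    }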

Now we can see that the code for calculating the total really belongs in the new ShoppingBasket class, putting that work where the data is. First we extract a total() method.

Then we can move total() to the object of its Feature Envy.
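Sketching the two steps together:

    public class ShoppingBasket {
        private final List<Object[]> basket = new ArrayList<>();

        // ... addItem() and getBasket() as before ...

        public double total() {
            double total = 0;
            for (Object[] item : basket) {
                total += (double) item[2] * (int) item[3];
            }
            return total;
        }
    }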

Looking inside the ShoppingBasket class, we see we have a bit of cleaning up to do.

Let’s properly encapsulate the list of items.
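With the raw list hidden away, the class might now look like:

    public class ShoppingBasket {
        private final List<Object[]> basket = new ArrayList<>();

        public void addItem(String productCode, String description,
                            double unitPrice, int quantity) {
            basket.add(new Object[] { productCode, description, unitPrice, quantity });
        }

        public double total() {
            double total = 0;
            for (Object[] item : basket) {
                total += (double) item[2] * (int) item[3];
            }
            return total;
        }

        // getBasket() is gone - nobody outside needs the raw list any more
    }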

Our test code is much, much simpler now (a sign of improved encapsulation).
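Something like:

    @Test
    public void addsItemToBasket() {
        ShoppingBasket shoppingBasket = new ShoppingBasket();
        shoppingBasket.addItem("P111", "Widget", 2.50, 4);
        assertEquals(10.0, shoppingBasket.total(), 0);
    }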

But we’re not done yet. Next, let’s turn our attention to the Long Parameter List code smell in addItem().

We can introduce a parameter object that holds all of the item data.

And now that we have a BasketItem class that holds the item data, we can add that to the list instead of the ugly and not-very-type-safe object array.
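A sketch of the two steps combined (each class in its own file):

    public class BasketItem {
        private final String productCode;
        private final String description;
        private final double unitPrice;
        private final int quantity;

        public BasketItem(String productCode, String description,
                          double unitPrice, int quantity) {
            this.productCode = productCode;
            this.description = description;
            this.unitPrice = unitPrice;
            this.quantity = quantity;
        }

        public double getUnitPrice() { return unitPrice; }
        public int getQuantity() { return quantity; }
    }

    public class ShoppingBasket {
        private final List<BasketItem> items = new ArrayList<>();

        public void addItem(BasketItem item) {
            items.add(item);
        }

        public double total() {
            double total = 0;
            for (BasketItem item : items) {
                total += item.getUnitPrice() * item.getQuantity();
            }
            return total;
        }
    }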

Then we clean up the test code a little more, with various inlinings.
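Leaving the test looking something like:

    @Test
    public void addsItemToBasket() {
        ShoppingBasket shoppingBasket = new ShoppingBasket();
        shoppingBasket.addItem(new BasketItem("P111", "Widget", 2.50, 4));
        assertEquals(10.0, shoppingBasket.total(), 0);
    }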

Almost there.

The total() method has Feature Envy for data of BasketItem.

Let’s fix that.
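Perhaps by giving BasketItem a subtotal() of its own:

    public class BasketItem {
        // ... fields and constructor as before ...

        double subtotal() {
            return unitPrice * quantity;
        }

        // getUnitPrice() and getQuantity() can now be deleted
    }

    public class ShoppingBasket {
        // ...

        public double total() {
            double total = 0;
            for (BasketItem item : items) {
                total += item.subtotal();
            }
            return total;
        }
    }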

This helps us to improve encapsulation in BasketItem.

Two last little notes: firstly, it turns out that for this particular action, product code and product description aren’t needed. I see developers do this often – modeling data based on their understanding of the domain rather than thinking specifically “what data is needed here?” This can lead to redundancy and unnecessary complexity. Until we have a feature that requires it, leave unused data out of the design. Always be led by function, not by data. We might get a requirement later to, say, report how many units of P111 we sold each day. We’d add the product code then.

Let’s fix that.
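Leaving a leaner BasketItem, something like:

    public class BasketItem {
        private final double unitPrice;
        private final int quantity;

        public BasketItem(double unitPrice, int quantity) {
            this.unitPrice = unitPrice;
            this.quantity = quantity;
        }

        double subtotal() {
            return unitPrice * quantity;
        }
    }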

And finally, I started out with a test fixture for a “Shopping Cart”, but as the design emerged, that concept changed. It’s important to keep your tests – as living documentation for your code – in step with the emerging design, or things will get confusing.

Let’s fix that.
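For example:

    public class ShoppingBasketTest {

        @Test
        public void addsItemToBasket() {
            ShoppingBasket basket = new ShoppingBasket();
            basket.addItem(new BasketItem(2.50, 4));
            assertEquals(10.0, basket.total(), 0);
        }
    }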

Now, you might have come up with this design up front. But then again, you might not. The benefit of this approach to discovering the design is that we start only with what we need, and end with an equivalent version with the code smells removed and nothing we don’t need.

The price is that you spend more time refactoring. But as the model emerges, we tend to find that this overhead decreases, and the emphasis shifts to reusing the design instead of discovering it every time. In other words, it gets easier as we go.


Foundations of Correct Code

When I introduce developers to practices like unit testing and TDD, I often find it useful to include a little background on the foundations of testing for software correctness.

You may well have heard the terms before: contract, pre-condition, post-condition. But you might not be aware of their origins. It’s good to know, because this is what forms the basis for describing what a program is expected to do, and therefore the basis for testing that it does.

Let’s jump in our time machine and vworp ourselves back to 1969. Computing pioneer Charles Antony Richard Hoare (Tony, for short – and now Sir Tony) proposed a simple logical formula for expressing the correct operation of a program.

{ P } C { Q }

If a pre-condition, P, holds when a command C is invoked, then if C executes correctly, a post-condition Q will be satisfied on completion. P and Q are predicates (assertions).

For example, consider a function that calculates square roots: sqrt(x). The pre-condition for calling sqrt would be that x must be a non-negative number, otherwise the function won’t work correctly. Provided x is non-negative, if sqrt executes correctly, the return value multiplied by itself will be equal to x.

Expressed as a Hoare triple:

{ x >= 0 }  sqrt(x)  { sqrt(x) * sqrt(x) = x }

We see Hoare triples all the time in testing. Indeed, every test of logic must have a pre-condition, an action or command, and a post-condition. You may have seen them described using various conventions, such as:

“Given that x is a positive number,

When I square root x,

Then the result multiplied by itself is equal to x.”

Or:

Arrange: x = 4

Act: y = sqrt(x)

Assert: y * y = x

Even when we write it all as one line of code in a unit test, it still has the three components of a Hoare triple.

assertEquals(4, sqrt(4) * sqrt(4))

Unit tests can check that, in specific scenarios, programs do what they’re supposed to – i.e., they can test post-conditions. The set-up for each test determines whether the pre-condition is satisfied (e.g., x = 4 satisfies x >= 0).

But what unit tests can’t check is that programs aren’t invoked when their pre-conditions aren’t satisfied.

We have two choices here: defensive programming, and Design by Contract.

Defensive programming guards the body of a function or method, checking that the pre-condition holds and throwing an error if it doesn’t.

double sqrt(double x) {
    if( x < 0 ) throw new IllegalArgumentException();

    // main body of method is never reached when x < 0
    .....
}

In this implementation, throwing an exception when the pre-condition is broken is part of the correct operation of sqrt. Therefore, we could test that it does. But it complicates our logic. If client code can call sqrt with a negative number, then we will also need to write code to handle that exception when it arises. We’ve made handling negative inputs part of the logic of our program.

A simpler approach would be to ensure that sqrt is never called when x < 0 – that is, to make it the caller’s responsibility to guarantee the pre-condition before invoking the function.

Design by Contract works on the principle that for software to be correct, all of the interactions between its components must be correct. A Hoare triple describes a contract between a client and a supplier of a service (e.g., a method of a class), such that the supplier ensures that the post-condition is satisfied, and the client ensures that they never call a method or function unless the pre-condition is satisfied. So there’s no need for defensive programming and no need for handling those edge cases in our internal logic.

Of course, if there’s an error in our code, a function might still be invoked when a pre-condition doesn’t hold. So fans of DbC will put pre-condition checks in their code, typically for use during testing, to flag up any such errors.

double sqrt(double x) {
    assert( x >= 0 );

    // execution is halted if x < 0 and
    // assertion checking is switched on
    .....
}

Most build systems these days enable us to switch assertion checking on or off (e.g., on for testing, off for production).

Personally, I tend to use a combination of unit testing (sometimes quite exhaustive unit testing) and inline assertions to check pre-conditions during testing. If a pre-condition fails, that means there’s an error in my client code that I need to fix before it goes any further down the delivery pipeline. I’m not in the business of throwing “Oops. My Code Is Wrong” exceptions in production systems. Nor should you be.

Hoare triples can also be used as a basis for more meaningful and rigorous code inspections. If you think about it, it’s not just programs or functions or methods that have constraints on their correct operation: every piece of executable code – statements, blocks, expressions – has an implied contract.

int x = 100;
int y = 10;
int z = x / y;      // pre: y != 0
return sqrt(z);     // pre: z >= 0

As we read through the code, line by line, we can ask questions like “What should be true after this line has executed? When would this line of code fail?” This can point to test cases we didn’t think of, and guide us to trace back the origins of our input data to ensure it can never harm that code.

One final thought on defensive programming vs. Design by Contract, relating to the design of the system as a whole: when it’s us writing the client code, we have control over whether the inputs are correct. When someone else is providing the inputs – e.g., a web service client, or an end user – we don’t have that control. So it’s usually necessary to code defensively at system/service boundaries.

The best option where UX/UI design is concerned is usually to not give end users the ability to make invalid choices. For example, the UI of an ATM offers end users a choice of withdrawal amounts.

[Image: an ATM screen offering a fixed choice of withdrawal amounts]

Offering users choices that aren’t actually available and then displaying an error message when they select them not only annoys the user, but also complicates the application’s internal logic as it now has to handle more edge cases.

I’ve found the best approach to delivering correct software that’s simple in its internal logic is to design the UX carefully to simplify the inputs, do some defensive programming just below the UI to validate the inputs, and then apply Design by Contract for all the internal logic, with test-time assertion checking in case we made a boo-boo in some client code. Systems, components and services should have a hard outer shell – zero trust in any inputs – and a soft inner centre where inputs are trusted to be (almost) certainly valid.
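As a rough sketch of that hard-shell/soft-centre split (the method names here are invented for illustration):

    // hard outer shell: zero trust in the raw request
    int parseWithdrawalAmount(String rawAmount) {
        int amount;
        try {
            amount = Integer.parseInt(rawAmount.trim());
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("amount must be a whole number");
        }
        if (amount <= 0 || amount % 10 != 0) {
            throw new IllegalArgumentException("amount must be a positive multiple of 10");
        }
        return amount;
    }

    // soft centre: trusts its pre-condition, asserting it during testing
    void dispense(int amount) {
        assert amount > 0 && amount % 10 == 0;
        // ... internal logic, with no defensive checks needed ...
    }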

Whether it’s defensive programming or DbC, though, there’s one approach I would never condone, and that’s deploying systems that don’t handle every allowable input or user action meaningfully. If the software lets you do it, the software must have a good answer to it. Understanding the contracts between the user (be it a human being or another system) and our software is the key to making sure that it does.