How To (And Not To) Use Interfaces

If you’re writing code in a language that supports the concept of interfaces – or variants on the theme of pure abstract types with no implementation – then I can think of several good reasons for using them.

Polymorphism

There are often times when our software needs the ability to perform the same task in a variety of ways. Take, for example, calculating the area of a room. This code generates quotes for fitted carpets based on room area.

double quote(double pricePerSqMtr, Room room) {
    double area = room.area();
    return pricePerSqMtr * Math.ceil(area);
}

Rooms can have different shapes. Some are rectangular, so the area is the width multiplied by the length. Some are even circular, where the area is π r².

We could have a big switch statement that does a different calculation for each room shape, but every time we want to add new shapes to the software, we have to go back and modify it. That’s not very extensible. Ideally, we’d like to be able to add new room shapes without changing our lovely tested existing code.
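
To make the contrast concrete, here’s a rough sketch of what that switch-based version might look like (the ShapeType enum and the shape-specific getters are hypothetical, purely for illustration):

// A non-extensible approach: every new shape means coming back and editing this method
double area(Room room) {
    switch (room.getShapeType()) {
        case RECTANGULAR:
            return room.getWidth() * room.getLength();
        case CIRCULAR:
            return Math.PI * Math.pow(room.getRadius(), 2);
        default:
            throw new IllegalArgumentException("Unknown room shape");
    }
}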

If we define an interface for calculating the area, then we can easily have multiple implementations that our client code binds to dynamically.

public interface Room {
    double area();
}

public class RectangularRoom implements Room {
    private final double width;
    private final double length;

    public RectangularRoom(double width, double length) {
        this.width = width;
        this.length = length;
    }

    @Override
    public double area() {
        return width * length;
    }
}

public class CircularRoom implements Room {
    private final double radius;

    public CircularRoom(double radius) {
        this.radius = radius;
    }

    @Override
    public double area() {
        return Math.PI * Math.pow(radius, 2);
    }
}

Hiding Things

Consider a class that has multiple features for various purposes (e.g., for testing, or for display).

public class Movie {
    private final String title;
    private int availableCopies = 1;
    private List<Member> onLoanTo = new ArrayList<>();

    public Movie(String title) {
        this.title = title;
    }

    public void borrowCopy(Member member) {
        availableCopies -= 1;
        onLoanTo.add(member);
    }

    public void returnCopy(Member member) {
        availableCopies++;
        onLoanTo.remove(member);
    }

    public String getTitle() {
        return title;
    }

    public int getAvailableCopies() {
        return availableCopies;
    }

    public Boolean isOnLoanTo(Member member) {
        return onLoanTo.contains(member);
    }
}

Then consider a client that only needs a subset of those features.

public class LoansView {
    private Member member;
    private Movie selectedMovie;

    public LoansView(Member member, Movie selectedMovie) {
        this.member = member;
        this.selectedMovie = selectedMovie;
    }

    public void borrowMovie() {
        selectedMovie.borrowCopy(member);
    }

    public void returnMovie() {
        selectedMovie.returnCopy(member);
    }
}

We can use client-specific interfaces to hide features for clients who don’t need to (or shouldn’t) use them, simplifying the interface and protecting clients from changes to features they never use.

public interface Loanable {
    void borrowCopy(Member member);
    void returnCopy(Member member);
}

public class Movie implements Loanable {
    private final String title;
    private int availableCopies = 1;
    private List<Member> onLoanTo = new ArrayList<>();

    public Movie(String title) {
        this.title = title;
    }

    @Override
    public void borrowCopy(Member member) {
        availableCopies -= 1;
        onLoanTo.add(member);
    }

    @Override
    public void returnCopy(Member member) {
        availableCopies++;
        onLoanTo.remove(member);
    }

    public String getTitle() {
        return title;
    }

    public int getAvailableCopies() {
        return availableCopies;
    }

    public Boolean isOnLoanTo(Member member) {
        return onLoanTo.contains(member);
    }
}

public class LoansView {
    private Member member;
    private Loanable selectedMovie;

    public LoansView(Member member, Loanable selectedMovie) {
        this.member = member;
        this.selectedMovie = selectedMovie;
    }

    public void borrowMovie() {
        selectedMovie.borrowCopy(member);
    }

    public void returnMovie() {
        selectedMovie.returnCopy(member);
    }
}

In languages with poor support for encapsulation, like Visual Basic 6.0, we can use interfaces to hide what we don’t want client code to be exposed to instead.

Many languages support classes or modules implementing multiple interfaces, enabling us to present multiple client-specific views of them.
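
As a rough sketch (the Displayable interface below is hypothetical, not part of the library example above), the same Movie could present a different face to each kind of client:

import java.util.ArrayList;
import java.util.List;

// Hypothetical second client-specific interface, for catalogue/display code
public interface Displayable {
    String getTitle();
    int getAvailableCopies();
}

public class Movie implements Loanable, Displayable {
    private final String title;
    private int availableCopies = 1;
    private final List<Member> onLoanTo = new ArrayList<>();

    public Movie(String title) {
        this.title = title;
    }

    @Override
    public void borrowCopy(Member member) {
        availableCopies--;
        onLoanTo.add(member);
    }

    @Override
    public void returnCopy(Member member) {
        availableCopies++;
        onLoanTo.remove(member);
    }

    @Override
    public String getTitle() {
        return title;
    }

    @Override
    public int getAvailableCopies() {
        return availableCopies;
    }
}

// Loan screens depend only on Loanable; catalogue screens depend only on Displayable,
// so neither is exposed to - or coupled to - features it doesn't use.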

Faking It ‘Til You Make It

There are often times when we’re working on code that requires some other problem to be solved. For example, when processing the sale of a CD album, we might need to take the customer’s credit card payment.

Instead of getting caught in a situation where we have to solve every problem before we can deliver or test a feature, we can use interfaces as placeholders for those parts of the solution, explicitly defining what we expect a class or module to do without yet writing the code that makes it do it.

public interface Payments {
    Boolean process(double amount, CreditCard card);
}

public class BuyCdTest {
    private Payments payments;
    private CompactDisc cd;
    private CreditCard card;

    @Before
    public void setUp() {
        payments = mock(Payments.class);
        when(payments.process(anyDouble(), any())).thenReturn(true); // payment accepted
        cd = new CompactDisc(10, 9.99, payments); // inject the fake payments processor
        card = new CreditCard(
                "MR P SQUIRE",
                "1234234534564567",
                "10/24",
                567);
    }

    @Test
    public void saleIsSuccessful() {
        cd.buy(1, card);
        assertEquals(9, cd.getStock());
    }

    @Test
    public void cardIsChargedCorrectAmount() {
        cd.buy(2, card);
        verify(payments).process(19.98, card);
    }
}

Using interfaces as placeholders for parts of the design we’re eventually going to get to – including external dependencies – is a powerful technique that allows us to scale our approach. It also tends to lead to inherently more modular designs, with cleaner separation of concerns. CompactDisc need not concern itself with how payments are actually being handled.
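
A minimal sketch (not from the original post) of what CompactDisc might look like behind that test, assuming the Payments dependency is injected through the constructor as in the setup above:

public class CompactDisc {
    private int stock;
    private final double price;
    private final Payments payments;

    public CompactDisc(int stock, double price, Payments payments) {
        this.stock = stock;
        this.price = price;
        this.payments = payments;
    }

    public void buy(int quantity, CreditCard card) {
        // CompactDisc knows only the Payments interface - not how payments actually get processed
        if (payments.process(price * quantity, card)) {
            stock -= quantity;
        }
    }

    public int getStock() {
        return stock;
    }
}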

Describing Protocols

In statically-typed languages like Java and C#, if we say that a class must implement a certain interface, then there’s no way of getting around that.

But in dynamic languages like Python and JavaScript, things are different. Duck typing allows us to present client code with any implementation of a method or function that matches the signature of what the client invokes at runtime. This can be very freeing, and can cut out a lot of code clutter, as there’s no need to have lots of interfaces defined explicitly.

It can also be dangerous. With great power comes great responsibility (and hours of debugging!). Sometimes it’s useful to document the fact that, say, a parameter needs to look a certain way.

In those instances, experienced programmers might define a class with no implementation – since Python, for example, doesn’t have interfaces – that developers are instructed to extend and override when they create their implementations. Think of an interface in Python as a class that only defines methods, leaving it to subclasses to provide the implementations.

A class that processes sales of CD albums might need a way to handle payments through multiple different payment processors (e.g., Apple Pay, PayPal etc). The code that invokes payments defines a contract that any payment processor must fulfil, but we might find it helpful to document exactly what that interface looks like with a base class.

class Payments(object):
    def pay(self, credit_card, amount):
        raise Exception("This is an abstract class")

Type hinting in Python enables us to make it clear that any object passed in as the payments constructor parameter should extend this class and override its method.

class CompactDisc(object):
    def __init__(self, stock, price, payments: Payments):
        self.payments = payments
        self.price = price
        self.stock = stock

    def buy(self, quantity, credit_card):
        self.stock -= quantity
        self.payments.pay(credit_card, self.price)

You can do this in most dynamic languages, but the usefulness of explicitly defining abstractions in Python is acknowledged by the standard library’s widely-used abc (Abstract Base Classes) module, which enforces their rules.

from abc import ABC, abstractmethod

class Payments(ABC):
    @abstractmethod
    def pay(self, credit_card, amount):
        pass

So, from a design point of view, interfaces are really jolly useful. They can make our lives easier in a variety of ways, and are very much the key to achieving clean separation of concerns in modular systems, and to scaling our approach to software development.

But they can also have their downsides.

How Not To Use Interfaces

Like all useful things, interfaces can be overused and abused. For every code base I see where there are few if any interfaces, I see one where everything has an interface, regardless of motive.

When is separation of concerns not separation of concerns?

If an interface does not provide polymorphism (i.e., there’s only ever one implementation), does not hide features, is not a placeholder for something you’re Faking Until You’re Making, and describes no protocol that isn’t already explicitly defined by the class that implements it, then all it adds is useless indirection that clutters up your code base.

In real code bases of the order of tens or hundreds of thousands, or even millions, of lines of code, classes tend to cluster. As our code grows, we may split out multiple helper classes that are intimately tied together – if one changes, they all change – by the job they collaborate to do.

A better design acknowledges these clusters and packages them together behind a simple public interface. Think of each of these packages as being like an internal microservice. (They may literally be microservices, of course. But even if they’re all released in the same component, we can treat them as internal microservices.)

Hide clusters of classes that change together behind simple interfaces
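
Here’s a hedged sketch of that idea in Java (all the names are hypothetical): the interface and a small factory are the package’s only public types, and the classes that change together stay package-private behind them.

// The package's public face: an interface and a way to get an implementation
public interface Pricing {
    double quoteFor(double orderTotal);
}

public class PricingFactory {
    public static Pricing create() {
        return new StandardPricing(new DiscountRules(), new TaxRules());
    }
}

// Package-private collaborators: they change together, and clients never see them
class StandardPricing implements Pricing {
    private final DiscountRules discounts;
    private final TaxRules taxes;

    StandardPricing(DiscountRules discounts, TaxRules taxes) {
        this.discounts = discounts;
        this.taxes = taxes;
    }

    @Override
    public double quoteFor(double orderTotal) {
        return taxes.apply(discounts.apply(orderTotal));
    }
}

class DiscountRules {
    double apply(double amount) {
        return amount * 0.9;
    }
}

class TaxRules {
    double apply(double amount) {
        return amount * 1.2;
    }
}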

In practising outside-in Test-Driven Development, I will use interfaces to stub or mock solutions to other problems to separate those concerns from the problem I’m currently solving. So I naturally introduce interfaces within an architecture.

But I also refactor quite mercilessly, and many problems require more than one class or module to solve them. These will emerge through the refactoring process, and they tend to stay hidden behind their placeholder interfaces.

(Occasionally I’ll introduce an interface as part of a refactoring because it solves one of the problems described above and adds value to the design.)

So, interfaces – useful and powerful. But don’t overdo it.

New Refactoring: Remove Feature

If you’re using a modern, full-featured IDE like IntelliJ or Rider, you may have used an automated refactoring called Safe Delete. This is a jolly handy thing that I use often. If, say, I want to delete a Java class file, it will search for any references to that class first. If the class is still being used, then it will warn me.

Occasionally, I want to delete a whole feature, though. And for this, I am imagining a new refactoring which I’m calling – wait for it! – Remove Feature. Say what you see.

Let’s say I want to delete a public method in a class, like rentFor() in this Video class.

public void rentFor(Customer customer) throws CustomerUnderageException {
    if(isUnderAge(customer))
        throw new CustomerUnderageException();
    customer.addRental(this);
}


Like Safe Delete, first we would look for any references to rentFor() in the rest of the code. If there are none, it would not only delete rentFor(), but also any other features of this and other classes that rentFor() uses – but only if they’re not being used anywhere else. It’s a recursive Safe Delete.

isUnderAge() is only used by rentFor(), so that would be deleted. CustomerUnderageException is also only used by rentFor(), so that too would be deleted. And finally, the addRental() method of the Customer class is only used here, so that would also go.
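
For context, here’s a hypothetical sketch of those collaborators – their bodies aren’t shown in the original, so this is illustrative only:

// Only ever called from rentFor(), so Remove Feature would delete it too
private boolean isUnderAge(Customer customer) {
    return customer.getAge() < rating.getMinimumAge();
}

// Likewise only thrown by rentFor(), so it would also go
public class CustomerUnderageException extends Exception {
}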

This is a recursive refactoring, so we’d also look inside isUnderAge(), CustomerUnderageException and addRental() to see if there’s anything they’re using in our code that can be Safe Deleted.

In addRental(), for example:

public List<Video> getRentedVideos() {
    return rentals;
}

public void addRental(Video video) {
    rentals.add(video);
}


…we see that it uses a rentals collection field. This field is also used by getRentedVideos(), so it can’t be Safe Deleted. Our recursion would end here.

If the method or function we’re removing is the only public (or exported) feature in that class or module, then the module would be deleted also. (Since private/non-exported features can’t be used anywhere else).

But if customers can’t rent videos, what’s the purpose of this field? Examining my tests reveals that the getter only exists to test rentFor().

@Test(expected=CustomerUnderageException.class)
public void customerMustBeOverTwelveToRentAVideoRatedTwelve() throws Exception {
    Customer customer = new Customer(null, null, "2012-01-01");
    Video video = new Video(null, Rating.TWELVE);
    video.rentFor(customer);
}

@Test
public void videoRentedByCustomerOfLegalAgeIsAddedToCustomersRentedVideos() throws Exception {
    Customer customer = new Customer(null, null, "1964-01-01");
    Video video = new Video(null, Rating.TWELVE);
    video.rentFor(customer);
    assertTrue(customer.getRentedVideos().contains(video));
}


One could argue that the feature isn’t defined by the API, but by the code that uses the API – in this case, the test code reflects the set of public methods that make up this feature.

So I might choose to Remove Feature by selecting one or more tests, identifying what they use from the public API, and recursively Safe Deleting each method before deleting those tests.

Test-Driven Development in JavaScript

I’m in the process of redesigning elements of the Codemanship training workshops, and I’ve been spit-balling new demos in JavaScript on TDD. Rather than taking copious notes, I’ve recorded screencasts of these demos so I can refer back and see what I actually did in each one.

I thought it might be useful to post these screencasts online, so if you’re a JS developer – or have ambitions to be one (TDD is a sought-after skill) – here they are.

I’ve strived for each demonstration to make three key points to remember.

#1 – The 3 Steps of TDD

  • Start by writing a test that fails
  • Write the simplest code to pass the test
  • Refactor to make changing the code easier

 

#2 – Assert First & Useful Tests

  • Write the test assertion first and work backwards to the setup
  • See the test fail before you make it pass
  • Tests should only have one reason to fail

 

#3 – What Should We Test?

  • List your tests
  • Test meaningful behaviour and let those tests drive design details, not the other way around
  • When the implementation is obvious, just write it

 

#4 – Duplication & The Rule of Three

  • Removing duplication to reveal abstractions
  • The Rule Of Three
  • When to leave duplicate code in

 

#5 – Part I – Inside-Out TDD

  • Advantage: tests pinpoint failures better in the stack
  • Drawbacks
    • Risk the pieces don’t fit together
    • Tests are coupled closely to internal design

 

#5 – Part II – Outside-In TDD

  • Advantages
    • Pieces guaranteed to fit together
    • Test code more decoupled from internal design
  • Disadvantage: tests don’t pinpoint source of failure easily

 

#6 – Stubs, Mocks & Dummies

  • Writing unit tests with external dependencies using:
    • Stubs to return test data
    • Mocks to test that messages were sent
    • Dummies as placeholders so we can run the tests
  • Driving complex multi-layered designs from the outside in using stubs, mocks and dummies
    • Advantage: pieces guaranteed to fit and tests pinpoint sources of failure better
    • Risk: (not discussed in video) excessive use of test doubles un-encapsulates details of internal design, tightly coupling test code to implementation
  • More unit-testable code – achieved with dependency injection – tends to lead to more modular architectures

 

These videos are rough and ready first attempts, but I think you may find them useful as they are if you’re new to TDD.

I’ll be doing versions of these in Python soon.

Code Craft Bootstrapped

I’ll be blogging about this soon, but just wanted to share some initial thoughts on a phenomenon I’ve observed in very many development teams. A lot of teams confuse their tools with associated practices.

“We do TDD” often really means “We’re using JUnit”. “We refactor” often means “We use Resharper”. “We do CI” often means “We’re using Jenkins”. And so on.

As two current polls I’m running strongly suggest, a lot of teams who think they’re doing Continuous Integration appear to develop on long-lived branches (e.g., “feature branches”). But because they’re using the kind of tools we associate with CI, they believe that’s what they’re doing.

This seems to me to be symptomatic of our “solution first” culture in software development. Here’s a solution. Solution to what, exactly? We put the cart before the horse, adopting, say, Jenkins before we think about how often we merge our changes and how we can test those merges frequently to catch conflicts and configuration problems earlier.

Increasingly, I believe that developers should learn the practices first – without the tools. It wasn’t all that long ago when many of these tools didn’t exist, after all. And all the practices predate the tools we know today. You can write automated tests in a main() method, for example, and learn the fundamentals of TDD without a unit testing framework. (Indeed, as you refactor the test code, you may end up discovering a unit testing framework hiding inside the duplication.)
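
As a rough illustration (not from the original post), a hand-rolled test in a main() method might be no more complicated than this:

public class QuoteTests {

    // The code under test: a simplified, hypothetical version of a carpet quote calculation
    static double quote(double pricePerSqMtr, double areaSqMtrs) {
        return pricePerSqMtr * Math.ceil(areaSqMtrs);
    }

    public static void main(String[] args) {
        assertEquals("area is rounded up to whole square metres", 130.0, quote(10.0, 12.5));
        assertEquals("exact areas are not rounded up", 100.0, quote(10.0, 10.0));
        System.out.println("All tests passed");
    }

    // Our own tiny assertion - refactor enough of these and a unit testing framework starts to emerge
    private static void assertEquals(String testName, double expected, double actual) {
        if (expected != actual) {
            throw new AssertionError(testName + ": expected " + expected + " but was " + actual);
        }
    }
}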

Talking of refactoring, once upon a time we had no automated refactoring tools beyond cut, copy and paste and maybe Find/Replace. Maybe developers will grok refactoring better if they start learning to do refactorings the old-school way?

And for many years we automated our builds using shell scripts. Worked just fine. We just got into the habit of running the script on the build machine every time we checked in code.

These tools make it easier to apply these practices, and help us scale them up by taking out a lot of donkey work. But I can’t help wondering if starting without them might help us focus on the practices initially, as well as maybe helping us to develop a real appreciation for how much they can help – when used appropriately.

Automated Tests Aren’t Just For The Long-Term

Something I hear worryingly often still is teams – especially managers – saying “Oh, we don’t need to automate our tests because there’s only going to be one release.”

The received wisdom is that investing in fast-running automated tests is only worth it if the software’s going to have a long lifespan, with many subsequent releases. (This is a sentiment often expressed about code craft in general.)

The assumption is that fast-running unit tests have less – or zero – value in the short-to-medium term. But this is easily disproved.

Ask ourselves: what do we need fast-running tests for in the first place? To guard against regressions when we change the code. The inexperienced team or manager might argue that “we won’t be changing the code, because there’s only going to be one release”.

Analysis by GitLab’s data science team clearly shows that code churn – defined as code that changes within 2-3 weeks of being checked in – runs at about 25% for the average team. An average team of, say, four developers might check in 10,000 LOC on a 12-week release schedule. 2,500 lines of that code will change within 2-3 weeks. That’s a lot of changes.

And that’s normal. Expect it.

This is before we take into account the many changes a programmer will make to code before they check it in. If I only tested my code when it was time to check it in, I think I’d really struggle.

It’s a question of batch size. If I make one change and then re-test, and I’ve broken something, it’s much, much easier to pinpoint what’s gone wrong. And it’s way, way easier to get back to code that works. If I make 100 changes and re-test, I’m probably going to end up knee-deep in the debugger and up to my neck in print statements, and reverting to the last working copy means losing a tonne of work.

So I test pretty much continuously, and find even on relatively small projects that my hide gets saved multiple times by having these tests.

Change is much easier with fast-running tests, and change is a normal part of delivery.

And then there’s the whole question of whether it really will be the only release of the software. Experience has taught me that if software gets used, it gets changed. The only one-shot deals I’ve experienced in harumpty-twelve years of writing software have been the unsuccessful ones.

Imagine we’re asked to dig out an underground shelter for our customer. They tell us they need a chamber 8 ft x 8 ft x 6 ft – big enough for a bed – and we dutifully start digging. Usually, we would put up wooden supports as we dig, to stop the chamber from caving in. “No need”, says the customer. “It’s only one room, and we’ll only use it once.”

So, we don’t put in any supports. And that makes completing the chamber harder, because it keeps caving in due to the vibrations of our ongoing excavations. For every cubic metre of dirt we excavate, we end up digging out another half a cubic metre from the cave-ins. But we get there in the end, and the customer pays us our money and moves their bed in.

Next week, we get a phone call. “Where do we keep our food supplies?” Turns out, they’ll need another room. Would they like us to put supports up in the main chamber before we start digging again? “No time! We need our food store ASAP.” Okey dokey. We start digging again, and the existing chamber starts caving in again, but we dig out the loose earth and carry on as best we can. We manage to get the food store done, but with a lot more work this time, because both spaces keep caving in, and we keep having to dig them out again and again, recreating spaces we’d already excavated several times.

The customer moves in their food supplies, but their elderly mother now refuses to go into the shelter because she’s not sure it’s safe.

A week later: “Oh hi. Er. Where do we go to the bathroom?” Work begins on a third chamber. Would they like us to put supports in to the other two chambers first? “No. Need a bathroom ASAP!!!” they exclaim with a rather pained expression. So we dig and dig and dig, now so tired that we barely notice that most of the space we’re excavating has been excavated before, and most of the earth we’re removing has been coming from the ceilings of the existing chambers as well as from the new bathroom.

This is what it is to work without fast-running tests. Even on small, one-shot deals of just a few days, regressions can become a major expense, quickly outweighing the cost of writing tests in the first place.

When Should We Do Code Reviews?

One question that I get asked often is “When is the best time to do code reviews?” My pithy answer is: now. And now. And now. Yep, and now.

Typically, teams batch up a whole bunch of design decisions for a review – for example, in a pull request. If we’ve learned anything about writing good software, it’s that the bigger the batch, the more slips through the quality control net.

Releasing 50 features at a time, every 12 months, means we tend to bring less focus to testing each feature to see if it’s what the customer really needs. Releasing one feature at a time allows us to really focus in on that feature, see how it gets used, see how users respond to it.

Reviewing 50 code changes at a time gives similarly woolly results. A tonne of code smells tend to make it into production. Reviewing a handful of code changes – or, ideally, just one – at a time brings much more focus to each change.

Unsurprisingly, teams who review code continuously, working in rapid feedback cycles (e.g., doing TDD) tend to produce cleaner code – code that’s easier to understand, simpler, has less duplication and more loosely-coupled modules. (We’ve measured this – for example in this BBC TDD case study.)

One theory about why TDD tends to produce cleaner code is that the short feedback loops – “micro-cycles” – bring much more focus to every design decision. TDD deliberately has a step built in to each micro-cycle to stop, look at the code we just wrote or changed, and refactor if necessary. I strongly encourage developers not to waste this opportunity. The Green Light is our signal to do a mini code-review on the work we just did.

I’ve found, through working with many teams, that the most effective code reviews are rigorous and methodical. Check all the code that changed, and check for a list of potential code quality issues every single time. Don’t just look at the code to see if it “looks okay” to you.

In the Codemanship TDD course, I ask developers to run through a check list on every green light:

  • Is the code easy to understand? (Not sure? Ask someone else.)
  • Is there obvious duplication?
  • Is each method or function and class or module as simple as it could be?
  • Do any methods/functions or classes/modules have more than one responsibility?
  • Can you see any Feature Envy – where a method/function (or part of a method/function) of one class/module depends on multiple features of another class/module? (See the sketch after this list.)
  • Are a class’s/module’s dependencies easily swappable?
  • Is the class/module exposed to things it isn’t using (e.g., methods of a C++ interface it doesn’t call, or unused imports from other modules)?
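
The Feature Envy smell mentioned above, sketched with hypothetical classes – a method far more interested in another object’s data than in its own class’s:

// Hypothetical classes, purely to illustrate the smell
class Video {
    private final double dailyRate;

    Video(double dailyRate) {
        this.dailyRate = dailyRate;
    }

    double getDailyRate() {
        return dailyRate;
    }
}

class Rental {
    private final Video video;
    private final int daysRented;

    Rental(Video video, int daysRented) {
        this.video = video;
        this.daysRented = daysRented;
    }

    Video getVideo() {
        return video;
    }

    int getDaysRented() {
        return daysRented;
    }
}

// Feature Envy: chargeFor() uses several features of Rental and Video, and none of its own class
class BillingService {
    double chargeFor(Rental rental) {
        return rental.getDaysRented() * rental.getVideo().getDailyRate();
    }
}

The usual remedy is to move the calculation to where the data lives – e.g., a charge() method on Rental – so the envy (and the coupling) disappears.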

You may, according to your needs and your team’s coding standards, have a different checklist. What seems to make the difference is that your team has a checklist, and that you are in the habit of applying it whenever you have the luxury of working code.

This is where the relationship exists between code review and Continuous Delivery. If our code isn’t working, it isn’t shippable. If you go for hours at a time with failing automated tests (or no testing at all), code review is a luxury. Your top priority’s to get it working – that’s the most important quality of any software design. If it doesn’t work, and you can’t deploy it, then whether or not there are any, say, long parameter lists in it is rather academic.

Now, I appreciate that stopping on every passing test and going through a checklist for all the code you changed may sound like a real drag. But, once upon a time, writing a unit test, writing the test assertion first and working backwards, remembering to see the test fail, and all the habits of effective TDD felt like a bit of a chore. Until I’d done them 10,000 times. And then I stopped noticing that I was doing them.

The same goes for code review checklists. The more we apply them, the more it becomes “muscle memory”. After a year or two, you’ll develop an intuitive sense of code quality – problems will tend to leap out at you when you look at code, just as one bum note in an entire orchestra might leap out at a conductor with years of listening experience and ear training. You can train your eyes to notice code smells like long methods, large classes, divergent change, feature envy, primitive obsession, data clumps and all the other things that can make code harder to change.

This is another reason why I encourage very frequent code reviews. If you were training your musical ear, one practice session every couple of weeks is going to be far less effective than 20 smaller practice sessions a day. And if each practice session is much more focused – i.e., we don’t ear-train musicians with whole symphonies – then that, too, will speed up the learning curve.

The other very important reason I encourage continuous code review is that when we batch them up, we also tend to end up with batches of remedial actions to rectify any problems. If I add a branch to a method, review that, and decide that method is now too logically complex, fixing it there and then is a doddle.

If I make 50 boo-boos like that, not only will an after-the-horse-has-bolted code review probably miss many of those 50 issues, but the resulting TO-DO list is likely to require an amount of time and effort that will make it a task that has to be scheduled – very possibly by someone who doesn’t understand the need to do them. In the zero-sum game of software development scheduling, the most common result is that the work never gets done.

 

The Hidden Cost of “Dependency Drag”

 

The mysterious Sailing Stones of Death Valley are moved by some unseen natural force.

When I demonstrate mutation testing, I try to do it in the programming language my audience uses day-to-day. In most of the popular programming languages, there’s a usable, current mutation testing tool available. But for a long time, the .NET platform had none. That’s not to say there were never any decent mutation testing tools for .NET programs. There’s been several. But they had all fallen by the wayside.

Here’s the thing: some community-spirited developer kindly creates a mutation testing tool we can all use. That’s a sizable effort for no financial reward. But still they write it. It works. Folk are using it. And there’s no real need to add to it. Job done.

Then, one day, you try to use it with the new version of the unit testing tool you’ve been working with, and – mysteriously – it stops working. Like the Sailing Stones of Death Valley, the mutation testing tool is inexplicably 100 metres from where you left it, and to get it working again it has to be dragged back to its original position.

This is the hidden cost of a force I might call Dependency Drag. I see it all the time: developers forced to maintain software products that aren’t changing, but that are getting out of step with the environment in which they run, which is constantly changing under their feet.

GitHub – and older OSS repositories – is littered with the sun-bleached skeletons of code bases that got so out of step they simply stopped working, and maintainers didn’t want to waste any more time keeping them operational. Too much effort just to stand still.

Most of us don’t see Dependency Drag, because it’s usually hidden within an overall maintenance effort on a changing product. And the effect is usually slow enough that it looks like the stones aren’t actually moving.

But try and use some code that was written 5 years ago, 10 years ago, 20 years ago, if it hasn’t been maintained, and you’ll see it. The stones are a long way from where you left them.

This effect can include hardware, of course. I hang on to my old 3D TV so that I can play my 3D Blu-rays. One day, that TV will break down. Maybe I’ll be able to find another one on eBay. But 10 years from now? 20 years from now? My non-biodegradable discs may last centuries if kept safe. But it’s unlikely there’ll be anything to play them on 300 years from now.

This is why it will become increasingly necessary to preserve the execution environments of programs as well as the programs themselves. It’s no use preserving the 1960s Fortran compiler if you don’t have the 1960s computer and operating system and punch card reader it needs to work.

And as execution environments get exponentially more complex, the cost of Dependency Drag will multiply.

 

Architects – Hang Up Your Capes & Go Back To The Code

Software architecture is often framed as a positive career move for a developer. Organisations tend to promote their strongest technical people into these strategic and supervisory roles. The pay is better, so the lure is obvious.

I progressed into lead architecture roles in the early 00s, having “earned my spurs” as a developer and then tech lead in the 1990s. But I came to realise that, from my ivory tower, I was having less and less influence over the code that got written, and therefore less and less influence over the actual design and architecture of the software.

I could draw as many boxes and arrows as I liked, give as many PowerPoint presentations as I liked, write as many architecture and standards documents as I liked: none of it made much difference. It was like trying to direct traffic using my mind.

So I hung up my shiny architect cape and pointy architect wizard hat and went back to working directly with developers on real code as part of the team.

Instead of decreeing “Thou shalt…”, I could – as part of a programming pair (and a programming mob, which was quite the thing with me) – instead suggest “Maybe we could…” and then take the keyboard and demonstrate what I meant. On the actual code. That actually got checked in and ended up in the actual product, instead of just in a Word document nobody ever reads.

The breakthrough for me was realising that “big design decisions” vs “small design decisions” was an artificial distinction. Most architecture decisions are about dependencies: what uses what? And “big” software dependencies – microservice A uses microservice B, for example – can be traced to “small” design decisions – a class in microservice A uses a class in microservice B – which can be traced to even “smaller” design decisions – a line of code in the class in microservice A needs a data value from the class in microservice B.

The “big” architecture decisions start in the code. And the code is full of tiny design decisions that have the potential to become “big”. And drawing an arrow pointing from a box labeled “Microservice A” to a box labeled “Microservice B” doesn’t solve the problems.

Try as we might to dictate the components, their roles and their dependencies in a system up-front, the reality often deviates wildly from what the architect planned. This is how “layered architectures” – the work of the devil – permeated software architecture for so long, despite it being a complete falsehood that they “separate concerns”. (Spoiler Alert: they don’t.)

Don’t get me wrong: I’m all for visualisation and for a bit of up-front planning when it comes to software design. But sooner rather than later, we have to connect with the reality as the code emerges and evolves. And the most valuable service a software architect can offer to a dev team is to be right there with them fighting the complexity and the dependencies – and helping them to make sense of it all – on the front line.

You can offer more value in the long term by mentoring developers and helping them to reason about design and ultimately make better design decisions – “big” or “small” – than attempting to direct the whole effort from 30,000 ft.

Plus, it seems utter folly to me to take your most experienced developers and promote them away from the thing you believe they do well. (And paying them more to have less impact just adds insult to injury.)

 

Classes Start With Functions, Not Data

A common mistake developers make when designing classes is to start with a data model in mind and then try to attach functions to that data (e.g., a Zoo has a Keeper, who has a first name and a last name, etc). This data-centred view of classes tends to lead us towards anaemic models, where classes are nothing more than data containers and the logic that uses the data is distributed throughout the system. This lack of encapsulation creates huge amounts of low-level coupling.
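
To illustrate, a hypothetical sketch of the anaemic style being described, using that Zoo/Keeper example:

// An anaemic "domain model": all data, no behaviour
public class Keeper {
    private String firstName;
    private String lastName;

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }
}

// ...while the logic that uses this data ends up scattered across other classes
// (report generators, rota services, and so on), all coupled to Keeper's getters.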

Try instead to start with the function you need, and see what data it requires. This can be illustrated with a bit of TDD. In this example, we want to buy a CD. I start by writing the buy function, without any class to hang that on.

class BuyCdTest {
    @Test
    void buyCdPaymentAccepted() {
        int stock = 10;
        double price = 9.99;
        String creditCardNumber = "1234";
        Payments payments = new PaymentsStub(PaymentResponse.ACCEPTED);
        stock = buy(stock, price, creditCardNumber, payments);
        assertEquals(9, stock);
    }

    private int buy(int stock, double price, String creditCardNumber, Payments payments) {
        if(payments.process(price, creditCardNumber) == PaymentResponse.ACCEPTED)
            stock--;
        return stock;
    }
}


The parameters for buy() tell us what data this function needs. If we want to encapsulate some of that data, so that clients don’t need to know about all of them, we can introduce a parameter object to group related params.

@Test
void buyCdPaymentAccepted() {
    int stock = 10;
    double price = 9.99;
    String creditCardNumber = "1234";
    Payments payments = new PaymentsStub(PaymentResponse.ACCEPTED);
    stock = buy(new CompactDisc(stock, price, payments), creditCardNumber);
    assertEquals(9, stock);
}

private int buy(CompactDisc cd, String creditCardNumber) {
    int stock = cd.getStock();
    if(cd.getPayments().process(cd.getPrice(), creditCardNumber) == PaymentResponse.ACCEPTED)
        stock--;
    return stock;
}


This has greatly simplified the signature of the buy() function, and we can easily move buy() to the cd parameter.

@Test
void buyCdPaymentAccepted() {
    int stock = 10;
    double price = 9.99;
    String creditCardNumber = "1234";
    Payments payments = new PaymentsStub(PaymentResponse.ACCEPTED);
    CompactDisc cd = new CompactDisc(stock, price, payments);
    stock = cd.buy(creditCardNumber);
    assertEquals(9, stock);
}


Inside the new CompactDisc class…

public class CompactDisc {
    private int stock;
    private double price;
    private Payments payments;

    public CompactDisc(int stock, double price, Payments payments) {
        this.stock = stock;
        this.price = price;
        this.payments = payments;
    }

    public int getStock() {
        return stock;
    }

    public double getPrice() {
        return price;
    }

    public Payments getPayments() {
        return payments;
    }

    int buy(String creditCardNumber) {
        int stock = getStock();
        if(getPayments().process(getPrice(), creditCardNumber) == PaymentResponse.ACCEPTED)
            stock--;
        return stock;
    }
}


We have a bunch of getters we don’t need any more. Let’s inline them.

public class CompactDisc {
    private int stock;
    private final double price;
    private final Payments payments;

    public CompactDisc(int stock, double price, Payments payments) {
        this.stock = stock;
        this.price = price;
        this.payments = payments;
    }

    int buy(String creditCardNumber) {
        if(payments.process(price, creditCardNumber) == PaymentResponse.ACCEPTED)
            stock--;
        return stock;
    }
}


Now, you may argue that you would have come up with this data model for a CD anyway. Maybe. But the point is that the data model is specifically there to support buying a CD.

When we start with the data, there’s a greater risk of ending up with the wrong data (e.g., many devs who try this exercise start by asking “What can we know about a CD?” and give it fields the functions don’t use), or with the right data in the wrong place – which is where we end up with Feature Envy and message chains and other coupling code smells galore.

Refactoring to Functions

While I’ve been porting the Codemanship Software Design Principles code examples to JavaScript – in both OO and FP styles – I’ve been thinking a lot about the relationship between those two programming styles.

Possibly the best way to illustrate that relationship is to refactor an object oriented code example into a logically equivalent functional one. This might also serve to illustrate how we might move from one style to the other in a disciplined way, without breaking the code.

This is the simple class I’m going to start with.

function BankAccount() {
    this.balance = 0;
    this.credit = function (amount) {
        this.balance += amount;
    }
    this.debit = function (amount) {
        if (amount > this.balance) {
            throw "Insufficient funds error";
        }
        this.balance -= amount;
    }
}

module.exports = BankAccount;


And these are its tests.

const BankAccount = require("../../src/liskov_substitution/bank_account");

describe('bank account', () => {
    it('credit account', () => {
        const account = new BankAccount();
        account.credit(50);
        expect(account.balance).toBe(50);
    })
    it('debit account with sufficient funds', () => {
        const account = new BankAccount();
        account.credit(50);
        account.debit(50);
        expect(account.balance).toBe(0);
    })
    it('debit account with insufficient funds', () => {
        const account = new BankAccount();
        account.credit(50);
        expect(() => account.debit(51)).toThrow('Insufficient funds error');
    })
})


The first refactoring step might be to make each method of the class properly stateless (i.e., they don’t reference any fields).

To achieve this, we’ll have to add a parameter to each method that accepts an instance of BankAccount. Then we replace this with a reference to that parameter. This will work if the BankAccount we pass in is the exact same object this refers to.

function BankAccount() {
    this.balance = 0;
    this.credit = function (account, amount) {
        account.balance += amount;
    }
    this.debit = function (account, amount) {
        if (amount > account.balance) {
            throw "Insufficient funds error";
        }
        account.balance -= amount;
    }
}

module.exports = BankAccount;


So, in our tests, we pass in the BankAccount object we were invoking credit() and debit() on.

const BankAccount = require("../../src/liskov_substitution/bank_account");

describe('bank account', () => {
    it('credit account', () => {
        const account = new BankAccount();
        account.credit(account, 50);
        expect(account.balance).toBe(50);
    })
    it('debit account with sufficient funds', () => {
        const account = new BankAccount();
        account.credit(account, 50);
        account.debit(account, 50);
        expect(account.balance).toBe(0);
    })
    it('debit account with insufficient funds', () => {
        const account = new BankAccount();
        account.credit(account, 50);
        expect(() => account.debit(account, 51)).toThrow('Insufficient funds error');
    })
})


Now we can pull these instance methods out of BankAccount and turn them into global functions.

function BankAccount() {
    this.balance = 0;
}

const credit = function (account, amount) {
    account.balance += amount;
}

const debit = function (account, amount) {
    if (amount > account.balance) {
        throw "Insufficient funds error";
    }
    account.balance -= amount;
}

module.exports = {BankAccount, credit, debit};


The tests can now invoke them directly.

const {BankAccount, credit, debit} = require("../../src/liskov_substitution/bank_account");

describe('bank account', () => {
    it('credit account', () => {
        const account = new BankAccount();
        credit(account, 50);
        expect(account.balance).toBe(50);
    })
    it('debit account with sufficient funds', () => {
        const account = new BankAccount();
        credit(account, 50);
        debit(account, 50);
        expect(account.balance).toBe(0);
    })
    it('debit account with insufficient funds', () => {
        const account = new BankAccount();
        credit(account, 50);
        expect(() => debit(account, 51)).toThrow('Insufficient funds error');
    })
})


One last piece of business: the BankAccount data object. We can replace it in two steps. First, let’s use a plain object literal that matches the shape credit() and debit() expect. To keep this the smallest change possible (so we don’t have to re-write those functions yet), we’ll carry on mutating those objects for now.

const {BankAccount, credit, debit} = require("../../src/liskov_substitution/bank_account");

describe('bank account', () => {
    it('credit account', () => {
        let account = {balance: 0};
        credit(account, 50);
        expect(account.balance).toBe(50);
    })
    it('debit account with sufficient funds', () => {
        let account = {balance: 0};
        credit(account, 50);
        debit(account, 50);
        expect(account.balance).toBe(0);
    })
    it('debit account with insufficient funds', () => {
        let account = {balance: 0};
        credit(account, 50);
        expect(() => debit(account, 51)).toThrow('Insufficient funds error');
    })
})


Then we can re-write credit() and debit() to return updated copies instead of mutating the account passed in.

const credit = function (account, amount) {
    return {...account, balance: account.balance + amount};
}

const debit = function (account, amount) {
    if (amount > account.balance) {
        throw "Insufficient funds error";
    }
    return {...account, balance: account.balance - amount};
}

module.exports = {credit, debit};


This will require us to re-write the tests to use the returned copies.

const {credit, debit} = require("../../src/liskov_substitution/bank_account");

describe('bank account', () => {
    it('credit account', () => {
        const credited = credit({balance: 0}, 50);
        expect(credited.balance).toBe(50);
    })
    it('debit account with sufficient funds', () => {
        const debited = debit(credit({balance: 0}, 50), 50);
        expect(debited.balance).toBe(0);
    })
    it('debit account with insufficient funds', () => {
        const credited = credit({balance: 0}, 50);
        expect(() => debit(credited, 51)).toThrow('Insufficient funds error');
    })
})


So, there you have it: from OO to FP (well, functional-ish, maybe) for a simple class with no collaborators. In the next post, I’ll refactor a code example that involves several related classes, so we can examine the relationship between dependency injection and higher-order functions.