Frameworks or Patterns? – Going Old School

Pairing with JavaScript developers this week, one of the things that struck me is how heavily many of us now rely on heavyweight frameworks for quite simple things we used to do the old-fashioned way.

The example we were looking at is how Model-View-Controller is implemented in the web browser. MVC is a pretty simple design pattern, which typically builds on the even simpler Observer pattern. The goal is to have our user interface update automatically whenever the state of our model changes.

We use observers to make it so that views can be called back when a model object’s state changes, without binding the model directly to the user interface.

So we split our logic into three distinct responsibilities: the model represents the internal data and logic of the application, independent of the user interface. Views represent that internal data and logic to the end user. And controllers respond to user actions and events and forward requests to the model. (In event-driven programming, we call these “event handlers”.)

In 2019, it’s customary for JavaScript developers to use a framework like React or Angular to wire MVC implementations together. But is it really necessary a lot of the time? Can we do MVC without them?

Well, yes we can. Quite easily.

Let’s look at a very simple example: a clock. Here’s our Clock model:
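The listings from the original post aren’t reproduced here, so here’s a minimal sketch of the shape being described – an Observable base class and a Clock that extends it. Names and details are assumed, not the original source:

```javascript
class Observable {
  constructor() {
    this.observers = [];
  }

  addObserver(observer) {
    this.observers.push(observer);
  }

  notify() {
    // each observer exposes an update() method that receives the observable
    this.observers.forEach(observer => observer.update(this));
  }
}

class Clock extends Observable {
  constructor() {
    super();
    this.seconds = 0;
  }

  start() {
    setInterval(() => this.tick(), 1000);
  }

  tick() {
    this.seconds++;
    this.notify(); // state has changed – let the observers know
  }

  reset() {
    this.seconds = 0;
    this.notify();
  }
}
```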

Aside from the core logic, you’ll notice a little extra code to add observers and notify them after the clock’s state has changed. The Observable class takes care of how this is handled.

The flexibility here is that we can add as many observers as we like, and Clock doesn’t need to know who is being notified. So we can have multiple views being updated on every state change.

In this example, we have two. One displays the clock’s state in hours, minutes and seconds.
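A sketch of that first view (again, a reconstruction with assumed names):

```javascript
// renders the clock's state as hours, minutes and seconds
class ClockView {
  constructor(element) {
    this.element = element; // the placeholder element, passed in from outside
  }

  update(clock) {
    const hours = Math.floor(clock.seconds / 3600);
    const minutes = Math.floor((clock.seconds % 3600) / 60);
    const seconds = clock.seconds % 60;
    this.element.innerHTML = `${hours}:${minutes}:${seconds}`;
  }
}
```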

And another displays the total number of seconds elapsed.
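And a sketch of the second:

```javascript
// renders the total number of seconds elapsed
class SecondsElapsedView {
  constructor(element) {
    this.element = element;
  }

  update(clock) {
    this.element.innerHTML = `Total seconds elapsed: ${clock.seconds}`;
  }
}
```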

The views inject the inner HTML into a named placeholder in the web page.

And when it’s displayed, it looks like this:

[Screenshot: the running clock page – the time display, the elapsed-seconds count and a Reset button]

Every time the clock ticks, both views are updated. This can allow us to build very reactive user experiences.

When the user clicks the Reset button, the clock is set back to 0:0:0 and starts ticking again. This is handled by a ClockController that is wired as a listener (another name for an observer) to the Reset button’s click event.
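A sketch of that controller:

```javascript
// the controller forwards the user's request to the model
class ClockController {
  constructor(clock) {
    this.clock = clock;
  }

  reset() {
    this.clock.reset(); // no core logic here – just forwarding
  }
}
```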

Note that all the controller does is forward the user’s request to the Clock object. It’s important controllers (and services) don’t include any core logic. Their job is purely to forward the request to wherever that core logic is implemented.

It’s all wired together from the outside using dependency injection.
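A sketch of that wiring module (the element ids are assumed):

```javascript
// composition root: everything is wired together from the outside
const clock = new Clock();

clock.addObserver(new ClockView(document.getElementById('clock')));
clock.addObserver(new SecondsElapsedView(document.getElementById('elapsed')));

const controller = new ClockController(clock);
document.getElementById('reset')
        .addEventListener('click', () => controller.reset());

clock.start();
```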

Notice that all the implementation dependencies are in this module. This is called Inversion of Control – the runtime order of implementation method calls in our MVC flow is determined by dependency injection, from above. This offers a lot of flexibility, as do the implicit abstracted invocations of the Observer pattern.

(Notice, too, how the document elements into which the view’s content will be injected are passed in as references to each view, allowing us to have multiple instances on the same web page. By this mechanism, we could also pass views into views, enabling us to create composite views.)

For additional flexibility, some developers use a Publish-Subscribe pattern instead of Observer. This can enable multiple threads and even multiple networked machines to receive notifications. Being asynchronous, it can scale MVC for high-volume distributed architectures. Observer has the limitation of being synchronous – typically (though it doesn’t have to be) – and of requiring observers to be in the same process as the observed. Having said all that, in 95% of applications, Observer is perfectly adequate (and considerably simpler).

And model observers don’t have to be views. I’ve used the same pattern in object persistence. If you think about it, a row in a database table is just another view of a model object. With a bit of adaptation, a Unit of Work – an object that captures such events – could be notified of a state change in a model object.

To sum up, then: MVC is pretty easy to do in vanilla JavaScript. Arguably it’s no easier when using the big front-end frameworks, and they can add a lot of extra code to your web page. You don’t need to buy the whole Mercedes if you just want the cigarette lighter.

(Okay, so if you look at the source code for my example, I have added Bootstrap, for prettifying my web page. Mea culpa!)


The Mentoring Paradox

I recently ran a pair of Twitter polls asking experienced developers if mentoring was an official part of their duties, and asking inexperienced developers if they received regular dedicated mentoring.

It’s a tale in two parts: 3 out of 4 experienced devs said mentoring was part of their job, while 8 out of 9 inexperienced devs said they don’t get regular dedicated mentoring.

At first glance, this might appear to be a paradox. But I think it can be explained with two extra pieces of information:

  • Our profession is a pyramid, with the most experienced developers greatly outnumbered by less experienced developers
  • Opinions differ widely on what we mean by “mentoring”

Some developers equate mentoring with practices like pair programming. If an experienced developer pairs with a less experienced developer, they might class that as “mentoring”. What we’ve found at Codemanship, though, is that pairing != mentoring necessarily. It’s unstructured, lacks clear goals for what the mentee needs or wants to learn, and is often done in a naive way by people who may well be technically strong but who lack mentoring skills and experience. And we also need to remember that pair programming’s still pretty rare. Most employers don’t allow it.

A lot of new developers report that pair programming with experienced developers can be a frustrating and demoralising experience. Being a great violinist doesn’t necessarily make you a good violin teacher. In a lot of cases, whatever the mentor thinks they’re doing, the mentee doesn’t see it as mentoring.

The other problem with this kind of ad hoc it’s-kind-of-mentoring-but-not-in-a-structured-way mentoring is that it promotes mostly reactive learning. Mentees learn stuff that just happens to come up. To give a developer a solid and well-rounded foundation in dev fundamentals, there needs to be a game plan, and thought needs to be put into creating the necessary learning opportunities within a reasonable timeframe. This necessitates a balance with proactive learning. Even some of the most advanced employers I speak to admit they have no such plan, and little time and resources dedicated to creating the necessary learning opportunities.

To give you an example, let’s imagine we agree it’s time a new hire learns how to refactor Feature Envy. In a reactive environment, we wait until Feature Envy crops up. In coaching developers, I’ve learned that it can be a long wait. And when it does crop up, we may be too busy or distracted dealing with the 1,001 other things we need to think about to take advantage of the opportunity. You need to be super-super on the ball. It’s far easier to encourage the team to “bottle” code smells* before they eliminate them, so a learning opportunity like this comes ready-made and easy to locate.

*Check in the code with a commit message that identifies the location of the code smell

We found that devs learn refactoring skills much faster when the opportunities to practice come ready-made like this. (There’s also the side effect when a team does a lot of refactoring that certain code smells get eliminated completely from the code base. Like diseases we wiped out, there is value in keeping some samples in the freezer to experiment on.)

Bottling code smells takes extra thought and effort. Practicing refactoring on code smells that have already been eliminated adds no value to the current code base. Proactive learning comes at a cost that most employers are unwilling to pay. So, instead, they pay in an increased cost of changing code, with the knock-on effect that has on their business. (And I’ve seen a high cost of changing code kill some pretty big businesses.)

Effective long-term mentoring of junior developers costs time and money. There’s no way around that – no magic fix, no silver bullet. You’ll need to give junior developers time out for proactive learning. You’ll need to sacrifice the “productive” time of senior developers to provide good mentoring – which includes time to plan and prepare to mentor. (I spend a good deal of my time learning stuff so I can stay one step ahead of devs I’m mentoring – learning the shiny new languages, tools and techniques – filling the gaps in my knowledge before I try to fill the gaps in theirs.)

Nowhere is this more evident than in the UK government’s Software Developer Apprenticeship programme. While there are some shining beacons who do a superb job with apprentices, I hear from many employers who grossly underestimated the investment they’d have to make – especially in dedicated structured mentoring. There are too many places where apprentices are left to figure it out for themselves.

I would argue that possibly the most productive way experienced developers could use their time is in helping less experienced developers build their skills. At my level of experience, I choose to be almost completely dedicated to it. Devs with more than two decades of professional experience are outnumbered 13 to 1, and I’m not a 13x developer.

The way I see it, if companies are happy to promote their most experienced developers into non-technical management roles – losing most of the benefit of that experience – they might as well promote them into hands-on mentoring roles instead. Either way, less experienced developers will be writing the code. At least this way, they’ll be writing better code sooner.

I also genuinely believe that mentoring has many benefits for even the most experienced developers. I’ve had to learn a tonne of stuff specifically so I can explain and demonstrate it to someone else. And to explain it, you’ve really got to wrap your head around it. There’s all sorts of things I kind-of-sort-of understood, but not really, that I’m now 100% clear on purely because I had to get my story straight before I told it to other people. It’s taken me many years to build my Explaining Fu – and while I’m no Richard Feynman, that clarity has definitely benefitted me and my mentees. It also finds its way into my code quite often. I’m way more S.O.L.I.D. aware than I used to be, for example. That’s because I’ve done example after example after example. It’s like ear training for musicians.

These experiences have built my confidence, as well. I’ve given the fundamentals so much thought, and explained and demonstrated them so many times in front of so many very different audiences, that I feel my horizons have widened considerably. Need to learn Kotlin? No probs. Need to prepare a workshop? No worries. Need to present to the board? No sweat. I’m much more fearless after two decades of teaching and mentoring. Sure, it scared the crap out of me in 1999. In 2019, give me a spear and show me where the mammoths are at.

So, not only are there people out there who are better developers because of my mentoring, I’m also a better developer for it, too.

This is why I believe structured mentoring needs to be part of the developer journey. First, as a mentee, and then eventually as a dedicated mentor. Our profession needs to be structured so this is normal: the rule and not the exception.


If you’d like to talk about developer training and mentoring in your team or organisation, drop me a line.


Should We Write Unit Tests For Every Unit?

It’s a trick question, of course. It depends entirely on what you mean by a “unit”.

Some developers have been taught that they should write a unit test for every public method of every class (or every exported function in every module, if you’re that way inclined.) This can result in very large numbers of low-level tests, including tests for getters and setters.

But in TDD – if we do it in the classic style – many classes and methods will emerge through refactoring. Do we need to add tests for them, too?

This simple example might illustrate how I think about the question.
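The original example is C#/NUnit; since the listing isn’t shown here, the sketches that follow render the same shape in Mocha-style JavaScript, with names and details assumed. First, the test with the implementation written inline:

```javascript
const assert = require('assert');

it('transfers an amount between two accounts', () => {
  let payerBalance = 100;
  let payeeBalance = 50;

  // debit the payer
  payerBalance -= 25;

  // credit the payee
  payeeBalance += 25;

  assert.strictEqual(payerBalance, 75);
  assert.strictEqual(payeeBalance, 75);
});
```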

I’ve written all the implementation code inside the test itself, so – as of now – what is the “unit” this unit test is testing? I would argue the “unit” being tested here is the behaviour of transferring funds between two accounts.

If you’ve been led to believe that a behavioural test means a customer test or an acceptance test, I’m afraid you’ve been misled. A behaviour is simply something the software does in response to some action, event or input. An “action” could be a user clicking a button, or it could be an object calling a method. Like “unit”, “behaviour” is open to interpretation.

So, we have our transfer behaviour, and it’s described in this transfer NUnit test.

The test’s passing, so it’s time to think about refactoring. Firstly, our test method does more than one thing. The comments are a bit of a giveaway. Let’s break it down into a composed test.
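A sketch of where that lands:

```javascript
const assert = require('assert');

// JavaScript has no ref parameters, so these sketches return the new
// balance instead of mutating an argument, as the C# original does
function debit(balance, amount) {
  return balance - amount;
}

function credit(balance, amount) {
  return balance + amount;
}

// a composed method: the whole transfer at one level of abstraction
function transfer(payerBalance, payeeBalance, amount) {
  return [debit(payerBalance, amount), credit(payeeBalance, amount)];
}

it('transfers an amount between two accounts', () => {
  const [payerBalance, payeeBalance] = transfer(100, 50, 25);

  assert.strictEqual(payerBalance, 75);
  assert.strictEqual(payeeBalance, 75);
});
```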

I took the opportunity to also turn the new Transfer() method into a composed method, so it’s all at one level of abstraction. So I now have a unit test for a Transfer() method, but notice that I don’t have individual unit tests for Credit() and Debit().

Let’s keep going. These new methods don’t belong in this test fixture, so let’s extract them into their own class.
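Sketched, with a hypothetical name for the new class (the original’s isn’t shown):

```javascript
class TransferService {
  transfer(payerBalance, payeeBalance, amount) {
    return [this.debit(payerBalance, amount), this.credit(payeeBalance, amount)];
  }

  debit(balance, amount) {
    return balance - amount;
  }

  credit(balance, amount) {
    return balance + amount;
  }
}
```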

And now let’s tackle some of this Primitive Obsession. Whenever I see variables being passed by reference to a method, I can’t help thinking the parameter represents some kind of object identity. In Credit() and Debit(), the balance parameter represents a bank account. So I introduce a BankAccount parameter object that takes the place of balance and move these methods to it. And then, after a bit of clean-up, I end up with:
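A sketch of that final shape:

```javascript
const assert = require('assert');

// the parameter object: balance now lives on a BankAccount
class BankAccount {
  constructor(balance = 0) {
    this.balance = balance;
  }

  credit(amount) {
    this.balance += amount;
  }

  debit(amount) {
    this.balance -= amount;
  }
}

class TransferService { // hypothetical name, as before
  transfer(payer, payee, amount) {
    payer.debit(amount);
    payee.credit(amount);
  }
}

it('transfers an amount between two accounts', () => {
  const payer = new BankAccount(100);
  const payee = new BankAccount(50);

  new TransferService().transfer(payer, payee, 25);

  assert.strictEqual(payer.balance, 75);
  assert.strictEqual(payee.balance, 75);
});
```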

Now I have two classes and three public methods, but still only one unit test. Would I write dedicated unit tests for Credit() and Debit(), perhaps in a BankAccountTests fixture? Probably not.

If I were to break one of those methods, the original transfer test would fail. So, as regression tests, they’d add no real value – just more tests to maintain for no added assurance. They are being tested, just not individually.

Likewise, problems can arise when people believe that a true unit test doesn’t include any class or module dependencies. A mocking purist might argue that our transfer test should only test that the Debit() and Credit() methods are invoked on the payer and payee accounts.
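A sketch of what that over-mocked test might look like:

```javascript
const assert = require('assert');

it('transfer debits the payer and credits the payee', () => {
  const calls = [];
  // hand-rolled mocks standing in for the BankAccount collaborators
  const payer = { debit: amount => calls.push(['debit', amount]) };
  const payee = { credit: amount => calls.push(['credit', amount]) };

  new TransferService().transfer(payer, payee, 25); // class from the sketch above

  // asserts on internal interactions, not outcomes – the test is now
  // coupled to the implementation's design
  assert.deepStrictEqual(calls, [['debit', 25], ['credit', 25]]);
});
```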

This over-mocking can have severe consequences going forward. Firstly, there’s the very real risk that developers miss the fact that some of their implementations never got wired together because they had no tests that checked for that. (Some folk call those “integration tests”, which may be why they don’t think unit tests should include even internal dependencies that are defined in the same component – again, it really depends what you mean by a “unit”.) And secondly, mocking unencapsulates the internal design of classes just as perniciously as exposing data does. Over-mocking is essentially the interaction equivalent of Feature Envy.

Just as I might write unit tests for small clusters of collaborating classes – the internal details of which could be mostly hidden from the tests – judicious use of mocking can exercise just some key interactions between small clusters of collaborating classes hidden behind simple interfaces. If you find yourself mocking every collaboration, you’ve gone too low-level in the same way as when you find yourself writing tests for every public method of every class. In statically-typed languages like C# and Java, over-mocking can also introduce way too many interfaces that add little value in terms of either testability or design flexibility.

This is how it tends to go when I write unit tests. My goal is to test every interesting behaviour, typically driven directly from tests I’ve agreed with the customer. I let the fine details – like getters and setters – fall out from the process of passing that behavioural test, and then refactoring the code afterwards.

Having said that, if we approach our unit tests from too high a level, we can end up in a situation where passing a single test could involve dozens of classes and methods. In this case, there can be serious drawbacks. If a test that exercises a complex network of “units” fails, it can be harder to pinpoint where things have gone wrong. So I’m not a fan of relying totally on customer tests or API tests, either – some internal pinpointing is useful.

The truth lies somewhere in between: my unit tests often exercise small clusters of interacting “units”, but not so many that I end up in the debugger or writing print statements to debug failures, and not so few that I have a test for every public method on every class, because that can be a major maintenance headache, creating reams of extra test code and producing a test suite that sticks to the solution’s internal design like melted cheese.


How I Do Requirements

The final question of our Twitter code craft quiz seems to have divided the audience.

The way I explain it is that the primary goal of code craft is to allow us to rapidly iterate our solutions, and to sustain the pace of iterating for longer. We achieve that by delivering a succession of production-ready prototypes – tested and ready for use – that are open to change based on the feedback users give us.

(Production-ready because we learned the hard way with Rapid Application Development that when you hit on a solution that works, you tend not to be given the time and resources to make it production-ready afterwards. And also, we discovered that production-ready tends not to cost much more or take much more time than “quick-and-dirty”. So we may as well.)

Even in the Agile community – who you’d think might know better – too much effort often goes into trying to get it right first time. The secret sauce in Agile is that this isn’t necessary. Agile is an iterative search algorithm. Our initial input – our first guess at what’s needed – doesn’t need to be perfect, or even particularly good. It might take us an extra couple of feedback loops if release #1 is way off. What matters more are:

  • The frequency of iterations
  • The number of iterations

Code craft – done well – is the enabler of rapid and sustainable iteration.

And, most importantly, iterating requires a clear and testable goal. Which, admittedly, most dev teams lack.

To illustrate how I handle software requirements, imagine this hypothetical example that came up in a TDD workshop recently:

First, I explore with my customer a problem that technology might be able to help solve. We do not discuss solutions at all. It is forbidden at this stage. We work to formulate a simple problem statement.

Walking around my city, there’s a risk of falling victim to crime. How can I reduce that risk while retaining the health and environmental benefits of walking?

The next step in this process is to firm up a goal, by designing a test for success.

A sufficiently large sample of people experience significantly less crime per mile walked than the average for this city.

This is really vital: how will we know our solution worked? How can we steer our iterative ship without a destination? The failure of so very many development efforts seems, in my experience, to stem from the lack of clear, testable goals. It’s what leads us to the “feature factory” syndrome, where teams end up working through a plan – e.g. a backlog – instead of working towards a goal.

I put a lot of work into defining the goal. At this point, the team aren’t envisioning technology solutions. We’re collecting data and refining measures for success. Perhaps we poll people in the city to get an estimate of average miles walked per year. Perhaps we cross-reference that with crimes statistics – freely available online – for the city, focusing on crimes that happened outside on the streets like muggings and assaults. We build a picture of the current reality.

Then we paint a picture of the desired future reality: what does the world look like with our solution in it? Again, no thought yet is given to what that solution might look like. We’re simply describing a solution-shaped hole into which it must fit. What impact do we want it to have on the world?

If you like, this is our overarching Given…When…Then…

Given that the average rate of street crime in our city is currently 1.2 incidents per 1,000 person-miles walked,

When people use our solution,

Then they should experience an average rate of street crime of less than 0.6 incidents per 1,000 person-miles walked

Our goal is to more than halve the risk of walkers who use our solution becoming victims of crime on the streets. Once we have a clear idea of where we’re aiming, only then do we start to imagine potential solutions.

I’m of the opinion that the best software development organisations are informed gamblers. So, at this early stage I think it’s a good idea to have more than one idea for a solution. Don’t put all our eggs in one solution’s basket! So I might split the team up into pairs – depending on how big the team is – and ask each pair to envisage a simple solution to our problem. Each pair works closely with the customer while doing this, to get input and feedback on their basic idea.

Imagine I’m in Pair A: given a clear goal, how do we decide what features our solution will need? I always go for the headline feature first. Think of this as “the button the user would press to make their goal happen” – figuratively speaking. Pair A imagines a button that, given a start point and a destination, will show the user the walking route with the least reported street crime.

We write a user story for that:

As a walker, I want to see the route for my planned journey that has the least reported street crime, so I can get there safely.

The headline feature is important. It’s the thread we pull on that reveals the rest of the design. We need a street map we can use to do our search in. We need to know what the start point and destination are. We need crime statistics by street.

All of these necessary features are consequences of the headline feature. We don’t need a street map because the user wants a street map. We don’t need crime statistics because the user wants crime statistics. The user wants to see the safest walking route. As I tend to put it: nobody uses software because they want to log in. Logging in is a consequence of the real reason for using the software.

This splits features into:

  • Headline
  • Supporting

In Pair A, we flesh out half a dozen user stories driven by the headline feature. We work with our customer to storyboard key scenarios for these features, and refine the ideas just enough to give them a sense of whether we’re on the right track – that is, could this solve the problem?

We then come back together with the other pairs and compare our ideas, allowing the customer to decide the way forward. Some solution ideas will fall by the wayside at this point. Some will get merged. Or we might find that none of the ideas is in the ballpark, and go around again.

Once we’ve settled on a potential solution – described as a headline feature and a handful of supporting features – we reform as a team, and we’re in familiar territory now. We assign features to pairs. Each pair works with the customer to drive out the details – e.g., as customer tests and wireframes etc. They deliver in a disciplined way, and as soon as there’s working software the customer can actually try, they give it a whirl. Some teams call this a “Minimum Viable Product”. I call it Prototype #1 – the first of many.

Through user testing, we realise that we have no way of knowing if people got to their destination safely. So the next iteration adds a feature where users “check in” at their destination – Prototype #2.

We increase the size of the user testing group from 100 to 1,000 people, and learn that – while they on average felt safer from crime – some of the recommended walking routes required them to cross some very dangerous roads. We add data on road traffic accidents involving pedestrians for each street – Prototype #3.

With a larger testing group (10,000 people), we’re now building enough data to see what the figure is on incidents per 1000 person-miles, and it’s not as low as we’d hoped. From observing a selected group of suitably incentivised users, we realise that time of day makes quite a difference to some routes. We add that data from the crime statistics, and adapt the search to take time into account – Prototype #4.

And rinse and repeat…

The point is that each release is tested against our primary goal, and each subsequent release tries to move us closer to it by the simplest means possible.

This is the essence of the evolutionary design process described in Tom Gilb’s book Competitive Engineering. When we combine it with technical practices that enable rapid and sustained iteration – with each release being production-ready in case it needs to be (let’s call it “productizing”) – then that, in my experience, is the ultimate form of “requirements engineering”.

I don’t consider features or change requests beyond the next prototype. There’s no backlog. There is a goal. There is a product. And each iteration closes the gap between them.

The team is organised around achieving the goal. Whoever is needed is on the team, and the team works one goal at a time, one iteration at a time, to do what is necessary to achieve that iteration’s goal. Development, UX design, testing, documentation, operations – whatever is required to make the next drop production-ready – are all included, and they all revolve around the end goal.


Codemanship Twitter Code Craft Quiz – Answers

Yesterday evening – for fun and larks – I posted 20 quiz questions about code craft as Twitter polls. It’s been fun watching the percentages for each answer emerge, but now it’s time to reveal my answers so you can see how yours compare.

The correct answer is Always Shippable. The goal of CD is to empower our customer to release our software whenever they choose, without having to go through a long testing and release process. Many of the principles and practices of code craft – e.g., unit testing and TDD – contribute to that goal.

Evidently, a lot of folk get Continuous Delivery confused with Continuous Deployment, and that’s understandable because the name kind of implies something similar. Perhaps we should have called it “Continuously Shippable”?

The correct answer is Comment Block. There’s no such refactoring. If you want to remove code, do a Safe Delete (delete code, but only if no other code references it). If you want to keep old code, use version control.

The correct answer is Refactoring. They were separate disciplines in the original description of Extreme Programming practices, but folk quickly realised that refactoring needed to be an explicit step in the TFD process.

The correct answer is Tell, Don’t Ask. The goal of Tell, Don’t Ask is to better encapsulate – hide the data of – modules so that they know less about each other.

The correct answer is Feature Envy. Feature Envy is when a method of one class references the features of another class – typically the data – more than its own. It’s “Ask, Don’t Tell”.

The best answer is Examples. Yes, it is true that BDD uses executable specifications, but what makes those specifications executable? The thing that makes them executable is the thing that makes them precise and unambiguous – Examples! BDD, TDD and ATDD are all examples of Specification By Example.

The correct answer is the Facade pattern.

The correct answer is Property-Based Testing. This is sometimes more descriptively called “Generative Testing”, because we write code to generate potentially very large sets of test inputs automatically (e.g., random numbers, combinations of inputs, etc). It has a similar aim to Exploratory Testing, but isn’t manual like ET, and therefore can scale to mind-boggling numbers of test cases with minimal extra code, and run far, far faster.
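As a minimal illustration of the idea – a hand-rolled generator in Mocha-style JavaScript, standing in for a library like fast-check or QuickCheck, with a deliberately trivial property:

```javascript
const assert = require('assert');

function add(a, b) {
  return a + b;
}

// the property must hold for *any* generated inputs, not one hand-picked
// example – 10,000 cases for three extra lines of code
it('addition is commutative for any pair of numbers', () => {
  for (let i = 0; i < 10000; i++) {
    const a = Math.random() * 1000;
    const b = Math.random() * 1000;
    assert.strictEqual(add(a, b), add(b, a));
  }
});
```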

The correct answer is Automated Testing. If it takes you 5 hours to manually re-test your software, you can only check in safely every 5 hours at the earliest. Which doesn’t sound very “continuous” to me. Good to see that message getting through.

The best answer is Stubs and Mocks. The challenge in testing multithreaded logic is that thread scheduling – e.g., by the OS or a VM – is usually beyond our control, so we can’t guarantee how operations in separate threads will be interleaved. This can lead to unpredictable test results that are difficult to reproduce – “Heisenbugs” and “flickering builds”. One simple way to reduce this effect is to test as much “multithreaded” logic as possible in a single thread. Test Doubles can be used to pretend to be the other end of a multithreaded conversation. For example, we can use mock objects to test that callbacks were invoked as expected, or we can use stubs that provide synchronous implementations of asynchronous methods. The goal is to get as much of the logic as possible into places where it can be tested synchronously. This is compatible with a goal of good multithreaded code design – which is to have as little of it as possible.
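As a minimal illustration in Mocha-style JavaScript – the service and names here are invented:

```javascript
const assert = require('assert');

// in production, service.fetch() would be asynchronous
function fetchAndNotify(service, onResult) {
  service.fetch(result => onResult(result.toUpperCase()));
}

it('notifies the listener with the processed result', () => {
  // stub: a synchronous implementation of an asynchronous method
  const stubService = { fetch: callback => callback('done') };

  // record what the callback received, so we can assert on it
  let received = null;
  fetchAndNotify(stubService, result => { received = result; });

  assert.strictEqual(received, 'DONE');
});
```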

The correct answer is Tell, Don’t Ask. I was very surprised by how few people got this. Tell, Don’t Ask is about designing more cohesive classes in order to reduce class coupling. The underlying goal of Common Closure – things that change together belong together – and Common Reuse – things that are reused together belong together – is more cohesive packages, in order to reduce package coupling. They share the goal of improving encapsulation. IMO, package design principles have been historically explained poorly, and this may go some way to explaining why a lot of developers struggle to grok them. In practice, they’re the exact same principles at the class/module and package level. The way I try to explain them attempts to be consistent at every level of code organisation.

The correct answer is 3. This is about the Rule of Three. We wait to see three examples of code duplication before we refactor it into a single generalisation or abstraction. The rule of thumb describes a simple way to balance the risks of refactoring too early, before we’ve seen enough examples to form a good abstraction (the number one cause of “leaky abstractions”), and refactoring too late, when we have more duplication to deal with.

The best answer is Identify Change Points. In his book, Working Effectively With Legacy Code, Michael Feathers describes a process for safely making changes – i.e., with the benefit of fast-running automated tests (“unit tests”) – to legacy software. There are two reasons why I wouldn’t start by writing a system test:

  1. How do I know what system tests I’ll need without identifying which features lie on the critical path of the code I’m going to change? Do I write system tests for all of it?
  2. How long do I want to live with those system tests? Is it worth writing them just to have them for as long as it takes to introduce unit tests? My goal is to get fast-running tests for the logic in place ASAP.

If I’m refactoring code that has few or no automated tests, a Golden Master – a test that uses an example output (e.g., a web page) to compare against any potentially broken output – can be a relatively quick way of establishing basic safety. But, again, how do I know what output(s) to use without identifying which features would need to be retested for the change I’m planning to make? And a Golden Master test would effectively be another slow-running system test, which I probably wouldn’t want to live with for long enough to justify writing one in the first place.

After we’ve identified what parts of the code need to change, our goal should be to get fast-running tests around those parts of the code. While we break any dependencies that are getting in our way, I will usually re-test the software manually. Gasp! The point being, I’m not manually testing it for very long before I can add unit tests. It might take me a morning. Is it worth automating system tests that you’re not going to want to rely on going forward, just for a morning?

Having said all that, if I was the only developer on my team writing unit tests on a legacy system, I’d introduce a Golden Master into the build pipeline to protect against obvious regressions. But not on a per change basis. I’d do that before even thinking about changes.

The best answer is Check In. I would have hoped that wouldn’t need explaining! A big part of the discipline of Continuous Integration is to try to ensure that the code you have in VCS – the code that is, in theory, always shippable – is never broken. When it is broken – for whatever reason – any changes you push on to it risk being lost if the code has to be reverted. Plus, there’s no way of knowing if your build succeeded. Don’t push on to broken code.

The correct answer is C++. If I change a C++ interface, even clients that aren’t using the methods I changed have to be recompiled, re-tested and re-deployed. C++ clients bind to the whole interface at compile time. In dynamic languages, this generally isn’t the case. Ruby, Python and JavaScript clients bind at runtime, and only to the methods they use. Indeed, the object doesn’t even have to have a specific type, just as long as it implements compatible methods. Much of S.O.L.I.D. is language-dependent in this way.
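A quick JavaScript illustration of that runtime binding:

```javascript
// the client binds at runtime, and only to the methods it actually calls
function render(shape) {
  return shape.draw(); // any object with a draw() method will do
}

console.log(render({ draw: () => 'circle' }));                // "circle"
console.log(render({ draw: () => 'square', area: () => 4 })); // extra methods don't matter
```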

The correct answer is See The Test Fail. More specifically, see the test assertion fail. So you know, going forward, it’s a good test that you can rely on to fail when the result is wrong. Test your tests.

The best answer is When The Tests Pass. Refactoring was added as an explicit step in the TDD micro-cycle. But refactor what, exactly? I encourage developers to do a little review on code they’ve added or changed whenever they get to a green light:

  • Is it easy to understand?
  • Is there duplication I should remove?
  • Is it as simple as I can make it?
  • Does each part do one thing?
  • Is there Feature Envy between modules?
  • Are modules exposed to things they’re not using?
  • Are module dependencies easily swappable?

I find from experience and from client studies that code reviews on a less frequent basis tend to be too little, too late. TDD and refactoring and CI/CD are practices specifically aimed at breaking work down into the smallest chunks, so we can get very frequent feedback, and bring more focus to each design decision.

And when we’re programming in pairs, the thinking is that code review is continuous. It’s one of the main reasons we do it.

When we chunk code reviews into pull requests – or even larger batches of design decisions – we tend to miss a whole bunch of things. This is borne out by the resulting quality of the code.

I also see how, for many teams, pull requests become a significant bottleneck, which is usually the consequence of batching feedback. The whole point of Extreme Programming is to turn all the good dials up to 11. PR code reviews set the dial at about 5-6.

If you still feel your merge process needs that last line of defence, consider investing in automating code quality testing in your build pipeline instead.

It’s a hot take for PR fans, I know! You may now start throwing the furniture around.

The best answer is Refactoring. This has been a painful lesson for many, many developers. When we open up discussions about refactoring with people who manage our time, the risk is that we’re inviting them to say “no” to it. And, nine times out of ten, they will. Which is why 9 out of 10 code bases end up too rigid and brittle to accommodate change, and the pace of innovation slows to a very expensive crawl.

Refactoring is an absolutely essential part of code craft. We should be doing it continuously. It’s part of how we write code. End of discussion.

The correct answer is Liskov Substitution. The LSP states that we should be able to substitute an instance of any class with an instance of any of its subclasses. (In modern parlance, we might use the word “type” instead of “class”.) This is all about contracts. If I define an interface for, say, a device driver to be used with my operating system, there are certain rules all device drivers need to obey to function correctly in my OS. I could write a suite of contract tests – tests that are written against that interface, with the actual implementation under test deferred/injected – so that anyone implementing a device driver can assure themselves it will obey the device driver contract. Indeed, this is exactly what Microsoft did for their device driver interfaces.
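A minimal sketch of a contract test in Mocha-style JavaScript – FakeDriver and the contract details are invented for illustration:

```javascript
const assert = require('assert');

// the contract suite is written against the interface; the implementation
// under test is injected via a factory
function deviceDriverContract(createDriver) {
  it('is ready after initialisation', () => {
    const driver = createDriver();
    driver.initialise();
    assert.strictEqual(driver.isReady(), true);
  });
}

// a trivial implementation, purely for illustration
class FakeDriver {
  initialise() { this.ready = true; }
  isReady() { return this.ready === true; }
}

describe('FakeDriver honours the device driver contract', () => {
  deviceDriverContract(() => new FakeDriver());
});
```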

The best answer is True. Now, this is going to take some explaining…

Firstly, if we include Specification By Example in code craft – which I do – then a good chunk of it is about pinning down what the customer wants. It may not necessarily turn out to be what the customer needs, though. Which is what the rest of code craft is about.

The traditional view of requirements engineering is that we try to specify up-front what the customer needs and then deliver that. We learned that this doesn’t really work almost as soon as people started programming computers.

Our first pass at a solution will almost always be – to some extent – wrong. So we take another pass and get it less wrong. And another. And another. Until the solution is good enough for our customer’s real needs.

In building the right thing, feedback cycles matter more than up-front guesses. The faster we can iterate our design, the sooner we can arrive at a workable solution. Fast customer feedback cycles are enabled by code craft. The whole point of code craft is to help us learn our way to the right solution.

Acting on customer feedback means we’ll be changing the code. If the code is difficult to change, then we can’t sustain the pace of learning. The wrong design gets baked in to code that’s too rigid and brittle to evolve into the right design.

And software can have an operational lifespan that far outlasts the original needs of the customer. Legacy code is a very real and very damaging limiting factor on tens of thousands of businesses. Marketing would love to be able to offer their customers the spiffy new widget the competition just rolled out, but if it’s going to cost millions and take years, it’s not an option.

So, in a very real and direct sense, code craft is all about building the right thing by building it right.


Architects – Hang Up Your Capes & Go Back To The Code

Software architecture is often framed as a positive career move for a developer. Organisations tend to promote their strongest technical people into these strategic and supervisory roles. The pay is better, so the lure is obvious.

I progressed into lead architecture roles in the early 00s, having “earned my spurs” as a developer and then tech lead in the 1990s. But I came to realise that, from my ivory tower, I was having less and less influence over the code that got written, and therefore less and less influence over the actual design and architecture of the software.

I could draw as many boxes and arrows as I liked, give as many PowerPoint presentations as I liked, write as many architecture and standards documents as I liked: none of it made much difference. It was like trying to direct traffic using my mind.

So I hung up my shiny architect cape and pointy architect wizard hat and went back to working directly with developers on real code as part of the team.

Instead of decreeing “Thou shalt…”, I could – as part of a programming pair (and a programming mob, which was quite the thing with me) – instead suggest “Maybe we could…” and then take the keyboard and demonstrate what I meant. On the actual code. That actually got checked in and ended up in the actual product, instead of just in a Word document nobody ever reads.

The breakthrough for me was realising that “big design decisions” vs “small design decisions” was an artificial distinction. Most architecture decisions are about dependencies: what uses what? And “big” software dependencies – microservice A uses microservice B, for example – can be traced to “small” design decisions – a class in microservice A uses a class in microservice B – which can be traced to even “smaller” design decisions – a line of code in the class in microservice A needs a data value from the class in microservice B.

The “big” architecture decisions start in the code. And the code is full of tiny design decisions that have the potential to become “big”. And drawing an arrow pointing from a box labeled “Microservice A” to a box labeled “Microservice B” doesn’t solve the problems.

Try as we might to dictate the components, their roles and their dependencies in a system up-front, the reality often deviates wildly from what the architect planned. This is how “layered architectures” – the work of the devil – permeated software architecture for so long, despite it being a complete falsehood that they “separate concerns”. (Spoiler Alert: they don’t.)

Don’t get me wrong: I’m all for visualisation and for a bit of up-front planning when it comes to software design. But sooner rather than later, we have to connect with the reality as the code emerges and evolves. And the most valuable service a software architect can offer to a dev team is to be right there with them fighting the complexity and the dependencies – and helping them to make sense of it all – on the front line.

You can offer more value in the long term by mentoring developers and helping them to reason about design and ultimately make better design decisions – “big” or “small” – than attempting to direct the whole effort from 30,000 ft.

Plus, it seems utter folly to me to take your most experienced developers and promote them away from the thing you believe they do well. (And paying them more to have less impact just adds insult to injury.)


When Are We ‘Done’? – What Iterating Really Means

This week saw a momentous scientific breakthrough, made possible by software. The Event Horizon Telescope – an international project that turned the Earth into a giant telescope – took the first real image of a super-massive black hole in the M87 galaxy, some 55 million light years away.

This story serves to remind me – whenever I need reminding – that the software we write isn’t an end in itself. We set out to achieve goals and to solve problems: even when that goal is to learn a particular technology or try a new technique. (Yes, the point of FizzBuzz isn’t FizzBuzz itself. Somebody already solved that problem!)

The EHT image is the culmination of years of work by hundreds of scientists around the world. The image data itself was captured two years ago, on a super-clear night, coordinated by atomic clocks. Ever since then, the effort has been to interpret and “stitch together” the massive amount of image data to create the photo that broke the Internet this week.

Here’s Caltech computer scientist Katie Bouman, who designed the algorithm that pulled this incredible jigsaw together, explaining the process of photographing M87 last year.

From the news stories I’ve read about this, it sounds like much time was devoted to testing the results to ensure the resulting image had fidelity – and wasn’t just some software “fluke” – until the team had the confidence to release the image to the world.

They weren’t “done” after the code was written (you can read the code on Github). They weren’t “done” after the first result was achieved. They were “done” when they were confident they had achieved their goal.

This is a temporary, transient “done”, of course. EHT are done for now. But the work goes on. There are other black holes and celestial objects of interest. They built a camera: ain’t gonna take just the one picture with it, I suspect. And the code base has a dozen active pull requests, so somebody’s still working on it. The technology and the science behind it will be refined and improved, and the next picture will be better. But that’s the next goal.

I encourage teams to organise around achieving goals and solving problems together, working one goal at a time. (If there are two main goals, that’s two teams, as far as I’m concerned.) The team is defined by the goal. And the design process iterates towards that goal.

Iterating is goal-seeking – we’re supposed to be converging on something. When it’s not, then we’re not iterating; we’re just going around in circles. (I call it “orbiting” when teams deliver every week, over and over, but the problem never seems to get solved. The team is orbiting the problem.)

This is one level of team enlightenment above a product focus. Focusing on products tends to produce… well, products. The goal of EHT was not to create a software imaging product. That happened as a side effect of achieving the main goal: to photograph the event horizon of a black hole.

Another really important lesson here is EHT’s definition of “team”: hundreds of people – physicists, astronomers, engineers, computer scientists, software and hardware folk – all part of the same multi-disciplinary team working towards the same goal. I’d be surprised if the software team at MIT referred to the astrophysicists as their “customer”. The “customer” is us – the world, the public, society, civilisation, and the taxpayers who fund science.

That got me to thinking, too: are our “customers” really our customers? Or are they part of the same team as us, defined by a shared end goal or a problem they’re tasked with solving?

Photographing a black hole takes physics, astronomy, optical engineering, mechanical and electrical and electronic engineering, software, computer networks, and a tonne of other stuff.

Delivering – say – print-on-demand birthday cards takes graphic design, copywriting, printing, shipping, and a tonne of other stuff. I genuinely believe we’re not “done” until the right card gets to the right person, and everyone involved in making that happen is part of the team.