Modularity & Readability

One thing I hear very often from less experienced developers is how difficult it can be for them to understand modular code.

The principles of modular design – that modules should:

  • Do one job
  • Hide their inner workings
  • Have swappable dependencies

– tend to lead to code that’s composed of small pieces that bind to abstractions for the other modules they use to do their jobs.

The key to making this work in practice rests on two factors: firstly, that developers get good at naming the boxes clearly enough so that people don’t have to look inside them to understand what they do, and secondly that developers accustom themselves to reading highly-composed code. More bluntly, developers have to learn how to read and write modular code.
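
As a rough illustration of what those three principles produce – a minimal sketch, with hypothetical names like OrderProcessor and PaymentGateway standing in for whatever your modules actually do – the reader only has to trust the names on the boxes, not look inside them:

```python
from typing import Protocol


class PaymentGateway(Protocol):
    """The abstraction the order module binds to - any gateway with this shape will do."""

    def charge(self, amount: float, card_token: str) -> bool: ...


class OrderProcessor:
    """Does one job (process an order) and hides how the charge actually happens."""

    def __init__(self, gateway: PaymentGateway) -> None:
        self._gateway = gateway  # a swappable dependency, injected rather than hard-wired

    def process(self, amount: float, card_token: str) -> str:
        return "confirmed" if self._gateway.charge(amount, card_token) else "declined"


class FakeGateway:
    """A stand-in implementation - the abstraction is what makes it swappable."""

    def charge(self, amount: float, card_token: str) -> bool:
        return True


print(OrderProcessor(FakeGateway()).process(25.0, "tok_123"))  # confirmed
```

Reading code like this means following named collaborations rather than a single top-to-bottom listing – which is exactly the skill less experienced readers haven’t acquired yet.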

Schools, universities and code clubs generally don’t get as far as modularity when they teach programming. Well, they may teach the mechanics of declaring and using modules, but they don’t present students with much opportunity to write larger, composed systems. The self-contained nature of programming problems in education typically presents students with algorithms whose implementations are all laid out on the screen in front of them.

Software at scale, though, doesn’t fit on a screen. It’s a jigsaw puzzle, and much more attention needs to be paid to how the pieces fit together. Software design grows from being about algorithms and program flow to being about relationships between parts, at multiple levels of code organisation.

In this sense, young programmers leave school like freshly-minted playwrights who’ve only ever written short monologues. They know nothing of character, or motivation, or dialogue, or plotting, or pacing, or the three-act structure, nor have they ever concerned themselves with staging and the practical considerations that just don’t come up when a single actor reads a single page standing centre-stage under a single spotlight.

Then they get their first job as a script assistant on a production of Noises Off and are all like “What are all these ‘stage directions’? Why are there so many scenes? I can’t follow this plot. Can we have just one character say all the lines?”

Here’s the thing: reading and writing modular code is an acquired skill. It doesn’t just happen overnight. As more and more young developers flood into the industry, I see more and more teams full of people who are easily bamboozled by modular, composed code.

Readability is about the audience. Programmers have a “reading age” defined by their ability to understand code, and code needs to be pitched to the audience’s reading age. This means that we may have to sacrifice some modularity for teams of less experienced developers. They’re not ready for it yet.

Having said all of that, of course, we get better at reading by being challenged. If we only ever read books that contained words we already know, we’d learn no new words.

I learned to read OO code by reading OO code written by more experienced programmers than me. They simultaneously pitched the code to be accessible to my level of understanding, and also a very little out of my current reach so that I had to stretch to follow the logic.

I know I’m a broken record on this topic, but that’s where mentoring comes in. Yes, there are many, many developers who lack the ability to read and write modular code. But every one of those teams could have someone who has lots of experience writing modular code who can challenge and guide them and bring them along over time – until one day it’s their turn to pay it forward.

The woeful lack of structured mentoring in our profession means that many developers go their entire careers never learning this skill. A lack of understanding combined with a lot of experience can be a dangerous mixture. “It is not that I don’t understand this play. This play is badly written. Good plays have a single character who stands in the centre of the stage under a single spotlight and reads out a 100-page monologue. Always.”

For those developers, a late-career Damascene conversion is unlikely to happen. I wish them luck.

For new developers, though, there’s a balance to be struck between working at a level they’re comfortable with today, and moving them forward to reading and writing more modular code in the future. Every program we write is both a solution to a problem today, and a learning experience to help us write a better solution tomorrow.

10 Things Every *Good* Software Development Method Does

I’ve been a programmer for the best part of four decades – three of them professionally – and, for the last 25 years, a keen student of this thing we call “software development”.

I’ve studied and applied a range of software development methods, principles, and techniques over those years. While, on the surface, Fusion may look different to the Unified Process, which may look different to Extreme Programming, which may look different to DSDM, which may look different to Cleanroom Software Engineering, when you look under the hood of these approaches, they actually have some fundamental things in common.

Here are the 10 things every software developer should know:

  1. Design starts with end users and their goals – be it with use cases, or with user stories, or with the “features” of Feature-Driven Development, the best development approaches drive their solution designs by first asking: Who will be using this software, and what will they be using it to do?
  2. Designs grow one usage scenario at a time – scenarios or examples drive the best solution designs, and those designs are fleshed out one scenario at a time to satisfy the user’s goal in “happy paths” (or to recover gracefully from not satisfying the user’s goal, which we call “edge cases”). Developers who try to consider multiple scenarios simultaneously tend to bite off more than they can chew.
  3. Solutions are delivered one scenario at a time – teams who deliver working software in end-to-end slices of functionality (e.g., the UI, business logic and database required to do a thing the user requires) tend to fare better than teams who deliver horizontal slices across their architecture (the UI components for all scenarios, and then the business logic, and then the database code). This is for two key reasons. Firstly, they can get user feedback from working features sooner, which speeds up the learning process. Secondly, if they only manage to deliver 75% of the software before a release date, they will have delivered 75% of end-to-end working features, instead of 75% of the layers of all features. We call this incremental delivery.
  4. Solutions evolve based on user feedback from increments – the other key ingredient in the way we deliver working software is how we learn from the feedback we get from end users in each increment of the software. With the finest requirements and design processes – and the best will in the world – we can’t expect to get it right first time. Maybe our solution doesn’t give them what they wanted. Maybe what they wanted turns out to be not what they really needed. The only way to find out for sure is to deliver what they asked for and let them take it for a spin. And then the feedback starts flooding in. The best approaches accept that feedback is not just unavoidable, it’s very desirable, and teams seek it out as often as possible.
  5. Plans change – if we can’t know whether we’re delivering the right software for sure until we’ve delivered it, then our approach to planning must be highly adaptable. Although the wasteland of real-world software development is littered with the bleached bones of “waterfall” projects that attempted to get it right first time (and inevitably failed), the idealised world of software development methods rejected that idea many decades ago. All serious methods are iterative, and all serious methods tell us that the plan will necessarily change. It’s management who resist change, not methods.
  6. Code changes – if plans change based on what we learn from end users, then it stands to reason that our code must also change to accommodate their feedback. This is the sticking point on many “agile” development teams. Their management processes may allow for the plan to change, but their technical practices (or the lack of them) may mean that changing the code is difficult, expensive and risky. There are a range of factors in the cost of changing software, but in the wider perspective, it essentially boils down to “How long will it take to deliver the next working iteration to end users?” If the answer is “months”, then change is going to be slow and the users’ feedback will be backed up like the LA freeway on a Monday morning. If it’s “minutes” then you can iterate very rapidly and learn your way to getting it right much faster. Delivery cycles are fundamental. They’re the metabolism of software development.
  7. Testing is fast and continuous – if the delivery cycle of the team is its metabolism, then testing is its thyroid. How long it takes to establish whether our software’s broken determines how fast our delivery cycle can be (if the goal is to avoid delivering broken software, of course). If you aspire to a delivery cycle of minutes, then that leaves minutes to re-test your software. If all your testing’s done manually, then a modestly complex system will likely take weeks to re-test. And it’s a double whammy. Studies show that the cost of fixing a bug grows exponentially with how long it goes undetected. If I break some code now and find out a minute from now, it’s a trifle to fix. If I find out 6 weeks from now, it’s a whole other ball game. Teams who leave testing late typically end up spending most of their time fixing bugs instead of delivering valuable features and changes. All of this can profoundly impact delivery cycles and the cost of adapting to user feedback. Testing early and often is a feature of all serious methods. Automating our tests so they run fast is a feature of all the best methods (there’s a short sketch of what that can look like after this list).
  8. All work is undo-able – If we accept that it’s completely unrealistic to expect to get things right first time, then we must also accept that all the work we do is essentially an experiment from which we must learn. Sometimes, what we’ll learn is that what we’ve done is simply no good, and we need to do it over. Software Configuration Management (of which version control is the central pillar) is a key component of all serious software development methods. A practice like Continuous Integration, done right, can bring us high levels of undo-ability, which massively reduces risk in what is a pretty risky endeavour. To use an analogy, think of software development as a multi-level computer game. Experienced gamers know to back up their place in the game frequently, so they don’t have to replay huge parts of it after a boo-boo. Same thing with version control and SCM. We don’t want our versions to be too far apart, or we’ll end up in a situation where we have to redo weeks or months of work because we took a wrong turn in the maze.
  9. Architecture is a process (not a person or a thing) – The best development methods treat software architecture and design as an ongoing activity that involves all stakeholders and is never finished. Good architectures are driven directly from user goals, ensuring that those goals are satisfied by the design above all else (e.g., use case realisations in the Unified Process), and applying organising principles – Simple Design, “Tell, Don’t Ask”, SOLID etc – to the internals of the solution design to ensure the code will be malleable enough to change to meet future needs. As an activity, architecture encompasses everything from the goals and tasks of end users, to the modular structure of the solution, to the everyday refactorings that are performed against code that falls short, the test suites that guard against regressions, the documentation that ships with the end product, and everything else which is informed by the design process. Since architecture is all-encompassing, all serious development methods mandate that it be a shared responsibility. The best methods strongly encourage a high level of architectural awareness within the team through continuous visualisation and review of the design. To some extent, everyone involved is defining the architecture. It is ever-changing and everyone’s responsibility.
  10. “Done” means we achieved the customer’s end goal – All of our work is for nothing if we don’t solve the problem we set out to solve. Too many teams are short-sighted when it comes to evaluating their success, considering only that a list of requested features was delivered, or that a product vision was realised. But all that tells us is that we administered the medicine. It doesn’t tell us if the medicine worked. If iterative development is a search algorithm, then it’s a goal-seeking search algorithm. One generation of working software at a time, we ask our end users to test the solution as a fit to their problem, learn what worked and what didn’t, and then go around again with an improved solution. We’re not “done” until the problem’s been solved. While many teams pay lip service to business goals or a business context, it’s often more as an exercise in arse-covering – “We need a business case to justify this £10,000,000 CRM system we’ve decided to build anyway!” – than the ultimate driver of the whole development process. Any approach that makes defining the end goal a part of the development process has put the cart before the horse. If we don’t have an end goal – a problem to be solved – then development shouldn’t begin. But all iterative development methods – and they’re all iterative to some degree – can be augmented with an outer feedback loop that considers business goals and tests working software in business situations, driving everything from there.
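
To make points 2 and 7 a little more concrete, here’s a minimal sketch (the withdraw function and its rules are entirely hypothetical) of a design growing one scenario at a time, guarded by automated tests that run in milliseconds:

```python
import unittest


def withdraw(balance: float, amount: float) -> float:
    """Grown one scenario at a time: the happy path first, then the overdraft edge case."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount


class WithdrawalScenarios(unittest.TestCase):
    def test_happy_path_reduces_balance(self):
        self.assertEqual(withdraw(balance=100.0, amount=30.0), 70.0)

    def test_edge_case_rejects_overdraft(self):
        with self.assertRaises(ValueError):
            withdraw(balance=10.0, amount=30.0)


if __name__ == "__main__":
    unittest.main()  # the whole suite runs in milliseconds - fast enough to run on every change
```

Each new scenario adds one test and just enough code to pass it, and because the tests are automated and fast, re-testing the whole thing costs seconds rather than weeks.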

As a methodologist, I could spin you up an infinite number of software development methods with names like Goal-Oriented Object Delivery, or Customer Requirement Architectural Process. And, on the surface, I could make them all look quite different. But scratch the surface, and they’d all be fundamentally the same, in much the same way that programming languages – when you look past their syntax – tend to embrace the same underlying computing concepts.

Save yourself some time. Embrace the concepts.

Automate, Automate, Autonomy!

Thanks to pandemic-induced economic chaos, you’ve been forced to take a job on the quality assurance line at a factory that produces things.

The machine creates all kinds of random things, but your employer only sells a very specific subset of those things. All the things that don’t fit the profile have to be rejected, melted down, and fed back into the machine to make more things.

On your first day, you get training. (Oh, would that were true in software development!)

They stand you at the quality gate and start up the machine. All kinds of things come down the line at you. Your line manager tells you “Only let the green things through”. You grab all the things that aren’t green and throw them into the recycle bin. So far, so good.

“Only let the green round things through!” shouts your line manager. Okay, you think. Bit harder now. All non-green, non-round things go in the bin.

“Only let the green round small things through!” Now you’re really having to concentrate: a few green round small things end up in the bin, and a few non-green, non-round, non-small things get through.

“Only let the green round small things with Japanese writing on them through!” That’s a lot to process at the same time. Now your brain is struggling to cope. A bunch of blue and red things with Japanese writing on them get through. A bunch of square things get through. Your score has gone from 100% accurate to just 90%. Either someone will have to go through the boxes that have been packed and pick out all the rejects, or they’ll have to deal with 10% customer returns after they’ve been shipped.

“Only let the green round small things with Japanese writing on them that have beveled edges and a USB charging port on the opposite side to the writing and a power button in the middle of the writing and a picture of a horse – not a donkey, mind, reject those ones! – and that glow in the dark through!”

Now it’s chaos. Almost every box shipped contains things that should have been thrown in the recycle bin. Almost every order gets returned. That’s just too much to process. Too many criteria.

We have several choices here:

  1. Slow down the line so we can methodically examine every thing against our checklist, one criterion at a time.
  2. Hire a whole bunch of people and give them one check each to do.
  3. Reset customer expectations about the quality of the things they’re buying.
  4. Automate the checking using cameras and robots and lasers and super-advanced A.I. so all those checks can be made at production speed to a high enough accuracy.

Number 4 is the option that potentially gives us the win-win of customer satisfaction and high productivity without the bigger payroll. It’s been the driving force behind the manufacturing revolutions in East Asia for the last 70 years: automate, automate, automate.

But it doesn’t come for free. High levels of automation require considerable ongoing investment in time, technology and training. In the UK, we’ve under-invested, becoming more and more inefficient and expensive while the quality of our output has declined. Shareholders want their return now. There’s no appetite for making improvements for the future.

There are obvious parallels in software development. Businesses want their software now. Most software organisations have little inclination to invest the time, technology and training required to reach the high levels of automation needed to achieve the coveted continuous delivery that would allow them to satisfy customer needs sooner, cheaper, and for longer.

The inescapable reality is that frictionless delivery demands an investment of 20-25% of your total software development budget. To put it more bluntly, everyone should be spending 1 day a week not on immediate customer requirements, but on improvements to the delivery process that will make meeting future customer requirements easier.

And so, for most teams, it never gets easier. The software just gets buggier, later and more expensive year after year.

What distinguishes those software teams who are getting it right from the rest? From observation, I’ve seen the same factor every time: autonomy. Teams will invest that 20-25% when it’s their choice. They’re tasked with delivering value, and allowed to figure out how best to do that. Nobody’s telling them how to do their jobs.

How did this blissful state come about? Again, from observation, those teams have autonomy because they took it. Freedom is rarely willingly given.

Now, I appreciate this is a whole can of worms. To take their autonomy, teams need to earn trust. The more trust a team has earned, the more likely they’ll be left alone. And this can be a chicken and egg kind of situation. To earn trust, the team has to reliably deliver. To reliably deliver, the team needs autonomy. This whole process must begin with a leap of faith on the business’s part. In other words, they have to give teams the benefit of the doubt long enough to see the results.

And here come the worms… Teams have to win over their customer from the start, before anything’s been delivered – before the customer’s had a chance to taste our pudding. This means that developers need to inspire enough confidence in their non-technical stakeholders – remember, this is a big money game – to reassure everyone that they’re in good hands. And we’re really, really bad at this.

The temptation is to over-promise and set unrealistic expectations. This pretty much guarantees disappointment. The best way to inspire confidence is to have a good track record. No lawyer can guarantee to win your case. But a lawyer who won 9 of their last 10 cases is going to inspire more confidence than a lawyer taking their first case who promises you a win.

And we’re really, really bad at this, too – chiefly because software development teams are newly formed for that specific piece of work and don’t have a track record to speak of. Sure, individual developers may be known quantities, but in software, the unit of delivery is the team. I’ve watched teams of individually very strong developers fall flat on their collective arse.

And this is why I believe that this picture won’t change until organisations start to view teams as assets, and invest in them for a long-term pay-off as well as short-term delivery – a 20/80 split, say. And, again, I don’t think this will be willingly given. So maybe we – as a profession – need to take the decision out of their hands.

It could all start with one big act of collective autonomy.

Why COBOL May Be The Language In Your Future

Yes, I know. Preposterous! COBOL’s 61 years old, and when was the last time you bumped into a COBOL programmer still working? Surely, Java is the new COBOL, right?

Think again. COBOL is alive and well. Some 220 billion lines of it power 71% of Fortune 500 companies. If a business is big enough and has been around long enough, there’s a good chance the lion’s share of the transactions you do with that business involve some COBOL.

Fact is, they’re kind of stuck with it. Mainframe systems represent a multi-trillion dollar investment going back many decades. COBOL ain’t going nowhere for the foreseeable future.

What’s going is not the language but the programmers who know it and who know those critical business systems. The average age of a COBOL programmer in 2014 was 55. No doubt in 2020 it’s older than that, as young people entering IT aren’t exactly lining up to learn COBOL. Colleges don’t teach it, and you rarely hear it mentioned within the software development community. COBOL just isn’t sexy in the way Go or Python are.

As the COBOL programmer community edges towards statistical retirement – with the majority already retired (and frankly, dead) – the question looms: who is going to maintain these systems in 10 or 20 years’ time?

One thing we know for sure: businesses have two choices – they can either replace the programmers, or replace the programs. Replacing legacy COBOL systems has proven to be very time-consuming and expensive for some banks. Commonwealth Bank of Australia took 5 years and $750 million to replace its core COBOL platform in 2012, for example.

And to replace a COBOL program, developers writing the new code at least need to be able to read the old code, which will require a good understanding of COBOL. There’s no getting around it: a bunch of us are going to have to learn COBOL one way or another.

I did a few months of COBOL programming in the mid-1990s, and I’d be lying if I said I enjoyed it. Compared to modern languages like Ruby and C#, COBOL is clunky and hard work.

But I’d also be lying if I said that COBOL can’t be made to work in the context of modern software development. In 1995, we “version controlled” our source files by replacing listings in cupboards. We tested our programs manually (if we tested them at all before going live). Our release processes were effectively the same as editing source files on the live server (on the mainframe, in this case).

But it didn’t need to be like that. You can manage versions of your COBOL source files in a VCS like Git. You can write unit tests for COBOL programs. You can do TDD in COBOL (see Exhibit A below).

You can refactor COBOL code (“Extract Paragraph”, “Extract Program”, “Move Field” etc.), and you can automate a proper build and release process to deploy changed code safely to a mainframe (and roll it back if there’s a problem).

It’s possible to be agile in COBOL. The reason why so much COBOL legacy code fails in that respect has much more to do with decades of poor programming practices and very little to do with the language or the associated tools themselves.

I predict that, as more legacy COBOL programmers retire, the demand – and the pay – for COBOL programmers will rise to a point where some of you out there will find it irresistible. And if those programmers can’t be found, the impact on society will be severe.

The next generation of COBOL programmers may well be us.

Is Your Agile Transformation Just ‘Agility Theatre’?

I’ve talked before about what I consider to be the two most important feedback loops in software development.

When I explain the feedback loops – the “gears” – of Test-Driven Development, I go to great pains to highlight which of those gears matter most, in terms of affecting our odds of success.

[Figure: the feedback loops – the “gears” – of Test-Driven Development]

Customer or business goals drive the whole machine of delivery – or at least, they should. We are not done because we passed some acceptance tests, or because a feature is in production. We’re only done when we’ve solved the customer’s problem.

That’s very likely going to require more than one go-around. Which is why the second most important feedback loop is the one that establishes if we’re good to go for the next release.

The ability to establish quickly and effectively if the changes we made to the software have broken it is critical to our ability to release it. Teams who rely on manual regression testing can take weeks to establish this, and their release cycles are inevitably very slow. Teams who rely mostly on automated system and integration tests have faster release cycles, but still usually far too slow for them to claim to be “agile”. Teams who can re-test most of the code in under a minute are able to release as often as the customer wants – many times a day, if need be.

The speed of regression testing – of establishing if our software still works – dictates whether our release cycles span months, weeks, or hours. It determines the metabolism of our delivery cycle and ultimately how many throws of the dice we get at solving the customer’s problem.

It’s as simple as that: faster tests = more throws of the dice.

If the essence of agility is responding to change, then I conclude that fast-running automated tests lie at the heart of that.

What’s odd is how so many “Agile transformations” seem to focus on everything but that. User stories don’t make you responsive to change. Daily stand-ups don’t make you responsive to change. Burn-down charts don’t make you responsive to change. Kanban boards don’t make you responsive to change. Pair programming doesn’t make you responsive to change.

It’s all just Agility Theatre if you’re not addressing the two most fundamental feedback loops, which the majority of organisations simply don’t. Their definition of done is “It’s in production”, as they work their way through a list of features instead of trying to solve a real business problem. And they all too often under-invest in the skills and the time needed to wrap software in good fast-running tests, seeing that as less important than the index cards and the Post-It notes and the Jira tickets.

I talk often with managers tasked with “Agilifying” legacy IT (e.g., mainframe COBOL systems). This means speeding up feedback cycles, which means speeding up delivery cycles, which means speeding up build pipelines, which – 99.9% of the time – means speeding up testing.

After version control, it’s #2 on my list of How To Be More Agile. And, very importantly, it works. But then, we shouldn’t be surprised that it does. Maths and nature teach us that it should. How fast do bacteria or fruit flies evolve – with very rapid “release cycles” of new generations – vs elephants or whales, whose evolutionary feedback cycles take decades?

There are two kinds of Agile consultant: those who’ll teach you Agility Theatre, and those who’ll shrink your feedback cycles. Non-programmers can’t help you with the latter, because the speed of the delivery cycle is largely determined by test execution time. Speeding up tests requires programming, as well as knowledge and experience of designing software for testability.

70% of Agile coaches are non-programmers. A further 20% are ex-programmers who haven’t touched code for over a decade. (According to the hundreds of CVs I’ve seen.) That suggests that 90% of Agile coaches are teaching Agility Theatre, and maybe 10% are actually helping teams speed up their feedback cycles in any practical sense.

It also strongly suggests that most Agile transformations have a major imbalance: investing heavily in the theatre, but little if anything in speeding up delivery cycles.

How The Way We Measure “Productivity” Makes Us Take Bad Bets

Encouraging news from Oxford as researcher Sarah Gilbert says she’s “80% confident” the COVID-19 vaccine her team has been testing will work and may be ready by the autumn.

Except…

As a software developer, the “80%” makes my heart sink. I know from bitter experience that 80% Done on solving a problem is about as meaningless a measure as you can get. The vaccine will either solve the problem – allowing us to get back to normal – or it won’t.

In software development, I apply similarly harsh logic. Teams may tell me that they’re “80% done” when they mean “We’ve built 80% of the features” or “We’ve written 80% of the code”. More generally: “We’re 80% through a plan”.

A plan is not necessarily a solution. Several promising vaccines are undergoing human trials as we speak, though. So, while Gilbert’s 80% Done may eventually turn out to be the 20% Not Done after more extensive real-world testing, there are enough possible solutions out there to give me hope that a vaccine will be forthcoming within a year.

Think of “80% done” as a 4/5 chance that it’ll work. There are several 4/5 chances – several rolls of the dice – which give the world cumulatively pretty good odds. Bill Gates’s plan to build multiple factories and start manufacturing multiple vaccines before the winner’s been identified will no doubt speed things up. And there are more efforts going on around the world if those all fail.

Software, not so much. Typically, a software development effort is the only game in town – all the eggs in a single basket, if you like. And this has always struck me as irrational behaviour on the part of organisations. Even the best solution design is guesswork as to whether or not it will solve the customer’s problem. It’s a coin toss. But a lot of organisations plan to toss just one coin, and only once. Two coin tosses would give them 75% odds. 3 would give them 87.5%. 4 would give them 93.75%. And so on.
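
As a back-of-the-envelope check – a sketch that assumes each candidate solution really is an independent 50/50 bet – the odds that at least one of n attempts succeeds are 1 - 0.5^n:

```python
# Odds that at least one of n independent 50/50 attempts succeeds: 1 - 0.5**n
for attempts in range(1, 5):
    print(f"{attempts} coin toss(es): {1 - 0.5 ** attempts:.2%}")
# 1 coin toss(es): 50.00%
# 2 coin toss(es): 75.00%
# 3 coin toss(es): 87.50%
# 4 coin toss(es): 93.75%
```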

It’s more complex than that, of course. In real life, there are significant odds that we’re barking up completely the wrong tree. We can’t fix a fundamentally flawed solution by refining it. So iterating only helps when we’re in the ballpark to begin with.

Software solutions – to have the best odds of succeeding in the real world – need to start with populations of possible solutions, just like the COVID-19 solution starts with a population of potential vaccines. If there was only one team working on one vaccine, I’d be very, very worried right now.

Smart organisations – of which there are sadly very few, it would seem – start projects by inviting teams or individuals to propose solutions. The most promising of those are given a little bit of budget to develop further, so they can at least go through some preliminary testing with small groups of end users. These Minimum Viable Products are whittled down further to the most promising, and more budget is assigned to evolve them into potential products. Eventually, one of those products will win out, and the rest of the budget is assigned to that to take it to market (which could mean rolling it out as an internal system within the business, if that’s the kind of problem being solved).

We know from decades of experience and some big studies that the bulk of the cost of software is downstream. For example, my team was given a budget of £200,000 to develop a job site in the late 90s. The advertising campaign for the site’s launch cost £4 million. The team of sales, marketing and admin people who ran the site cost £2.5 million a year. The TCO of the software itself was about £2.8 million over 5 years.

Looking back, it seems naive in the extreme that the business went with the first and only idea for the design of the site that was presented to them, given the size of the bet they were about to place. (Even more naive that the design was presented as a database schema, with no use cases – but that’s a tale for another day.)

Had I been investing that kind of money, I would have spent maybe £10,000 each on the four most promising prototypes – assigning one pair of developers to each one. After 2 weeks, I would have jettisoned the two least promising – based on real end user testing – and merged the now-fallow pairs into the two remaining teams, then given them £40,000 each for further development. After another 4 weeks, and more user testing, I would have selected the better of the two, merged the two teams into one, and invested the remaining £80,000 in four more weeks of development to turn it into a product.

Four throws of the dice buys you a 93.75% chance of success. Continuous user feedback on the most promising solution raises the odds even further.

But what most managers hear when I say “Start with 8 developers working on 4 potential solutions” is WASTE. 75% of the effort in the first two weeks is “wasted”. 50% of the effort in the next 4 weeks is “wasted”. The total waste is only 27.5%, though – measured in weeks spent by developers on software that ultimately won’t get used.

Three quarters of the time invested is devoted to the winning solution. That’s in exchange for much higher odds of success. If we forecasted waste by time spent multiplied by odds of failure, then having all 8 developers work on a single possible solution – a toss of a coin – presents a risk of wasting 40 weeks of developer time (or half our budget).

Starting with 4 possible solutions uses the exact same amount of developer time and budget for a 93.75%+ chance of succeeding. The risk of waste is actually – in real terms – only 6.25% of that total budget, even though we know that a quarter of the software written won’t get used.

But that’s only if you measure waste in terms of problems not solved instead of software not delivered.

The same investment: £200,000. Starkly different odds of success. Far lower risk of wasting time and money.

And that’s just the money we spent on writing the software. Now think about my job site. Many millions more were spent on the business operation that was built on that software. Had we delivered the wrong solution – spoiler alert: we had – then that’s where the real waste would be.

Focusing on solving problems makes us more informed gamblers.

Why I Abandoned Business Modeling

So, as you may have gathered, I have a background in model-driven processes. I drank the UML Kool-Aid pretty early on, and by 2000 was a fully paid-up member of the Cult of Boxes And Arrows Solve Every Problem.

The big bucks for us architect types back then – and, probably still – came with a job title called Enterprise Architect. Enterprise Architecture is built on the idea that organisations like businesses are essentially machines, with moving connected parts.

Think of it like a motor car: there was a steering wheel, which executives turned to point the car in the direction they wanted to go. This was connected through various layers of mechanisms – business processes, IT systems, individual applications, actual source code – and segregated into connected vertical slices for different functions within the business, different business locations and so on.

The conceit of EA was that we could connect all those dots and create strategic processes of change where the boss changes a business goal and that decision works its way seamlessly through this multi-layered mechanism, changing processes, reconfiguring departments and teams, rewriting systems and editing code so that the car goes in the desired new direction.

It’s great fun to draw complex pictures of how we think a business operates. But it’s also a fantasy. Businesses are not mechanistic or deterministic in this way at all. First of all, modeling a business of any appreciable size requires us to abstract away all the insignificant details. In complex systems, though, there are no such things as “insignificant details”. The tiniest change can push a complex system into a profoundly different order.

And that order emerges spontaneously and unpredictably. I’ve watched some big businesses thrown into chaos by the change of a single line of code in a single IT system, or by moving the canteen to a different floor in HQ.

2001-2003 was a period of significant evolution of my own thinking on this. I realised that no amount of boxes and arrows could truly communicate what a business is really like.

In philosophy, they have this concept of qualia – individual instances of subjective, conscious experience. Consider this thought experiment: you’re locked in a tower on a remote island. Everything in it is black and white. The tower has an extensive library of thousands of books that describe everything you could possibly need to know about the colour orange. You have studied the entire contents of that library, and are now the world’s leading authority on orange.

Then, one day, you are released from your tower and allowed to see the world. The first thing you do, naturally, is go and find an orange. When you see the colour orange for the first time – given that you’ve read everything there is to know about it – are you surprised?

Two seminal professional experiences I had in 2002-2004 convinced me that you cannot truly understand a business without seeing and experiencing it for yourself. In both cases, we’d had teams of business analysts writing documents, creating glossaries, and drawing boxes and arrows galore to explain the organisational context in which our software was intended to be used.

I speak box-and-arrow fluently, but I just wasn’t getting it. So many hidden details, so many unanswered questions. So, after months of going round in circles delivering software that didn’t fit, I said “Enough’s enough” and we piled into a minibus and went to the “shop floor” to see these processes for ourselves. The mist cleared almost immediately.

Reality is very, very complicated. All we know about conscious experience suggests that our brains are only truly capable of understanding complex things from first-hand experience of them. We have to see them and experience them for ourselves. Accept no substitutes.

Since then, my approach to strategic systems development has been one of gaining first-hand experience of a problem, trying simple things we believe might solve it, seeing and measuring what effect they have, and feeding back into the next attempt.

Basically, I replaced Enterprise Architecture with agility. Up to that point, I’d viewed Agile as a way of delivering software. I was already XP’d up to the eyeballs, but hadn’t really looked beyond Extreme Programming to appreciate its potential strategic role in the evolution of a business. There have to be processes outside of XP that connect business feedback cycles to software delivery cycles. And that’s how I do it (and teach it) now.

Don’t start with features. Start with a problem. Design the simplest solution you can think of that might solve that problem, and make it available for real-world testing as soon as you can. Observe (and experience) the solution being used in the real world. Feed back lessons learned and go round again with an evolution of your solution. Rinse and repeat until the problem’s solved (my definition of “done”). Then move on to the next problem.

The chief differences between Enterprise Architecture and this approach are that:

a. We don’t make big changes. In complex adaptive systems, big changes != big results. You can completely pull a complex system out of shape, and over time the underlying – often unspoken – rules of the system (the “insignificant details” your boxes and arrows left out, usually) will bring it back to its original order. I’ve watched countless big change programmes produce no lasting, meaningful change.

b. We begin and end in the real world.

In particular, I’ve learned from experience that the smallest changes can have the largest impact. We instinctively believe that to effect change at scale, we must scale our approach. Nothing could be further from the truth. A change to a single line of code can cause chaos at airport check-ins and bring traffic in an entire city to a standstill. Enterprise Architecture gave us the illusion of control over the effects of changes, because it gave us the illusion of understanding.

But that’s all it ever was: an illusion.