Why COBOL May Be The Language In Your Future

Yes, I know. Preposterous! COBOL’s 61 years old, and when was the last time you bumped into a COBOL programmer still working? Surely, Java is the new COBOL, right?

Think again. COBOL is alive and well. Some 220 billion lines of it power 71% of Fortune 500 companies. If a business is big enough and has been around long enough, there’s a good chance the lion’s share of the transactions you do with that business involve some COBOL.

Fact is, they’re kind of stuck with it. Mainframe systems represent a multi-trillion dollar investment going back many decades. COBOL ain’t going nowhere for the foreseeable future.

What’s going is not the language but the programmers who know it and who know those critical business systems. The average age of a COBOL programmer in 2014 was 55. No doubt in 2020 it’s older than that, as young people entering IT aren’t exactly lining up to learn COBOL. Colleges don’t teach it, and you rarely hear it mentioned within the software development community. COBOL just isn’t sexy in the way Go or Python are.

As the COBOL programmer community edges towards statistical retirement – with the majority already retired (and, frankly, dead) – the question looms: who is going to maintain these systems in 10 or 20 years’ time?

One thing we know for sure: businesses have two choices – they can either replace the programmers, or replace the programs. Replacing legacy COBOL systems has proven to be very time-consuming and expensive for some banks. Commonwealth Bank of Australia took 5 years and $750 million to replace its core COBOL platform in 2012, for example.

And to replace a COBOL program, developers writing the new code at least need to be able to read the old code, which will require a good understanding of COBOL. There’s no getting around it: a bunch of us are going to have to learn COBOL one way or another.

I did a few months of COBOL programming in the mid-1990s, and I’d be lying if I said I enjoyed it. Compared to modern languages like Ruby and C#, COBOL is clunky and hard work.

But I’d also be lying if I said that COBOL can’t be made to work in the context of modern software development. In 1995, we “version controlled” our source files by replacing listings in cupboards. We tested our programs manually (if we tested them at all before going live). Our release processes were effectively the same as editing source files on the live server (on the mainframe, in this case).

But it didn’t need to be like that. You can manage versions of your COBOL source files in a VCS like Git. You can write unit tests for COBOL programs. You can do TDD in COBOL (see Exhibit A below).

IDENTIFICATION DIVISION.
PROGRAM-ID. BASKET-TEST.
DATA DIVISION.
WORKING-STORAGE SECTION.
COPY 'total_params.cpy'.
COPY 'test_context.cpy'.
01 expected PIC 9(04)V9(2).
PROCEDURE DIVISION.
MAIN-PROCEDURE.
    PERFORM EMPTY-BASKET.
    PERFORM SINGLE-ITEM.
    PERFORM TWO-ITEMS.
    PERFORM QUANTITY-TWO.
    DISPLAY 'Tests passed: ' passes.
    DISPLAY 'Tests failed: ' fails.
    DISPLAY 'Tests run: ' totalRun.
    STOP RUN.
EMPTY-BASKET.
    INITIALIZE basket REPLACING NUMERIC DATA BY ZEROES.
    CALL 'TOTAL' USING basket, total.
    MOVE 0 TO expected.
    CALL 'ASSERT_EQUAL' USING 'EMPTY-BASKET',
        expected, total, test-context.
SINGLE-ITEM.
    INITIALIZE basket REPLACING NUMERIC DATA BY ZEROES.
    MOVE 100 TO unitprice(1).
    MOVE 1 TO quantity(1).
    CALL 'TOTAL' USING basket, total.
    MOVE 100 TO expected.
    CALL 'ASSERT_EQUAL' USING 'SINGLE-ITEM',
        expected, total, test-context.
TWO-ITEMS.
    INITIALIZE basket REPLACING NUMERIC DATA BY ZEROES.
    MOVE 100 TO unitprice(1).
    MOVE 1 TO quantity(1).
    MOVE 200 TO unitprice(2).
    MOVE 1 TO quantity(2).
    CALL 'TOTAL' USING basket, total.
    MOVE 300 TO expected.
    CALL 'ASSERT_EQUAL' USING 'TWO-ITEMS',
        expected, total, test-context.
QUANTITY-TWO.
    INITIALIZE basket REPLACING NUMERIC DATA BY ZEROES.
    MOVE 100 TO unitprice(1).
    MOVE 2 TO quantity(1).
    CALL 'TOTAL' USING basket, total.
    MOVE 200 TO expected.
    CALL 'ASSERT_EQUAL' USING 'QUANTITY-TWO',
        expected, total, test-context.
END PROGRAM BASKET-TEST.


You can refactor COBOL code (“Extract Paragraph”, “Extract Program”, “Move Field”, etc.), and you can automate a proper build and release process to deploy changed code safely to a mainframe (and roll it back if there’s a problem).

It’s possible to be agile in COBOL. The reason why so much COBOL legacy code fails in that respect has much more to do with decades of poor programming practices and very little to do with the language or the associated tools themselves.

I predict that, as more legacy COBOL programmers retire, the demand – and the pay – for COBOL programmers will rise to a point where some of you out there will find it irresistible. And the impact on society if they can’t be found will be severe.

The next generation of COBOL programmers may well be us.

Is Your Agile Transformation Just ‘Agility Theatre’?

I’ve talked before about what I consider to be the two most important feedback loops in software development.

When I explain the feedback loops – the “gears” – of Test-Driven Development, I go to great pains to highlight which of those gears matter most, in terms of affecting our odds of success.

[Figure: tdd_gears – the nested feedback loops, or “gears”, of Test-Driven Development]

Customer or business goals drive the whole machine of delivery – or at least, they should. We are not done because we passed some acceptance tests, or because a feature is in production. We’re only done when we’ve solved the customer’s problem.

That’s very likely going to require more than one go-around. Which is why the second most important feedback loop is the one that establishes if we’re good to go for the next release.

The ability to establish quickly and effectively if the changes we made to the software have broken it is critical to our ability to release it. Teams who rely on manual regression testing can take weeks to establish this, and their release cycles are inevitably very slow. Teams who rely mostly on automated system and integration tests have faster release cycles, but still usually far too slow for them to claim to be “agile”. Teams who can re-test most of the code in under a minute are able to release as often as the customer wants – many times a day, if need be.

The speed of regression testing – of establishing if our software still works – dictates whether our release cycles span months, weeks, or hours. It determines the metabolism of our delivery cycle and ultimately how many throws of the dice we get at solving the customer’s problem.

It’s as simple as that: faster tests = more throws of the dice.

If the essence of agility is responding to change, then I conclude that fast-running automated tests lie at the heart of that.

What’s odd is how so many “Agile transformations” seem to focus on everything but that. User stories don’t make you responsive to change. Daily stand-ups don’t make you responsive to change. Burn-down charts don’t make you responsive to change. Kanban boards don’t make you responsive to change. Pair programming doesn’t make you responsive to change.

It’s all just Agility Theatre if you’re not addressing the two most fundamental feedback loops, which the majority of organisations simply don’t. Their definition of done is “It’s in production”, as they work their way through a list of features instead of trying to solve a real business problem. And they all too often under-invest in the skills and the time needed to wrap software in good fast-running tests, seeing that as less important than the index cards and the Post-It notes and the Jira tickets.

I talk often with managers tasked with “Agilifying” legacy IT (e.g., mainframe COBOL systems). This means speeding up feedback cycles, which means speeding up delivery cycles, which means speeding up build pipelines, which – 99.9% of the time – means speeding up testing.

After version control, it’s #2 on my list of How To Be More Agile. And, very importantly, it works. But then, we shouldn’t be surprised that it does. Maths and nature teach us that it should. How fast do bacteria or fruit flies evolve – with very rapid “release cycles” of new generations – vs elephants or whales, whose evolutionary feedback cycles take decades?

There are two kinds of Agile consultant: those who’ll teach you Agility Theatre, and those who’ll shrink your feedback cycles. Non-programmers can’t help you with the latter, because the speed of the delivery cycle is largely determined by test execution time. Speeding up tests requires programming, as well as knowledge and experience of designing software for testability.

70% of Agile coaches are non-programmers. A further 20% are ex-programmers who haven’t touched code for over a decade. (According to the hundreds of CVs I’ve seen.) That suggests that 90% of Agile coaches are teaching Agility Theatre, and maybe 10% are actually helping teams speed up their feedback cycles in any practical sense.

It also strongly suggests that most Agile transformations have a major imbalance; investing heavily in the theatre, but little if anything in speeding up delivery cycles.

How The Way We Measure “Productivity” Makes Us Take Bad Bets

Encouraging news from Oxford as researcher Sarah Gilbert says she’s “80% confident” the COVID-19 vaccine her team has been testing will work and may be ready by the autumn.

Except…

As a software developer, the “80%” makes my heart sink. I know from bitter experience that 80% Done on solving a problem is about as meaningless a measure as you can get. The vaccine will either solve the problem – allowing us to get back to normal – or it won’t.

In software development, I apply similarly harsh logic. Teams may tell me that they’re “80% done” when they mean “We’ve built 80% of the features” or “We’ve written 80% of the code”. More generally: “We’re 80% through a plan”.

A plan is not necessarily a solution. Several promising vaccines are undergoing human trials as we speak, though. So, while Gilbert’s 80% Done may eventually turn out to be the 20% Not Done after more extensive real-world testing, there are enough possible solutions out there to give me hope that a vaccine will be forthcoming within a year.

Think of “80% done” as a 4/5 chance that it’ll work. There are several 4/5 chances – several rolls of the dice – which give the world cumulatively pretty good odds. Bill Gates’ plan to build multiple factories and start manufacturing multiple vaccines before the winner’s been identified will no doubt speed things up. And there are more efforts going on around the world if those all fail.

Software, not so much. Typically, a software development effort is the only game in town – all the eggs in a single basket, if you like. And this has always struck me as irrational behaviour on the part of organisations. At best, the design of a solution is complete guesswork as to whether or not it will solve the customer’s problem. It’s a coin toss. But a lot of organisations plan just to toss a single coin, and only once. Two coin tosses would give them 75% odds. 3 would give them 87.5%. 4 would give them 93.75%. And so on.

It’s more complex than that, of course. In real life, there are significant odds that we’re barking up completely the wrong tree. We can’t fix a fundamentally flawed solution by refining it. So iterating only helps when we’re in the ballpark to begin with.

Software solutions – to have the best odds of succeeding in the real world – need to start with populations of possible solutions, just like the COVID-19 solution starts with a population of potential vaccines. If there was only one team working on one vaccine, I’d be very, very worried right now.

Smart organisations – of which there are sadly very few, it would seem – start projects by inviting teams or individuals to propose solutions. The most promising of those are given a little bit of budget to develop further, so they can at least go through some preliminary testing with small groups of end users. These Minimum Viable Products are whittled down further to the most promising, and more budget is assigned to evolve them into potential products. Eventually, one of those products will win out, and the rest of the budget is assigned to that to take it to market (which could mean rolling it out as an internal system into the business, if that’s the kind of problem being solved.)

We know from decades of experience and some big studies that the bulk of the cost of software is downstream. For example, my team was given a budget of £200,000 to develop a job site in the late 90s. The advertising campaign for the site’s launch cost £4 million. The team of sales, marketing people and admin people who ran the site cost £2.5 million a year. The TCO of the software itself was about £2.8 million over 5 years.

Looking back, it seems naive in the extreme that the business went with the first and only idea for the design of the site that was presented to them, given the size of the bet they were about to place. (Even more naive that the design was presented as a database schema, with no use cases – but that’s a tale for another day.)

Had I been investing that kind of money, I would have spent maybe £10,000 each on the four most promising prototypes – assigning one pair of developers to each one. After 2 weeks, I would have jettisoned the two least promising – based on real end user testing – merged the freed-up pairs into the two remaining teams, and given each team £40,000 for further development. After another 4 weeks, and more user testing, I would have selected the better of the two, merged the two teams into one, and invested the remaining £80,000 in four more weeks of development to turn it into a product.

Four throws of the dice buys you a 93.75% chance of success. Continuous user feedback on the most promising solution raises the odds even further.

But what most managers hear when I say “Start with 8 developers working on 4 potential solutions” is WASTE. 75% of the effort in the first two weeks is “wasted”. 50% of the effort in the next 4 weeks is “wasted”. The total waste is only 35%, though – measured in weeks spent by developers on software that ultimately won’t get used.

Almost two thirds of the time invested is devoted to the winning solution. That’s in exchange for much higher odds of success. If we forecast waste as time spent multiplied by odds of failure, then having all 8 developers work on a single possible solution – a toss of a coin – presents a risk of wasting 40 weeks of developer time (or half our budget).
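Working through the dev-week arithmetic of that staged schedule (a sketch of the sums above, not code from any real project):

```javascript
// The staged-funding schedule: 4 prototypes (1 pair each) for 2 weeks,
// the best 2 get teams of 4 for 4 weeks, the winner gets all 8
// developers for a final 4 weeks.
const DEVS = 8;

// Effort per stage in developer-weeks.
const stage1 = DEVS * 2; // 16: four pairs, two weeks
const stage2 = DEVS * 4; // 32: two teams of four, four weeks
const stage3 = DEVS * 4; // 32: one team of eight, four weeks
const total = stage1 + stage2 + stage3; // 80 dev-weeks

// Waste = effort spent on software that never ships.
// Stage 1: 3 of 4 prototypes won't be the winner (75% of the effort).
// Stage 2: 1 of 2 candidates won't be the winner (50% of the effort).
const waste = stage1 * 0.75 + stage2 * 0.5; // 12 + 16 = 28 dev-weeks

console.log(waste / total); // 0.35 – i.e. 35% of the effort

// Compare: all 8 developers on a single coin-toss solution for the
// same 10 weeks carries a 50% chance of wasting everything –
// an expected waste of 40 dev-weeks, half the budget.
console.log(DEVS * 10 * 0.5); // 40
```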

Starting with 4 possible solutions uses the exact same amount of developer time and budget for a 93.75%+ chance of succeeding. The risk of waste is actually – in real terms – only 6.25% of that total budget, even though we know that around a third of the developer time will go into software that won’t get used.

But that’s only if you measure waste in terms of problems not solved instead of software not delivered.

The same investment: £200,000. Starkly different odds of success. Far lower risk of wasting time and money.

And that’s just the money we spent on writing the software. Now think about my job site. Many millions more were spent on the business operation that was built on that software. Had we delivered the wrong solution – spoiler alert: we had – then that’s where the real waste would be.

Focusing on solving problems makes us more informed gamblers.

 

Why I Abandoned Business Modeling

So, as you may have gathered, I have a background in model-driven processes. I drank the UML Kool-Aid pretty early on, and by 2000 was a fully paid-up member of the Cult of Boxes And Arrows Solve Every Problem.

The big bucks for us architect types back then – and probably still today – came with a job title: Enterprise Architect. Enterprise Architecture is built on the idea that organisations like businesses are essentially machines, with moving, connected parts.

Think of it like a motor car: there’s a steering wheel, which executives turn to point the car in the direction they want to go. This is connected through various layers of mechanism – business processes, IT systems, individual applications, actual source code – and segregated into connected vertical slices for different functions within the business, different business locations, and so on.

The conceit of EA was that we could connect all those dots and create strategic processes of change where the boss changes a business goal and that decision works its way seamlessly through this multi-layered mechanism, changing processes, reconfiguring departments and teams, rewriting systems and editing code so that the car goes in the desired new direction.

It’s great fun to draw complex pictures of how we think a business operates. But it’s also a fantasy. Businesses are not mechanistic or deterministic in this way at all. First of all, modeling a business of any appreciable size requires us to abstract away all the insignificant details. In complex systems, though, there are no such things as “insignificant details”. The tiniest change can push a complex system into a profoundly different order.

And that order emerges spontaneously and unpredictably. I’ve watched some big businesses thrown into chaos by the change of a single line of code in a single IT system, or by moving the canteen to a different floor in HQ.

2001-2003 was a period of significant evolution of my own thinking on this. I realised that no amount of boxes and arrows could truly communicate what a business is really like.

In philosophy, they have this concept of qualia – individual instances of subjective, conscious experience. Consider this thought experiment: you’re locked in a tower on a remote island. Everything in it is black and white. The tower has an extensive library of thousands of books that describe everything you could possibly need to know about the colour orange. You have studied the entire contents of that library, and are now the world’s leading authority on orange.

Then, one day, you are released from your tower and allowed to see the world. The first thing you do, naturally, is go and find an orange. When you see the colour orange for the first time – given that you’ve read everything there is to know about it – are you surprised?

Two seminal professional experiences I had in 2002-2004 convinced me that you cannot truly understand a business without seeing and experiencing it for yourself. In both cases, we’d had teams of business analysts writing documents, creating glossaries, and drawing boxes and arrows galore to explain the organisational context in which our software was intended to be used.

I speak box-and-arrow fluently, but I just wasn’t getting it. So many hidden details, so many unanswered questions. So, after months of going round in circles delivering software that didn’t fit, I said “Enough’s enough” and we piled into a minibus and went to the “shop floor” to see these processes for ourselves. The mist cleared almost immediately.

Reality is very, very complicated. All we know about conscious experience suggests that our brains are only truly capable of understanding complex things from first-hand experience of them. We have to see them and experience them for ourselves. Accept no substitutes.

Since then, my approach to strategic systems development has been one of gaining first-hand experience of a problem, and trying simple things we believe might solve those problems, seeing and measuring what effect they have, and feeding back into the next attempt.

Basically, I replaced Enterprise Architecture with agility. Up to that point, I’d viewed Agile as a way of delivering software. I was already XP’d up to the eyeballs, but hadn’t really looked beyond Extreme Programming to appreciate its potential strategic role in the evolution of a business. There have to be processes outside of XP that connect business feedback cycles to software delivery cycles. And that’s how I do it (and teach it) now.

Don’t start with features. Start with a problem. Design the simplest solution you can think of that might solve that problem, and make it available for real-world testing as soon as you can. Observe (and experience) the solution being used in the real world. Feed back lessons learned and go round again with an evolution of your solution. Rinse and repeat until the problem’s solved (my definition of “done”). Then move on to the next problem.

The chief differences between Enterprise Architecture and this approach are that:

a. We don’t make big changes. In complex adaptive systems, big changes != big results. You can completely pull a complex system out of shape, and over time the underlying – often unspoken – rules of the system (the “insignificant details” your boxes and arrows left out, usually) will bring it back to its original order. I’ve watched countless big change programmes produce no lasting, meaningful change.

b. We begin and end in the real world

In particular, I’ve learned from experience that the smallest changes can have the largest impact. We instinctively believe that to effect change at scale, we must scale our approach. Nothing could be further from the truth. A change to a single line of code can cause chaos at airport check-ins and bring traffic in an entire city to a standstill. Enterprise Architecture gave us the illusion of control over the effects of changes, because it gave us the illusion of understanding.

But that’s all it ever was: an illusion.

Refactoring Complex Conditionals To Conditional Look-ups

In a previous post, I demonstrated how you could refactor IF statements with straight x == y conditions to lambda maps. This is useful when we need to retain codes – literal values – and look up the appropriate action for each. (Think, for example, of the Mars Rover kata, where our rover accepts strings of characters representing instructions to turn ‘L’ and ‘R’ or move ‘F’ and ‘B’).
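For reference, that earlier refactoring looks something like this – a hypothetical Mars Rover sketch, where the single-character instructions are literal values we must preserve (the rover structure and names here are illustrative, not the code from that post):

```javascript
// Each instruction character maps to a lambda, instead of a chain of
// if (instruction === 'L') ... tests.
function createRover() {
  const rover = { x: 0, y: 0, heading: 'N' };
  const left = { N: 'W', W: 'S', S: 'E', E: 'N' };
  const right = { N: 'E', E: 'S', S: 'W', W: 'N' };
  const forward = {
    N: () => rover.y++, S: () => rover.y--,
    E: () => rover.x++, W: () => rover.x--
  };
  const backward = {
    N: () => rover.y--, S: () => rover.y++,
    E: () => rover.x--, W: () => rover.x++
  };
  const instructions = {
    L: () => rover.heading = left[rover.heading],
    R: () => rover.heading = right[rover.heading],
    F: () => forward[rover.heading](),
    B: () => backward[rover.heading]()
  };
  rover.execute = (program) => {
    for (const instruction of program) {
      instructions[instruction]();
    }
  };
  return rover;
}

const rover = createRover();
rover.execute('FFRF'); // forward twice, turn right, forward once
console.log(rover.x, rover.y, rover.heading); // 1 2 E
```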

But what happens when the conditions aren’t straightforward? Consider this example that calculates postage for orders based on where the order is to be shipped, the weight of the order and the total price.

function postage(order) {
    if (order.location === 'UK') {
        if (totalPrice(order) >= 100) {
            return 0;
        }
        if (totalWeight(order) < 1) {
            return 2.99;
        }
        if (totalWeight(order) >= 1 && totalWeight(order) < 3) {
            return 4.99;
        }
        if (totalWeight(order) >= 3) {
            return 6.99;
        }
    }
    if (order.location === 'EU') {
        if (totalWeight(order) < 1) {
            return 4.99;
        }
        if (totalWeight(order) >= 1 && totalWeight(order) < 3) {
            return 6.99;
        }
        if (totalWeight(order) >= 3) {
            return 8.99;
        }
    }
}

Could we refactor this to something equally dynamic as a lambda map? Of course we could. All it takes is a loop.

function match(conditions) {
    for (let i = 0; i < conditions.length; i++) {
        if (conditions[i][0]()) {
            return conditions[i][1]();
        }
    }
    return undefined;
}

function postage(order) {
    return match([
        [() => order.location === 'UK',
         () => match([
             [() => totalPrice(order) >= 100,                          () => 0],
             [() => totalWeight(order) < 1,                            () => 2.99],
             [() => totalWeight(order) >= 1 && totalWeight(order) < 3, () => 4.99],
             [() => totalWeight(order) >= 3,                           () => 6.99]
         ])],
        [() => order.location === 'EU',
         () => match([
             [() => totalWeight(order) < 1,                            () => 4.99],
             [() => totalWeight(order) >= 1 && totalWeight(order) < 3, () => 6.99],
             [() => totalWeight(order) >= 3,                           () => 8.99]
         ])]
    ]);
}

Using this approach, we can recursively compose conditional logic in a dynamic fashion, and with the judicious use of spacing we can make our mappings read in a more tabular fashion. It also opens up the possibility of dynamically altering the rules – perhaps loading them from a file, or changing them based on some condition, or in response to an event (e.g., a promotional sale that waives postage on more orders).
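For instance, the condition/action pairs could be compiled from plain data – here’s a rough sketch, assuming a hypothetical rule format and order shape (the kind of thing you might parse from a JSON file):

```javascript
// match() as before: first condition that holds wins.
function match(conditions) {
  for (const [condition, action] of conditions) {
    if (condition()) {
      return action();
    }
  }
  return undefined;
}

// Postage rules as data. Field names here are hypothetical.
const ukRules = [
  { minPrice: 100, postage: 0 },   // free-postage promotion
  { maxWeight: 1, postage: 2.99 },
  { maxWeight: 3, postage: 4.99 },
  { postage: 6.99 }                // catch-all
];

// Compile the data into the condition/action pairs match() expects.
function compile(rules, order) {
  return rules.map(rule => [
    () => (rule.minPrice === undefined || totalPrice(order) >= rule.minPrice)
       && (rule.maxWeight === undefined || totalWeight(order) < rule.maxWeight),
    () => rule.postage
  ]);
}

// Hypothetical order shape with precomputed totals.
const totalPrice = order => order.price;
const totalWeight = order => order.weight;

console.log(match(compile(ukRules, { price: 120, weight: 5 }))); // 0
console.log(match(compile(ukRules, { price: 50, weight: 2 })));  // 4.99
```

Swapping in a different rules array – say, one fetched when a sale starts – changes the behaviour without touching the code.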

 

 

Is UML Esperanto for Programmers?

Back in a previous life, when I wore the shiny cape and the big pointy hat of a software architect, I thought the Unified Modeling Language was a pretty big deal. So much, in fact, that for quite a few years, I taught it.

In 2000, there was demand for that sort of thing. But by 2006 demand for UML training – and for UML on teams – had faded away to pretty much nothing. I rarely see it these days, on whiteboards or on developer machines. I occasionally see the odd class diagram or sequence diagram, often in a book. I occasionally draw the odd class diagram or sequence diagram myself – maybe a handful of times a year, when the need arises to make a point that such diagrams are well-suited to explaining.

UML is just one among many visualisation tools in my paint box. I use Venn diagrams when I want to visualise complex rules, for example. I use tables a lot – to visualise how functions should respond to inputs, to visualise state transitions, and to visualise conditional logic (e.g., truth tables). But we fixated on just that one set of diagrams, until UML became synonymous with software visualisation.

I’m a fan of pictures, you see. I’m a very visual thinker. But I’m aware that visual thinkers seem to be in a minority in computing. I often find myself being the only one in the room who gets it when they see a picture. Many programmers want to see code. So, on training courses now, I show them code, and then they get it.

Although UML has withered away, its vestigial limb remains in the world of academia. A lot of universities teach it, and in significant depth. In Computer Science departments around the world, Executable UML is still very much a thing and students may spend a whole semester learning how to specify systems in UML.

Then they graduate and rarely see UML again – certainly not Executable UML. The ones who continue to use it – and therefore not lose that skill – tend to be the ones who go on to teach it. Teaching keeps UML alive in the classroom long after it all but died in the office.

My website parlezuml.com still gets a few thousand visitors every month, and the stats clearly show that the vast majority are coming from university domains. In industry, UML is as dead a language as Latin: taught to people who may go on to teach it, with elements of it surviving in the languages people actually use today. (There were a lot of good ideas in UML.) But there’s no country I can go to where the population speaks Latin.

Possibly a more accurate comparison to UML might be Esperanto. Like UML, Esperanto was created in an attempt to unify people and get everyone “speaking the same language” – it’s perhaps, Klingon aside, the best-known example of a completely artificial spoken language. As noble a goal as that may be, the reality of Esperanto is that the people who can speak it today mostly speak it to teach it to people who may themselves go on to teach it. Esperanto lives in the classroom – my Granddad Ray taught it for many years – and at conferences for enthusiasts. There’s no country I can go to where the population speaks it.

And these days, I visit vanishingly few workplaces where I see UML being used in anger. It’s the Esperanto of software development.

I guess my point is this: if I was studying to be an interpreter, I would perhaps consider it not to be a good use of my time to learn Esperanto in great depth. For sure, there may be useful transferable concepts, but would I need to be fluent in Esperanto to benefit from them?

Likewise, is it really worth devoting a whole semester to teaching UML to students who may never see it again after they graduate? Do they need to be fluent in UML to learn its transferable lessons? Or would a few hours on class diagrams and sequence diagrams serve that purpose? Do we need to know the UML meta-meta-model to appreciate the difference between composition and aggregation, or inheritance and implementation?

Do I need to understand UML stereotypes to explain the class structure of my Python program, or the component structure of my service-oriented architecture? Or would boxes and arrows suffice?

If the goal of UML is to be understood (and to understand ourselves), then there are many ways beyond UML. How much of the 794-page UML 2.5.1 specification do I need to know to achieve that goal?

And why have they still not added Venn diagrams, dammit?! (So useful!)

So here’s my point: after 38 years programming – 28 of them for money – I know what skills I’ve found most essential to my work. Visualisation – drawing pictures – is definitely in that mix. But UML itself is a footnote. It beggars belief how many students graduate having devoted a lot of time to learning an almost-dead language but somehow didn’t find time to learn to write good unit tests or to use version control or to apply basic software design principles. (No, lecturers, comments are not a design principle.)

Some may argue that such skills are practical, technology-specific and therefore vocational. I disagree. There’s JUnit. And then there’s unit testing. I apply the same ideas about test structure, about test code design, about test organisation and optimisation in RSpec, in Mocha, in xUnit.net etc.

And UML is a technology. It’s an industry standard, maintained by an industry body. There are tools that apply – some more loosely than others – the standard, just like browsers apply W3C standards. Visual modeling with UML is every bit as vocational as unit testing with NUnit, or version control with Git. There’s an idea, and then that idea is applied with a technology.

You may now start throwing the furniture around. Message ends.

Introduction to Test-Driven Development Video Series

Over the last month, I’ve been recording screencasts introducing folks to the key ideas in TDD.

Each series covers 6 topics over 7 videos, with practical demonstrations instead of slides – just the way I like it.

They’re available in the four most used programming languages today:

Of course, like riding a bike, you won’t master TDD just by watching videos. You can only learn TDD by doing it.

On our 2-day TDD training workshop, you’ll get practical, hands-on experience applying these ideas with real-time guidance from me.

New Refactoring: Replace Conditional With Lambda Map

There are tried-and-tested routes to replacing conditional statements that effectively map identities (e.g., if (x === “UK shipping”)) to polymorphic implementations of what to do when an identity is determined – e.g., create a Shipping interface, have a UKShipping class that knows what to do in that situation, and pass an instance into the method.
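That polymorphic route might look something like this in JavaScript (the class and function names here are illustrative, not from any real codebase):

```javascript
// Each region implements the same interface; the caller is handed the
// right object up front, so no string comparison happens at the point
// of use. Prices here are made up for the sketch.
class UKShipping {
  cost(order) { return order.weight < 1 ? 2.99 : 4.99; }
}
class EUShipping {
  cost(order) { return order.weight < 1 ? 4.99 : 6.99; }
}

// No if (x === "UK shipping") here – the shipping object decides.
function postageLabel(order, shipping) {
  return `Postage: ${shipping.cost(order)}`;
}

console.log(postageLabel({ weight: 2 }, new UKShipping())); // Postage: 4.99
```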

But sometimes I find, when the literal that represents the identity has to be preserved (for example, if it’s part of an API) that mapping identities to actions works better.

In these instances, I have found myself converting my conditionals to a map or dictionary instead. Each identity is mapped to a lambda expression that can be looked up and then executed.

Take this example of a function that scores games of Rock, Paper, Scissors in JavaScript:

const rock = "rock";
const scissors = "scissors";
const paper = "paper";

function play(game, player1, player2) {
    let score = {...game.score};
    let winner = undefined;
    if (player1 === rock) {
        if (player2 === scissors) {
            score.player1++;
        } else {
            if (player2 === paper) {
                score.player2++;
            }
        }
    }
    if (player1 === paper) {
        if (player2 === rock) {
            score.player1++;
        } else {
            if (player2 === scissors) {
                score.player2++;
            }
        }
    }
    if (player1 === scissors) {
        if (player2 === paper) {
            score.player1++;
        } else {
            if (player2 === rock) {
                score.player2++;
            }
        }
    }
    if (score.player1 === 2) {
        winner = 1;
    }
    if (score.player2 === 2) {
        winner = 2;
    }
    return {winner: winner, score: score};
}

module.exports = {play, rock, paper, scissors};


The literals “rock”, “paper” and “scissors” have to be preserved because we have a web service that accepts those parameter values from remote players. (This is very similar to the Mars Rover kata in that respect, where R, L, F and B are inputs.)

First, I’d remove some obvious inner duplication in each outer IF statement.

const rock = "rock";
const scissors = "scissors";
const paper = "paper";

function play(game, player1, player2) {
    let score = {...game.score};
    let winner = undefined;
    if (player1 === rock) {
        playHand(player2, score, scissors, paper);
    }
    if (player1 === paper) {
        playHand(player2, score, rock, scissors);
    }
    if (player1 === scissors) {
        playHand(player2, score, paper, rock);
    }
    if (score.player1 === 2) {
        winner = 1;
    }
    if (score.player2 === 2) {
        winner = 2;
    }
    return {winner: winner, score: score};

    function playHand(opponent, score, beats, losesTo) {
        if (opponent === beats) {
            score.player1++;
        } else {
            if (opponent === losesTo) {
                score.player2++;
            }
        }
    }
}

module.exports = {play, rock, paper, scissors};


Now let’s replace those 3 IF statements – which map actions to “rock”, “paper” and “scissors” – with an actual map.

function play(game, player1, player2) {
    let score = {...game.score};
    let winner = undefined;
    const hands = {
        rock: () => playHand(player2, score, scissors, paper),
        paper: () => playHand(player2, score, rock, scissors),
        scissors: () => playHand(player2, score, paper, rock)
    };
    hands[player1].call();
    if (score.player1 === 2) {
        winner = 1;
    }
    if (score.player2 === 2) {
        winner = 2;
    }
    return {winner: winner, score: score};
}


If we take a look inside playHand(), we have an inner conditional.

function playHand(opponent, score, beats, losesTo) {
    if (opponent === beats) {
        score.player1++;
    } else {
        if (opponent === losesTo) {
            score.player2++;
        }
    }
}


This could also be replaced with a lambda map.

function playHand(opponent, score, beats, losesTo, draws) {
    const opponents = {
        [beats]: () => score.player1++,
        [losesTo]: () => score.player2++,
        [draws]: () => {}
    };
    opponents[opponent].call();
}


Note that I had to add a draws identity so that the mapping is complete. I’d also have to do this for any default case in a conditional (I suppose draws is the default case here, as nothing happens when there’s a draw – an empty lambda).
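With that signature change, the callers would need updating too – each hand passing itself as the draws identity. A sketch (the playRound wrapper here is hypothetical, standing in for the hands map inside play()):

```javascript
const rock = "rock";
const paper = "paper";
const scissors = "scissors";

// playHand with the extra draws parameter, as above.
function playHand(opponent, score, beats, losesTo, draws) {
  const opponents = {
    [beats]: () => score.player1++,
    [losesTo]: () => score.player2++,
    [draws]: () => {}
  };
  opponents[opponent].call();
}

// Each hand passes itself as the draws identity.
function playRound(player1, player2, score) {
  const hands = {
    rock: () => playHand(player2, score, scissors, paper, rock),
    paper: () => playHand(player2, score, rock, scissors, paper),
    scissors: () => playHand(player2, score, paper, rock, scissors)
  };
  hands[player1].call();
}

const score = { player1: 0, player2: 0 };
playRound(rock, scissors, score); // rock beats scissors
playRound(paper, paper, score);   // a draw – the empty lambda runs
console.log(score); // { player1: 1, player2: 0 }
```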