Code Craft: Part I – Why We Need Code Craft

I started programming many years ago at the beginning of the home computing boom of the 1980s. Computers back then had hundreds of thousands of times less memory and processing power than your smartphone does today, so the programs we wrote for our ZX Spectrums, Commodore 64s and Acorn Micros were necessarily small.

The Commodore 64 (source: Wikipedia)

Under the limitations of having just a few kilobytes of memory for our code, programming was a pretty manageable affair. We could fit the whole program in our heads, so to speak.

When Dad brought home the IBM-compatible PC he’d been using in the office – it had been replaced with a newer model – all that changed. Memory leapt from the 64K of our C64 to 2MB, and disk space added a further 20MB for program files.

Turbo C was a popular programming tool for the IBM PC (source: https://turbo-c.soft32.com/)

Like a plant that gets re-potted in a much larger container, my code had a sudden growth spurt – luxuriating in the seemingly unlimited resources of the PC, and in the vastly superior programming languages and tools that were available for it.

In code, size is everything. A simple Commodore 64 game written in hundreds of lines of BASIC is a very different proposition to a game with thousands or tens of thousands of lines of code. That won’t fit inside your head. As the code grows, you quickly hit the limits of brain power and human memory.

Debugging large programs is really hard. There’s just so much more that can go wrong, and so many more places to look for the sources of problems. It’s the proverbial “needle in a haystack”. Debugging a C64 program was like looking for a needle in a matchbox.

The time taken to sufficiently test a 300-line program is peanuts compared to the testing you need to do for a 30,000-line program. Minutes turn into days or even weeks of clicking buttons to see if the program does what you expect it to across hundreds of functions, each potentially with dozens of different scenarios to consider.

The time it takes to test our programs is a huge factor in how easy it is to change the code. Studies show that the longer a bug goes undiscovered, the more it will cost to fix it. If I make a boo-boo in the code and spot it straight away, it’s a moment’s work to correct it. If that boo-boo makes it on to end users’ computers, fixing it is a much bigger deal. It’s all about the size of the loop we have to go through to schedule time to do the fix, examine the code and find the cause of the bug, fix the bug, re-test the program and then release it back to the end users with the fix in place. If re-testing takes weeks, then that’s one expensive bug fix!

The cost of correcting defects rises the later they’re found – Boehm and Basili (source: https://slcontrols.com/justify-early-extra-investment-reduce-late-budget-overruns/)

We’re not just talking about coding errors, either. The most common kind of program bug – and one of the most expensive to fix – is when we’ve written the wrong code. That is, we misunderstood – or just guessed at – what the end users wanted to do with the program and built the wrong thing. In my early career as a programmer, that happened a lot. Writing code is a very expensive way to find out what the users want, especially if your code is hard to change afterwards.

Changing large programs without breaking something is super hard. In a computer program, all the pieces are connected – directly or indirectly – so changing just one line of code can accidentally break a whole bunch of stuff. And as the program grows, with more and more interconnected parts, it gets harder and harder.

Scott W. Ambler – Traditional Cost of Changing Software (source: http://www.agilemodeling.com/essays/costOfChange.htm)

While I happily made the leap to much more memory, and much faster processors, and much more grown-up programming languages, I can now look back with the benefit of nearly 40 years of coding experience and see that the software I was writing back then was rigid (difficult to change), brittle (easy to break), and buggy as heck.

I was a kid who’d been building tiny houses with Lego, who’d now progressed to building much bigger houses with real bricks and real cement and real timber – all the while thinking that building real houses was just like building Lego houses. NEWS FLASH: it isn’t.

Photo by Shadowman39 (source: https://www.flickr.com/photos/99304214@N05/15756936161)

There’s a lot, lot more that can go wrong building real houses, especially if you expect people to live in them. When you build people-sized houses, you have to think about a bunch of things that you don’t need to think about when you build Lego houses. Steps have to be taken to ensure the structural integrity of a house at all stages of construction and beyond. Otherwise, it can collapse under its own weight, causing expensive damage and even loss of life.

Photo by David Martin (source: https://www.geograph.org.uk/photo/6046008)

Ditto with large computer programs. There’s a bunch of things we need to think about for a 10,000-line or a 100,000-line or a 10,000,000-line program that just aren’t an issue with a 200-line program.

In particular, we have to take steps to avoid having our big program collapse under its own weight, when making a single change causes it to break in unexpected, and potentially dangerous, ways – depending on who’s using it and what they’re using it for.

It was only a few years into my career as a professional programmer that I learned how to write code that was reliable and easy to change without breaking – code that people can safely live in (both the end users, and the other programmers who come to change the code as their users’ needs change).

And here’s the thing: while we’re building our software, we are also living in the code. Like plasterers or carpenters or electricians working inside a house while it’s being built, we too are at risk of the thing collapsing on us. This is something it took me quite a while to appreciate. It’s not just about releasing good code that other people can live in. It’s about keeping the code that way while we’re writing it.

Many studies done on computer programming over the last few decades clearly show that code that’s rigid and brittle and buggy takes longer to get working in the first place.

Less buggy programs take less time to deliver. Graph from ‘Software Quality At Top Speed’ – Steve McConnell

Sure, for those first – easy – few hundred lines of code, we can go fast and don’t need to take a lot of care. But the effect of the size of our growing code hits us sooner than we might think, and soon we’re spending all our time trying to debug it to make it usable enough for a release. Many programming projects end with a “stabilisation” phase, where programmers work long hours debugging thousands and thousands of lines of code, trying to hit a deadline to make the software good enough for people to use.

We call this code-and-fix programming. We write a whole bunch of code as fast as we can. Then we test it and find a tonne of bugs we didn’t realise we’d introduced. Then we spend a whole lot more time trying to remove those bugs. (And very probably introducing all-new bugs while we do that. And around we go.)

Code-and-fix may work on small programs, but it’s often a disaster on larger programs. It’s by far the most time-consuming and expensive way to get programs of any significant size working. And, even after all the debugging, it tends to produce programs that still contain many bugs.

After we’ve released our program, we’re not done yet. It’s in the nature of computer programs that when people use them, they see ways they could be improved. That first release is usually just the start of a long learning process, figuring out what users really need. So they’ll want a second release, and a third, and a fourth, and on and on it tends to go. Programming at scale is a marathon, not a sprint.

If our program code is rigid and brittle, changing it without breaking it is going to be very difficult. On the second go round, it takes even more time and effort to produce a working program. On the third, harder still. Far too many programs hit a barrier where the code is so hard and so risky to change that nobody dares try. At this point we face a difficult decision.

Do we leave it as it is, and the users will just have to struggle on without the changes they need? This is something that holds a lot of businesses back. If you’re Acme Supermarket, and you need to change how your tills work – but you can’t change the software – you have a big problem.

Do we write the program again from scratch? This means that all the program functions your end users currently rely on will have to be rebuilt from the ground up just to get one or two new functions. That’s like building a completely new house just so you can add a porch. Very expensive, and the users will have to wait a long time for their changes.

Do we abandon it altogether, and leave the end users to find their own solutions (or pack up and go home)? I’ve seen businesses do this in extreme cases. The part of their business that relied on legacy software that was hopelessly out of date for their needs, but too expensive to change and too expensive to rewrite, was simply shut down. “We can’t keep up with the competition and their whizzy new software, so we give up.”

And you might think that I’m just talking about computer programs that are written for businesses. But the reality is that, in my own personal programming projects, I’ve faced these decisions because I didn’t take enough care over my code. A 10,000-line program I wrote in C to help with a hobby music project at university had to be abandoned because I was spending all my spare time fixing it. It just got too much. So I ditched not just the code, but the whole project. Months of my life wasted.

Over the 37 years I’ve been coding, I shudder to think how much of my time was wasted debugging code that needn’t have been buggy in the first place. How much time did I waste redoing work I had to throw away because I made infrequent back-ups? How much time did I waste rewriting whole sections of programs because I hadn’t understood what the end users were asking me to build? How much time did I waste trying to understand my own code after I’d come back to it weeks or months later? How much time did I waste re-testing programs by hand?

Most importantly, what else could I have done with all that wasted time?

We’re talking thousands and thousands of hours, probably. Thousands of hours of debugging. Thousands of hours redoing stuff I broke because I didn’t have a recent back-up. Thousands of hours staring at code trying to understand what it does. Thousands of hours going round in circles testing programs by running them and clicking lots of buttons, fixing bugs, and then finding a bunch of new bugs when I re-tested it.

All that changed when I learned some basic code craft after 13 years programming the hard way.

I learned to use version control, checking my code in at least once an hour, so if I hit a dead end, I can easily get back to a working version, losing at most an hour’s work.

I learned to write fast-running automated unit tests, so I could re-test large programs with hundreds of functions in minutes or even seconds, alerting me immediately if I break something.
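
To make that concrete, here’s a minimal sketch of a fast-running unit test using Python’s built-in unittest module. The total_price function and its rules are invented purely for illustration:

```python
import unittest

def total_price(items, discount_rate=0.0):
    """Total an order of (price, quantity) pairs, applying an optional discount."""
    if not 0.0 <= discount_rate < 1.0:
        raise ValueError("discount_rate must be at least 0 and less than 1")
    subtotal = sum(price * quantity for price, quantity in items)
    return round(subtotal * (1.0 - discount_rate), 2)

class TotalPriceTests(unittest.TestCase):
    def test_totals_multiple_items(self):
        self.assertEqual(total_price([(10.0, 2), (5.0, 1)]), 25.0)

    def test_applies_discount(self):
        self.assertEqual(total_price([(100.0, 1)], discount_rate=0.1), 90.0)

    def test_rejects_invalid_discount(self):
        with self.assertRaises(ValueError):
            total_price([(10.0, 1)], discount_rate=1.5)

if __name__ == "__main__":
    unittest.main()
```

Hundreds of tests like these run in seconds, so I find out the moment a change breaks something.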

I learned to write code that people can understand, keeping it as simple as possible and carefully choosing names for functions and data that clearly explained what that piece of code does.
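
As a tiny before-and-after illustration (the function here is made up for the example):

```python
# Before: you have to decode the names to work out what this does.
def calc(d, r):
    return d - (d * r)

# After: the names explain the code's intent at a glance.
def apply_discount(price, discount_rate):
    return price - (price * discount_rate)
```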

I learned to break large programs down into small, manageable and easy-to-understand chunks (modules) that do one distinct job, and how to compose large programs out of these simple pieces so that a change to one doesn’t break a bunch of connected modules.
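
Here’s a minimal sketch of what I mean, with invented classes. Each module does one distinct job, and the program is composed from them, so changing how orders are stored doesn’t break how reports are formatted:

```python
class OrderStore:
    """Knows only how orders are stored and retrieved."""
    def __init__(self):
        self._orders = []

    def add(self, order):
        self._orders.append(order)

    def all_orders(self):
        return list(self._orders)

class ReportFormatter:
    """Knows only how a report is laid out."""
    def format(self, orders):
        return "\n".join(f"{name}: {total:.2f}" for name, total in orders)

class SalesReport:
    """Composes the other modules, knowing nothing of their internals."""
    def __init__(self, store, formatter):
        self._store = store
        self._formatter = formatter

    def generate(self):
        return self._formatter.format(self._store.all_orders())

store = OrderStore()
store.add(("Widgets", 25.0))
print(SalesReport(store, ReportFormatter()).generate())  # Widgets: 25.00
```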

I learned how to change code safely in tiny micro-steps, running my unit tests after each change, to keep the code working at all times.
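
To show the rhythm of those micro-steps, here’s an invented example. In real code I’d change invoice_total in place; the numbered versions below are only so you can see each step side by side:

```python
# Step 0 (tests green): one function doing two jobs.
def invoice_total(items, discount_rate):
    subtotal = sum(price * qty for price, qty in items)
    return subtotal - (subtotal * discount_rate)

# Micro-step 1: extract the subtotal calculation. Run the tests.
def subtotal_of(items):
    return sum(price * qty for price, qty in items)

def invoice_total_step1(items, discount_rate):
    subtotal = subtotal_of(items)
    return subtotal - (subtotal * discount_rate)

# Micro-step 2: extract the discount rule. Run the tests again.
def discounted(amount, discount_rate):
    return amount - (amount * discount_rate)

def invoice_total_step2(items, discount_rate):
    return discounted(subtotal_of(items), discount_rate)
```

If the tests fail after any step, I undo that one tiny change instead of debugging a day’s worth of edits.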

I learned to communicate with end users using examples to pin down exactly what they’re asking for, and to translate those examples directly into tests so I can get immediate feedback on whether the program is doing what the customer wants.
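
For instance, a customer example like “3 widgets at £10 each with the SAVE10 voucher should come to £27” translates directly into a test. The function and voucher rule here are invented to illustrate the idea:

```python
import unittest

def checkout_total(unit_price, quantity, voucher=None):
    total = unit_price * quantity
    if voucher == "SAVE10":
        total *= 0.9  # the customer's example pins down this rule
    return round(total, 2)

class CheckoutSpecs(unittest.TestCase):
    def test_save10_voucher_takes_ten_percent_off(self):
        # "3 widgets at £10 each with SAVE10 should come to £27"
        self.assertEqual(checkout_total(10.0, 3, voucher="SAVE10"), 27.0)

if __name__ == "__main__":
    unittest.main()
```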

I learned to continually test that changes I’ve made to the code work with changes any other programmers have made at the same time, and to test that it not only works on my computer, but on other computers, too.

And I learned to use automated scripts to build and deploy programs so that a change the users ask for in the morning can be running on their computers by lunchtime if necessary. Since the code is always working, and since I and other programmers on my team are continuously merging our changes into our version control repository, this means that our code is always ready to be released.
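
A toy version of such a script might look like this in Python. It assumes the pytest and build packages are installed; a real pipeline would add a deployment step for your own servers:

```python
#!/usr/bin/env python3
"""Toy build pipeline: stop at the first failing step."""
import subprocess
import sys

def run(step, command):
    print(f"== {step} ==")
    if subprocess.run(command).returncode != 0:
        sys.exit(f"{step} failed - stopping the pipeline")

run("Run the unit tests", [sys.executable, "-m", "pytest"])
run("Package the application", [sys.executable, "-m", "build"])
# A real script would now copy the package to the users' machines.
print("Build is good - ready to deploy")
```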

These techniques:

  • Version Control
  • Unit Testing
  • Simple Design
  • Modular Design
  • Refactoring
  • Specification By Example
  • Test-Driven Development
  • Continuous Integration
  • Continuous Delivery

…are the foundations of code craft. Master them, and you’ll waste far less time debugging, less time staring at code trying to understand what it does, less time redoing work because you didn’t make a recent back-up or because you misunderstood the requirements, and less time rewriting entire programs from scratch – leaving far more time for the fun stuff, like inventing, being creative, making your end users happy, and having a life outside programming.

Author: codemanship

Founder of Codemanship Ltd and code craft coach and trainer
