Proactive vs Reactive Learning (or “Why Your Company Only Does Easy Things”)

Imagine you lead an orchestra. The word comes down from on high: “Tonight, our audience demands you play Rachmaninoff’s Piano Concerto No. 3. The future of the orchestra depends on it. We’re all counting on you.”

But your orchestra has no pianist. Nobody in your orchestra has even touched a piano, let alone taken lessons. You turn to the lead violinist: “Quick. Google ‘how to play piano?’”

Now, of course, there’s absolutely no chance that any human being could learn to play piano to that standard in a day. Or a week. Or a month. It takes a lot of time and a lot of work to get to that level. Years.

The inevitable result is that the orchestra will not be playing Rachmaninoff’s Piano Concerto No. 3 that evening. At least, not with the piano part. And that’s kind of essential to a piano concerto.

I see tech organisations in this situation on a regular basis. They discover a need that they’re simply nowhere near competent enough to deal with – something completely beyond the range of their current capabilities. “The users demand that the software learns from their interactions and anticipates their needs. Quick. Google ‘how to train a machine?'” “The customer demands a custom query language. Quick. Google ‘how to write a compiler?'” And so on.

Have we become so used to looking stuff up on Stack Overflow, I wonder, that we’ve forgotten that some of this stuff is hard? Some of these things take a long time to learn? Not everything is as easy as finding out what that error message means, or how to install a testing framework using NPM?

The latter style of learning is what some people call reactive. “I need to know this thing now, because it is currently impeding my progress.” And software development involves a lot of reactive learning. You do need to be rather good at looking stuff up to get through the typical working day, because there are just so, so many little details to remember.

Here’s the thing, though: reactive learning only really works for little details – things that are easy to understand and can be learned quickly. If the thing that impedes our progress is that we require a road bridge to be built to get us over that canyon, then that’s where we see the limits of reactive learning. It can remove small obstacles. Big problems that take a long time to solve require a different style of learning that’s much more proactive.

If your orchestra only plays the instruments needed for the exact pieces they’ve played up to that point, then there’s an increased likelihood that there’ll be gaps. If a dev team only has the exact skill set for the work they’ve done up to that point, there are likewise very likely to be gaps.

It’s hard, of course, to anticipate every possible future need and prepare months or years in advance for every eventuality. But some orgs have a greater adaptive capacity than others because their people are skilled beyond today’s specific requirements. That is to say, they’re better at solving problems because they have more ways of solving problems – more strings to their bow (or more keys to their piano, if you like).

Compiler design might sound like the kind of esoteric computer-sciency thing that’s unlikely to arise as a business need. But think of it this way: what’s our code built on? Is the structure of programs not the domain model we work in every day? While I’ve never designed a compiler, I have had numerous occasions when – to write a tool that makes my job easier – it’s been very useful to understand that model. Programmers who understand what programs are made of tend to be more effective at reasoning about code, and better at writing code that’s about code. We use those tools every day, but all tooling has gaps. I’ve yet to meet a static analysis tool, for example, that had all the rules I’d be interested in applying to code quality.
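To make that a bit more concrete, here’s a minimal sketch in Python of the kind of bespoke rule off-the-shelf tools rarely give you: walking a program’s abstract syntax tree to flag functions that have grown too long. The function names, the file name and the 15-line threshold are all made up for illustration – this isn’t any real tool’s internals.

import ast

MAX_FUNCTION_LINES = 15  # arbitrary threshold, purely for illustration

def long_functions(source: str, max_lines: int = MAX_FUNCTION_LINES):
    """Yield (name, line count) for each function definition that exceeds max_lines."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                yield node.name, length

# Example: run the rule over a (hypothetical) source file and report offenders
with open("my_module.py") as f:
    for name, length in long_functions(f.read()):
        print(f"{name} is {length} lines long - consider extracting smaller functions")

A dozen lines of “code about code” like this is the seed of exactly the kind of custom inspection rule I mean – and you only get to write it if someone on the team understands what programs are made of.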

The most effective dev teams I’ve come into contact with have invested in custom tooling to automate repetitive donkey work at the code face. Some of them end up being open-sourced, and you may be using them yourself today. How did you think our test runner’s unit test discovery worked?
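In case that sounds like magic: here’s a toy sketch in Python of the kind of reflection a test runner’s discovery phase relies on – finding and running every function in a module whose name starts with “test_”. Nothing here is any real framework’s internals; it just illustrates the idea.

import inspect
import types

def discover_tests(module: types.ModuleType):
    """Find every function in the module whose name starts with 'test_'."""
    return [func for name, func in inspect.getmembers(module, inspect.isfunction)
            if name.startswith("test_")]

def run(module: types.ModuleType) -> None:
    """Run each discovered test, reporting a simple pass/fail per test."""
    for test in discover_tests(module):
        try:
            test()
            print(f"PASS {test.__name__}")
        except AssertionError as failure:
            print(f"FAIL {test.__name__}: {failure}")

# Usage (my_tests is a hypothetical module of test functions):
# import my_tests
# run(my_tests)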

(Pictured: some books about stuff I had no immediate need to know, but read anyway.)

Now, we could of course hire a pianist for our orchestra – one who already knows Rachmaninoff’s Piano Concerto No. 3. But guess what? It turns out pianists of that calibre are really difficult to find – probably because it takes years and years to get to that standard. (No shortage of people who manage pianists, of course.) And now you remember how you voted against all those “superfluous” music education programmes. If only you could have known that one day you might need a concert pianist. If only someone had warned you!

Well, here I am – warning you. Not all problems are easy. Some things take a long time to learn, and those things may crop up. And while nobody can guarantee that they will, this is essentially a numbers game. What are the odds that we have the capability – or can buy in the capability at short notice (which opens the lid on a can of worms I call “proactive recruitment”) – to solve this problem?

Most of the time, organisations end up walking away from the hard problems. They are restricted to the things most programmers can solve. This is not a good way to build competitive advantage, any more than sticking to works that don’t have a piano part is a good way to run a successful orchestra.

Enlightened organisations actively invest in developing capabilities they don’t see any immediate need for. Yes, they’re speculating. And speculation can be wasteful, just as any uncertain endeavour can be wasteful. But there are usually signposts in our industry about what might be coming a year from now, a decade from now, and beyond.

And there are trends – the continued increase in available computing power is one good example. Look at what would be really useful but is currently too computationally expensive. In 1995, we saw continuous build and test cycles as highly desirable, but most teams still ran them overnight, because the hardware was about 1,000 times slower than today’s. Now – as I predicted it would over a decade ago – it’s coming into vogue: more and more of us are building and testing (and even automatically inspecting) our code continuously in the background as we type it. That was totally foreseeable. As is the rise of Continuous Inspection as a more mainstream discipline off the back of it.

There are countless examples of long-established and hugely successful businesses being caught with their pants down by Moore’s Law.

Although digital photography was by no means a new invention, its sudden commercial viability over chemical photography some 20 years ago nearly finished Kodak overnight. They had not speculated. They had not invested in digital photography capability. They’d been too busy being the market leader in film.

And then there was the meteoric rise of guitar amp simulators – a technology long sneered at (but begrudgingly used) by serious players, and less serious players like myself. The early generations of virtual amps didn’t sound great, and didn’t feel like playing through a real amp with real tubes. (Gotta love them tubes!) But – damn – they were convenient.

The nut they couldn’t crack was making it sound like it was being recorded through a real speaker cabinet with a real microphone. There was a potential solution: convolution, a mathematical process that combines two signals. The raw output of a guitar amp (real or virtual) can be convolved with an “impulse response” – a short audio sample of a cabinet and microphone, like the brief reverberation in a room after you click your fingers – to give a strikingly convincing approximation of what that amp output would sound like through those speakers, recorded with that microphone, in that space. Now, suddenly, virtual guitar amps were convenient and sounded good.
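If you’re curious what that computation looks like, here’s a minimal sketch in Python, assuming NumPy and SciPy. The “amp output” and impulse response below are made-up synthetic data, just to show the shape of the idea – a real impulse response would be a short recorded sample of an actual cabinet and microphone.

import numpy as np
from scipy.signal import fftconvolve

def apply_cabinet(dry: np.ndarray, impulse_response: np.ndarray) -> np.ndarray:
    """Convolve the dry amp signal with a cabinet/mic impulse response."""
    wet = fftconvolve(dry, impulse_response)[: len(dry)]  # trim the tail back to the input length
    return wet / max(np.max(np.abs(wet)), 1e-12)          # normalise to avoid clipping

# Toy usage with synthetic data
dry = np.random.randn(48000)                # one second of "amp output" at 48kHz
ir = np.exp(-np.linspace(0.0, 8.0, 2048))   # made-up exponentially decaying impulse response
wet = apply_cabinet(dry, ir)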

But up to that point, convolution had been too computationally expensive to be viable for playing and recording on commercially available hardware. And then, suddenly, it wasn’t. Cue a mad dash by established amp manufacturers to catch up. And, to be fair to them, their virtual amp offerings are pretty spiffy these days. Was this on their radar, I wonder? Did the managers and the engineers see virtual amp technology looming on the horizon and proactively invest in developing that capability in exactly the way Kodak didn’t? Not before virtual amp leaders like Line 6 had taken a chunk of their market share, I suspect. And now convolution is everywhere. So many choices, so many market players old and new.

You see, it’s all well and good making hay while the sun shines. But when the weather turns, don’t end up being the ones who didn’t think to invest in an umbrella.

