Communicating Through Code – Part 2

A wee while ago, I threw out an idea for a coding exercise designed to challenge our skills at writing code that tells a story.

In the basic version, you’re asked to design an API that would enable you to tell the story of Humpty Dumpty, using modules/classes, functions/methods, fields, variables, parameters and so on, each at most one word long. (It also served as a useful exercise in working backwards in your editor.)

All fine holiday fun. But my Humpty Dumpty code didn’t actually do anything.

The second part of the challenge is to repeat this exercise with code that actually does real work.

I’m using my faithful video library example. When members donate videos to the library, a bunch of stuff needs to happen.

I wrote it down in plain English first:

  • Add donated video to catalogue

  • Register one rental copy of the donated video

  • Award the donor 10 priority points

  • Alert members about the donated video

Then I repeated the process I applied to Humpty Dumpty, except now the code actually does what it says it does. This time, I set myself a limit of two words per identifier, and allowed myself to ignore “noise” words like “a” and “the”.

function VideoLibrary(catalogue, members, add, {register, of}, award, alert) {

    return (donatedVideo, donor) => {
        // add donated video to catalogue
        // register one rental copy of the donated video
        // award the donor 10 priority points
        // alert members about the donated video

        add(donatedVideo).to(catalogue);
        register(1).rentalCopy(of(donatedVideo));
        award(donor, 10).priorityPoints();
        alert(members).about(donatedVideo);
    };
}

Then I repeated the exercise, only this time I started out by writing the raw implementation code that does the work:

function VideoLibrary(catalogue, members) {

    return (donatedVideo, donor) => {
        // add donated video to catalogue
        // register one rental copy of the donated video
        // award the donor 10 priority points
        // alert members about the donated video

        catalogue.push(donatedVideo);
        donatedVideo.copies += 1;
        donor.priorityPoints += 10;

        const nodemailer = require('nodemailer');

        const recipients = members.map((member) => { return member.email; }).join(',');

        const email = {
            from: 'alerts@myvideolibrary.com',
            to: recipients,
            subject: 'A new movie has been added to the library - ' + donatedVideo.title,
            text: 'Dear Member, Just to let you know that '
                + donatedVideo.title +
                '(' + donatedVideo.year +
                ') has been added to the library and is now available to borrow'
        };

        var transporter = nodemailer.createTransport({
            service: 'gmail',
            auth: {  // use your gmail user name & password. You may need to set your account to allow less secure apps
                user: '###youremail###@gmail.com',
                pass: '###yourpassword###'
            }
        });

        transporter.sendMail(email, function(error, info){
            if (error) {
                console.log(error);
            } else {
                console.log('Email sent: ' + info.response);
            }
        });

    };
}

And then refactored it to make it read like the description. I found this especially challenging, but I was delighted at how much simpler and cleaner the end result turned out – making effective use of closures to encapsulate the details. (Check out the full refactored code at https://github.com/jasongorman/text-to-code-refactoring .)

Now, I’ll remind you that this is just an exercise, and for sure the finished design isn’t without its problems. Here we’re specifically ticking two design boxes: code that works, and code that clearly communicates its intent.

As I take more passes at this, the exercise gets more challenging. For example, can I make the dependencies swappable? I managed to do that, but VideoLibrary now has many parameters. I did manage to preserve the readability of the code in the donate() function, though. So that’s another tick.

Moving down the call stack, I repeated this process for the code that sends email alerts to members about donated videos. So the technique can be applied at multiple levels of abstraction.

function alert(members) {
    return {
        about: (donatedVideo) => {
            send(newAlert(toAll(members)).about(donatedVideo), by(nodemailer));
        }
    }
}

One final little challenge I set myself this morning was to make all the data immutable. This proved tougher than I expected, and I couldn’t find a way to do it that preserved the readability of the code that tells the story.

function VideoLibrary(catalogue, members, add, {register, of}, award, alert) {

    return (donatedVideo, donor) => {
        // add donated video to catalogue
        // register one rental copy of the donated video
        // award the donor 10 priority points
        // alert members about the donated video

        const rentableVideo = register(1).rentalCopy(of(donatedVideo));
        const updatedCatalogue = add(rentableVideo).to(catalogue);
        const rewardedDonor = award(donor, 10).priorityPoints();
        alert(members).about(rentableVideo);

        return {
            video: rentableVideo,
            donor: rewardedDonor,
            catalogue: updatedCatalogue,
            members: [...members.filter((m) => m.email !== donor.email), rewardedDonor]
        };
    };
}

Disappointing. But it served as a neat illustration of the potential cost of immutability in increased complexity and reduced readability. There’s no free lunches!

I’ll give it some thought and try that refactoring again. Maybe there’s a way to have our cake and eat it?

Anyhoo, I recommend trying this exercise with your own code. Write down in plain English (or whatever language you speak) what the code does, then see if you can translate it into fluent code that clearly tells that story. As your design emerges, see if you can apply that process on multiple layers. And see if you can produce a design that both tells the story and ticks other design boxes at the same time.

Constraints Can Be A Good Thing

Sometimes we get it too easy.

I often think of the experiences I’ve had recording music using software plug-ins instead of real hardware.

Time was that recording music was hard. My home recording set-up was a Tascam 8-track tape recorder, three guitars, a choice of two amplifiers, a bunch of guitar pedals, a microphone for recording guitars (you can’t just plug your amp directly into a tape recorder, you have to mic it) and occasional bad vocals, a bass guitar, a drum machine and a Roland synth module controlled by MIDI from my 486 PC.

Recording to tape takes patience. And, while 8 tracks might sound like a lot, it’s actually very limiting. One track is vocals. For rock and metal, you record rhythm guitars twice for left and right to get a stereo spread. Bass guitar makes four tracks. Lead guitar makes five and six, if you want harmonies. Drums take tracks seven and eight for stereo. Then you “bounce” the guitars down to two tracks – basically output a stereo guitar mix and record L and R on just two tracks – to make space for stereo keyboards at the end.

Typically I needed several takes for each guitar, bass and vocal part. And between each take, I had to rewind the tape to the exact point where I needed to “punch in” again. If, during mixing, I decided I didn’t like the guitar sound, I had to record it all over again.

Fast forward to the 2010s, and I’ve been doing it all using software. The guitars are real, but the amps are digital – often recorded using plug-ins that simulate guitar amps, speaker cabinets and microphones – and I can choose from hundreds of amp, cabinet and microphone models. If I don’t like the guitar sound, I can just change it. No need to re-record that guitar part.


And I can choose from dozens of software synthesizers, offering thousands upon thousands of potential sounds. I can have as many virtual Minimoogs or Roland Jupiter 8s as I like. My i7 laptop can run 10-20 of them simultaneously.


My drums are now created using a powerful multi-sampled virtual instrument that allows me to choose from dozens of different sampled kits, or create my own custom kits, and I can tweak the recording parameters and apply almost limitless effects like compression and EQ to my heart’s content. And, again, if I don’t like the drum sound, I don’t need to wind back a tape and record them all over again.


My Digital Audio Workstation lets me record as many tracks – mono and stereo – as my computer can handle (typically, more than 30 in a final mix), and I can route and re-route the audio as many times as I like.

Because I use software plug-ins for effects like echo/delay, reverb, chorus, EQ, compression and more, I have almost limitless production options for every track.

And, most mind-bending of all, I can record take after take after take with no rewinding, and the audio quality never degrades.

Digital is aces!

Except… About a year ago I invested in my first hardware synthesizer in a very long time. It’s a Korg Minilogue – a proper analog synthesizer (the first analog synth I’ve owned), and to operate it you have to press buttons and twiddle knobs. Unlike my Roland digital synth, it’s not “multi-timbral”. It makes one sound at a time, and can play up to 4 notes at once. My Roland JV-2080 could make 16 different sounds at a time, and play up to 64 notes simultaneously.


Compared to the software – and the digital synth hardware – the Minilogue is very limiting. It’s significantly harder to record music with the Minilogue than with a software synthesizer.

But when I listen to the first quick demo track I created using the real analog synth alongside tracks I created using software synths, I can’t help noticing that they have a very different quality.

Those constraints led me to something new – something simpler, into which I had to put more thought about the design of each sound, about the structure of the music, about the real hardware guitar effects I used on each “patch”.

I’m not necessarily saying the end results are better than using software synths. I’m saying that the constraints of the Korg Minilogue led me down a different path. It changed the music.

I’ve been inspired to experiment more with hardware instruments, maybe even revisit recording guitars through mic’d up tube amps again and see how that changes the music, too. (And this is a great time to do that, as low-priced analog synths are making a comeback right now.)

All this got me thinking about the tools I use to create software. It sometimes feels like we’ve got a bit too much choice, made things a little too easy. And all that choice and ease has led us to more complex products: multiple tech stacks, written in multiple languages, with lots of external dependencies, because our package managers make taking them on so easy now. And so on.

Would going back to simple, limited editors, homogenous platforms, limited hardware and so on change the software we tend to create?

I think it may be worth exploring.


Code Craft is Seat Belts for Programmers

Every so often we all get a good laugh when some unfortunate new hire or intern at a major tech company accidentally “deletes Google” on their first day. It’s easy to snigger (because, of course, none of us has ever messed up like that).

The fact is, though, that pointing and laughing when tech professionals make mistakes doesn’t stop mistakes getting made. It can also breed a toxic work culture, where people learn to avoid mistakes by not taking risks. Not taking risks is anathema to innovation, where – by definition – we’re trying stuff we’ve never done before. Want to stifle innovation where you work? Pointing and laughing is a great way to get there.

One of the things I like most about code craft is how it can promote a culture of safety to try new things and take risks.

A suite of good, fast-running unit tests, for example, makes it easier to spot our boo-boos sooner, so we can un-boo-boo them quickly and without attracting attention.

Continuous Integration, with its frequent commits to version control, offers a level of undo-ability that makes it easier and safer to experiment, safe in the knowledge that if we mess things up, we can get back to the last version that worked with a simple hard reset.

The micro-cycles of refactoring mean we never stray far from the path of working code. Combine that with fast-running tests and frequent commits, and ambitious and improbable re-architecting of – say – legacy code becomes a sequence of mundane, undo-able and safe micro-rewrites.

And I can’t help feeling – when I see some poor sod getting Twitter heat for screwing up a system in production – that what was really at fault was the deficiency in their delivery pipeline that allowed it to happen. The organisation messed up.

Software development’s a learning process. Think about when young children – or people of any age – first learn to use a computer. The fear of “breaking it” often discourages them from trying new things, and this hampers their learning. Never underestimate just how much great innovation happens when someone says “I wonder what happens if I do this…” Remove that fear by fostering a culture of “what if…?” shielded by systems that forgive.

Code craft is seat belts for programmers.

Code Craft is More Throws Of The Dice

On the occasions founders ask me about the business case for code craft practices like unit testing, Continuous Integration and refactoring, we get to a crunch moment: will this guarantee success for my business?

Honestly? No. Nobody can guarantee that.

Unit testing can’t guarantee that. Test-Driven Development can’t guarantee that. Refactoring can’t guarantee it. Automated builds can’t guarantee it. Microservices can’t. The Cloud can’t. Event sourcing can’t. NoSQL can’t. Lean can’t. Scrum can’t. Kanban can’t. Agile can’t. Nothing can.

And that is the whole point of code craft. In the crap game of technology, every release of our product or system is almost certainly not a winning throw of the dice. You’re going to need to throw again. And again. And again. And again.

What code craft offers is more throws of the dice. It’s a very simple value proposition. Releasing working software sooner, more often and for longer improves your chances of hitting the jackpot. More so than any other discipline in software development.

Visualising Architecture – London, Sept 4-5

Just wanted to quickly announce the first of a new series of hands-on 2-day workshops on software architecture aimed at Agile teams.

Visualising Architecture teaches diagramming techniques that developers can use to help them better understand and communicate key architectural ideas – essential to help quickly and effectively build a shared understanding among team members.

The public course will be in London W1A on September 4-5. If you’re looking to build your Architect Fu, this is a great investment in your or your dev team’s software design skills.

You can find out more and book your place by visiting the Eventbrite page.

Software Architecture (for Architectless Teams)

Since the rise of Agile Software Development post-2001, it’s become more and more common for development teams to work without a dedicated software architect. We, quite rightly, eschewed the evils of Big Design Up-Front (BDUF) after the Big Architecture excesses of the 1990s.

As someone who worked in senior architecture roles, I can attest that much of the work I was asked to do – e.g., writing architecture documents – added very little value to the end product. Teams rarely read them, let alone followed their blueprints. The resulting software turned out the way the developers wrote it, regardless of the architect’s input. Which is why I returned to hands-on development, realising it was the most direct way to have an influence on the actual software design.

But I can’t help feeling we threw the baby out with the bathwater. Too many teams seem to lack a vision of the bigger picture. They don’t see how the pieces fit together, or how the system fits in the real world of businesses or homes or cars or hospitals etc. The design emerges through the uncoordinated actions of individual developers, pairs and teams (and teams of teams) with no real oversight and nobody at the steering wheel.

Big decisions – decisions that are hard to reverse – get made on the fly with little thought given to the long-term consequences. For example, thanks to the rise of build and package management solutions like Maven, NuGet and NPM, developers these days can take on dependencies on third-party code without giving it much thought. We’ve seen where unmanaged dependencies can lead.

Most damaging, though, is the inability of many Agile teams to effectively collaborate on the design, making the bigger decisions together in response to a shared understanding of the architectural requirements. I’m reminded of a team I worked on who went off in their pairs after each planning meeting and came up with their own architectures for their specific user stories. The software quickly grew out of control, with no real conceptual integrity to the design. It became three architectures in one code base. We had three distinct MVC styles in the same software at the same time, and even three customer tables in the database. A little time around a whiteboard – or a bit of mob programming every now and again – could have minimised this divergence.

I firmly believe that software architecture is due for a comeback. I doubt it will return in the Big Architecture form we rejected after the 90s – and I’ll work hard to make sure it doesn’t. I doubt that such things as “software architects” will make a grand return, either. Most Agile teams will still work without a dedicated architect. Which is fine by me.

But teams without architects still need architecture. They need to be discussing, planning, visualising, evaluating and evolving the Big Picture design of their software as a cohesive unit, not as a bunch of people all doing their own thing.

Through Codemanship, I’m going to be doing my bit to bring back architecture, with a new training offering aimed at Agile teams who don’t have a dedicated person in that role.

Rolling out later this summer, it will focus on key areas of software architecture:

  • Visualising Architecture (yes, the boxes and arrows are back!)
  • Architectural Requirements (runtime and development time requirements of design)
  • Collaborative Design Processes (how do we go from a user story to a software design, making key decisions as a team?)
  • Contextual Architecture (how does our software fit into the real world?)
  • Strategic Architecture (how does our software solve real business problems?)
  • Evaluating Architecture (how do we know if it’s working?)
  • Evolving Architecture (sure, it works now, but what about tomorrow?)
  • Architectural Principles (of maintainability, scalability, security etc)
  • Architectural Patterns (common styles of software architecture – SOA, Monolith, Event-Driven, Pipes-and-filters, etc)

That’s a lot of ground to cover. Certainly a 2-3-day workshop won’t be anywhere near enough time to do it justice, so I’m devising a series of 2-day workshops that will each focus on a specific area:

  1. Visualising
  2. Planning
  3. Evolving
  4. Principles & Patterns

All will place the emphasis on how architectural activities can work as part of an Agile development process, and all will assume that the team shares the responsibility for architecture.

Keep your eyes on the @codemanship Twitter account and this blog for news.


Wheels Within Wheels Within Wheels

Much is made of the cycles-within-cycles of Test-Driven Development.

At the core, we do micro-iterations with small, single-question unit tests to drive out the details of our internal design.

Surrounding those micro-cycles are the feedback loops provided by customer tests, which may require us to pass multiple unit tests to complete end-to-end.

User stories typically come with multiple customer tests – happy paths and edge cases – providing us with bigger cycles around our customer test feedback loops.

Orbiting those are release loops, where we bundle a set of user stories and await feedback from end users in the real world (or a simulated approximation of it for test purposes).

What’s not discussed, though, are the test criteria for those release loops. If we’ve already established through customer testing that we delivered what we agreed we would in that release, what’s left to test for?

The minority of us who practice development driven by business goals may know the answer: we test to see if what we released achieves the goal(s) of that release.

[Diagram: feedback loops]

This is the outer feedback loop – the strategic feedback loop – that most dev teams are missing. If we’re creating software with a purpose, it stands to reason that at some point we must test for its fitness for that purpose. Does it do the job it was designed to do?

When explaining strategic feedback loops, I often use the example of a start-up business that delivers parcels throughout the London area. They have a fleet of delivery vans that go out every day across the city, delivering parcels received into their depot overnight to a list of addresses.

Delivery costs form the bulk of their overheads. They rent the vans. They charge them up with electrical power (it’s an all-electric fleet – green FTW!). They pay the drivers. And so on. It all adds up.

Business is good, and their customer base is growing rapidly. Do they rent more vans? Do they hire more drivers? Do they do longer routes, with longer driver hours, more recharging return-to-base trips, and higher energy bills? Or could the same number of drivers, in the same number of vans, deliver more parcels with the same mileage as before? Could their deliveries be better optimised?

Someone analyses the routes drivers have been taking, and theorises that they could have delivered the same parcels in less time, driving fewer miles. They believe it could be done 35% more efficiently just by optimising the routes.

Importantly, using historical delivery and route data, they show on paper that an algorithm they have in mind would have saved 37% on miles and driver-hours. I, for one, would think twice about setting out to build a software system that implements unproven logic.

But the on-paper execution of it takes far too long. So they hatch a plan for a software system that selects the optimum delivery routes every day using this algorithm.

Taking route optimisation as the headline goal, the developers produce a first release in 2 weeks that takes in delivery addresses from an existing data source and – as a command line utility initially – produces optimised routes in simple text files to be emailed to the drivers’ smartphones. It’s not pretty, and not a long-term solution by any means. But the core logic is in there, it’s been thoroughly unit and customer tested, and it seems to work.

While the software developers move on to thinking about how the system could be made more user-friendly with a graphical UI (e.g., a smartphone app), the team – which includes the customer – monitor deliveries for the next couple of weeks very closely. How long are the routes taking? How many miles are vans driving? How much energy is being used on each route? How many recharging pit-stops are drivers making each day?

This is the strategic feedback loop: have we solved the problem? If we haven’t, we need to go around again and tweak the solution (or maybe even scrap it and try something else, if we’re so far off the target that we see no value in continuing down that avenue).

This is my definition of “done”; we keep iterating until we hit the target, learning lessons with each release and getting it progressively less wrong.

Then we move on to the next business goal.