Constraints Can Be A Good Thing

Sometimes we get it too easy.

I often think of the experiences I’ve had recording music using software plug-ins instead of real hardware.

Time was that recording music was hard. My home recording set-up was a Tascam 8-track tape recorder, three guitars, a choice of two amplifiers, a bunch of guitar pedals, a microphone for recording guitars (you can’t just plug your amp directly into a tape recorder, you have to mic it) and occasional bad vocals, a bass guitar, a drum machine and a Roland synth module controlled by MIDI from my 486 PC.

Recording to tape takes patience. And, while 8 tracks might sound like a lot, it’s actually very limiting. One track is vocals. For rock and metal, you record rhythm guitars twice – once for left, once for right – to get a stereo spread. Bass guitar makes four tracks. Lead guitar makes five and six, if you want harmonies. Drums take tracks seven and eight for stereo. Then you “bounce” the guitars down to two tracks – basically, output a stereo guitar mix and record L and R on just two tracks – to make space for stereo keyboards at the end.

Typically I needed several takes for each guitar, bass and vocal part. And between each take, I had to rewind the tape to the exact point where I needed to “punch in” again. If, during mixing, I decided I didn’t like the guitar sound, I had to record it all over again.

Fast forward to the 2010s, and I’ve been doing it all using software. The guitars are real, but the amps are digital – often recorded using plug-ins that simulate guitar amps, speaker cabinets and microphones – and I can choose from hundreds of amp, cabinet and microphone models. If I don’t like the guitar sound, I can just change it. No need to re-record that guitar part.

And I can choose from dozens of software synthesizers, offering thousands upon thousands of potential sounds. I can have as many virtual Minimoogs or Roland Jupiter 8s as I like. My i7 laptop can run 10-20 of them simultaneously.

My drums are now created using a powerful multi-sampled virtual instrument that allows me to choose from dozens of different sampled kits, or create my own custom kits, and I can tweak the recording parameters and apply almost limitless effects like compression and EQ to my heart’s content. And, again, if I don’t like the drum sound, I don’t need to wind back a tape and record them all over again.

My Digital Audio Workstation lets me record as many tracks – mono and stereo – as my computer can handle (typically, more than 30 in a final mix), and I can route and re-route the audio as many times as I like.

Because I use software plug-ins for effects like echo/delay, reverb, chorus, EQ, compression and more, I have almost limitless production options for every track.

And, most mind-bending of all, I can record take after take after take with no rewinding, and the audio quality never degrades.

Digital is aces!

Except… About a year ago I invested in my first hardware synthesizer in a very long time. It’s a Korg Minilogue – a proper analog synthesizer (the first analog synth I’ve owned), and to operate it you have to press buttons and twiddle knobs. Unlike my Roland digital synth, it’s not “multi-timbral”: it makes one sound at a time, and can play up to 4 notes at a time. My Roland JV-2080 could make 16 different sounds at a time, and play up to 64 notes simultaneously.

Compared to the software – and the digital synth hardware – the Minilogue is very limiting. It’s significantly harder to record music with the Minilogue than with a software synthesizer.

But when I listen to the first quick demo track I created using the real analog synth alongside tracks I created using software synths, I can’t help noticing that they have a very different quality.

Those constraints led me to something new – something simpler, which forced me to put more thought into the design of each sound, the structure of the music, and the real hardware guitar effects I used on each “patch”.

I’m not necessarily saying the end results are better than using software synths. I’m saying that the constraints of the Korg Minilogue led me down a different path. It changed the music.

I’ve been inspired to experiment more with hardware instruments, maybe even revisit recording guitars through mic’d-up tube amps and see how that changes the music, too. (And this is a great time to do that, as low-priced analog synths are making a comeback right now.)

All this got me thinking about the tools I use to create software. It sometimes feels like we’ve got a bit too much choice, made things a little too easy. And all that choice and ease has led us to more complex products: multiple tech stacks, written in multiple languages, with lots of external dependencies – because our package managers make that so easy now – and so on.

Would going back to simple, limited editors, homogenous platforms, limited hardware and so on change the software we tend to create?

I think it may be worth exploring.

Code Craft is Seat Belts for Programmers

Every so often we all get a good laugh when some unfortunate new hire or intern at a major tech company accidentally “deletes Google” on their first day. It’s easy to snigger (because, of course, none of us has ever messed up like that).

The fact is, though, that pointing and laughing when tech professionals make mistakes doesn’t stop mistakes getting made. It can also breed a toxic work culture, where people learn to avoid mistakes by not taking risks. Not taking risks is anathema to innovation, where – by definition – we’re trying stuff we’ve never done before. Want to stifle innovation where you work? Pointing and laughing is a great way to get there.

One of the things I like most about code craft is how it can promote a culture of safety to try new things and take risks.

A suite of good, fast-running unit tests, for example, makes it easier to spot our boo-boos sooner, so we can un-boo-boo them quickly and without attracting attention.
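For instance, here’s the kind of fast-running check I mean – a minimal sketch in Python, using the standard unittest module, with made-up names:

```python
import unittest

def apply_discount(price, rate):
    # The easy boo-boo here is typing price * rate instead of
    # price * (1 - rate). A fast test catches it within seconds.
    return price * (1 - rate)

class DiscountTests(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertAlmostEqual(apply_discount(100.0, 0.1), 90.0)

if __name__ == "__main__":
    unittest.main()
```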

Continuous Integration offers a level of undo-ability that makes it easier and safer to experiment, in the knowledge that if we mess things up, we can get back to the last version that worked with a simple hard reset.

The micro-cycles of refactoring mean we never stray far from the path of working code. Combine that with fast-running tests and frequent commits, and ambitious and improbable re-architecting of – say – legacy code becomes a sequence of mundane, undo-able and safe micro-rewrites.
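A micro-rewrite can be as mundane as extracting one function – re-running the fast tests and committing after each step. A sketch (hypothetical code, not from any real system):

```python
# Step 1: extract the inline tax calculation into its own function.
# Behaviour is unchanged; the fast test below stays green. Commit.
# Step 2 (next micro-cycle): make the rate configurable. Test. Commit.

def with_tax(amount, rate=0.2):
    return amount * (1 + rate)

def total(order_lines):
    return sum(with_tax(qty * price) for qty, price in order_lines)

# The same fast-running check passes before and after each micro-step.
assert abs(total([(2, 10.0)]) - 24.0) < 1e-9
```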

And I can’t help feeling – when I see some poor sod getting Twitter heat for screwing up a system in production – that the real fault lies with the deficiencies in their delivery pipeline that allowed it to happen. The organisation messed up.

Software development’s a learning process. Think about when young children – or people of any age – first learn to use a computer. The fear of “breaking it” often discourages them from trying new things, and this hampers their learning. Never underestimate just how much great innovation happens when someone says “I wonder what happens if I do this…” Remove that fear by fostering a culture of “what if…?”, shielded by systems that forgive.

Code craft is seat belts for programmers.

Wheels Within Wheels Within Wheels

Much is made of the cycles-within-cycles of Test-Driven Development.

At the core, we do micro-iterations with small, single-question unit tests to drive out the details of our internal design.

Surrounding those micro-cycles are the feedback loops provided by customer tests, which may require us to pass multiple unit tests to complete end-to-end.

User stories typically come with multiple customer tests – happy paths and edge cases – providing us with bigger cycles around our customer test feedback loops.

Orbiting those are release loops, where we bundle a set of user stories and await feedback from end users in the real world (or a simulated approximation of it for test purposes).

What’s not discussed, though, are the test criteria for those release loops. If we’ve already established through customer testing that we delivered what we agreed we would in that release, what’s left to test for?

The minority of us who practice development driven by business goals may know the answer: we test to see if what we released achieves the goal(s) of that release.

This is the outer feedback loop – the strategic feedback loop – that most dev teams are missing. If we’re creating software with a purpose, it stands to reason that at some point we must test its fitness for that purpose. Does it do the job it was designed to do?

When explaining strategic feedback loops, I often use the example of a business start-up that delivers parcels throughout the London area. They have a fleet of delivery vans that go out every day across the city, delivering the parcels that were received into their depot overnight to a list of addresses.

Delivery costs form the bulk of their overheads. They rent the vans. They charge them up with electrical power (it’s an all-electric fleet – green FTW!). They pay the drivers. And so on. It all adds up.

Business is good, and their customer base is growing rapidly. Do they rent more vans? Do they hire more drivers? Do they do longer routes, with longer driver hours, more recharging return-to-base trips, and higher energy bills? Or could the same number of drivers, in the same number of vans, deliver more parcels with the same mileage as before? Could their deliveries be better optimised?

Someone analyses the routes drivers have been taking, and theorises that they could have delivered the same parcels in less time, driving fewer miles. They believe it could be done 35% more efficiently just by optimising the routes.

Importantly, using historical delivery and route data, they show on paper that an algorithm they have in mind would have saved 37% on miles and driver-hours. I, for one, would think twice about setting out to build a software system that implements unproven logic.

But the on-paper execution of it takes far too long. So they hatch a plan for a software system that selects the optimum delivery routes every day using this algorithm.

Taking route optimisation as the headline goal, the developers produce a first release in 2 weeks that takes in delivery addresses from an existing data source and – as a command-line utility initially – produces optimised routes in simple text files to be emailed to the drivers’ smartphones. It’s not pretty, and not a long-term solution by any means. But the core logic is in there, it’s been thoroughly unit and customer tested, and it seems to work.
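To give a flavour of what that first cut might look like – this is entirely hypothetical, with a greedy nearest-neighbour pass standing in for whatever the real algorithm would be:

```python
import math
import sys

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def optimise_route(depot, stops):
    """Order delivery stops greedily by nearest neighbour."""
    route, here, remaining = [], depot, list(stops)
    while remaining:
        nearest = min(remaining, key=lambda stop: distance(here, stop[1]))
        remaining.remove(nearest)
        route.append(nearest)
        here = nearest[1]
    return route

if __name__ == "__main__":
    # Input lines on stdin: "address,x,y". Output: a numbered route,
    # plain text, ready to be emailed to a driver's smartphone.
    stops = [(addr, (float(x), float(y)))
             for addr, x, y in (line.strip().split(",") for line in sys.stdin)]
    for i, (addr, _) in enumerate(optimise_route((0.0, 0.0), stops), start=1):
        print(f"{i}. {addr}")
```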

While the software developers move on to thinking about how the system could be made more user-friendly with a graphical UI (e.g., a smartphone app), the team – which includes the customer – monitors deliveries for the next couple of weeks very closely. How long are the routes taking? How many miles are vans driving? How much energy is being used on each route? How many recharging pit-stops are drivers making each day?

This is the strategic feedback loop: have we solved the problem? If we haven’t, we need to go around again and tweak the solution (or maybe even scrap it and try something else, if we’re so far off the target, we see no value in continuing down that avenue).

This is my definition of “done”: we keep iterating until we hit the target, learning lessons with each release and getting it progressively less wrong.

Then we move on to the next business goal.

Are You A Full Full-Stack Developer?

This tweet from a conference talk by Kevlin Henney reminded me of a discussion I had with a development team last week about the meaning of “full-stack developer”.

I think Kevlin’s absolutely right. It doesn’t just pertain to the technology stack. And I would go further: I believe a full-stack developer can be involved throughout the entire product lifecycle.

We can be there right at the start, when the business is envisioning strategic solutions to real business problems. Indeed, in some organisations, it’s developers who often put forward the ideas. And why not, after all? We probably have a wider toolbox to draw from when we consider how technology might help to solve a business problem. And we probably have a better handle on what might be easy and what might be hard to do.

It’s also vitally important that dev teams have a good understanding of the problem they’re setting out to solve. Too often, when devs are brought in later in the process, they lack that understanding, and the business pays the price in a lack of clear direction and an inability to prioritise the work.

Likewise with the early stages of the design process: teams that get handed, say, wireframes and told to “code this” often run into difficulties as they realise that UI mock-ups aren’t enough. Exactly what should happen when the user clicks that button? If they weren’t in the discussion, then they’ll need to have the discussion again. Or take a guess.

And at the other end, instead of throwing software over the wall into testing and then production and then waiting for the bug reports to start flooding in, developers can get involved there. Certainly, there’s much we can do to help as developers-in-test in automating and scaling testing so we can test more, and test faster. And by getting involved with software operations – monitoring, testing and observing our software in real use in the real world, we tend to learn a tonne of useful stuff that can feed back into the all-important next iteration of the product.

Kevlin hits the nail on the head: software development should start and end in the real world, with real end users, solving real problems. And that, to me, is best achieved when developers are involved throughout. The most effective devs wear multiple hats: strategy, business analysis, requirements engineering, UX, architecture, database design and administration, information security, test design and automation, and operations and support.

We don’t need to be experts in all of them – as long as we have experts to drive those key activities – but we should be generalising specialists who can contribute effectively to all those processes.

In other words, not just coders.

How I Do Requirements

The final question of our Twitter code craft quiz seems to have divided the audience.

The way I explain it is that the primary goal of code craft is to allow us to rapidly iterate our solutions, and to sustain the pace of iterating for longer. We achieve that by delivering a succession of production-ready prototypes – tested and ready for use – that are open to change based on the feedback users give us.

(Production-ready because we learned the hard way with Rapid Application Development that when you hit on a solution that works, you tend not to be given the time and resources to make it production-ready afterwards. And also, we discovered that production-ready tends not to cost much more or take much more time than “quick-and-dirty”. So we may as well.)

Even in the Agile community – who you’d think might know better – too much effort often goes into trying to get it right first time. The secret sauce in Agile is that it’s not necessary. Agile is an iterative search algorithm. Our initial input – our first guess at what’s needed – doesn’t need to be perfect, or even particularly good. It might take us an extra couple of feedback loops if release #1 is way off. What matters more are:

  • The frequency of iterations
  • The number of iterations

Code craft – done well – is the enabler of rapid and sustainable iteration.

And, most importantly, iterating requires a clear and testable goal. Which, admittedly, most dev teams lack.

To illustrate how I handle software requirements, imagine this hypothetical example that came up in a TDD workshop recently:

First, I explore with my customer a problem that technology might be able to help solve. We do not discuss solutions at all. It is forbidden at this stage. We work to formulate a simple problem statement.

Walking around my city, there’s a risk of falling victim to crime. How can I reduce that risk while retaining the health and environmental benefits of walking?

The next step in this process is to firm up a goal, by designing a test for success.

A sufficiently large sample of people experience significantly less crime per mile walked than the average for this city.

This is really vital: how will we know our solution worked? How can we steer our iterative ship without a destination? The failure of so very many development efforts seems, in my experience, to stem from the lack of clear, testable goals. It’s what leads us to the “feature factory” syndrome, where teams end up working through a plan – e.g. a backlog – instead of working towards a goal.

I put a lot of work into defining the goal. At this point, the team aren’t envisioning technology solutions. We’re collecting data and refining measures for success. Perhaps we poll people in the city to get an estimate of average miles walked per year. Perhaps we cross-reference that with crime statistics – freely available online – for the city, focusing on crimes that happened outside on the streets, like muggings and assaults. We build a picture of the current reality.

Then we paint a picture of the desired future reality: what does the world look like with our solution in it? Again, no thought yet is given to what that solution might look like. We’re simply describing a solution-shaped hole into which it must fit. What impact do we want it to have on the world?

If you like, this is our overarching Given…When…Then…

Given that the average rate of street crime in our city is currently 1.2 incidents per 1,000 person-miles walked,

When people use our solution,

Then they should experience an average rate of street crime of less than 0.6 incidents per 1,000 person-miles walked

Our goal is to more than halve the risk of walkers who use our solution falling victim to crime on the streets. Once we have a clear idea of where we’re aiming, only then do we start to imagine potential solutions.
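The arithmetic behind that strategic test is trivial – which is rather the point; the hard part is collecting the data. A sketch, with illustrative numbers:

```python
def incidents_per_1000_person_miles(incidents, person_miles):
    return incidents / person_miles * 1000

# The city's baseline, from the public crime statistics:
assert incidents_per_1000_person_miles(1200, 1_000_000) == 1.2

# The strategic test for each release, sampled from our users' walks:
def goal_met(incidents, person_miles, target=0.6):
    return incidents_per_1000_person_miles(incidents, person_miles) < target

assert goal_met(50, 100_000)       # 0.5 per 1,000 -> goal met
assert not goal_met(90, 100_000)   # 0.9 per 1,000 -> go around again
```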

I’m of the opinion that the best software development organisations are informed gamblers. So, at this early stage I think it’s a good idea to have more than one idea for a solution. Don’t put all our eggs in one solution’s basket! So I might split the team up into pairs – depending on how big the team is – and ask each pair to envisage a simple solution to our problem. Each pair works closely with the customer while doing this, to get input and feedback on their basic idea.

Imagine I’m in Pair A: given a clear goal, how do we decide what features our solution will need? I always go for the headline feature first. Think of this as “the button the user would press to make their goal happen” – figuratively speaking. Pair A imagines a button that, given a start point and a destination, will show the user the walking route with the least reported street crime.

We write a user story for that:

As a walker, I want to see the route for my planned journey that has the least reported street crime, so I can get there safely.

The headline feature is important. It’s the thread we pull on that reveals the rest of the design. We need a street map we can use to do our search in. We need to know what the start point and destination are. We need crime statistics by street.

All of these necessary features are consequences of the headline feature. We don’t need a street map because the user wants a street map. We don’t need crime statistics because the user wants crime statistics. The user wants to see the safest walking route. As I tend to put it: nobody uses software because they want to log in. Logging in is a consequence of the real reason for using the software.
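In code, you can see the supporting features fall out of the headline feature’s inputs. A toy sketch (all names hypothetical):

```python
def find_safest_route(routes_between, crimes_on, start, destination):
    """The headline feature. Its inputs are the supporting features:
    a street map we can search for candidate routes, and reported
    crime statistics per street."""
    candidates = routes_between(start, destination)
    return min(candidates, key=lambda route: sum(map(crimes_on, route)))

# A toy street map: each candidate route is a list of street names.
def toy_routes(start, destination):
    return [["High St", "Alley Rd"], ["High St", "Park Way"]]

toy_crimes = {"High St": 3, "Alley Rd": 12, "Park Way": 1}

safest = find_safest_route(toy_routes, lambda s: toy_crimes.get(s, 0), "A", "B")
assert safest == ["High St", "Park Way"]  # 4 reported crimes vs. 15
```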

This splits features into:

  • Headline
  • Supporting

In Pair A, we flesh out half a dozen user stories driven by the headline feature. We work with our customer to storyboard key scenarios for these features, and refine the ideas just enough to give them a sense of whether we’re on the right track – that is, could this solve the problem?

We then come back together with the other pairs and compare our ideas, allowing the customer to decide the way forward. Some solution ideas will fall by the wayside at this point. Some will get merged. Or we might find that none of the ideas is in the ballpark, and go around again.

Once we’ve settled on a potential solution – described as a headline feature and a handful of supporting features – we reform as a team, and we’re in familiar territory now. We assign features to pairs. Each pair works with the customer to drive out the details – e.g., as customer tests and wireframes etc. They deliver in a disciplined way, and as soon as there’s working software the customer can actually try, they give it a whirl. Some teams call this a “Minimum Viable Product”. I call it Prototype #1 – the first of many.

Through user testing, we realise that we have no way of knowing if people got to their destination safely. So the next iteration adds a feature where users “check in” at their destination – Prototype #2.

We increase the size of the user testing group from 100 to 1,000 people, and learn that – while on average they felt safer from crime – some of the recommended walking routes required them to cross some very dangerous roads. We add data on road traffic accidents involving pedestrians for each street – Prototype #3.

With a larger testing group (10,000 people), we’re now building up enough data to see what the figure is for incidents per 1,000 person-miles, and it’s not as low as we’d hoped. From observing a selected group of suitably incentivised users, we realise that time of day makes quite a difference to some routes. We add that data from the crime statistics, and adapt the search to take time into account – Prototype #4.

And rinse and repeat…

The point is that each release is tested against our primary goal, and each subsequent release tries to move us closer to it by the simplest means possible.

This is the essence of the evolutionary design process described in Tom Gilb’s book Competitive Engineering. When we combine it with technical practices that enable rapid and sustained iteration – with each release being production-ready in case it needs to be (let’s call it “productizing”) – then that, in my experience, is the ultimate form of “requirements engineering”.

I don’t consider features or change requests beyond the next prototype. There’s no backlog. There is a goal. There is a product. And each iteration closes the gap between them.

The team is organised around achieving the goal. Whoever is needed is on the team, and the team works one goal at a time, one iteration at a time, to do what is necessary to achieve that iteration’s goal. Development, UX design, testing, documentation, operations – whatever is required to make the next drop production-ready – are all included, and they all revolve around the end goal.

When Are We ‘Done’? – What Iterating Really Means

This week saw a momentous scientific breakthrough, made possible by software. The Event Horizon Telescope – an international project that turned the Earth into a giant telescope – took the first real image of a super-massive black hole in the M87 galaxy, some 55 million light years away.

This story serves to remind me – whenever I need reminding – that the software we write isn’t an end in itself. We set out to achieve goals and to solve problems: even when that goal is to learn a particular technology or try a new technique. (Yes, the point of FizzBuzz isn’t FizzBuzz itself. Somebody already solved that problem!)

The EHT image is the culmination of years of work by hundreds of scientists around the world. The image data itself was captured two years ago, on a super-clear night, coordinated by atomic clocks. Ever since then, the effort has been to interpret and “stitch together” the massive amount of image data to create the photo that broke the Internet this week.

Here’s Caltech computer scientist Katie Bouman, who designed the algorithm that pulled this incredible jigsaw together, explaining the process of photographing M87 last year.

From the news stories I’ve read about this, it sounds like much time was devoted to testing the results to ensure the resulting image had fidelity – and wasn’t just some software “fluke” – until the team had the confidence to release the image to the world.

They weren’t “done” after the code was written (you can read the code on Github). They weren’t “done” after the first result was achieved. They were “done” when they were confident they had achieved their goal.

This is a temporary, transient “done”, of course. EHT are done for now. But the work goes on. There are other black holes and celestial objects of interest. They built a camera: ain’t gonna take just the one picture with it, I suspect. And the code base has a dozen active pull requests, so somebody’s still working on it. The technology and the science behind it will be refined and improved, and the next picture will be better. But that’s the next goal.

I encourage teams to organise around achieving goals and solving problems together, working one goal at a time. (If there are two main goals, that’s two teams, as far as I’m concerned.) The team is defined by the goal. And the design process iterates towards that goal.

Iterating is goal-seeking – we’re supposed to be converging on something. When it’s not, then we’re not iterating; we’re just going around in circles. (I call it “orbiting” when teams deliver every week, over and over, but the problem never seems to get solved. The team is orbiting the problem.)

This is one level of team enlightenment above a product focus. Focusing on products tends to produce… well, products. The goal of EHT was not to create a software imaging product. That happened as a side effect of achieving the main goal: to photograph the event horizon of a black hole.

Another really important lesson here is EHT’s definition of “team”: hundreds of people – physicists, astronomers, engineers, computer scientists, software and hardware folk – all part of the same multi-disciplinary team working towards the same goal. I’d be surprised if the software team at MIT referred to the astrophysicists as their “customer”. The “customer” is us – the world, the public, society, civilisation, and the taxpayers who fund science.

That got me to thinking, too: are our “customers” really our customers? Or are they part of the same team as us, defined by a shared end goal or a problem they’re tasked with solving?

Photographing a black hole takes physics, astronomy, optical engineering, mechanical and electrical and electronic engineering, software, computer networks, and a tonne of other stuff.

Delivering – say – print-on-demand birthday cards takes graphic design, copywriting, printing, shipping, and a tonne of other stuff. I genuinely believe we’re not “done” until the right card gets to the right person, and everyone involved in making that happen is part of the team.

Software Development – What Are The ‘Basics’?

Okay, so here’s a hot take…

I’ve been grappling for more than a year now with what I would focus on in a guide to software development for people progressing from learning programming to building more complex systems for real end users.

I’m acutely aware – based on my own experience, and the accounts of many others – of the skills I really wish I’d learned when I started out. As a trainer and coach, I’m also very aware of just how many developers seemingly get through their entire careers without being exposed to some of these foundational skills. Hence the need for a “Software Development 101” introduction.

I’m clear in my own mind about some of these things:

  • A developer should know how to drive design directly from users’ needs
  • A developer should know how to use version control (basically, seatbelts for programmers)
  • A developer should be capable of enumerating test cases given either a set of requirements, some behavioural model of the system (e.g., a UX flow diagram), or a copy of the code itself (e.g., what could go wrong with this line of code?)
  • A developer should be capable of automating the execution of their tests so they run fast and consistently (there’s a small sketch of both of these after this list)
  • A developer should be capable of writing code that’s open to change (that’s a whole can of worms in itself)
  • A developer should recognise potential code quality issues when they see them, and know how to fix them
  • A developer should be capable of changing code without breaking it
  • A developer should be capable of visualising and clearly communicating multiple aspects of their work and their ideas – e.g., architecture/design, workflow, UI/UX, business rules etc. Partly because it can help enormously in building understanding, and also because communicating with pictures tends to be so much more effective in many instances.
  • A developer should be capable of automating their software delivery “pipeline” so that getting working code from their desktop to end users is as frictionless as possible
  • A developer should be capable of rapidly iterating their designs based on real user feedback
  • A DEVELOPER SHOULD BE CAPABLE OF A DIRECT, CONSTRUCTIVE WORKING RELATIONSHIP WITH THEIR CUSTOMERS & END USERS
  • A DEVELOPER SHOULD BE CAPABLE OF WORKING HARMONIOUSLY & CONSTRUCTIVELY WITH OTHER DEVELOPERS
  • A developer should be capable of research – nobody arrives on the job knowing everything they need to know for that particular job. A tonne of stuff must be learned along the way. A developer needs to be an auto-didact, because ain’t nobody gonna teach you everything.
  • A developer should be capable of setting themselves goals and managing their own time and resources – contentious, I know. But so many of the issues I see devs and dev teams facing can be boiled down to the perceived need of organisations to micro-manage them, and developers surrendering control over themselves and their work.
  • A developer should be capable of objectively, honestly and transparently communicating progress and making projections about how long it might be before a feature or release is ready. Again, a whole can of worms. But, at the very least, developers can build a reputation for actually being done when they said they were done, even if they’re unable to predict in advance when that might be. It’s bad when the train is late. It’s worse when we claim it’s still on time.
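To make those two testing items concrete, here’s a minimal sketch – in Python, using the standard unittest module – of test cases enumerated from a well-known requirement and automated so they run fast and consistently:

```python
import unittest

def leap_year(year):
    """Requirement: divisible by 4, except centuries not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTests(unittest.TestCase):
    """Test cases enumerated from the requirement: one per rule and edge."""
    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))
    def test_not_divisible_by_four(self):
        self.assertFalse(leap_year(2023))
    def test_century_not_divisible_by_400(self):
        self.assertFalse(leap_year(1900))
    def test_century_divisible_by_400(self):
        self.assertTrue(leap_year(2000))

if __name__ == "__main__":
    unittest.main()  # one command runs them all, fast and consistently
```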

A software developer should be a competent programmer who can be trusted to reliably deliver what customers need, when they need it.

They deliver working software frequently. They listen and are responsive to customer feedback.

They don’t deliver broken software. They don’t wander far from working, shippable code. They don’t make irreversible changes to code. They test their software continuously, and never assume it’s working – either on their machine or anyone else’s.

They can sustain the pace of innovation on a system far beyond a first release, for as long as the customer needs.

They know problem code when they see it. They know how to improve code to reduce or eliminate those problems, without breaking it. They don’t let large batches of problems build up. They address issues early, and continuously.

They build a close working relationship with their customers, and earn trust by delivering what they promised. They don’t make promises they don’t know they can keep.

They can work effectively with other developers, and are open to collaboration. They make sensible, informed choices based on the customer’s and the team’s needs.

They learn what they need to learn, and when they don’t know, they say “I don’t know”. Then they do what they need to find out.

They report progress honestly and objectively, and never say “Take my word for it.”

They work hard to make themselves clearly understood – face-to-face, with pictures, in writing, and especially in code. They work hard to understand what others are telling them, and are constantly testing their own understanding (e.g., with examples).

They are largely self-managing. They set themselves goals. They prioritise and manage their own time effectively. They make time to learn and improve at their job. (And they don’t ask for permission to do that.)

What I’ve learned, after nearly three decades as a software developer, is that all of these skills are needed, and all of them can be learned. Probably not overnight, for sure.

It may take a few years to build a set of skills like this to a level of competency where you can be trusted to just get on with it. Which, I think, would be my single-sentence definition of a “software developer” – as opposed to a trainee or apprentice. And, yes, I know there are lots of people out there who say “Hey, if you’re getting paid to write code, you’re a software developer.” I certainly don’t own the term, and they’re welcome to their own interpretations. This is not my attempt to legally define it. I’m just getting things clear in my head, about what I mean when I say someone is a software developer. And, if there was an introductory guide for that, what would I include?

You are now free to start throwing the furniture around.