The final question of our Twitter code craft quiz seems to have divided the audience.
The way I explain it is that the primary goal of code craft is to allow us to rapidly iterate our solutions, and to sustain the pace of iterating for longer. We achieve that by delivering a succession of production-ready prototypes – tested and ready for use – that are open to change based on the feedback users give us.
(Production-ready because we learned the hard way with Rapid Application Development that when you hit on a solution that works, you tend not to be given the time and resources to make it production-ready afterwards. And also, we discovered that production-ready tends not to cost much more or take much more time than “quick-and-dirty”. So we may as well.)
Even in the Agile community – who you’d think might know better – there’s often too much effort spent on trying to get it right first time. The secret sauce in Agile is that it’s not necessary. Agile is an iterative search algorithm. Our initial input – our first guess at what’s needed – doesn’t need to be perfect, or even particularly good. It might take us an extra couple of feedback loops if release #1 is way off. What matters more is:
- The frequency of iterations
- The number of iterations
Code craft – done well – is the enabler of rapid and sustainable iteration.
And, most importantly, iterating requires a clear and testable goal. Which, admittedly, most dev teams lack.
To illustrate how I handle software requirements, imagine this hypothetical example that came up in a TDD workshop recently:
First, I explore with my customer a problem that technology might be able to help solve. We do not discuss solutions at all. It is forbidden at this stage. We work to formulate a simple problem statement.
Walking around my city, there’s a risk of falling victim to crime. How can I reduce that risk while retaining the health and environmental benefits of walking?
The next step in this process is to firm up a goal, by designing a test for success.
A sufficiently large sample of people experience significantly less crime per mile walked than the average for this city.
This is really vital: how will we know our solution worked? How can we steer our iterative ship without a destination? The failure of so very many development efforts seems, in my experience, to stem from the lack of clear, testable goals. It’s what leads us to the “feature factory” syndrome, where teams end up working through a plan – e.g. a backlog – instead of working towards a goal.
I put a lot of work into defining the goal. At this point, the team aren’t envisioning technology solutions. We’re collecting data and refining measures for success. Perhaps we poll people in the city to get an estimate of average miles walked per year. Perhaps we cross-reference that with crime statistics – freely available online – for the city, focusing on crimes that happened outside on the streets like muggings and assaults. We build a picture of the current reality.
Then we paint a picture of the desired future reality: what does the world look like with our solution in it? Again, no thought yet is given to what that solution might look like. We’re simply describing a solution-shaped hole into which it must fit. What impact do we want it to have on the world?
If you like, this is our overarching Given…When…Then…
Given that the average rate of street crime in our city is currently 1.2 incidents per 1,000 person-miles walked,
When people use our solution,
Then they should experience an average rate of street crime of less than 0.6 incidents per 1,000 person-miles walked.
Our goal is to more than halve the risk for walkers who use our solution of being a victim of crime on the streets. Once we have a clear idea of where we’re aiming, only then do we start to imagine potential solutions.
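The success test above is just arithmetic, and it can be written down as executable code from day one. Here’s a minimal sketch – the sample figures are invented for illustration, only the 1.2 baseline comes from the goal statement:

```python
# Success test for our overarching goal: users of the solution should
# experience less than half the city's average rate of street crime.

def incidents_per_1000_person_miles(incidents, person_miles):
    """Rate of street-crime incidents per 1,000 person-miles walked."""
    return incidents / person_miles * 1000

CITY_BASELINE = 1.2          # incidents per 1,000 person-miles (the Given)
TARGET = CITY_BASELINE / 2   # goal: more than halve the risk

# Hypothetical figures gathered from a user-testing group
users_rate = incidents_per_1000_person_miles(incidents=42, person_miles=80_000)

print(f"user rate: {users_rate:.3f} per 1,000 person-miles")
print("goal met:", users_rate < TARGET)
```

Every release can be run against this same check, which is exactly what steering by a goal – rather than by a backlog – looks like in practice.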
I’m of the opinion that the best software development organisations are informed gamblers. So, at this early stage I think it’s a good idea to have more than one idea for a solution. Don’t put all our eggs in one solution’s basket! So I might split the team up into pairs – depending on how big the team is – and ask each pair to envisage a simple solution to our problem. Each pair works closely with the customer while doing this, to get input and feedback on their basic idea.
Imagine I’m in Pair A: given a clear goal, how do we decide what features our solution will need? I always go for the headline feature first. Think of this as “the button the user would press to make their goal happen” – figuratively speaking. Pair A imagines a button that, given a start point and a destination, will show the user the walking route with the least reported street crime.
We write a user story for that:
As a walker, I want to see the route for my planned journey that has the least reported street crime, so I can get there safely.
The headline feature is important. It’s the thread we pull on that reveals the rest of the design. We need a street map we can use to do our search in. We need to know what the start point and destination are. We need crime statistics by street.
All of these necessary features are consequences of the headline feature. We don’t need a street map because the user wants a street map. We don’t need crime statistics because the user wants crime statistics. The user wants to see the safest walking route. As I tend to put it: nobody uses software because they want to log in. Logging in is a consequence of the real reason for using the software.
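The headline feature itself is, at heart, a shortest-path search where the edge weight is reported crime rather than distance. A hypothetical sketch – the street names, graph shape, and crime counts are all invented:

```python
import heapq

def safest_route(graph, start, destination):
    """Dijkstra's algorithm minimising total reported crime along the route,
    instead of the usual distance or travel time."""
    queue = [(0, start, [start])]  # (crime so far, current street, path)
    seen = set()
    while queue:
        crime, node, path = heapq.heappop(queue)
        if node == destination:
            return path, crime
        if node in seen:
            continue
        seen.add(node)
        for neighbour, edge_crime in graph.get(node, []):
            heapq.heappush(queue, (crime + edge_crime, neighbour, path + [neighbour]))
    return None, float("inf")

# street -> [(adjacent street, reported crimes on that segment)]
streets = {
    "Station Rd": [("High St", 9), ("Park Lane", 2)],
    "High St":    [("Market Sq", 1)],
    "Park Lane":  [("Market Sq", 3)],
}

route, crime = safest_route(streets, "Station Rd", "Market Sq")
print(route, crime)  # ['Station Rd', 'Park Lane', 'Market Sq'] 5
```

Notice how the supporting features fall out of this sketch: we need the street map (the graph), the start point and destination (the search inputs), and crime statistics by street (the edge weights) – all consequences of the headline feature.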
This splits features into:
- The headline feature
- Supporting features that are logical consequences of the headline feature
In Pair A, we flesh out half a dozen user stories driven by the headline feature. We work with our customer to storyboard key scenarios for these features, and refine the ideas just enough to give them a sense of whether we’re on the right track – that is, could this solve the problem?
We then come back together with the other pairs and compare our ideas, allowing the customer to decide the way forward. Some solution ideas will fall by the wayside at this point. Some will get merged. Or we might find that none of the ideas is in the ballpark, and go around again.
Once we’ve settled on a potential solution – described as a headline feature and a handful of supporting features – we reform as a team, and we’re in familiar territory now. We assign features to pairs. Each pair works with the customer to drive out the details – e.g., as customer tests, wireframes and so on. They deliver in a disciplined way, and as soon as there’s working software the customer can actually try, they give it a whirl. Some teams call this a “Minimum Viable Product”. I call it Prototype #1 – the first of many.
Through user testing, we realise that we have no way of knowing if people got to their destination safely. So the next iteration adds a feature where users “check in” at their destination – Prototype #2.
We increase the size of the user testing group from 100 to 1,000 people, and learn that – while on average they felt safer from crime – some of the recommended walking routes required them to cross some very dangerous roads. We add data on road traffic accidents involving pedestrians for each street – Prototype #3.
With a larger testing group (10,000 people), we’re now gathering enough data to see the actual figure for incidents per 1,000 person-miles, and it’s not as low as we’d hoped. From observing a selected group of suitably incentivised users, we realise that time of day makes quite a difference to some routes. We add that data from the crime statistics, and adapt the search to take time into account – Prototype #4.
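Adapting the search for Prototype #4 is a small change: the crime weight on each street segment becomes a function of the hour of travel, so the same search can return different routes by day and night. A sketch, with all names and figures invented:

```python
import heapq

def safest_route_at(graph, start, destination, hour):
    """Dijkstra's algorithm where the crime weight of each street
    segment depends on the hour of travel."""
    queue = [(0, start, [start])]  # (crime so far, current street, path)
    seen = set()
    while queue:
        crime, node, path = heapq.heappop(queue)
        if node == destination:
            return path, crime
        if node in seen:
            continue
        seen.add(node)
        for neighbour, crimes_by_hour in graph.get(node, []):
            weight = crimes_by_hour.get(hour, 0)
            heapq.heappush(queue, (crime + weight, neighbour, path + [neighbour]))
    return None, float("inf")

# street -> [(adjacent street, {hour of day: reported crimes in that hour})]
streets = {
    "Station Rd": [("High St", {12: 2, 23: 2}), ("Back Alley", {12: 1, 23: 8})],
    "High St":    [("Market Sq", {12: 1, 23: 1})],
    "Back Alley": [("Market Sq", {12: 0, 23: 0})],
}

day_route, _ = safest_route_at(streets, "Station Rd", "Market Sq", hour=12)
night_route, _ = safest_route_at(streets, "Station Rd", "Market Sq", hour=23)
print(day_route)    # ['Station Rd', 'Back Alley', 'Market Sq']
print(night_route)  # ['Station Rd', 'High St', 'Market Sq']
```

The point isn’t the algorithm; it’s that each prototype’s change is the simplest adaptation that user feedback suggests might move the metric.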
And rinse and repeat…
The point is that each release is tested against our primary goal, and each subsequent release tries to move us closer to it by the simplest means possible.
This is the essence of the evolutionary design process described in Tom Gilb’s book Competitive Engineering. When we combine it with technical practices that enable rapid and sustained iteration – with each release being production-ready in case it needs to be (let’s call it “productizing”) – then that, in my experience, is the ultimate form of “requirements engineering”.
I don’t consider features or change requests beyond the next prototype. There’s no backlog. There is a goal. There is a product. And each iteration closes the gap between them.
The team is organised around achieving the goal. Whoever is needed is on the team, and the team works one goal at a time, one iteration at a time, to do what is necessary to achieve that iteration’s goal. Development, UX design, testing, documentation, operations – whatever is required to make the next drop production-ready – are all included, and they all revolve around the end goal.