On my 3-day Code Craft training workshop (and if you’re reading this in January 2026, training’s half-price if you confirm your booking by Jan 31st), there’s a team exercise in which the group works together to deliver a simple program to the customer’s (my) laptop, where I can acceptance-test it.
It’s primarily an exercise in Continuous Delivery, bringing together many of the skills explored earlier in the course like Test-Driven Development and Continuous Integration.
But it also exercises the muscles that individual or pair-programmed exercises don’t reach. Any problem, even a simple one like the Mars Rover, tends to become much more complicated when we tackle it as a team: it requires a lot of communication and coordination, and a team will typically take more time to complete it.
And it also exercises muscles that developers these days have never used before. In 2026, the average developer has never created, say, a command-line project from scratch in their tech stack. They’ve never set up a repo using their version control tool. They’ve never created a build script for Continuous Integration builds. They’ve never written a script to automatically deploy working software.
In the age of “developer experience”, a lot of people have these things done for them. Entry-level devs land on a project and it’s all just there.
That may seem like a convenience at first, but it breeds a sort of learned helplessness: a total reliance on other people to create and adapt build and deployment logic when it’s needed. A lot of developers would face a steep learning curve if they ever needed to get a project up and running, or to change, say, a build script.
It’s the delivery pipeline that frustrates most teams’ attempts to get any functionality in front of the customer in this exercise.
I urge them at the start to get that pipeline in place first. Code that can’t be used has no value. They may have written all of it, but if I can’t test it on my machine – nil points. Just like in real life.
They’re encouraged to create a “walking skeleton” for their tech stack – e.g., a command-line program that outputs “Hello, world!”, and has one dummy unit test.
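For a Python stack, for example, the whole skeleton might be two tiny files (a sketch – the file names and the choice of pytest are mine, not a prescription):

```python
# hello.py – the entire "product" at this stage
def greeting() -> str:
    return "Hello, world!"

if __name__ == "__main__":
    print(greeting())
```

```python
# test_hello.py – the one dummy unit test, for pytest to find
from hello import greeting

def test_greeting():
    assert greeting() == "Hello, world!"
```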
That skeleton can then be added to a new GitHub repository, and the rest of the team invited to collaborate on it. That’s the first part of the pipeline.
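Pushing the skeleton up is half a dozen Git commands (the repository URL here is a placeholder):

```sh
git init
git add hello.py test_hello.py
git commit -m "Walking skeleton: Hello, world! plus one dummy test"
git branch -M main
git remote add origin https://github.com/<your-team>/mars-rover.git   # placeholder URL
git push -u origin main
```

Inviting the rest of the team is then a couple of clicks in the repository’s settings on GitHub.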
Then someone can create a build script that runs the tests and is triggered by pushes to the main (trunk) branch. On GitHub, if we keep our technical architecture vanilla for our tech stack (e.g., a vanilla Java/Maven project structure), GitHub Actions can usually generate a script for us. It might need a tweak or two – the right version of Java, for example – but it will get us in the ballpark.
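For the Python skeleton above, a workflow file along these lines would do the job (a sketch – the Python version is an assumption to adjust for your project):

```yaml
# .github/workflows/ci.yml – run the tests on every push to main
name: CI
on:
  push:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Run the tests
        run: |
          pip install pytest
          pytest
```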
So now everyone in the team can clone a repo that has a skeleton project with a dummy unit test and a simple output to check that it’s working end to end.
That’s the middle of the pipeline. We now have what we need to at least do Continuous Integration.
The final part of the pipeline is when the food makes it to the customer’s table. I remind teams that my laptop is a developer’s machine, and that I have versions of Python, Node.js, Java and .NET installed, as well as a Git client.
So, they could write a batch script that clones the repo, builds the software (e.g., runs pip install for a Python project), and runs the program. When I see “Hello, world!” appear on my screen, we have lift-off. The team can begin implementing the Mars Rover, and whenever a feature is complete, they can ping me and ask me to run that script again to test it.
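Here’s roughly what that looks like as a Bash script for the Python skeleton (a sketch – the repository URL is a placeholder, and a Windows batch equivalent would be just as short):

```sh
#!/usr/bin/env bash
# deliver.sh – clone (or update), build, and run on the customer's machine
set -e  # stop at the first failure

REPO_URL="https://github.com/<your-team>/mars-rover.git"  # placeholder
APP_DIR="mars-rover"

# Fresh clone the first time; fast-forward pull thereafter
if [ -d "$APP_DIR" ]; then
  git -C "$APP_DIR" pull --ff-only
else
  git clone "$REPO_URL" "$APP_DIR"
fi

cd "$APP_DIR"

# Install dependencies, if the project lists any
if [ -f requirements.txt ]; then
  pip install -r requirements.txt
fi

python hello.py   # "Hello, world!" on screen = lift-off
```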
And thus, value begins to flow, in the form of meaningful user feedback from working software. (Aww, bless. Did you think the software was the value? No, mate. The value’s in what we learn, not what we deliver.)
And, of course, in the real world, that delivery pipeline will evolve, adding more quality gates (e.g., linting), parallelising test execution as the suite gets larger, progressing to more sophisticated deployment models and that sort of thing, as needs change.
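To make that concrete, the workflow above might later grow a linting gate and split the tests across parallel jobs – something like this (a sketch; ruff and the unit/integration split are illustrative assumptions, not a recommendation):

```yaml
# .github/workflows/ci.yml (a later evolution) – extra quality gate, parallel tests
name: CI
on:
  push:
    branches: [ main ]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install ruff
      - run: ruff check .            # the new quality gate
  test:
    needs: lint                      # only test code that lints cleanly
    runs-on: ubuntu-latest
    strategy:
      matrix:
        suite: [unit, integration]   # assumes tests live in tests/unit and tests/integration
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest
      - run: pytest tests/${{ matrix.suite }}
```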
DevOps – the marriage of software development and operations – means that the team writing the solution code also handles these matters. We don’t throw it over the wall to a separate “DevOps” team. That’s kind of the whole point of DevOps, really. When we need a change to, say, the build script, we – the team – make that change.
But you might be surprised how many people who describe themselves as “DevOps Engineers” wouldn’t even know where to start. (Or maybe you wouldn’t.)
It’s not their fault if they’ve been given no exposure to operations. And it’s not every day that we start a project from scratch, so the opportunities to gain experience are few and far between.
Given just how critical these pipelines are to our delivery lead times, it’s surprising how little time and effort many organisations invest in getting good at them. It should be a core competency in software development.
It’s especially mysterious why so many businesses allow it to become a bottleneck by favouring specialised teams over T-shaped software engineers who can do most of it themselves, rather than waiting for someone else to do it. Teams could still have a specialised expert on hand for the rare occasions when deep expertise is genuinely needed.
If the average developer knew the 20% they’d need 80% of the time to create and change delivery pipelines for their tech stack(s), there’d be a lot less waiting on “DevOps specialists” (which is an oxymoron, of course).
Just as a contractor who often has to move house tends to become very efficient at it, developers who regularly have to get delivery pipelines up and running tend to get much better at the yak shaving involved.
So I encourage teams to make these opportunities by doing regular “DevOps drills” for their tech stacks: get a Node Express “Hello, world!” pipeline up and running from scratch, get a Spring Boot pipeline up and running from scratch, and so on.
Typically, I see teams doing these drills monthly and, as they gain confidence, varying the parameters (parallel test execution, deployment to a cluster, and so on), making the quality gates more sophisticated (security testing, linting, mutation testing), and learning how to optimise their pipelines to keep them as frictionless as possible.