In electronics design – a sector I spent a bit of time in during the 90s – tool developers recognised the need for their software to integrate with other software involved in the design and manufacturing process. Thanks to industry data standards, a PCB design created in one tool could be used to generate a bill of materials in a management tool, to simulate thermal and electromagnetic emissions in another tool, and to drive pick-and-place equipment on an assembly line.
I marvelled at how seamlessly the tools in this engineering ecosystem worked together, saving businesses eye-watering amounts of money every year. Software can work wonders.
So it’s been disappointing to see just how disconnected and clunky our own design and development systems have turned out to be in the software industry itself. (Never live in a builder’s house!) Our ecosystem is largely made up of Heath Robinson point solutions – a thing that runs unit tests, a thing that tracks file versions, a thing that builds software from source files, a thing that executes customer tests captured in text files – all held together with twigs and string. There are no industry data interchange standards for these tools. Unit test results come in whatever shape the specific unit test tool developers decided. Customer tests come in whatever shape the specific customer testing tool developers decided. Build scripts come in whatever shape the build tool developers decided. And so on.
When you run the numbers, taking into account just how many different tools there are and therefore how many potential combinations of tools might be used in a team’s delivery pipeline, it’s brain-warping.
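To see why the numbers warp the brain, consider a back-of-envelope calculation. The tool counts below are hypothetical illustrations, not survey data – but even modest figures per pipeline stage multiply into an enormous space of possible pipelines, each needing its own bespoke glue:

```python
from math import prod

# Hypothetical counts of popular tools per pipeline stage.
# The exact figures don't matter; the multiplication does.
tools_per_stage = {
    "version control": 5,
    "build": 10,
    "unit testing": 20,
    "customer testing": 10,
    "static analysis": 15,
    "deployment": 10,
}

# Every pipeline picks one tool per stage, so the combinations multiply.
combinations = prod(tools_per_stage.values())
print(f"{combinations:,} possible tool combinations")  # 1,500,000
```

And since each tool speaks its own format, each of those combinations potentially needs its own hand-rolled adapters.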
I see this writ large in the amount of time and effort it takes teams to get their pipeline up and running, and in the vastly larger investment needed to connect that pipeline to visible outputs like project dashboards and build monitors.
It occurs to me that if the glue between the tools was limited to a handful of industry standards, a lot of that work wouldn’t be necessary. It would be far easier, say, to have burn-down charts automatically refreshed after customer tests have been run in a build.
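To make the burn-down example concrete, here is a minimal sketch of what a standardized customer-test result might enable. The JSON schema below is entirely hypothetical – no such industry standard exists, which is the point – but if one did, any dashboard could consume results from any testing tool:

```python
import json

# A hypothetical standardized test-result payload. Today, every
# customer-testing tool emits its own incompatible shape instead.
payload = json.dumps({
    "suite": "customer-acceptance",
    "results": [
        {"story": "checkout", "status": "passed"},
        {"story": "refunds", "status": "failed"},
        {"story": "invoicing", "status": "passed"},
    ],
})

def remaining_stories(raw: str) -> int:
    """Count stories not yet passing -- the burn-down chart's next data point."""
    results = json.loads(raw)["results"]
    return sum(1 for r in results if r["status"] != "passed")

print(remaining_stories(payload))  # 1
```

With an agreed shape like this, refreshing a burn-down chart after a build would be a one-liner rather than a per-tool integration project.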
For this to happen, we’d need to rethink our tools in the context of wider workflows – the bigger picture – something we’re notoriously bad at.
Perhaps this is a classic illustration of what you end up with when you have an exclusively feature/solution or product focus, yes? Unit tests, customer tests, automated builds, static analysis results, commits, deployments – these are all actors in a bigger drama. The current situation is indicative of actors who only read their parts, though.