In the earliest days of stored-program electronic computers, programs were captured by punching holes into cards, each hole representing an individual binary digit (bit) that could be read into memory.
The people who did this were highly skilled computer programmers.
In 1952, everything changed when Admiral Grace Hopper invented the A-0 compiler that automatically translated high-level human-readable instructions into binary machine code. “Automatic Programming”… well… completely automated programming.
This was the first time an advance in computer programming tools completely eliminated the need for specialised programmers. As everyone knows, there have been no computer programmers since 1952.
The second time an advance in programming technology completely eliminated the need for programmers was in the late 1950s, when third-generation compiled languages like COBOL and Fortran were invented, enabling users to describe what software they want in high-level language and have the machine automatically generate the low-level machine code.
As I’m sure we all recall from our history class, there have been no computer programmers since 1959.
Then, through the 1960s, high-level “problem-oriented languages” like LISP and ALGOL completely eliminated the need for computer programmers all over again. Now users could simply express the goals of the system in high-level language, and low-level code would be automatically generated by the machine.
That’s why there haven’t been any computer programmers since the 1960s.
Programming was completely eliminated once again in the 1970s and 80s by fourth-generation languages like Informix-4GL and Focus, which enabled users to describe what software they want in high-level language. That’s why you’ll never meet a programmer under the age of 55.
The 1990s saw the rise of visual modeling and Computer-Aided Software Engineering – which, remember, couldn’t have been a thing, because programming died out in the 1950s – and now complex computer systems could be designed by, y’know, cats or horses or whatever just describing at a high level what software they want, with no further need for computer programmers.
This is why the only place you’ll get to see a code editor these days is in a museum. We’ve had no need for them since 1999.
“AI” coding assistants are the latest advance in programming technology that is eliminating the need for computer programmers (who haven’t existed since 1952, remember?).
Users can just describe what software they want in high-level natural language, and the language model, with the aid of some non-AI gubbins on top, will generate a complete working solution for them.
Sound familiar?
I’m being facetious, of course. Programming, as a profession, didn’t die out in 1952, or in 1959. Each time, there were just more computer programs, and more computer programmers.
With every previous advance in programming technology, that’s been the result: more software, and more software developers. Making it easier and more accessible has just increased demand. This is an example of Jevons’ Paradox.
The other reason why specialised programmers have always been in demand is the inherent ambiguity of natural human languages. Although 3GLs like COBOL look like English at a glance, they’re actually quite different. A statement written in COBOL can mean one – and only one – thing. A statement written in English might have multiple possible interpretations.
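To make that concrete, here’s a toy illustration (entirely made up, not from any real system): the English instruction “add the tax and the shipping to the price, doubled” has at least two defensible readings, whereas the formal expression at the end has exactly one.

```python
# Two defensible readings of the English instruction:
# "add the tax and the shipping to the price, doubled"
price, tax, shipping = 100.00, 7.50, 4.99

reading_1 = (price + tax + shipping) * 2   # double the whole total
reading_2 = price + (tax + shipping) * 2   # double only the extras

print(f"{reading_1:.2f}")   # 224.98
print(f"{reading_2:.2f}")   # 124.98

# The formal expression below, by contrast, has exactly one parse
# and exactly one meaning -- the grammar permits nothing else.
total = (price + tax + shipping) * 2
```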
So the creators of compilers necessarily had to invent formal languages to instruct them with.
It turns out that the really hard part of computer programming is expressing ourselves formally and precisely in a way that can be automatically translated into machine instructions, regardless of the level of abstraction.
The people creating the COBOL and Fortran programs had to become programmers. The people creating the Focus programs had to become programmers. The people creating spreadsheet applications had to become programmers. The people dragging and dropping visual components in WYSIWYG editors had to become programmers. The people creating the executable UML models had to become programmers. The people snapping together reusable No-Code/Low-Code widgets had to become programmers. They all had to learn to think like a computer.
And the people creating “AI”-generated software will necessarily have to become programmers.
“This time it’s different, Jason.”
Well, I’ve heard that before. And the folly of believing we can accurately specify software in natural language, together with the evidence we’ve seen so far, suggests that it’s going to be no different this time. Human intent is too nuanced for computers.
Also, if we were to view language models as being the same as compilers, we’d be making a category mistake. LLMs are not deterministic like a compiler. Every time you hit the “Build” button, you get a different computer program (if it’s able to complete the program at all without a human intervening).
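Here’s a minimal sketch of that distinction, using toy stand-ins rather than a real compiler or a real model: the “compiler” below is a pure function of its input, so identical source gives byte-for-byte identical output; the “LLM” samples its output, so repeated runs diverge.

```python
import hashlib
import random

source = "PRINT 'HELLO, WORLD'"

def toy_compiler(src: str) -> bytes:
    # Stands in for a real compiler: a pure function of its input,
    # so identical source always yields an identical "binary".
    return hashlib.sha256(src.encode()).digest()

def toy_llm(prompt: str) -> str:
    # Stands in for an LLM decoding with temperature > 0:
    # each "token" is sampled, so repeated runs diverge.
    vocab = ["print", "(", ")", "'hello'", "'world'", ",", ";"]
    return " ".join(random.choices(vocab, k=12))

# Deterministic: byte-for-byte identical on every run.
assert toy_compiler(source) == toy_compiler(source)

# Stochastic: almost certainly different on every run.
print(toy_llm(source))
print(toy_llm(source))
```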
The ambiguity of our natural languages, coupled with the now-famous unreliability and stochastic nature of LLMs, means that a human programmer will still be required.
The other matter is the central thesis of this series of posts: that most dev teams using “AI” coding assistants are not getting any value out of them. The tools are actually making the bottlenecks in software delivery worse, leading to even longer delays, more problems in production – hello, everyone at Amazon Web Services! – and rapidly increasing maintainability problems.
These teams would actually go faster if they stopped using “AI” to generate or modify code.
And that might give them some breathing room to address the real bottlenecks in their process.
“Ah, but Jason, AI coding assistants are getting better every day.”
Are they, though? We’ve been seeing a very obvious plateauing of the capabilities – in particular, the accuracy – of LLMs for over a year now. Scaling, once touted as the route to Artificial General Intelligence (AGI), has now clearly hit a wall. In many cases, the bigger they’ve tried to make the models recently, the less reliable their performance seems to have become in key areas, like code generation. And LLMs are the engines of these coding assistants (for the time being).
No doubt, the developers of the IDEs, CLIs and “agents” that sit atop the LLM are learning how to work around the technology’s limitations – in many cases, building on principles that are discussed in this series.
But that, too – constrained by an “intelligence” that’s not likely to get much smarter for the foreseeable future – will hit its limits. There’s only so much you can do with a Markdown file and a “while” loop.
So we need to cut our cloth accordingly. This, folks, is about as good as it’s gonna get – perhaps in my lifetime.
But let’s not be a Gloomy Gary! With the technology as it is today, some teams are getting – admittedly modest – benefit from it.
The irony is that those teams were already high-performing, according to the DORA data. They’d already addressed the bottlenecks, the blockers and the leaks in their development process.
The key to being effective with “AI” coding assistants is being effective without them.
When they attach the code-generating firehose, they still don’t get the power shower that the AI industry promised, but they can feel a difference: a noticeably stronger jet.
And, with investment in skills, in process, and in automation of the old-fashioned kind, theoretically any team could reap these benefits – just not necessarily today.
So what of the future, then? The real, likely future?
Multiple lines of research now seem to be converging on the benefits – in reliability, cost, energy and so on – of much smaller models than the hyperscale frontier LLMs that we’ve been focusing on up to now.
Models with a few billion parameters, created by distilling much larger models perhaps, are already becoming more popular. They can run locally on high-end consumer hardware; no need for a data centre with a 100 MW power supply.
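A rough back-of-envelope, using illustrative numbers rather than any specific model’s (say, 7 billion parameters quantised to 4 bits, versus a trillion-parameter frontier model served at 16 bits), shows why the weights of a small model fit comfortably on consumer hardware:

```python
# Back-of-envelope memory footprint (illustrative numbers, not from
# any specific model):
#   - a "small" model: 7 billion parameters, quantised to 4 bits
#   - a frontier-scale model: 1 trillion parameters at 16 bits
small_params = 7e9
small_bytes_per_weight = 0.5        # 4-bit quantisation

frontier_params = 1e12
frontier_bytes_per_weight = 2.0     # 16-bit weights

small_gb = small_params * small_bytes_per_weight / 1e9
frontier_gb = frontier_params * frontier_bytes_per_weight / 1e9

print(f"small model weights:    ~{small_gb:.1f} GB")     # ~3.5 GB
print(f"frontier model weights: ~{frontier_gb:.0f} GB")  # ~2000 GB
```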
And research is delving deeper into even smaller models, with just millions of parameters, targeted at niche applications (like code generation).
Perhaps the future of “AI” coding models will be small, local and application-specific? Artificial neural networks are amenable to a divide-and-conquer approach to training and inference. Half the model might require a quarter of the compute.
Maybe the hyperscale general-purpose models we see today – with all the economic, environmental and societal downsides they’ve proven to bring with them – are the biggest we’ll ever see? Maybe the trend for hyper-scaling and Cloud-based AI will go into reverse?
For sure, at this scale, they’re massive loss-leaders for companies like Anthropic and OpenAI, and certainly not worth it for the modest productivity gains that a small portion of teams are reporting.
It won’t come as a huge surprise if the industry decides that hyper-scaling – for a bunch of reasons – just isn’t worth it.
To sum up, “AI” coding assistants are probably here to stay, but they are pretty much as good as they’re going to get for the foreseeable future. That means all the gains going forward aren’t going to come from the technology itself, but from how we use it.
’Twas ever thus, going all the way back to A-0. Development team productivity has always been systemic, and not about individual output. A bad development system will beat a code-generating firehose every time.
And that’s what “The AI-ready Software Developer” is all about.