I’ve made a lot of predictions about how AI will change programming: hacking code will be less important than understanding problems, we’ll have better tools for generating code, higher-level skills will be more valuable, and so on. All of these are tied together, to some extent. If programmers spend less time writing code, they’ll have more time to spend on the real problem: understanding what the code they’re writing needs to do. Our industry has done a poor job of that over the years. And they’ll be able to spend more time designing the larger systems in which their code runs. We’ve done a better job of that, but we will need to design services that can scale to more and more users while providing better security. Those systems must be observable so that problems can be detected and solved before they become crises. We’ll no doubt get better tools, and some of those tools may even help solve those issues of software architecture. But we’re not there yet.
What’s on the other side of the coin? Better tools, less time hacking code, and more time to design useful systems all sound great. But what shadows are lurking behind the promises?
The first one is obvious. I’ve never seen a software development group that thought it was underworked. I suspect that most, if not all, of them are indeed overworked, not engaging in ritual complaining. What’s the chance that the gift of AI will be “Now you can write code 30% faster, so here’s 50% more code to write in 2024. You had six months for this project, but since you’re 30% faster, you can clearly get it done in three”? (Notice that the math doesn’t even work: a 30% speedup turns six months into roughly four and a half, not three.) There are certainly poorly managed groups that will face heavier workloads and less realistic schedules as a result of AI – or, to be more precise, because management misunderstands the opportunities that AI really presents. More poorly thought-out, badly designed, buggy software: that’s not what we need.
Finally, there’s debugging. It often gets tangled up with high-level skills – but that’s not right. Debugging is as low-level as it gets, the second thing any programmer learns after writing their first “hello, world.” I’ve seen estimates that generative AI can be as much as 90% accurate when writing code – which sounds pretty good until you realize that 90% accuracy is probably per line of code. For a 10-line function, the probability that the whole result is correct drops to about a third (0.9¹⁰ ≈ 0.35). So there will be a lot of debugging to do – and we have to take that into account. It’s surprising to me that more people haven’t noticed the disjunction between “Now we won’t have to worry about understanding the details of programming languages and libraries” (hey, I may have even said that) and “But we’ll have to be able to debug errors in code that we haven’t written and may not understand.” And I’m not sure how you gain the kind of mental fluency you need to do this debugging without having written a lot of code by hand. There will probably be fewer garden-variety “won’t compile” syntax bugs, but more bugs that alter behavior in subtle ways or introduce security vulnerabilities. When asked to improve a program I wrote, I’ve seen GPT change the order of lines in ways that introduced subtle errors. I’m not saying that AI won’t make programmers faster and more efficient – but I wonder if we’re also throwing junior programmers into the deep end of the pool without a life jacket.
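The compounding effect of per-line accuracy is easy to see with a back-of-the-envelope sketch. This assumes errors are independent across lines – a simplification, since real errors cluster and correlate – but the trend it shows is the point:

```python
# Probability that an N-line function is entirely correct, given a
# fixed per-line accuracy, assuming each line fails independently
# (a simplification; real errors are correlated).

def prob_all_correct(per_line_accuracy: float, lines: int) -> float:
    return per_line_accuracy ** lines

for n in (1, 10, 50, 100):
    p = prob_all_correct(0.90, n)
    print(f"{n:4d} lines: {p:.1%} chance the whole function is correct")
```

At 90% per-line accuracy, a 10-line function comes out right about 35% of the time, and the odds for a 50-line function are under 1% – which is why the debugging load doesn’t disappear, it moves.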
Am I saying, “Stop the train, we need to get off?” No. Am I saying that programmers won’t become more efficient as a result of AI? No. But AI will introduce change, and change always has its good side and its bad side. In the coming year, we’ll have to deal with both sides.