The way software engineers write code has fundamentally changed with the introduction of artificial intelligence, and more specifically large language models. This new technology continues the long-running trend in the history of coding of abstracting lower-level details away from engineers, but it now does so at a much larger scale.
When I was learning to code, people used to joke that Python was like English. Now with LLMs, you can literally write English and receive back working code in just about any programming language, accomplishing exactly what you requested. While this advancement is nothing short of astounding, it’s hard for me to say objectively whether this abstraction is good for software engineers. Like everything in engineering, it’s filled with tradeoffs, and to me that makes AI a double-edged sword.
For anyone who’s never watched golf, professionals always play with a caddie: someone who assists the player by carrying their clubs, keeping their score, and most importantly offering advice based on factors like weather conditions, distance to the pin, and the course being played. A caddie is an essential part of any golfer’s success, and having a good or bad one can affect how well a professional performs. To me, using an LLM when coding is very similar to having a caddie.
LLMs provide quick, helpful feedback while software engineers tackle problems. The ability to easily “converse” back and forth with these models helps break problems down and illuminate their solutions. Because of this, I believe engineers can become more efficient with the help of LLMs when they’re used responsibly. However, relying too much on an LLM, like a golfer leaning too heavily on their caddie’s advice, can be detrimental.
Because of this, it’s essential to strike the right balance in how much advice you seek when leveraging this technology. Perhaps most importantly, you must always understand what the code an LLM returns to you is doing, and this is where I believe the problem begins. I don’t think all software engineers adhere to this principle, mostly because it’s no longer a hard requirement.
In the earliest days of programming, programmers had no choice but to understand the nitty-gritty details because they started by literally writing machine code. As time went on, more and more levels of abstraction were placed between the programmers producing code and the executables running on hardware. This naturally increased the number of areas where a programmer’s knowledge could lapse. I’m not saying you should understand all the internal workings of a car before you press the gas pedal, but you should understand how to drive. With the introduction of LLMs into coding, I worry that many won’t bother to learn at all.
And ironically, learning is exactly what I believe LLMs are extremely helpful for. They can quickly and succinctly explain complex topics and concepts that took others long periods of time (and pain) to learn and digest, all while doing it in a haiku, with a silly analogy, or in whatever other form you request.
Assuming you do understand the code an LLM provides, I believe LLMs can be beneficial and make engineers more productive. It reminds me of the P versus NP problem: a solution can be fast to verify even when it’s time-consuming to come up with one. If you can verify that what you’re given is what you want, you save time. If you can’t, but you rely on it anyway, you’re playing with fire.
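To make that verify-versus-solve asymmetry concrete, here’s a minimal Python sketch using subset-sum, a classic NP-complete problem (the toy numbers and function names are my own illustration, not anything from a real codebase):

```python
from itertools import combinations

def verify(nums, target, certificate):
    # Verifying is cheap: check the proposed subset actually comes
    # from the list and sums to the target -- a single linear pass.
    pool = list(nums)
    for c in certificate:
        if c not in pool:
            return False
        pool.remove(c)  # consume each number so duplicates are handled
    return sum(certificate) == target

def solve(nums, target):
    # Solving is expensive: brute force over all 2^n subsets.
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
print(verify(nums, 9, [4, 5]))  # True, checked almost instantly
print(solve(nums, 9))           # [4, 5], found only after searching
```

Reviewing code an LLM hands you is the verify step; writing it yourself from scratch is the solve step. The whole bet is that the first is cheaper than the second, and it only pays off if you’re actually able to do the verifying.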
Drop a like ❤️ and comment below if you made it to the end of the article.
I liked your caddie analogy. It's too early IMO to say where we'll land with LLMs in the next 5-10 years, but I'm not convinced it's replacing jobs right now. It's simply not *that* good ATM
AI is not your caddie. Libraries are your caddie. AI is an answer to management's desire to choose mass production over code quality. A good programmer doesn't use AI, just as a good chef doesn't use frozen pizza dough.
A good programmer sees code as poetry: something that has the biggest, most effective impact in the cleanest, most minimal form. Like a scalpel. I have tried three times so far to get along with AI. It just doesn't work, because its database is made of code it finds on the Internet, which is always either old or bad.
So, to save time, I'd rather code it myself than spend time debugging AI code. Of course you can say: hurr-durr, it's improving, it's just beginning. So are programming languages, ever evolving.
AI does not have creativity. AI doesn't sit on a bench somewhere and have a mind-blowing, life-changing creative idea out of nowhere. AI is just another automation tool, and I already believe it is overreaching hype.
Developers are geeks. You can throw a challenging piece of code at them and watch them discuss it all night.
AI is for mediocre developers: the ones who are already taking shortcuts, the ones who just want to get the job done and go home.
I'm starting to believe this is the normal evolution of any profession. At the point of maturity, just as with watchmakers and tailors, the elite end up in high demand while the mediocre are slowly removed by automation. AI is making that happen, and developers out there are cheering for it.