Disagree with the Article Title
I'm not really sure what point this article is trying to make, but it doesn't bear much relation to the headline. So one person who is a light AI user doesn't get much benefit from it? Quick, everyone sell your shares in AI! Or not...
I've been a software engineer (plus other labels) for a good number of years now. This year I worked on an AI project and have done a LOT of learning about AI. Before I get into that, I will say this:
- About 70% of people I've personally encountered who claim to be developers aren't competent. At best, a lot of them are bad coders, but there's a lot more to software development than just coding (and if you are going to be just a coder, at least be good at it).
- The software engineering industry is full of cowboys and people who don't know what they are doing. Things don't actually seem to be getting better, sadly. Instead, non-technical people go on a 3-day Scrum course, get a BS certificate and think they know the first thing about running a software development project (they don't, and in all likelihood, neither did the person teaching the course).
- Be very afraid of anyone who prefixes a statement about software development with "I used to be a developer". As a good software engineer, you can often earn a similar wage to (if not more than) a CTO, so the primary reason to no longer be a developer is that you either burnt out or weren't very good at it.
Onto AI: if you'd told me a year to a year and a half ago that you could get an AI agent to do half or even more of the coding for you, I would have thought you were quite stupid and incompetent. Today, I not only believe this is possible, but am actually doing it myself.
So what are some people doing wrong, and why are others able to get good results, often using the same models, such as Sonnet 4.5? The answer is 'context', which is the data that you send into your LLM. Copilot isn't very good at this. The best indexing it does of your codebase is when your source code is checked into GitHub or Azure DevOps. The problem is that once you start making modifications, that index is out of date and it has to fall back to looking at modified files. And not everyone has their code in ADO or GitHub.
So which tools do have really good context engineering? The two I've found are Augment Code and Warp.dev. Sadly, both recently put up their prices by 3x or more over their previous tiers, so I'm going to continue with Warp.dev, but also see how I get on with BYOK (Bring Your Own Key). Both Augment and Warp create really good hybrid indexes of your codebase, so they know how files and modules relate to one another, the same way a long-time developer working on it would. You can use these systems to add features to an existing codebase and write meaningful unit tests that follow your existing test patterns and frameworks.
Copilot is OK for simple stuff: a few closely related files. But asking it to work across and understand the full codebase is most likely beyond what it can do.
Another thing you could try is Claude Code with an MCP tool that indexes your codebase and stores the vectorised embeddings. If 'embeddings' isn't a familiar term: an embedding is a high-dimensional vector that captures the meaning of a thing based on its attributes. Other items close by in that dimensional space are judged as similar, such as 'toast' and 'bread'.
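The "nearby vectors are similar" idea can be shown with a toy example. The numbers below are made up purely for illustration; real embedding models produce vectors with hundreds or thousands of dimensions, and similarity is typically measured with cosine similarity:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 = more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" (invented values; real models
# learn these from data, in far higher dimensions).
toast   = [0.9, 0.8, 0.1, 0.0]
bread   = [0.8, 0.9, 0.2, 0.1]
bicycle = [0.1, 0.0, 0.9, 0.8]

print(cosine_similarity(toast, bread))    # high: semantically related
print(cosine_similarity(toast, bicycle))  # low: unrelated
```

A vector index over your codebase does the same thing with chunks of source code instead of words, so a query about "user login" lands near the files that actually implement authentication.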
But context is king: you need to ask clearly for what you want, and also provide enough of the existing codebase as the correct context. Augment and Warp do this automatically. Another thing they do is build the solution, add tests and keep going round in iterations whilst they fix their own mistakes, which is great.
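As a rough sketch of what "providing the right context" means mechanically: rank your source files by relevance to the request, then prepend the best matches to the prompt. The scoring here is deliberately crude keyword overlap, and the file names and snippets are invented; real tools such as Augment and Warp use embedding similarity plus structural indexes instead:

```python
def score(query, text):
    """Crude relevance score: fraction of query words found in the text.
    A stand-in for real embedding-based retrieval."""
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / len(q)

def build_context(query, files, top_k=2):
    """Pick the top_k most relevant files and assemble them into a prompt."""
    ranked = sorted(files.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    context = "\n\n".join(f"// {name}\n{body}" for name, body in ranked[:top_k])
    return f"{context}\n\nTask: {query}"

# Hypothetical mini-codebase for illustration.
files = {
    "auth.py": "def login(user, password): validate user password session",
    "billing.py": "def charge(card, amount): invoice payment stripe",
    "utils.py": "def slugify(text): lower strip replace",
}
prompt = build_context("add a logout function to the user session", files)
```

The point of the sketch: the agent never sees your whole repository at once, so whichever tool picks the most relevant slices wins, and that is exactly where the better context engines pull ahead.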
Compare this to tools such as Cursor (on the trial, I could only use GPT models), where the results were horrible. Windsurf has gotten far, far worse and was the only AI agent system to corrupt my files, multiple times (so perhaps requiring that the Windsurf devs do 80-hour weeks wasn't a great idea after all. Who could have known, right?!).
At the end of the day, all AI is going to do for incompetent people is let them continue to be incompetent, whilst being a huge help to those who take the time to understand how it really works and how it can benefit them.
Output from AI absolutely does need to be reviewed. I've had several situations where the proposed solution was hiding that something wasn't possible (such as a specific local embeddings model that wouldn't run in LlamaSharp, though the code pretended it did). If you neither understand nor review the code that's produced, then you're going to end up in real trouble. But if you're a good developer, take the time to try out different tools and learn how they work, and they will save you many, many hours doing dog work or trying to understand how some spaghetti code works in an old solution.
I'm actually quite positive about AI and its ability to help people at all levels in software development. I just would be very careful about juniors using it for anything other than helping explain bits of code to them. I would advise them not to use AI to generate code in their early years, or at least to take days off from doing that. Learning to get unstuck is a crucial skill, and it can primarily only be learned by getting stuck.
AI can speed up learning of subjects. I still pay for many training resources: PluralSight, loads of Udemy courses, Cloud Academy subscriptions in the past, and I'm really liking my recent Manning subscription, where you can even have books read to you. I would probably avoid LinkedIn Learning, as I've seen the odd course on there that I felt was full of crap or just plain wrong. But then you can get AI to explain a thing that was glossed over in a training video or book, or just answer your specific questions about something. It really is an incredible technology, and I fully expect more and more of us to be running local models over the next ten years.
I would highly recommend that any developer spends a good amount of their time developing AI skills and understanding how the technology works at a deep level. A strong core foundation in a topic is the only base upon which true expert knowledge can be built. Look on Udemy for courses by Ed Donner; they are really good and go into the details of AI.
Yes, AI can generate bad code. It can generate code that works but wouldn't scale well past one user. But if you're a competent software engineer, you'll quickly spot these problems and be able to get the agent to put them right. A full round trip of that may take ten minutes, including telling the agent that the first version was wrong and what it should do instead. Ten minutes is far quicker than anyone could have written the code correctly in the first place, so you really can make big gains in coding. And then you can get 100% test coverage for the new code, written in the same style and with the same libraries as your existing tests. But again, software engineering isn't just coding, so AI doesn't replace the software engineer. It just means that a 1-person team really can create, test and deploy a full application all by themselves. Many fewer people required on each team.
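That generate-test-fix cycle the agents run can be sketched like this. Both callables are hypothetical stand-ins: `generate` for the model call, `run_tests` for your project's test runner:

```python
def agent_loop(generate, run_tests, max_iters=5):
    """Generate code, run the tests, and feed failures back until green."""
    feedback = None
    for attempt in range(1, max_iters + 1):
        code = generate(feedback)      # first attempt gets no feedback
        ok, failures = run_tests(code)
        if ok:
            return code, attempt
        feedback = failures            # next attempt sees what broke
    raise RuntimeError(f"no passing solution after {max_iters} attempts")

# Stub demo: this fake "model" only gets it right after seeing feedback.
def fake_generate(feedback):
    return "v2" if feedback else "v1"

def fake_run_tests(code):
    if code == "v2":
        return True, ""
    return False, "assertion failed in test_logout"

code, attempts = agent_loop(fake_generate, fake_run_tests)
```

The loop is also where your review fits: when you spot a scaling problem, your correction becomes the feedback for the next iteration, which is why the round trip stays at minutes rather than hours.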
So to anyone who's not seeing a benefit from AI, either:
- You are using the wrong tools. Or the right tools badly.
- You are using an obscure programming language.
- You are using brand new frameworks and libraries (which tends to be the case with AI libraries at the moment - agents really struggle writing C# for Semantic Kernel because that library is changing so quickly, and things that are still in the readme of the codebase have been removed from the actual code!)
Good luck everyone and be happy: AI can do most of the grunt work, whilst leaving you to do the real thinking. Okay, so perhaps that's not great news for everyone, but it should be.