Why is it that every story on AI is either mindless boosterism or doom and gloom?
As with most things, the reality lies somewhere in between.
Code generation / coding assistants are one of the most useful use cases for LLMs, because (1) there's a lot of decent code available for training, e.g. on GitHub or in the Linux kernel; and (2) success is fairly easy to measure: the code works or it doesn't, and it runs fast or it doesn't. Will there be subtle bugs? Possibly, hence the danger of blindly accepting generated code.
But human coders introduce subtle bugs too. The difference is that LLMs can generate significant amounts of code quickly, so there's a temptation to just accept it without adequate review.
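To make that concrete, here's a contrived Python sketch (my own illustration, not from the article linked below) of the kind of subtle bug that slips through when generated code is skimmed rather than reviewed: the output looks plausible on a quick happy-path check, but there's an off-by-one error.

```python
# Contrived illustration of a "subtle bug": plausible-looking generated code
# that passes a casual eyeball test but silently drops a result.

def moving_average(values, window):
    """Return the moving average of `values` over a sliding `window`."""
    return [
        sum(values[i:i + window]) / window
        # BUG: should be range(len(values) - window + 1);
        # as written, the final window is silently dropped.
        for i in range(len(values) - window)
    ]

print(moving_average([1, 2, 3, 4, 5], 2))
# Prints [1.5, 2.5, 3.5] -- looks reasonable, but 4.5 is missing.
```

A quick glance, or a test that only spot-checks the first few values, would wave this through.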
But any programmer or developer who dismisses LLMs as stochastic parrots or BS generators... will get left behind. The productivity benefits are significant enough that an experienced developer using these tools will outperform an equally competent developer who doesn't.
Here's a very good read from a credible, realistic (i.e. not AI-hyping) author (may be paywalled): https://newsletter.pragmaticengineer.com/p/ai-tooling-2024