Re: "an LLM bot can write new code"
If you had hand-written the code and fed it to a modern compiler, an assembly language expert might well describe the result as "I wouldn't have written it exactly like that [but] the code does work". In fact, all computer languages are just a way of writing down *requirements* in a computer-readable form; if your source code does not express your requirements properly, you should not be surprised when the resulting program doesn't do the right thing.
Natural languages, with all their ambiguities, are a truly terrible way to specify requirements and so the first job for most software development projects is to "translate" the plain English requirements of the customer into something at least a *little* bit more rigorous. (<snark>I think this used to be called "analysis", back in the day when people bothered to do it.</snark>)
There have been many attempts over the years to persuade a machine to generate code in response to some "prompt" or other. I think it is helpful to divide them into two groups.
The first group is "compilers" and these work quite well. In this model, the "prompt" is in a language with carefully delineated semantics and a careful reader of the prompt can make a reasoned guess whether the resulting output (object code) will actually do what is intended. A key point (though not usually emphasised because it is so stunningly obvious) is that human beings are not supposed to need to read the output.
There is a second group of code generators, exemplified by various CASE tools over the decades. These have never gained widespread use. In this model, the "prompt" is less well delineated and the human reader is expected to read (and subsequently maintain!) the output. Using LLMs to generate code falls into this second group and, in my opinion, is likely to suffer the same problems (and the same long-term fate) as all other CASE tools.
Domain Specific Languages belong in the first group and have a proven track record. They are a bit harder to produce than simply chatting to WhateverGPT, but the pay-off is correspondingly greater. An LLM's ability to digest the loose language that humans insist on using *might* be a useful component of a DSL design tool. That seems, to me, to be an idea worth exploring. However, if you want reliable software, I still think that the end product of such a design tool has to be a DSL (a prompt language) that has all its semantics nailed down so that you can reason about your code and (ideally) prove aspects of its correctness.
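For what it's worth, here is a toy sketch (entirely my own invention, including the names and the `Clamp` construct) of what "semantics nailed down" buys you: a closed little language where every construct has exactly one defined meaning, so a tool can answer questions about a program (which inputs it needs, what range its output can take) before it ever runs.

```python
# A minimal, hypothetical DSL: integer literals, named inputs, addition,
# and a bounded clamp. Nothing else exists, so nothing else is ambiguous.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Lit:
    value: int

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Add:
    left: "Expr"
    right: "Expr"

@dataclass(frozen=True)
class Clamp:
    expr: "Expr"
    low: int
    high: int

Expr = Union[Lit, Var, Add, Clamp]

def eval_expr(e: Expr, env: dict[str, int]) -> int:
    """The single, complete definition of the language's semantics."""
    if isinstance(e, Lit):
        return e.value
    if isinstance(e, Var):
        return env[e.name]  # an undefined input fails loudly, not silently
    if isinstance(e, Add):
        return eval_expr(e.left, env) + eval_expr(e.right, env)
    if isinstance(e, Clamp):
        return max(e.low, min(e.high, eval_expr(e.expr, env)))
    raise TypeError(f"not a DSL term: {e!r}")

def free_vars(e: Expr) -> set[str]:
    """Static analysis: which inputs does a program require?
    Because the language is closed, this check is exhaustive."""
    if isinstance(e, Lit):
        return set()
    if isinstance(e, Var):
        return {e.name}
    if isinstance(e, Add):
        return free_vars(e.left) | free_vars(e.right)
    if isinstance(e, Clamp):
        return free_vars(e.expr)
    raise TypeError(f"not a DSL term: {e!r}")

# A "program" in the DSL, and a property we can state *before* running it:
# the outermost Clamp guarantees the result stays within [0, 100].
program = Clamp(Add(Var("sensor_a"), Var("sensor_b")), low=0, high=100)
assert free_vars(program) == {"sensor_a", "sensor_b"}
assert 0 <= eval_expr(program, {"sensor_a": 70, "sensor_b": 55}) <= 100
```

The point is not this particular toy but the general shape: because the language is closed, analyses like `free_vars` are exhaustive rather than heuristic. An LLM front-end could plausibly *emit* programs in such a language, and every guarantee above would still hold regardless of how loosely the original request was worded.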