TIL..
People still use Delphi. That's kinda wild.
Also, on the primary topic: "it depends" - YMMV, because how people use it varies. You CAN make it work for you, but you have to know how to do so: it isn't going to just work out of the box, and it's hard to know whether it ever will.
Think about how a human developer approaches a fresh codebase. Say you start a new job at a new company, maintaining a 10-year-old system. It might take you 3 months, maybe 6, to fully get up to speed. The docs are probably a mess. The test suite is probably broken. The system might not have been engineered well in the first place. It might, IDK, talk to other systems that are hard to model.
If in week two you pick up a ticket that requires you to get deep into the guts of it, you're going to have all sorts of issues and you'll probably do it wrong - then you're relying on your team to catch those errors (they probably won't).
Now consider it from the POV of an AI when you ask it to work on that same system: you're asking it to do the same thing, but in a matter of seconds - with no opportunity for those 6 months of learning, no support structures, no properly working test suite, no accurate documentation. It'll rip through it and do what it can, but the result is probably going to be broken. If there's one legitimate complaint here - and it's probably the root cause - it's that these models try to "please" the user by completing the task even when they don't really know how to. And every time you ask an AI to do something, it starts from its own general knowledge and knows nothing about your project.
There are ways to sort all of these issues out, and LLMs are very good at helping you do it. It's an investment of time and work, but a worthy one. You can ask the AI to write its own AGENTS file and its own GitHub PR review instructions; to rewrite the main documentation, all in the repo (you do store the code docs in the repo, not Confluence or something, right?), so it properly documents how the system works; to validate its own test suite; and to build test versions of the systems it talks to. You _must_ check through these yourself, at least initially. Then you can move on to having the AI do things for you - and have it test what it did before it even considers telling you it's finished. Once you've done that groundwork with the AI's help (and you can tell it, by the way, to ask when it doesn't properly understand something) - the AI equivalent of your 6 months of "getting to know the system" - it will work like a dream.
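To make that concrete, here's roughly the shape such a file can take - the section headings are illustrative, not a standard; adapt them to whatever your tooling actually reads:

```
# AGENTS.md - drafted by the AI, checked by a human

## What this system is
One paragraph: what it does, who calls it, what it calls.

## How to build and test
Exact commands, plus which tests are known-broken (until fixed).

## Conventions
Code style, branching model, and where the docs live (in the repo).

## External systems
Each dependency, and where its test double / stub lives.

## When unsure
Ask the user instead of guessing.
```

The point isn't the exact format - it's that the agent starts every task with the context a new hire would take months to absorb.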
It'll also help you do things that are above you, if you're remotely intelligent about it. I have personal projects that I simply don't have the 6 or 7 PhDs' worth of knowledge I'd need to write myself, where AI has done basically all the work, in a testable and tested way, where the science is sound - projects that would cost hundreds of grand to have somebody else write. Occasionally at the day job I can ask the AI to do more complex stuff like that too, and it will.
It also saves my finger and wrist joints, if nothing else - and even if it took the same amount of time in the end (it doesn't), AI can do the work of multiple copies of me in parallel anyway. I sometimes send the AI off cooking personal projects in the background whilst I work on day-job stuff: fire and forget.