"When grasped, you never quite knew whether it would run its course to a mind-blowing transformation – or just sputter out after a few weak sparks."
In my experience, that depends how it is grasped, and by whom.
As I've said before, these AI contraptions are tools. They can be useful when used correctly and with appropriate caution and supervision. They can churn out code in seconds that would take a human an hour or two to produce and debug by hand - great for knocking out a quick utility, say, that you know you could write yourself if you were prepared to spend the time.
But they are not fit to be let loose on their own, particularly not on mission-critical tasks. In part that comes down to experience - they don't have any, and they're not likely to get any. For example, I recently asked an LLM to write me some Free Pascal code for a Windows utility. It was specifically instructed not to use any widget libraries, but instead to call the Windows API directly.
Moments later it gave me a full listing of the source code, complete with message loop, message handlers etc. The code looked great to me, though admittedly it has been a long time since I wrote anything that way. But it didn't compile. Odd. I passed the error back to the LLM, and it came back with updated code, which also didn't compile. I went round that loop a few times, getting increasingly baroque answers, which either didn't compile or didn't work.
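For the uninitiated, "calling the Windows API directly" means roughly the shape below - a sketch of a minimal Free Pascal program with a window procedure and message loop, of the general kind the LLM produced. The class name and window title here are my own placeholders, and I haven't compiled this particular listing, so treat it as illustrative rather than definitive:

```pascal
program MinimalWin;
{$mode objfpc}
uses Windows;  // Win32 API bindings shipped with Free Pascal - no widget library

// The window procedure: Windows calls this back with each message.
function WndProc(Wnd: HWND; Msg: UINT; wParam: WPARAM; lParam: LPARAM): LRESULT; stdcall;
begin
  case Msg of
    WM_DESTROY:
      begin
        PostQuitMessage(0);  // ends the message loop below
        Result := 0;
      end;
  else
    Result := DefWindowProc(Wnd, Msg, wParam, lParam);  // default handling
  end;
end;

var
  wc: WNDCLASS;
  Wnd: HWND;
  M: MSG;
begin
  // Register a window class that points at our window procedure.
  FillChar(wc, SizeOf(wc), 0);
  wc.lpfnWndProc := @WndProc;
  wc.hInstance := HInstance;
  wc.lpszClassName := 'MinimalWinClass';  // placeholder name
  RegisterClass(wc);

  Wnd := CreateWindow('MinimalWinClass', 'Minimal Window', WS_OVERLAPPEDWINDOW,
    CW_USEDEFAULT, CW_USEDEFAULT, 400, 300, 0, 0, HInstance, nil);
  ShowWindow(Wnd, SW_SHOW);

  // The message loop: pull messages off the queue and dispatch to WndProc.
  while GetMessage(@M, 0, 0, 0) do
  begin
    TranslateMessage(@M);
    DispatchMessage(@M);
  end;
end.
```

Plausible-looking code like this is exactly what the LLM handed back, which is the point: it is easy to produce something that looks right and hard to tell at a glance whether it will build on your particular setup.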
I didn't have time to fuck around with it further at that point (it was only a hobby project) but I was already starting to suspect that the problem is not with the code but with my Lazarus installation, which is lacking some required elements. I'm confident that, when I get round to fixing that, the code as first given will be fine, but the possibility never occurred to the LLM. This is why we need experienced humans to use the tools, not inexperienced humans to pick up the pieces.