Re: We just don't know.
All true. There's a critical difference between industrial robots in the '70s and LLMs today, though.
Robots were very well understood, with fairly clear paths to improvement, and most if not all of the obstacles were technical in nature. When they did something bad, you could often point at the misbehaving bit and say "that bit, make it better". Iterate on that, and you get better robots.
LLMs are poorly understood, with no clear path to improvement except making them bigger and therefore more expensive (there are other paths, but they are very murky), and some of the obstacles are at the theoretical level with no definite solutions (e.g. so-called hallucinations, or the inability to learn after the training phase). When they do something bad, all you can do is shrug and have a human take over.
At any time someone could publish a paper proving that LLMs cannot improve any further by merely making them bigger, and that will be pretty much it until the next big theoretical breakthrough - which might come a year later, or ten, or a hundred.
Or, hey, the next big theoretical breakthrough could be about to be published right now, and the singularity could follow a month later. All of this to say that predictions are very hard, and nowadays they're even harder than they were in the '70s.