Re: No surprise there
Maybe we're not sure what "the concept of meaning" is, but that's no reason to conclude a large language model doesn't grasp concepts or understand what it's saying. For years these models have been able to summarize complex text; with the newest ones you can ask them to clarify what they just said, or to relate parts of the conversation to novel ideas.
"no system can design another that's 'cleverer; than itself" Is even more unsupportable. I would downvote your statements if I saw them on StackOverflow. You're extrapolating the past performance of these systems and pretending it demonstrates fundamental limitations.