Re: AGI will never arrive
> The concept of "AI" has been so ill defined that in the past researchers conflated it with the ability to play chess
I agree with the point that we've managed to brute force many problems, but I have to disagree with your dismissal of AI researchers.
There was no erroneous "conflation" - the possibility of brute-forcing chess was well understood. Indeed, that understanding came from earlier work on problems related to AI: how to express such massive search problems in the first place, and how to prune them sensibly to speed up the search without losing the best path(s).
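(For anyone who hasn't seen it: the classic form of that "sensible pruning" is alpha-beta pruning. A minimal sketch below - not Deep Blue's actual implementation, just the textbook idea over a toy tree whose leaf values are invented for illustration:)

```python
# Minimal alpha-beta pruning sketch over a toy game tree.
# A "position" here is just a nested list: leaves are static scores,
# inner lists are the moves available from that position.

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Return the minimax value of `node`, skipping branches that
    cannot affect the result - pruning that speeds up the search
    without losing the best path."""
    if not isinstance(node, list):      # leaf: a static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:           # opponent would never allow this line
                break                   # prune the remaining siblings
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Toy tree: max over three min-nodes; full minimax gives
# max(min(3,5), min(6, max(9,7)), min(1,2)) = 6.
tree = [[3, 5], [6, [9, 7]], [1, 2]]
print(alphabeta(tree))  # → 6
```

The point being: the pruning changes which branches get *visited*, never which move gets *chosen* - the whole technique was understood long before anyone bolted it onto custom hardware.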
However, the intent of the research was - and ought to still be - to find a way of playing chess without simple brute force. Unfortunately, the idea of a machine that could beat a human at chess became a Prestige Project: screw any AI research goals, if IBM can create a machine to beat a human Grandmaster this is a massive feather in the corporate cap. Oh look, everyone knows we can brute force it, let's do just that...
As soon as the brute force attack had actually been demonstrated, and at a time when Moore's Law was becoming fairly well known (so more powerful machines could be built more cheaply), the actual problem of playing chess was placed in the "solved" bin by most people - including funding bodies and, yes, yourself.
But that means we only have a "chess-playing massive search engine"; we are *still* without a "chess-playing AI" - one that doesn't use brute force but a more subtle approach. An approach that, it was (is?) hoped, would be applicable to more than just chess *and*, the big dream, would have better explanatory power than just "I tried every path and this one got the biggest score". Which is, if we wish to pursue (what is now, annoyingly, called) AGI, a hole that will need to be filled. But asking to be funded to "solve chess" will be met with derision, coming from the same place as your use of the word "conflation".
> LLMs work differently
They use different mechanics, but still ones that were derived and understood well before OpenAI opened its doors. And they had, as the article points out, been put aside as not being a solution to AI (sorry, "AGI"), even though it was understood that they would exhibit entertaining results if brute forced.
> and get a step closer
Not really - there is even less explanatory power in one of those than there is in the decision tree of a chess player: at least the latter can be meaningfully drawn out ("this node G in the tree precisely maps to this board layout, and because that child of G leads to defeat, you can see the weighting on G had been adjusted by k points. Compare G to H, which is _this_ layout, and H has a final weighting of j, so you can see why G was chosen"). Tedious, but comprehensible.
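(To make that concrete - a toy sketch of the kind of traceable justification a search tree supports; the node names, layouts, and weightings here are all invented for illustration:)

```python
# Sketch of the "tedious but comprehensible" explanation a search
# tree can give: every node maps to a concrete, drawable position
# plus a final weighting, so a chosen move can be justified by
# comparing siblings. All values below are invented for illustration.

class Node:
    def __init__(self, name, layout, weighting):
        self.name = name            # label in the tree, e.g. "G"
        self.layout = layout        # a concrete board state (drawable)
        self.weighting = weighting  # final score after the search

def explain_choice(chosen, rejected):
    """Justify a move purely from the tree's own bookkeeping."""
    return (f"Node {chosen.name} (layout {chosen.layout!r}) has a final "
            f"weighting of {chosen.weighting}; node {rejected.name} "
            f"(layout {rejected.layout!r}) has {rejected.weighting}, "
            f"so {chosen.name} was chosen.")

G = Node("G", "Ke1 Qd1 / ke8", 12)   # hypothetical positions
H = Node("H", "Ke1 Qh5 / ke8", 7)
print(explain_choice(G, H))
```

An LLM offers nothing comparable: there is no node you can point at and say "this weight maps to that board layout".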
> but still overwhelmingly rely on brute force
ENTIRELY rely on brute force! That is *the* characteristic of an LLM!
> another brute force path where researchers fool themselves into believing "if we just get another 10 or 100x the computing cycles and working memory we'll reach AGI".
Which researchers? As the article points out, not the old guard, the ones you dismissed. The modern "AI researchers" who have only been brought up on these massive Nets? What else are they going to say?
> Spoiler alert: they won't
Yes, we know. Everyone knows (except the snake oil salesmen and everyone else who can make a buck). That really isn't a spoiler, in exactly the same way it wasn't a spoiler when brute force was applied to chess and the popular press went apeshit over it: the sound of (dare I say, proper?) AI researchers burying their heads in their hands and sighing was drowned out then, just as it is being drowned out now.