The problem with all AI:
- You throw data at it and "train" it on the subject you desire (i.e. you kill off the candidates that don't get progressively closer to what you want - see the sketch after this list).
- It becomes vaguely proficient at that subject.
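For illustration, here's a minimal sketch of that "training by culling" loop, as a toy genetic algorithm - the target, population size and mutation rate are all invented for the example:

```python
import random

TARGET = 0.0  # the behaviour we "desire": minimise distance to this

def fitness(candidate):
    # How far this candidate is from the desired behaviour; lower is better.
    return abs(candidate - TARGET)

# Start with a random population of "lifeforms".
population = [random.uniform(-100, 100) for _ in range(50)]

for generation in range(200):
    # Rank by how close each candidate gets to what we want...
    population.sort(key=fitness)
    # ...and "kill off" the worse half.
    survivors = population[:25]
    # Survivors breed: offspring are mutated copies of their parents.
    population = survivors + [s + random.gauss(0, 1.0) for s in survivors]

print(f"best candidate after culling: {population[0]:.4f}")
```

Note that nothing in the loop "understands" the target; it just discards whatever scores worse.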
To now train it on another topic, you have to defeat all the training you already gave it, by overwhelming it with other training until its initial training becomes a minority player in its behaviour. That usually means orders of magnitude more training, starting from a base that is heavily biased (culled) towards the thing you originally wanted it to do.
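You can watch this happen even in the smallest possible model. The sketch below (a single-weight regression, with both "topics" invented for the example) trains on topic A until proficient, then on topic B - and the skill on A is simply overwritten. This is the degenerate one-parameter case; larger networks forget more gradually, but the effect, known as catastrophic forgetting, is the same in kind:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(slope, n=100):
    # A trivial regression "topic": learn y = slope * x.
    x = rng.uniform(-1, 1, n)
    return x, slope * x

def train(w, x, y, steps=500, lr=0.1):
    # Plain gradient descent on squared error for a one-weight model.
    for _ in range(steps):
        grad = np.mean(2 * (w * x - y) * x)
        w -= lr * grad
    return w

def loss(w, x, y):
    return np.mean((w * x - y) ** 2)

xa, ya = make_task(2.0)    # topic A: y = 2x
xb, yb = make_task(-3.0)   # topic B: y = -3x

w = 0.0
w = train(w, xa, ya)
print(f"after A: loss on A = {loss(w, xa, ya):.4f}")  # ~0: proficient at A

w = train(w, xb, yb)       # now "train on another topic"
print(f"after B: loss on A = {loss(w, xa, ya):.4f}")  # large: A was overwritten
print(f"after B: loss on B = {loss(w, xb, yb):.4f}")  # ~0: B now dominates
```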
To advance - much the same. Remember how, in learning, everything you were initially taught turns out not to be the complete truth? Same problem. Now you have to take your mediocre chess AI and untrain it on everything it learned (to even survive!) in order to make it a better chess AI.
If you have the time, the processing power and the source data, you can retrain it - but if you trained it on the entire Internet, where are you going to get orders of magnitude more training data, and the time to train on it? And even then you'll still be held hostage by the initial criteria - the ones you culled against in order to get here.
Imagine you executed every lifeform that couldn't play chess on an interface they could all use. Eventually, yes, you'd get an animal of some kind that can play chess on that interface. Now try to train that animal to launch a rocket. Those millions of years of forced evolution take a long time to undo - many generations, MANY individuals breeding constantly, until your "now incorrect" training is in the minority - and it will inherently bias the way every creature that still exists thinks. Because they all came from a chess-playing creature.
Modern AI still hasn't learned these lessons, despite decades of the EXACT SAME PROBLEM. Intelligence isn't a statistical average of the training data. That's not how it works. I can see something *ONCE*, recognise that it's amazing and useful, and throw out decades of previous knowledge and experience to follow it, because it's clearly a better tool for the purpose I need. That's how intelligence works. I can reason about things that don't yet exist and have no effect on my life - I can choose to be compassionate towards a minority group I've never previously encountered, just by thinking about this new group the moment I come across it.
These things are just statistical, probabilistic engines trained on databases. That's all they are. And AI people get rather offended when you point that out, because they still hold the belief that they're not - it uses genetic algorithms / neural networks / transformers / <insert latest fad here> - and don't see that all they're building is layers of abstraction on top of exactly that. Yet actual intelligence relies on no such engine or database; in fact, one of its defining features is that we can extrapolate from almost zero prior information and imagine things that have never been recorded before.
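To make "statistical probabilistic engine" concrete, here's the smallest possible version - a bigram model over a made-up corpus. A transformer is enormously more sophisticated, but the argument above is that it's the same species of machine: it samples continuations from conditional probabilities learned from a database, and has nothing at all to say about anything outside that database:

```python
import random
from collections import defaultdict

# A "statistical probabilistic engine trained on a database":
# a bigram model that can only ever recombine what it has seen.
corpus = "the cat sat on the mat and the cat saw the rat".split()

next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        options = next_words.get(word)
        if not options:          # a word never seen followed by anything:
            break                # the engine has literally nothing to say
        word = random.choice(options)  # sample from observed frequencies
        out.append(word)
    return " ".join(out)

print(generate("the"))
```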
It's 60 years down the line and we're still pushing the same nonsense, getting the same result, and even proposing the same solution - MORE CPU! MORE RAM! MORE NODES! MORE TRAINING! MORE DATA SOURCES! That'll fix it this time, for sure! Everyone knows that you just do random stuff millions of times over and it magically, spontaneously becomes intelligent at a given point! It's just that that point is always *just* out of reach, apparently.
Or maybe we could completely rethink what we're doing here. Because none of this solves the inference problem, the training plateaus, or the complete lack of understanding or conceptualisation of the underlying data.