That's STILL NOT AI.
That's just a brute-force search exploiting a buggy implementation of a single port of the game. It's like claiming you "learned how to win at football" because, on the pitch at your local park, you found the one spot on the wall of the doctor's surgery that bounces the ball over the goalie and into an oversized goal. The results are not transferable, they aren't "intelligent" (the system just tried every possible direction), and it's certainly not learning or inventing.
Inventing is a matter of "skipping over the missing step". You don't need to learn every possible draughts/checkers opening, if you are intelligent. You can sit, and based on a limited knowledge of the rules, no database, and no brute-force, you can "infer" a good position/move. That's intelligence. Just trying every possible move is not intelligence.
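The distinction can be sketched with a toy game (my own illustration, not anything from the article): in the take-1-to-3-stones game, a brute-force player recurses through every possible line of play, while an "intelligent" player applies a single inferred rule — leave the opponent a multiple of four. Both pick the same moves; only one of them had to try everything.

```python
# Toy illustration (my own, hypothetical): brute force vs. inference in the
# take-1-to-3 stones game, where the player taking the last stone wins.

def opponent_can_win(pile):
    """True if the player to move on this pile can force a win."""
    if pile == 0:
        return False  # previous player took the last stone; this player lost
    return any(take <= pile and not opponent_can_win(pile - take)
               for take in (1, 2, 3))

def brute_force_move(pile):
    """Try every move, recursing through every possible game."""
    for take in (1, 2, 3):
        if take <= pile and not opponent_can_win(pile - take):
            return take
    return 1  # every move loses; take anything

def inferred_move(pile):
    """A player who has inferred the pattern from the rules alone:
    leave the opponent a multiple of 4 and you cannot lose."""
    take = pile % 4
    return take if take else 1  # losing position; any move will do

# Same answers, wildly different amounts of work.
for pile in range(1, 20):
    assert brute_force_move(pile) == inferred_move(pile)
```

The brute-force version does exponentially more work as the pile grows; the inferred rule is constant-time. Scaling that inference up, rather than the search, is the part nobody has built.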
We don't have AI. Even these things aren't learning - you couldn't take one trained on one game, put it on another game, and make progress. The progress is logarithmic: it achieves a quick result, then plateaus and resists all further learning. You couldn't then train it on ANY OTHER GAME and get a viable result. AI researchers know this, which is why they always announce the FRESH result of a clean AI, and then nothing else. No AI is ever "taught after". They know not to try, because they know it's a disaster. And single-purpose AIs plateau quickly and have very limited scope.
This one not only had to be hand-held by watching humans play, it had to be programmed with explicit rewards tied to that ("did you end up in a similar place to a human playing?"), taught how to interpret the screen, and then trained on very short sections of particular games.
This is expert systems and heuristics (human-written rules). All AI we have is expert systems and heuristics. The closest you get to "AI" (as in something that learns for itself) is genetic algorithms and the like - they tend to be VERY hard to understand, direct, train, and get results from, but their "insights" are gained organically and without much outside help once the universe they live in is defined. But even they have human-tuned breeding rules.
The most impressive "AI" I ever saw was a Java-based physics simulation of a bunch of joints connected in a vaguely skeletal way, with joint movement individually controlled by a GA. Someone had put it up on their university home area (back when everyone had a home folder /~username, and webspace on their uni account). It was "rewarded" by the distance it could achieve from the starting point in a given time. The "course" was randomly generated to have hills and dips. After something like 2000 generations of genetic breeding, it could epileptic-fit itself across the screen and make some kind of progress (before eventually getting stuck or reversing).
After 10,000 generations, it could form a hop-and-a-skip. After a million generations, it almost began to resemble chimp-like four-limb running. Given it was Java and the '90s, that meant months of calculation, physics simulation, breeding, etc. (luckily you could export the generations and reseed from a certain point). It would never, no matter how long it was left running, form a consistent stable gait. It was pretty much random twitches at timed intervals that by chance happened to get it so far before it stumbled.
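The loop that applet was running can be sketched in a few lines (my own toy, not the original applet, and with a deliberately fake "physics"): each genome is a list of timed impulses, fitness is distance covered minus energy wasted on wild swings, and breeding is keep-the-top-third plus mutated crossover - note the human-tuned breeding rules, exactly as complained about above.

```python
# Minimal GA sketch (hypothetical toy, not the applet described above).
# Genome: a list of timed "impulses". Fitness: distance minus wasted energy.
import random

random.seed(1)
GENES, POP, GENERATIONS = 16, 30, 40

def fitness(genome):
    # Fake physics: impulses push the body forward, but big jumps between
    # consecutive impulses (flailing) waste energy.
    distance = sum(genome)
    waste = sum(abs(a - b) for a, b in zip(genome, genome[1:]))
    return distance - 0.5 * waste

def mutate(genome):
    return [g + random.gauss(0, 0.1) for g in genome]

def crossover(a, b):
    cut = random.randrange(GENES)
    return a[:cut] + b[cut:]

population = [[random.uniform(-1, 1) for _ in range(GENES)]
              for _ in range(POP)]
initial_best = max(map(fitness, population))

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 3]  # human-tuned breeding rule: keep top third
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

final_best = max(map(fitness, population))
# Fitness climbs quickly, then plateaus - no genome ever "looks ahead",
# it is just the survivor of a lot of random twitching.
```

Because the top third is carried over unchanged (elitism), the best fitness never goes backwards - but nothing in the loop ever plans, infers, or transfers; it only filters noise.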
It was never "intelligent". It never looked ahead and inferred how best to achieve its task, or changed tack based on the terrain. But it was damned impressive (and it's not been on the web for years now, I know, I've looked for it). And despite having a computer science degree, and friends with PhDs in computer vision, etc., that's the closest I've ever seen to anything AI - anything changing itself to solve the problem at hand.
All "AI" is similar. It's either a human telling it exactly what to do and when, or random chance, or brute force. The combination of all three tends to mask the use of any one, but it doesn't form intelligence in even the most primitive way.
Just because our kids learn using YouTube videos does not mean that showing a YouTube video to a heuristic system is "learning".