AI is snake oil
I once wrote a simple "bot" to act as a CPU opponent in an online game. It had various levels of ability, from making purely random (but legal) moves to analysing the game state and choosing the move most likely to result in a win, tempered with a varying degree of randomness. It worked very well - most human players were really impressed by this "AI" opponent, especially when a random move appeared to be "inspired" gameplay.
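For what it's worth, that kind of bot boils down to what the machine-learning crowd would call an epsilon-greedy policy: play the best move most of the time, a random one otherwise. A minimal sketch - the function names and difficulty values are my own invention, not from any real game:

```python
import random

def choose_move(legal_moves, score_move, epsilon):
    """Pick a move for the bot.

    legal_moves: non-empty list of candidate moves
    score_move:  hypothetical evaluation function; higher = better for the bot
    epsilon:     probability of playing a random legal move instead of the
                 best one (1.0 = purely random play, 0.0 = always greedy)
    """
    if random.random() < epsilon:
        return random.choice(legal_moves)   # the occasional "inspired" move
    return max(legal_moves, key=score_move)  # otherwise, the top-scoring move

# Difficulty levels are just decreasing amounts of randomness:
EASY, MEDIUM, HARD = 0.9, 0.4, 0.05
```

Human players read the occasional random move as creativity, which says more about us than about the bot.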
I recently took (well, was forced to take) a course in AI and neural networks. It convinced me that even the experts in this field don't have a clue how it works; they just keep turning up the complexity dial until they get acceptable results on the test data. A big mistake they seem to make is then extrapolating these results to new inputs: a small distance outside the training set the model can look quite convincing, but the further the real-world data gets from the training set, the worse the results become, up to the point where an RNG would be just as effective.
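The extrapolation failure is easy to demonstrate with nothing fancier than a polynomial fit - a toy stand-in for "turning up the complexity dial". The ranges, degree, and noise level here are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: a narrow slice of a sine wave, lightly noised.
x_train = np.linspace(0.0, 3.0, 30)
y_train = np.sin(x_train) + rng.normal(0.0, 0.02, x_train.size)

# Crank the complexity dial: a degree-7 polynomial fits the slice nicely.
model = np.poly1d(np.polyfit(x_train, y_train, deg=7))

def mean_abs_error(x):
    """Average error of the model against the true function on points x."""
    return float(np.mean(np.abs(model(x) - np.sin(x))))

inside = mean_abs_error(np.linspace(0.0, 3.0, 50))  # within the training range
near   = mean_abs_error(np.linspace(3.0, 4.0, 50))  # just outside it
far    = mean_abs_error(np.linspace(8.0, 9.0, 50))  # well outside it
```

Inside the training range the fit looks excellent; just outside it the error is already visible; far outside it the "model" is off by hundreds - exactly the pattern I'm complaining about, just with fewer parameters.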
A further mistake is to misrepresent what their AI baby is doing. "This model recognises numbers." No it doesn't - it has absolutely no concept of "numbers", only a set of arbitrary shapes that it has been told to classify into specific buckets that we call "numbers". Show it a shape that any human would instantly (and yes, sometimes incorrectly) recognise as a number - e.g. a stylised 7-segment numeral or a heavily cursive one - and the AI will fail. "So it just needs more training data...", but that's not how human intelligence works; we can recognise numbers, with great accuracy, in forms and contexts we have never seen before.

This "AI" is nothing but a poor, over-complicated, incomprehensible pattern-recognition algorithm. A decent engineer could do a much better job at number recognition by writing a proper pattern-recognition algorithm, but that is hard work and needs skill. The "AI" solution just throws lots of data at a black box until it gets good enough results to satisfy the test criteria - no skill required. The mistake is to then apply it outside the limited domain of the training dataset and expect "computer" accuracy (i.e. believe it 100%). Intelligence is not a brute-force game; it's much more subtle.
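For the 7-segment case specifically, the "proper pattern-recognition algorithm" really is just a lookup table. A toy sketch, assuming the input has already been reduced to the set of lit segment labels (the hard part, segmenting the image, is left out here):

```python
# Segments labelled a-g in the conventional 7-segment layout:
#    _a_
#   f|_|b     (g is the middle bar)
#   e|_|c
#     d
DIGIT_SEGMENTS = {
    frozenset("abcdef"):  0,
    frozenset("bc"):      1,
    frozenset("abdeg"):   2,
    frozenset("abcdg"):   3,
    frozenset("bcfg"):    4,
    frozenset("acdfg"):   5,
    frozenset("acdefg"):  6,
    frozenset("abc"):     7,
    frozenset("abcdefg"): 8,
    frozenset("abcdfg"):  9,
}

def recognise(lit_segments):
    """Map a set of lit segments to a digit, or None if it matches no digit."""
    return DIGIT_SEGMENTS.get(frozenset(lit_segments))
```

No training data, fully explainable, and it fails loudly (returns None) rather than confidently guessing - three things the black-box approach can't offer.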
Phew, good to get that rant off my chest :)