Virtually nothing that says AI or "learning" actually is.
It's all heuristics: instructions from programmers on "how to learn", in effect. And not at some basic coding level either; the rules are quite literally specified explicitly for the task at hand.
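To make the point concrete, here's a hedged sketch (not any real product's code; all names are hypothetical) of the kind of thing that gets marketed as "adaptive AI" but is really just a hand-written rule ladder for the task at hand:

```python
# Illustrative sketch only: a game "AI" that is nothing but
# hard-coded rules. No learning happens anywhere in this code.
def enemy_ai(player_distance, health):
    """Marketed as 'adaptive enemy AI'; actually an if/else ladder."""
    if health < 20:
        return "flee"        # rule written by a programmer
    if player_distance < 5:
        return "attack"      # rule written by a programmer
    if player_distance < 15:
        return "advance"     # rule written by a programmer
    return "patrol"          # default, also written by a programmer
```

Run it for a thousand hours and it behaves exactly the same on hour one thousand as on hour one; nothing in it learns.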
AI, to me, still reads like the old gaming adverts: "destructible environments" (so long as you don't go out of bounds, dig too deep, shoot the critical plot structures, or actually expect anything to turn to rubble); "realistic physics" (which is why you can bounce an enemy a thousand metres into the air by getting him stuck on a door); "open-world" (so long as you don't try to go the opposite direction to your objective, don't mind being herded back in when you stray too far, and accept that for mission 2 you have to go see John or you'll never get a mission 3).
It's all rule-based and targeted. Google's AlphaGo strayed into something different, which is why it's newsworthy and pretty astounding. But you still have to understand the game and its rules to make those sub-agents do what you want in order to arrive at a decent play. And I guarantee you the "master agent" isn't culling off useless sub-agents and inventing unique ones of its own to fathom out the game.
It's all hard-coded rules, left to run for a long time with an aim in mind. That's not AI or "learning", no matter how long you leave it running. Unfortunately, any sufficiently advanced technology is indistinguishable from magic, so people genuinely believe Siri understands them, rather than recognising it as speech recognition that hasn't improved in decades (per CPU cycle), shoved into a search engine that returns colloquially worded results.
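The Siri claim above amounts to a pipeline, and a pipeline is easy to sketch. This is a toy illustration of the architecture being described, not Siri's actual implementation; every function and lookup table here is a hypothetical stand-in:

```python
# Hedged sketch of the pipeline described above: speech-to-text feeding
# a search backend, with the answer wrapped in conversational phrasing.
# All names and data are placeholders, not any real assistant's API.

def transcribe(audio):
    # Stand-in for a speech recognition engine; here a trivial lookup.
    return {"audio-clip-1": "weather today"}.get(audio, "")

def search(query):
    # Stand-in for a search backend; here a trivial lookup.
    return {"weather today": "sunny, 21C"}.get(query, "no results")

def assistant(audio):
    answer = search(transcribe(audio))
    # The colloquial wrapper is what makes the output *feel* like
    # understanding; no comprehension happens anywhere above.
    return "Here's what I found: " + answer
```

Nothing in the chain models meaning; each stage is a mapping from input to output, and the friendly phrasing at the end does the work of seeming intelligent.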