Re: No way ready!
The difference is quite obvious.
I know that the shadow is cast by a person. The computer doesn't. It has to be told. There's no driving instruction / test where they tell you "watch out for shadows", specifically. A human, though, is able to look at the situation (blue lights / shadow / kid running from the fields several feet off the road / the sight of a ball crossing the road in front of you which is likely to be followed by a small child retrieving it, etc.) and infer things about it that aren't present in the raw data.
The computer *cannot* do this. It can't learn like that. It can't infer anything that isn't absolutely 100% inherent in the data or programmed in. It can't slow down every time you pass a police car with blue lights, and it can't track every object to the point that it realises one of them is a ball and cast the trajectory back to its likely origin, without also slamming the brakes on for a paper bag blowing across a motorway.
The machines do *not* infer. They are incapable of doing so. All of them, even the "AI" ones. They are told what to look for, they try to find a marker within the data which is semi-reliable, and then they guess. They have no idea WHY they have to brake; they don't know why the road is suddenly all shiny and rainbow-coloured, or why that means you should probably slow down and make no sudden lane changes or steering inputs at that point. They can't infer it back. They can only react to specific data they've been told to look for.
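To make the "told, not inferring" point concrete, here's a minimal sketch of the shape that logic takes. Every label, threshold, and function name is invented for illustration; the point is that the system reacts only to categories it was trained to emit, with no notion of why a ball in the road might matter.

```python
# Hypothetical reaction logic: the system only acts on labels it was
# explicitly told to care about. There is no model of the world behind
# the labels, so nothing outside this set triggers any reasoning.
BRAKE_LABELS = {"pedestrian", "cyclist", "vehicle"}

def react(detection):
    """detection: a (label, confidence) pair from some trained classifier."""
    label, confidence = detection
    if label in BRAKE_LABELS and confidence > 0.7:
        return "brake"
    return "ignore"

# A ball rolling into the road and a paper bag blowing across it look
# identical to this logic: both are just labels outside BRAKE_LABELS.
# Nothing here can "cast the trajectory back" to a child chasing the ball.
print(react(("ball", 0.9)))       # ignore
print(react(("paper_bag", 0.9)))  # ignore
```

A human driver treats "ball" as a premise and reasons forward from it; this code treats it as an unrecognised token and discards it.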
And you CANNOT tell a computer to look for every possible circumstance, with any accuracy. It's just infeasible.
I put my life in a computer's hands every time I get into a car. ABS. An ECU controlling fuel pumps. Even electronic engine timing can blow up a car if it goes wrong. But they are NEVER required to guess. If the oxygen is below this reading, signal failure. If it's between this and that, then you're at this point in the stroke and you should do X, Y, Z. They never "guess". They can't "infer". They don't know why the oxygen sensor suddenly returns zeroes, they just get told what to do if it ever does. This is why most cars with oxygen sensors just stall if the sensor is faulty. They can't infer that it's faulty and ignore it. They just sit, splutter and stall. Disconnect it, and the engine KNOWS it's not there and slips into "limp home" mode. But you have to know to do that.
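The ECU behaviour described above can be sketched as a handful of fixed rules. All thresholds and return values here are made up; a real ECU uses calibrated maps, but the principle is the same: explicit instructions for anticipated readings, and no ability to infer that a sensor is lying.

```python
def o2_sensor_action(reading):
    """Decide what to do from a raw oxygen-sensor reading (hypothetical volts).

    Every branch below is a rule somebody wrote down in advance.
    Nothing is inferred; unanticipated readings get no special handling.
    """
    if reading is None:
        # Sensor physically disconnected: the ECU KNOWS it's absent and
        # falls back to a fixed "limp home" fuel map.
        return "limp_home"
    if reading == 0.0:
        # Sensor present but returning zeroes. The ECU was never told what
        # this means, so it keeps trusting the bad data: splutter and stall.
        return "trust_bad_data"
    if reading < 0.1:
        # Below the programmed threshold: signal failure, as instructed.
        return "signal_failure"
    # In the normal band, apply the pre-programmed mixture correction.
    return "adjust_mixture"
```

Note the gap between the first two branches: the machine distinguishes "sensor missing" from "sensor faulty" only because a human anticipated one case and not the other.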
And that's the entire problem... current tech can't even stop files being deleted, stop people breaking into websites, or properly autocomplete an English sentence. And you expect it to be even vaguely safe interpreting what is possibly the worst scenario ever for a computer vision system?
Never rely on a computer to infer. They can't. They don't understand the world and thus cannot predict it or even notice when they themselves are failing. You give a computer instructions to do far more rapidly and perfectly than you could ever do. That's what they are for, that's what they are best at, that's what they do. You do not get a computer to ever infer anything, certainly not in any life-threatening scenario. These things can't even write a decent paragraph of English text with years of supercomputing efforts behind them. They have no concept of the data they are acting on. They are just following instructions.
Sufficiently complicated instructions can work wonders, yes, but they cannot generate any sort of intelligence (nobody has ever demonstrated that, and no, a Turing Test is nothing more than a psychology test for the human, not an intelligence test for the computer), and they cannot infer anything that's not present in the data.
If you can't infer, you can't understand the situation, or adapt properly to it, or deal with any situation which you don't have explicit instructions on how to deal with.
Uber is a great example - that self-driving car that killed the woman with the bike? Within a few seconds it detected her as nothing, a wall, a bike, another vehicle, a pedestrian, and then didn't know what to do about any of them. A human would infer from all those conflicting classifications what the situation actually was. The car had never been trained on it at all, and wouldn't have stopped in time even if the emergency braking hadn't been disabled.
Inference is an innately human / animal skill associated with intelligence. If I bop myself on the head with this stick, it hurts me. So that means if I bop THAT monkey on the head...
Inference is a vastly different skill from following instructions, and it's not present in any computer system that I'm aware of. Not one of them tries to trace back the reasoning for the data being classified as it is. They just operate on statistics and heuristics. Don't trust your life, quite literally, to chance and what-some-bloke-wrote-down for every situation.