Tired of the smoke-screen AI research claim that a vendor does not know how its own AI works?
The algorithms behind neural-network AI are built on long-established, well-understood probability mathematics, including Bayesian statistics and its derivatives, so essentially everything a neural network is programmed by humans to do can be described exactly in mathematical terms. Like any statistics-driven algorithm, its output is probabilistic: it tells us only the likelihood of a specific outcome, e.g., a 95% likelihood that a self-driving car should stop immediately and a 5% likelihood that it can drive on.

To keep arguing that humans do not know how neural networks work, and that big vendors therefore should not be held responsible for the failures of their AI-assisted products, is wishful thinking at best. The worst case would be intentional corporate evil: exploiting the public's widespread ignorance of the strict mathematical rules behind AI in order to escape corporate responsibility for the consequences.

Mathematically, any likelihood-driven computing technology will inevitably yield a certain share of failed predictions, resulting in car crashes, plane collisions, and the like, affecting human society wherever neural-network AI is applied. Deep-learning math is based on the rules of statistics, not on the rules of human logic. Using likelihood-based software for life-or-death decisions is irresponsible.
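The probabilistic nature of a neural network's output can be illustrated with a minimal sketch. The logit values and the stop/drive-on scenario below are hypothetical, chosen only to reproduce the 95%/5% split mentioned above; real self-driving systems are far more complex, but their final classification layer typically reduces to exactly this kind of likelihood computation:

```python
import math

def softmax(logits):
    """Convert a network's raw final-layer scores (logits) into a
    probability distribution that sums to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical final-layer scores of a two-class "stop" / "drive on" model.
logits = [2.94, 0.0]
p_stop, p_drive = softmax(logits)
print(f"P(stop) = {p_stop:.2f}, P(drive on) = {p_drive:.2f}")

# The decision is a threshold over a likelihood, not a logical proof:
# at 95% confidence, roughly 1 in 20 such predictions is still expected
# to be wrong.
decision = "stop" if p_stop >= 0.5 else "drive on"
```

The key point is the last line: the system does not *know* it should stop; it compares a computed likelihood against a threshold, and some fraction of those comparisons will inevitably go the wrong way.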