Welcome to the grand illusion
Current ML seems to be at roughly the biological equivalent of a retina, perhaps with a couple of layers of neurons above it.
Now think about all the optical illusions we can be tricked by, and how hard some of them are to discover - every new instance of ML is going to have its own set of illusions/false outputs. They may get better and better, but every one of them is going to have its blind spots and ways to be fooled.
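These machine "illusions" have a name in the literature: adversarial examples, inputs nudged by a tiny amount so the model confidently gives the wrong answer. Here's a minimal sketch of the idea using a made-up linear classifier (the weights and input are invented for illustration, not from any real model); the perturbation direction follows the sign of the weights, in the spirit of the fast gradient sign method:

```python
import numpy as np

# Hypothetical linear "classifier": score = w.x + b, positive score => class A.
# The weights, bias, and input below are invented purely for this demo.
w = np.array([0.9, -1.2, 0.5, 0.3])
b = 0.1

def predict(x):
    return "A" if w @ x + b > 0 else "B"

x = np.array([0.1, 0.1, 0.2, 0.3])   # a legitimate input
print(predict(x))                     # → A

# Adversarial nudge: shift every feature by a small eps in the
# direction that pushes the score down (sign of each weight).
eps = 0.1
x_adv = x - eps * np.sign(w)

print(np.max(np.abs(x_adv - x)))      # no feature moved by more than eps
print(predict(x_adv))                 # → B  (same-looking input, flipped answer)
```

A 0.1 shift per feature - invisible noise, as far as a human is concerned - flips the output, because the perturbation is aligned with exactly the direction the model is most sensitive to. Deep networks are not linear, but the same mechanism carries over, which is why each trained model ends up with its own catalogue of illusions.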
And that isn't counting the basic coding, memory, etc. bugs that can crash things, rather than "just" produce the wrong output. They may be incredibly useful, or even better than a human, but they will never be perfect.