I have written code in the past that analysed data in a "fuzzy" way. It had its pathological weaknesses and could interpret some cases wrongly. Fortunately those cases were very rare - so it usually gave good answers.
Sometimes the good answers were counter-intuitive and looked wrong. It was hard work checking how the tool had arrived at a result. It didn't take much complexity to overload the human brain - which was why the tool was produced in the first place: to abstract large volumes of sometimes complex data into a form that a human could easily assimilate.
What was obvious was that in the hands of someone inexperienced the tool could lead to serious mistakes. You needed the experience of having done things the hard, manual way. Only then could you recognise possible anomalies - and have the skill to work through the code and data to understand what had happened.
You could also buy expensive applications that purported to do the same job with that data - and people made a lot of wrong diagnoses by trusting those results. Even the products' own support people didn't understand the ways they could fail.
It was the old problem of human nature: if something is printed with nice pictures then it must be true - and you don't need to learn how it was done.