"the only way evidence can be used to support a conviction is if that evidence supports that conviction. If they didn't do the crime, the evidence didn't support the conviction, some other factor - *human* factors - perverted the process."
I tell the court that the defendant was matched against images believed to be of the perpetrator, and that the algorithm reports a match with a confidence level of 99.4982%. The model was trained on a database of seven million photos of human faces. A nontechnical juror hears these impressive numbers and assumes that a computer producing such a precise figure from such a large dataset must know what it's doing. Unfamiliar with the technology, the juror never hears the facts that make the result far less trustworthy, namely these:
1. The seven million photos were scraped from social media without consent, meaning they were taken on very different cameras, subjected to intentional and unintentional editing, and inconsistently framed on the subjects' faces.
2. The training images skewed heavily toward one ethnicity, making the model less accurate on people with different facial features.
3. The software relies on machine learning, which offers little insight into how it arrived at any particular conclusion.
4. The program hasn't been rigorously tested on additional data, because doing so would require costly retraining.
5. Machine learning models always output precise-looking numbers, whether or not the underlying prediction deserves that precision.
6. Jurors rarely appreciate how easy it is to do machine learning badly.
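Point 5 is easy to demonstrate with a short sketch: a classifier's softmax layer will emit an arbitrarily precise-looking percentage even when fed random, meaningless scores. Everything here is hypothetical, the seed and score distribution are made up, and no real face-matching system is being modeled.

```python
import math
import random

def softmax(logits):
    """Convert raw model scores into 'probabilities' that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical: an *untrained* model that just emits random scores
# for two classes (match / no match).
random.seed(1)
logits = [random.gauss(0, 5) for _ in range(2)]
probs = softmax(logits)

# The result is a confidently precise percentage either way.
confidence = max(probs) * 100
print(f"match confidence: {confidence:.4f}%")
```

The model has learned nothing, yet the printed figure carries four decimal places of apparent precision; the precision comes from the arithmetic, not from any real certainty about the match.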
There are a lot of human factors there, but the failure still stems from how the technology is used, and it can be addressed by not allowing the flawed technology to be admitted as evidence (or deployed at all).