Lies, damned lies and The Guardian's view of statistics.
> The AI tech flagged eight as being possible matches; seven turned out to be false positives, five of whom were actually stopped by the cops and two
> dismissed as obvious errors. The remaining person turned out to be a true positive, and was intercepted by the British plod.
> That's an inaccuracy rate of 87.5 per cent.
The "hits" that the facial recognition system makes are then referred to real, live people for verification. If the cops get to the point of stopping someone (to ask them for identification, not to arrest them), it is because an officer has agreed: yes, the face flagged up really is someone we want to talk to.
At no point did "the computer" arrest anybody.
A better framing: the system flagged 8 people. Officers dismissed 2 outright and passed the other 6 as matches; of those 6, one was genuine and the other 5 were misses.
So the computer got 7 of 8 wrong (87.5%), but the officers got 5 of 6 wrong (83%). That shows the computers are almost as good as the police at identifying wanted individuals. And a damn sight faster - cheaper, too.
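The arithmetic behind that comparison can be checked in a few lines. This is just a sketch of the figures quoted above (8 flagged, 7 false positives overall; 6 passed by officers, 5 of them wrong) - the variable names are mine, not from any real system:

```python
# Figures from the post:
# the system flagged 8 faces; 7 turned out to be false positives.
flagged, system_wrong = 8, 7

# Officers dismissed 2 flags as obvious errors and acted on the
# other 6; of those 6 stops, 5 were false positives.
officer_passed, officer_wrong = 6, 5

system_error = system_wrong / flagged            # 7/8
officer_error = officer_wrong / officer_passed   # 5/6

print(f"system error rate:  {system_error:.1%}")   # 87.5%
print(f"officer error rate: {officer_error:.1%}")  # 83.3%
```

The two rates sit barely four percentage points apart, which is the whole point of the comparison.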
Only The Guardian would try to twist such a comparable error rate into proof that facial recognition was a failure.