They're playing fast and loose with the statistics. They say 70% of wanted people who walked past the camera were matched. How do they know that? That figure presumes they know exactly how many wanted people walked past and were able to identify them all by some other means. It takes no account of how many people on the watchlist sauntered past completely unnoticed by both computer and plod (the "unknown unknowns" in Rumsfeld-speak). What they actually mean is that *at least* 30% of wanted people were not spotted.
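To put numbers on that (mine, not theirs): the 70% claim is a recall figure, recall = hits / (hits + misses), and the misses are precisely the thing nobody can count.

```python
# Back-of-envelope sketch with made-up numbers: the reported hit rate
# only counts the misses they happened to find out about.
hits = 70             # wanted people the camera flagged
misses_counted = 30   # misses they know about (spotted some other way)
misses_unseen = 50    # hypothetical misses nobody ever counted

recall_claimed = hits / (hits + misses_counted)
recall_actual = hits / (hits + misses_counted + misses_unseen)

print(f"claimed: {recall_claimed:.0%}")  # 70%
print(f"actual:  {recall_actual:.0%}")   # 47%
```

The claimed figure is only an upper bound; the true one can be arbitrarily worse depending on how many misses went uncounted.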
As for the 1 in 1,000 false positives, we're meant to read that as "right 99.9% of the time". It isn't: a false positive rate says nothing about false negatives (wanted people who are not identified), and, as I said above, turning it into any kind of accuracy figure presumes we know how many wanted people are actually in the crowd to start with. With a big crowd and a short watchlist, even 1 in 1,000 can mean most alerts are wrong.
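A quick sketch of that last point, with entirely invented crowd and watchlist numbers (the base rate is the very thing nobody knows):

```python
# Illustrative only: a 1-in-1000 false positive rate applied to a large
# crowd containing few wanted faces still drowns the real hits in noise.
crowd = 100_000
wanted_in_crowd = 50   # assumed base rate -- unknowable in practice
fpr = 1 / 1000
recall = 0.70          # their claimed hit rate

false_alarms = (crowd - wanted_in_crowd) * fpr  # ~100 innocent people flagged
true_hits = wanted_in_crowd * recall            # 35 wanted people flagged

precision = true_hits / (true_hits + false_alarms)
print(f"share of alerts that are actually wanted: {precision:.0%}")  # 26%
```

So even taking their numbers at face value, roughly three out of four stops would be of entirely innocent people. "Right 99.9% of the time" it is not.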
Then there's the fact that they are talking as if all people are equally likely to be picked out in error. As regular Reg readers will know, facial recognition is notoriously bad at identifying non-white people, so while the overall false positive rate might be 1 in 1,000, that could break down as something like 1 in 100,000 for white people but 1 in 100 or even 1 in 10 for black people (depending on the makeup of the crowd). Being stopped every tenth time you step out of the house could get really annoying really quickly.
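The arithmetic for how a tidy headline figure can hide that is simple enough. Crowd shares and per-group rates below are made up purely to show the averaging effect:

```python
# Invented makeup: 90% of the crowd from group A, 10% from group B, with
# very different per-group false positive rates that average out nicely.
groups = {
    "group A": {"share": 0.90, "fpr": 1 / 10_000},   # 1 in 10,000
    "group B": {"share": 0.10, "fpr": 91 / 10_000},  # roughly 1 in 110
}

overall_fpr = sum(g["share"] * g["fpr"] for g in groups.values())
print(f"overall FPR: {overall_fpr:.4f}")  # 0.0010 -- looks like 1 in 1,000
```

Quote only the blended number and the system looks uniformly accurate; the group copping nearly all the false stops vanishes from the headline.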
Finally, we are meant to just assume that everyone on the list is there because there is some genuine need for the police to stop them. They tell us nothing about how accurate and up to date their data is. Sure, they may identify 7 out of 10 people they're looking for, but if those people aren't actually wanted by police any more then the efficacy of the facial recognition system is greatly diminished. GIGO.
Addendum: finally finally, there's no comparison of this facial recognition system against other methods such as, you know, giving coppers a bunch of mugshots or even just randomly stopping people and fingerprinting them.