"inaccuracy rate of 87.5 per cent"
That is an amazing 12.5% success rate, when you look at it from the point of view of a supplier such as Capita or Sopra Steria.
The latest figures from the Met Police's deployment of facial-recognition cameras in the heart of London show the technology is pretty fscking inaccurate. On February 27, for instance, the cameras scanned an estimated 8,600 faces in Oxford Circus, checking them against a watchlist of 7,292 people. The AI tech flagged eight as …
In any case, I'm quite happy that we now have some hard numbers on this tech. Up to now, all I was hearing was companies waxing lyrical about how efficient their "solution" was. Well now we know: 12.5% efficient.
Less than solar panels.
I wonder, is the Met holding it wrong?
"Well now we know: 12.5% efficient."
Nope, not even close - https://en.wikipedia.org/wiki/Base_rate_fallacy
These systems have already been shown to be AT LEAST as likely to MISS a wanted person as to flag innocents as wanted - but because the Met don't _measure_ a false negative rate the real figures are utterly incalculable.
A system which flags a middle-aged, short, fat black woman as matching a wanted, tall, slim 19-year-old black male - whilst failing to go off on 20 wanted white criminals - isn't worth the cost of the cameras unless your agenda is to perpetuate the ongoing culture of institutional racism.
False positive rate of 87.5% is terrible.
Of course what is still missing is a false negative rate. We don't know how many of the wanted people sauntered past undetected.
What about putting some copper faces on the watchlist and have a random few walk past in plain clothes?
We don't know how many of the wanted people sauntered past undetected.
Of the 8,600, exactly 8,590 were wanted or suspected robbers, rapists, terrorists, burglars, wife beaters, fraudsters, and serial litterers, all of whom were geniuses in the highly technical business of wearing wigs, false facial hair, hoodies or spectacles or any combination thereof, and/or capable of using theatrical wax to change the shape of nose or ears, and/or those at the absolute pinnacle of their skills ... with fat wodges of gum stuck in their cheeks to change facial shape.
The other two were foreign intelligence agents of record, using none of the above basic methods, preferring skin coatings, invisible to the naked eye, which use preferentially reflective spectral compounds to fool camera sensors about what they are seeing.
In an unintended irony, the half dozen or so police officers and MI6 agents who would have spotted the foreign spies using Mk#1 Eyeball—if they'd actually been in the crowd—were instead sitting in the camera nerve centre, diligently analysing the almost completely incorrect recognition data.
Ivan and Valentina were back in the Russian embassy by teatime, having concluded their dead-letter mission, unrecognised and unmolested.
And the stats seem to be in line with previous figures released.
Of those that are detected, a high proportion are false positives (87.5% this time, with only 1 true match out of 8 flagged, versus 81% previously, with only 8 genuine out of 42 flagged). Also note that one of the issues is that the watch list itself contains errors.
Previous Guardian article on this:
Of 42 people flagged up during the Met’s trials, 22 people were stopped, but of those only eight were being sought – some of whom were wanted for serious violent crime. Some were stopped for a crime the courts had already dealt with, but were arrested for a more minor offence that would not normally be considered serious enough to be tackled using facial recognition.
This was the result of 6 trials involving approximately 40,000 individuals - the exact number isn't given, but the police claimed the system flagged around 1 individual in every 1,000 people scanned, which at 42 detections implies roughly 42,000 scans.
This part is very telling:
"Some were stopped for a crime the courts had already dealt with"
In other words, they had no good reason to be on the wanted list AT ALL
"but were arrested for a more minor offence that would not normally be considered serious enough to be tackled using facial recognition."
In other words "We need to justify this, he's a bad man, so we arrested him on charges of walking on the cracks in the pavement, loitering with intent to use a pedestrian crossing, wearing a loud shirt in a built up area during the hours of darkness and being in possession of thick lips and curly black hair"
I'm willing to bet that the arrests were later voided, but the Met won't be breathing a word of that.
"It doesn't work, it's terrible, false positives"
Then in about a year's time the accuracy will shoot up to 98%, and it will be too late, and everyone will eventually get tracked all the time. Also, considering these 7,292 probably know they are on a watch list, it's unlikely they will be wandering round central London without some sort of face cover, if at all.
" looking out for people wearing masks and arrest them"
That would probably have worked before. Now if an officer stops you for wearing a mask, say it's because of coronavirus. Bonus points if you cough* or sneeze* during the chat.
*Not directly onto the officer of course or they'll charge you with GBH or attempted murder
I don't understand how this is a failed tech!
There were 8600 faces detected and 7 were flagged as probable matches (by matching against 7292 possible faces) and 1 turned out to be a true positive.
Meaning only 7 were sent for manual inspection. Can you imagine the fleet of people required to do this manually?
Also I am assuming 7 were flagged because the system could not afford to have false negatives and thus was liberal in flagging the matches.
I am no fan of surveillance, but this article seems to be trumpeting towards the wrong end.
It would be interesting to know the cost of detecting and apprehending this person of interest, how serious the alleged crime was, the strength of the evidence that put them in this position and the success or not in prosecuting them.
That would give a better indication of efficiency at the cost of privacy.
"Meaning only 7 were sent for manual inspection. Can you imagine the fleet of people required to do this manually?"
I don't think this makes sense. The system needs at least one person waiting for alerts who can respond immediately to check the alert, stop and question the highlighted individual and decide if the alert was genuine. In practice I am sure several people were required with the system the entire time it was operating. Now let's assume that the people, instead of babysitting the AI system, were simply looking for suspicious behaviour and were aware of the appearance of some wanted people who intelligence thought might pass through the area concerned. How many genuine wanted people would the team stop and question in this case? I would be amazed if it was not substantially higher than 1.
If this is the case - and I admit I have nothing more than my feelings to go on unless somebody does this as a comparison - the system, quite apart from any concern about a surveillance society and culture, is wasting time and reducing police effectiveness.
" Now let's assume that the people, instead of babysitting the AI system, were simply looking for suspicious behaviour and were aware of the appearance of some wanted people who intelligence thought might pass through the area concerned. How many genuine wanted people would the team stop and question in this case? I would be amazed if it was not substantially higher than 1."
I would be flabbergasted if any of a group of 10 coppers would remember what any of 7000+ perps looked like, and 10 cops deployed at the same point might not even have arrested that 1 wanted person, AND would probably have stopped considerably more than 8. But what's your point? That it's OK to use unacceptably intrusive and shit technology because to do it manually would be even more intrusive and give even worse results?
In principle I wouldn't have a problem with this technology being used if (a) it was used ONLY for serious crime (GBH / armed robbery / murder etc) (b) there is no recording of people and any false matches are immediately purged (c) the technology actually worked / was fit for purpose.
(c) might be fixed, but (a) is clearly not the case, whatever the Met may say (a suspect list of over 7000 is clear on that point), (b) is not the case either as these are kept for a while. Even if points (a) and (b) were adhered to now, there is no trust that they wouldn't be introduced later.
No, the way to go is the San Francisco way - A blanket ban of all automated facial recognition.
"what any of 7000+ perps looked like"
The system looked at 7000+ passers-by and compared what it "saw" with a short list of between 20 and 40 faces on a "watch list", depending on which trial we are talking about. So the officers using Mk.1 Eyeball only have to remember a small number of faces. People have different levels of innate ability to remember and subsequently recognise. Those who are not in the small sub-set of "super recognisers" can be taught techniques to improve their chances of recognising the suspects. The question is all about if and when using facial recognition becomes better and cheaper than using police officers on the ground, and whether that cost is a reasonable trade-off against the very likely privacy intrusions. Currently it seems that not only are the systems poor, they are far from being cost effective.
It's worse than that. How many people are required to track down the one person who did get caught? Not that many if they're doing their job. It's called policing and investigation, and we've proven we know how to do it. How many would be required to find thousands of people? A lot more, but the system didn't do that either. Also, if we did somehow come up with enough police to track down each of these people manually, they'd be doing it by investigation of the fugitives and manual tracking, rather than mobbing the public streets and demanding identification from everybody in the hopes of turning up a suspect. In either case, the original argument is just wrong.
We don't know how many people on the watch list of 7,292 were actually among the 8,600 people scanned. We don't know how many real people-of-interest it missed. Are the 8,592 people it didn't flag true-negatives, or are some false-negatives? We have no assessment of false-negatives.
We only know that of 8 people it thought it found, only 1 of those was a correct match (true-positive). The rest were false-positives, innocently going about their business.
So that's a 12.5% success rate for flagged possible matches. Not an auspicious start.
They aren't sharing the false-negative rate. Probably don't even care about such things. After all, nothing to hide, nothing to fear right?
But without knowing the false-negative rate it's impossible to assess accuracy, accurately.
Hence we can only draw conclusions based on the false-positive rate, which shows a piss-poor 12.5% accuracy for true-positives.
And that's the best possible interpretation. It gets worse the more false-negatives there are.
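The asymmetry between the measured false-positive side and the unmeasured false-negative side can be sketched numerically. A minimal sketch using the Feb 27 figures from the article, with purely hypothetical false-negative counts (the Met publishes none):

```python
# Confusion-matrix view of the Feb 27 deployment figures in the article:
# 8,600 faces scanned, 8 flagged, 1 genuine match, 7 innocents flagged.
# The false-negative count is NOT published, so it is a free parameter here.

true_pos = 1
false_pos = 7

def rates(false_neg):
    # Precision is fixed by the alerts we know about;
    # recall depends entirely on the unmeasured false negatives.
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall

for fn in (0, 5, 20):  # hypothetical wanted people who walked past unflagged
    p, r = rates(fn)
    print(f"false negatives={fn}: precision={p:.3f}, recall={r:.3f}")
```

Precision stays at 12.5% whatever we assume, but recall swings from 100% (if no wanted person walked past unflagged) to under 5% (if 20 did) - which is the point above: without a false-negative count the system's accuracy simply can't be assessed.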
How does Metplod tech accuracy compare with China's implementation? I'm not saying plod tech is always bad, but from long experience I know the Met like to pay over the odds for second best, as many former colleagues will testify! Can anybody comment? I've walked around Oxford Circus occasionally and that square at Stratford many a time, and it's always very, very busy, so it seems to me that they only ran their cameras at each location for an hour or so.
If I was the first person bagged on plodcam in the UK, I'd want to get into the Guinness Book of Records!
That one person was bagged in just a couple of camera hours with such a small database of faces means they probably loaded the mugs of known local wanteds into the system, or they just got very lucky.
Forget looking for fugitives, how long before Border Force have the system up and running at every airport making it impossible to go on hols before settling your tax bills or outstanding tv licence?
"How does Metplod tech accuracy compare with China's implementation?"
China uses the NEC NeoFace solution used by the Met amongst others.
In terms of success, there are questions about the accuracy of facial recognition systems in China - they appear to be a profitable way of lining law enforcement and political party pockets and have increased the number of "prosecutions" and "organ donors", but have had less of an impact on overall crime.
"We each begin in innocence. We all become guilty." Leonard F. Peltier
> The AI tech flagged eight as being possible matches; seven turned out to be false positives, five of whom were actually stopped by the cops and two
> dismissed as obvious errors. The remaining person turned out to be a true positive, and was intercepted by the British plod.
> That's an inaccuracy rate of 87.5 per cent.
The "hits" that the facial recognition system makes are then referred to real, live, people for verification. If the cops get to the point of stopping someone (to ask them for identification, not to arrest them) then it is because an officer has agreed: yes, the face flagged up is actually someone we want to talk to.
At no point did "the computer" arrest anybody.
So a better argument would be that the system flagged 8 people. 6 were passed by police officers, one was genuine and the other 5 were "misses".
So the computer got 7 of 8 wrong, but the officers got 5 of 6 wrong (83%). That shows that the computers are almost as good as the police at identifying wanted individuals. And a damn sight faster - cheaper, too.
Only The Guardian would try to twist such a good (comparable) success rate as meaning facial recognition was a failure.
You have that backwards: the police would have stopped those people in response to the system's result to check ID. If they agreed with the visual similarity, checking ID is the next step; when the ID doesn't match, the false positive is still the system's.
> the police would have stopped those people in response to the system's result to check id
No. That is not how it works.
The facial recognition system scans thousands of faces.
It flags up a small number of potential matches.
Each of those potential matches is checked by an officer in the control room.
If the officer agrees, a call goes out to stop the individual for an identity check.
The only time a call is made is if the person (or persons) who have re-checked who the computer flagged agree that there is a match. On 5 of the 6 occasions, the officer (a person) made the same positive-match error that the computer did.
@Pete2 - you are missing psychology 101. If you show someone 10 pairs of photos and ask them if there are any (i.e. potentially 0) that match, each pair of photos will be attentively scanned for similarities and differences to reach a decision that could go either way.
If someone shows you a single pair of photos and tells you that they think it's a match, the photos will be scanned for similarities to support a predefined conception.
Unless the mismatch is glaringly obvious, in practice the police will give the AI the benefit of the doubt.
The figures don't add up unless you know how many of the wanted people were in the group of faces surveyed. If they were all in that group then it's a spectacular failure rate. If none of them were in the group then you have the false positive rate, which seems reasonable.
Incidentally, they don't say that the person they apprehended was in the group of people they were watching out for. They might have just got lucky and nabbed a different villain.
This is a dangerous experiment which needs to be terminated at once.
The exact same mathematics underlying the problems of facial recognition -- which is just an especially-complicated form of shape recognition -- also underlie the problem of decompilation of binary executable code to human-readable Source Code. "What shape does this vertex belong to?" is isomorphic with "What high-level program structure does this machine instruction belong to?"
If boffins wish to research the mathematics underlying face recognition in a way that has a negligible human cost, they could do worse than research decompilation. And when you have something that can reliably take a compiled binary and spit out some Source Code that compiles to a bitwise-identical binary when fed into the same compiler, then you might be ready to undertake a limited trial with fully-informed volunteers.
And even if the face recognition does not work, you will potentially have put a fix in place for thousands of legacy systems where software whose Source code has long been lost is having to be run on increasingly-scarce hardware because nothing newer can run it; as well as enabling programmers across the world to collaborate on a project, without even a language in common.
Though for my part, if I achieved that much, I'd be content to leave facial recognition as a problem for someone else to solve .....
This is a dangerous experiment which needs to be terminated at once.
What makes it dangerous?
If the computer were directly arresting or executing the resulting individuals I'd agree with you, but that only happens in bad SciFi films.
In this case it flagged up a few people as "worth a look", the police looked, and mostly said "nope, not a problem". Is that fundamentally different from having people dial 999 to report "ere, that murderer wot was on the telly last night, he's in Woolworths on the High Street"?
I wonder why they didn't try to test the false-negative rate. It would have been easy to add a few known 'test' faces to the watchlist, and then ask those people to wander through Oxford Circus on random days of their own choosing, to see whether they got detected.
I suspect the reality is that the Met are hoping the idea that facial recognition is being deployed will discourage crime, regardless of whether the technology actually works or not.
"the measure of a good cop and the basis for advancement is in the arrest rate"
In the USA and a number of other backwards countries, perhaps.
In other countries, individual cops with high arrest rates get investigated to ensure they're not doing illegal things, and the measure of successful policing isn't REPORTED crimes and solve rates, but the surveys which run to pick up what's NOT reported (where people don't bother because the police won't do anything, or won't record it - UK police featuring highly in the latter category).
False negative rate is really irrelevant. What counts is: How many valid arrests? How much inconvenience (or worse) for false positives, and what is the cost.
If you have one camera, and it flags one wanted criminal and seven innocent citizens, and lets ten perps walk past, then we can install a second camera, and together they flag two wanted criminals and fourteen innocent citizens, and let twenty perps walk past. The doubled number of false negatives doesn't matter; the true positives and the false positives are the only things that matter.
What hasn't been mentioned actually is who is on the watchlist? Is it just photos of unknown people (say a photo of an unknown bankrobber, where a match means you are a crime suspect), or photos of known people (say a guy who murdered his wife and is now on the run, once the police checks who you are they know you're not that person).
"I wonder why they didn't try to test the false-negative rate."
You know full well why - various bad actors have already been boasting about sauntering past these cameras without a hit being registered and the Met don't want the false negative rates to be quantified.
As for the "idea" that facial recognition will deter crime - opportunists don't care about being fully visible on CCTV as it is (impulse control issues), and actual criminals know the things are next to worthless, as camera locations are easily seen.
In general if you want to catch an _actual_ criminal then the only kind of video surveillance that works is _covert_ cameras - which in public places are a spectacular own goal as soon as their presence is known.
Check out recordings of sleight-of-hand tricks on YouTube. Even in super slow-motion close-ups, it's really hard to see/catch. So how easy is it to sleight-of-hand past most of these things... if you know how.
Innocents though? They'll forget to tie their laces, and get done for something totally unrelated, but the camera/computer says "no, go to jail".
Excellent idea in principle David M, but a few would not be enough. The population sizes for false positive and false negative testing must be at least approximately the same size to get reliable results. Consequently in this case false negative testing is pretty much impossible to perform. That's just one of the reasons why "statistical" justification for this intrusive technology is without merit.
However that won't stop it being used, as statistics are only used selectively to justify adoption. There are also apparently moves to introduce "lie detectors" into some interrogations, despite decades' worth of objective evidence that they don't detect lies, only physiological stress.
However both these technologies may, by frightening people into behaving in certain ways, serve coercively to elicit predictable behaviour patterns that can be "assumed" to indicate guilt.
Suppose that you have a magic terrorist detector, such that if you give your detector a picture of a terrorist, 99 times out of 100 it says "yes" and 1 time out of 100 it says "no". That means that it has a false negative rate of 1% and a true positive rate of 99%.
And suppose further that if you give your detector a picture of a non-terrorist, 99 times out of 100 it says "no" and 1 time out of 100 it says "yes". That means that it has a false positive rate of 1% and a true negative rate of 99%.
This is a pretty fucking good detector. You would have to be an idiot to refuse to admit that such a scanner would be useful.
But. If you take this detector and scan 1,000,000 people, 100 of whom are terrorists - you will EXPECT, of the 999,900 non terrorists, to get 1% false positives. Which means the detector is going to go "TERRORIST" 9,999 times in error and remain silent 989,901 times correctly. And of the 100 terrorists, you expect it to go "TERRORIST" 99 times correctly, and remain silent once in error.
So of the 10,098 times it said there was a terrorist, in only 99 of them was there actually a terrorist. Only about 1% of the time when the machine goes beep is there really a terrorist there.
Does this mean that the system has a "99% false positive rate"? No. We established above, the system, which is a good and useful tool to have, has a 1% false positive rate. But if it is being used on a population with a very very low base rate you always expect the false positives to outnumber the true positives.
What the system has successfully done is narrow down for you a population where terrorists were 1 in 10,000 to one where they're 1 in 100, for you to look at more closely. This is useful.
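The arithmetic above can be checked with a few lines of Python (the detector's 99%/1% rates are of course hypothetical, as in the comment):

```python
# Base-rate arithmetic for the hypothetical terrorist detector:
# 99% true-positive rate, 1% false-positive rate, 100 terrorists in 1,000,000.

population = 1_000_000
terrorists = 100
non_terrorists = population - terrorists  # 999,900

tpr = 0.99  # P(beep | terrorist)
fpr = 0.01  # P(beep | non-terrorist)

true_pos = terrorists * tpr        # 99 correct beeps
false_pos = non_terrorists * fpr   # 9,999 wrong beeps
alerts = true_pos + false_pos      # 10,098 beeps in total

# Fraction of beeps that are real terrorists (the posterior, not the FPR):
precision = true_pos / alerts
print(f"alerts={alerts:.0f}, real terrorists per beep={precision:.2%}")
```

So a detector with a genuine 1% false-positive rate still produces roughly 100 false alarms for every real hit at this base rate; reporting that as a "99% false-positive rate" is exactly the base rate fallacy the comment describes.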
What this article (and every single other article on this subject) has done is commit the base rate fallacy - https://en.wikipedia.org/wiki/Base_rate_fallacy - and I wish you'd stop.
Argue against facial recognition, sure. Just please don't use nonsense statistics to do so.
Your maths falls apart when you realise that we don't know how many people walked past the camera that were actually wanted criminals. It found 1. In 7000 people. Stats tells me it missed a lot more than it found.
Also, we're arguing against using actual numbers and not your hypothetical "terrorist" numbers. The 87.5% false positive stat came from the fact that out of 8 people ID'd by the system, only one was an actual criminal. Using your own rubbish example, that would be it shouting "TERRORIST" 100 times and 87 of those being non-terrorists.
we don't know how many people walked past the camera that were actually wanted criminals.
Stats tells me it missed a lot more than it found.
Your second statement is nonsense, given your first one. You have no idea how many people it missed, since you don't know how many there were to miss.
Statistics are frequently non-evident, that's why gambling earns so much money for casinos and lottery companies. The post you're responding to described a perfectly correct and well-known situation when it comes to system errors, and the article's reference to an 87.5% failure rate is indeed complete nonsense.
"you don't know how many there were to miss"
Neither does the plod because they're relying on the AI and not looking themselves, which they would have done if they had actually been on the street patrolling.
The NYPD currently uses FR only when assessing crime scene photos by comparing them to their internal arrest database. If true, it seems a more proportionate use of the tech.
"It found 1. In 7000 people. Stats tells me it missed a lot more than it found."
That doesn't make sense. The AI, like a human, had a small, limited number of people it was looking for. Criminals not on the watch list were missed because no one was looking for them during that period and didn't know they even existed. A human may have picked up one or more wanted crims not on today's list because they already remember them from other lists. The AI has ONLY the current, small list.
your analysis is correct - when taking into account a base rate with very low incidence, the false positives will drown out the true positives. And yes, a 99% true negative and 99% true positive system will narrow down the search space.
What you are missing is that (a) even a near-perfect machine is doing nothing more than narrow down the search space (b) In real-life, because of how pattern matching works, making it more sensitive (increasing the true positive rate) will also increase the false positive rate. So you can't improve both those rates - improving one will make the other worse (c) this is not a simple mathematical exercise. History tells us that police/government cannot be trusted to use this technology responsibly even if it did work to that magic level of accuracy.
Combining all those, it is not worth the time, investment, and loss of civil liberties
@bencurthoys: your hypothetical numbers are absolutely correct. Full marks - and I used to teach data analysis and statistical methods.
Your conclusion that the numbers make your hypothetical detector a "useful tool" is quite wrong though, at least within a free society (which is almost the whole point here).
One thing that your conclusion does not take into account is the principle of presumption of innocence. You propose actually stopping, detaining, and verifying 10,000 absolutely innocent people on suspicion of them being terrorists. What is that going to do to their lives? This is simply not an acceptable price to pay to catch a few terrorists (who, statistically, don't do much damage, by the way - that's another facet of "numbers don't always carry weight").
Another thing your conclusion misses is alarm fatigue. The signal to noise ratio in your hypothetical setup is very low. The efficacy of (human) police who will be doing verification will be very low, and the ultimate ROI of the system will be very low as well. In addition, the verifying police who will check 99 people only to find false positives will be quite likely to make a mistake in case number 100. The fact that rather than checking 1M people you need to check only 10K flagged by the terrorist detector is not relevant in the context. Yes, this might be an improvement on stopping a million people in the streets and checking each and every one of them. I'd discount this argument (and I consider myself fortunate to live in a society that allows me to do so...).
A reasonable alternative approach is doing actual police and intelligence work and not involving 10K innocents in the first place. That should require a significantly smaller army of investigators, too.
Wouldn't put it past plod to massage the figures by creating criminals...badgering people into an annoyed response and then arresting them for it. Public Order offences spring to mind, but there are lots of other routes to take.
Also, I don't believe them when they say they delete all the non-successful records.
On February 27, for instance, the cameras scanned an estimated 8,600 faces in Oxford Circus, checking them against a watchlist of 7,292 people. The AI tech flagged eight as being possible matches; seven turned out to be false positives, five of whom were actually stopped by the cops and two dismissed as obvious errors. The remaining person turned out to be a true positive, and was intercepted by the British plod.
That's an inaccuracy rate of 87.5 per cent.
No, it's not. Out of 8,600, it matched one correctly against a corpus of about 7,300, and had 7 false positives. Don't just do the easy bit (7/8 as a percentage) and skip the journalist bit (working out the correct accuracy measure). This isn't high school.
And at these levels of success (One guy successfully arrested out of the 12K or so people this article mentioned being scanned), wouldn't it just be cheaper to hire and train some more cops to go out on the streets and track bad guys down?
It's easy to hype the supposed merits of miraculously wonderful technology, which can purportedly do the work of hundreds or thousands of actual trained, skilled people.
That's why NSA and GCHQ have recorded millions of hours of phone conversations, which they will have properly analysed by 2317AD—if they are able to find several hundred more speakers of highly colloquial Arabic dialects, that is.
It's why police will have petabytes of camera footage just waiting for the perfect recognition technology, so that sometime in 2080 people wanted for relatively trifling crimes—i.e. those who did not make use of elementary disguises, like wearing Coronavirus masks—can be positively identified ... and their graves visited.
To be more serious, it's why there is no substitute for good old-fashioned humint and shoe-leather. But the miracle tech is so inviting, isn't it?
As with companies still buying Oracle: it's amazing how stunningly gullible and ready to part with taxpayers' money supposedly intelligent senior executive-level people can be. We expect a certain level of outright stupidity from our politicians (look at what's befouling the Home Office at the moment) but senior police and intel types ... not so much. Then again, these are the same (mathematically illiterate?) buffoons who keep banging on about secret backdoors in E2E encryption. Cue Trumpian tantrums where an arrogant fathead keeps screeching "But I want it! I want it!!"
Somebody was taken off the streets that should not have been there.
It's shoddy reporting to say this is an inaccuracy rate of 87%.
An inaccuracy rate of 87% would mean more than 7,000 false alerts would have been generated.
The number of false positives is steady and predictable.
The number of genuine positives is based on the number of people in the watchlist that was actually there.
If nobody wanted actually turned up on the day, does that mean the system is 100% inaccurate?!
Facial Recognition Accuracy: A Worked Example
I'm planning to rob a bank. I need 4 people for the job. I interview a bunch of candidates and come up with a short-list. If this stuff actually worked I could send them past the camera and see who got singled out. Now I could select from the rest knowing that any recognition cameras would likely miss them! Crappy system, I want my money back (so does the bank).
I hate being a skeptic but... I think 8 matches with 1 true is far too small a sample size for a success rate of 12.5% to be meaningful. Also, to get a true positive means you need another photo to match against, so the fact that out of thousands of faces it only recognized 8, or thought it did, probably means that the Scuffers do not have a vast database of photos of all us potential miscreants and thought criminals. If you don't have a photo in the database then it isn't going to recognize you. The chances of a known naughty just happening to walk past a particular camera in a particular street must be... not very good. Of course this is another baby step in the UK along the road to Big Brother. Our betters must go to sleep at night praying to the big Scuffer in the sky to wake up in China.
Disclaimer: This comment is a first draft, poorly thought out, and may contain errors — factual, logical, or grammatical.
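The sample-size objection above can be made concrete: with only 8 alerts and 1 confirmed hit, a very wide range of "true" match rates is statistically consistent with the observation. A rough sketch using an exact binomial tail check, assuming alerts are independent draws (the candidate rates and the 5% cut-off are illustrative choices, not anything the Met published):

```python
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n_alerts, hits = 8, 1  # 8 alerts raised, 1 confirmed match

# Crude two-sided check at the 5% level: a candidate "true" precision p
# is rejected only when seeing 1-or-fewer hits (or 1-or-more hits) would
# be a sub-2.5% tail event.
for p in (0.02, 0.125, 0.30, 0.50, 0.60):
    low_tail = binom_cdf(hits, n_alerts, p)           # P(X <= 1)
    high_tail = 1 - binom_cdf(hits - 1, n_alerts, p)  # P(X >= 1)
    plausible = low_tail > 0.025 and high_tail > 0.025
    print(f"p = {p:.3f}: P(<=1 hit) = {low_tail:.3f}, plausible = {plausible}")
```

Everything from roughly p ≈ 0.003 up to p ≈ 0.52 survives this check, so the observed "12.5% success rate" tells you very little on its own — exactly the commenter's point.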
"If you don't have a photo in the database then it isn't going to recognize you."
That's the point of the test. It's not trying to identify everyone it "sees" and put a name to them. It's trying to match every image it collects against a small selection of "wanted" people. The problem is the number of people it is falsely matching to that small list (and the number who are on the list, get scanned, and are then missed, discarded as "not on the list").
Stop worrying about trivialities, semantics or statistics and think where this all leads...
The only reason the police do not currently scan all video feeds from all cameras they can access, for all faces they are interested in, all over the country, is that they don't yet have the connectivity, bandwidth or computing power to do it. Rest assured that once they do, they will use it, and there is absolutely nothing you can do about it.
These very public "trials" are just to get you used to the idea and win the support of those who would happily embrace facial recognition everywhere to ensure that even minor breaches of the law are immediately and severely punished. After all, they are the "good" people and have absolutely nothing to fear.
Year by year, little by little, our freedom and privacy are chipped away. Soon it will be too late to stop Orwell's dystopian future from becoming reality. The only difference from Orwell's version will be how the decision to send you to the re-education camp, or to execution, is made: in our future it will be an "AI", and there will be no right of appeal.
Look at this another way: they successfully matched one (1) person for the duration of the trial. For a sample set of this size, that looks like a huge success.
What if that one person hadn't taken that route during the trial? Then they would have matched 0%.
Cynic in me asks: how much did they pay that one person to walk through the area so the test would look like a success? Did they just get lucky?
(And TBH the other comments above are right: even if it had only a 1% false positive rate, the real issue isn't accuracy here, it's rights.)
" the real issue isn't accuracy here, it's rights"
One of the lesser-known things about the Holocaust is HOW the Nazis knew who was Jewish, gay, Romani, etc etc.
Long before the Nazis were a thing, the 1923 census asked this question in order to gauge discrimination in the Weimar Republic. The Nazis were later able to refer to the answers.
Just because something seems to be a good idea at the time, doesn't mean it can't be abused later.
I have a list of places where they are trialling this.
I now own a large brown suitcase full of mirrored aviator sunglasses. I'll only be charging £20 a pair. Meet me just around the corner from the camera.
Actually, thinking about it... sunglasses and face masks... this time next year, Rodders...
Seems to me they now have an excuse to check the ID of random people, for a "probable cause", which they would not have otherwise.
And yeah, from the point of view of the cops, it's quite a success. If they were generally able to arrest one criminal for every eight people checked, they would be over the moon...
Would be nice to know the rate of false negatives, of course. What percentage of people walking around London are criminals, would you say?
There are only two things that you need to know about statistics.
23.974% of all statistics are useless.
Politicians use statistics like a drunk uses a lamp post for support rather than illumination.
And as Max Headroom used to say: you always know when a politician is lying. It's easy: their lips are moving.
Unless this system is set up outside the Houses of Parliament to track and publish the attendance of publicly elected politicians for at least a full parliamentary term, it should never be deployed on the general public. Let the politicians be watched and tracked for two to five years and see how they then feel about having such a system deployed on them.
Biting the hand that feeds IT © 1998–2020