Cops told live facial recog needs oversight, rigorous trial design, protections against bias

Cops should only use facial recognition tech if it is proven to be effective at identifying people, can be used without bias and is the only method available, a UK government advisory group has said. The Biometrics and Forensics Ethics Group – set up to shadow and advise on the Home Office's activities in those areas – has …

  1. Toltec

    Really?

    "with some groups more likely to be detained or asked to identify themselves"

    They all look the same to me guv.

    Probably not what they meant to say, but easy to read it that way so could be taken as quite offensive. If I were the type to take offence on behalf of other people.

    1. Anonymous Coward

      Re: Really?

      > If I were the type to take offence on behalf of other people

      That's such a popular activity, and the mindset so deeply ingrained that it will probably qualify as a religion on the next census.

    2. katrinab Silver badge

      Re: Really?

      Cameras tend not to be so good with darker skin tones; therefore, the computer risks matching such people purely on the basis that they have the same skin colour as someone on the wanted list.

  2. tiggity Silver badge

    Curated images

    It will be fine; Constable Savage* will curate the images

    *NTNOCN reference; for those unaware, YouTube will help

    1. monty75

      Re: Curated images

      "Do I take it, Savage, that your facial recognition system only picks out coloured gentlemen?"

      "I can't say I've ever noticed, sir."

      1. BebopWeBop

        Re: Curated images

        A technology for the Special Patrol Group then.

    2. tony2heads

      Re: Curated images

      Constable Savage was clearly ahead of his time

  3. Anonymous Coward

    can be justified only if it is an effective tool for identifying people

    This is a ridiculous conclusion, disregarding any (potential) right to anonymity. Essentially, "when the tech is ready, ANYTHING goes" :(

  4. IanRS

    98% false positive rate?

    False positive rates should be low. Really low if you are dealing with a potentially large pool of candidates. A rate of 1% would mean that 1 person in 100 would be falsely recognised as being somebody 'of interest', and at any kind of crowded event there will probably be thousands of faces passing in front of each camera, so tens of false positives. A 98% rate means this facial recognition system is working at the level of 'yes, that is a face'. It probably even triggers on the police horses.

    It might be that 98% of the flagged faces were false alarms, which is still stupidly high, but that is not what is properly meant by 'false positive rate'.

    1. Eddy Ito

      Re: 98% false positive rate?

      I think the problem is that, given some gaggle of people where the total number is unknown, it's much easier to simply determine what percentage of flags are false. It could be that, given a glom of 10,000 faces, it only recognizes 800 as even being human faces and, as you say, some could also belong to horses.

      I hate to ask, but given how inaccurate the system has thus far proved itself to be, do they have data on the number of faces (horse or human) observed vs the number identified as people? Oh, and how does it do with the Insane Clown Posse?

    2. T. F. M. Reader

      Re: 98% false positive rate?

      You can get a 98% false positive rate in an experiment even if the algo's false positive rate is tiny.

      Suppose the face recognition AI has a 1% false positive rate. I.e., given 100 innocent mugs it will wrongly recognize only one of them as a criminal. Now conduct a "trial" on a set of 9,800 people coming out of a particular tube station during a given day. There may be 2 real criminals in the bunch, but the AI will flag 98 innocents in addition to them. Out of 100 people identified as criminals in the trial, 98% will be false positives.

      This is sometimes called the prosecutor's fallacy. Suppose they drag you into court for murder and the prosecution says that you must be the murderer because your DNA matches a sample taken from the murder weapon, and the false positive rate of DNA matching is one in a million. However, if that is the only evidence against you, and the set of potential murderers is 4,000,000 (say, the adult population of the large city where the murder occurred) of whom only one person is actually guilty, then about 4 of the innocents will match the forensic sample by chance and you may simply be one of them. Were the CSI boffins to take DNA samples from everybody in the city, chances are they would find 5 matches - one true criminal and 4 innocents. Out of that sample the probability that you are the bad guy is only 20%. That's "reasonable doubt" (or whatever the proper term is in your jurisdiction) right there.

      This is why forensics (DNA, fingerprints, etc.) should never be the only evidence on which the whole case hangs. They may provide supporting evidence (A, B, C, and the DNA also matches), but "science" is not enough to convict someone of murder on its own.
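
      A minimal sketch of that arithmetic in Python (the pool sizes, false positive rates and culprit counts are just the figures assumed above, not data from any real trial):

      ```python
      def prob_guilty_given_match(pool_size, false_positive_rate, true_culprits=1):
          """P(guilty | match) when everyone in the pool is tested once."""
          innocents = pool_size - true_culprits
          expected_false_matches = innocents * false_positive_rate
          expected_matches = true_culprits + expected_false_matches
          return true_culprits / expected_matches

      # DNA example: 4,000,000 potential suspects, one-in-a-million false match rate.
      print(prob_guilty_given_match(4_000_000, 1e-6))               # ~0.20, i.e. 20%

      # Tube station example: 9,800 faces, 1% false positive rate, 2 real criminals.
      print(prob_guilty_given_match(9_800, 0.01, true_culprits=2))  # ~0.02, i.e. ~2%
      ```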

      1. Robert Helpmann??
        Childcatcher

        Re: 98% false positive rate?

        To follow this up a bit, one of the reasons that facial recognition has had such miserable results has been due to the data set used in baselining. I do not know about the UK, but in the US it is typical to provide your fingerprint as part of getting a state ID. It is not a big leap to assume your next ID photo will be included in the data gathered at that time. With the data set approaching 100% of the population, the accuracy of these systems should be greatly increased. What then?

      2. MonkeyCee

        Re: 98% false positive rate?

        "Suppose the face recognition AI has a 1% false positive rate. I.e., given a 100 innocent mugs it will wrongly recognize only one of them as a criminal. Now conduct a "trial" on a set of 9,800 people coming out of a particular tube station during a given day. There may be 2 real criminals in the bunch, but the AI will flag 98 innocents in addition to them. Out of 100 people identified as criminals in the trial 98% will be false positives."

        Surely those are different things? Accuracy and false positives.

        I thought a 1% false positive rate meant that, of 100 positives, one was in fact not a positive. So of 100 images flagged as crims, one is an innocent person.

        Now, depending on the ratio of criminals to innocents, you'll get different results. Say 1 in 1000 people are criminal enough to make the database.

        So say you sample 100,000 people, of whom exactly 100 are crooks. Assuming a 99% true positive rate (i.e. a 1% false negative rate) and a 1% false positive rate, the system should flag the following:

        - 99 crims as crims (99% true positive)

        - 1 crim as innocent

        - 999 innocents as criminals (1% of 99900)

        Making the system roughly 9% accurate.

        Mostly I have to explain this sort of thing in regard to medical tests, which tend to (for obvious reasons) have low false negative rates in exchange for high false positive rates. Better to accurately diagnose all the people with a disease while scaring the crap out of healthy people rather than miss a correct diagnosis.

        Generally only accuracy = false positive rate when 50% of the population has whatever you're testing for.
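
        A rough sketch of those numbers, which also shows how a high per-decision accuracy and a low hit rate can both be true at once (the population, prevalence and rates are just the assumptions above):

        ```python
        population = 100_000
        crims = 100                        # 1 in 1,000 is on the watch list
        innocents = population - crims

        true_positive_rate = 0.99          # 99% of crims get flagged
        false_positive_rate = 0.01         # 1% of innocents also get flagged

        tp = crims * true_positive_rate           # 99 crims correctly flagged
        fn = crims - tp                           # 1 crim missed
        fp = innocents * false_positive_rate      # 999 innocents flagged
        tn = innocents - fp                       # 98,901 innocents left alone

        precision = tp / (tp + fp)                # share of flags that are real: ~9%
        accuracy = (tp + tn) / population         # share of all decisions correct: 99%
        print(f"precision {precision:.1%}, accuracy {accuracy:.1%}")
        ```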

        1. katrinab Silver badge

          Re: 98% false positive rate?

          No, the system is 99% accurate. For each individual test carried out, it will give the correct answer 99% of the time.

    3. katrinab Silver badge

      Re: 98% false positive rate?

      To give an example, let's say you are looking for 10 people out of a population of 1,000,000.

      If your system is 90% accurate, it will pick out 9 of the wanted people, and 99,999 of the not-wanted people. That gives you a false-positive rate of 99.99%.
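
      Reading "90% accurate" as a 90% true positive rate and a 10% false positive rate (the assumption implied by the figures above), the arithmetic checks out:

      ```python
      wanted, population = 10, 1_000_000
      hits = wanted * 0.9                          # 9 wanted people flagged
      false_alarms = (population - wanted) * 0.1   # 99,999 bystanders flagged
      print(false_alarms / (hits + false_alarms))  # ~0.9999: 99.99% of flags are wrong
      ```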

  5. Anonymous Coward

    Orwellian whether it works or not

    Objecting to this technology simply because it doesn't work seems a dangerous course to me; there are plenty of reasons to object to it even more when/if it does work.

    I've seen film of Americans in China commenting on being instantly fined for jaywalking, based on facial recognition, though it might be possible that their mobile gave strong confirmation of identity. The fine was taken from them, via their phone, immediately.

    It would of course be easier to 'accurately' 'recognise' anyone from a minority group in a particular locale.

    1. Graham Cobb Silver badge

      Re: Orwellian whether it works or not

      Yes, there are two completely different issues here:

      1. If the process is inaccurate then they should not be stopping people based only on the system decision: it must require a positive confirmation from a person, preferably from a person familiar with the actual suspect. There is no excuse for stopping, delaying, worrying and interfering with significant numbers of perfectly innocent people on the basis of "the computer says so".

      2. If the process is accurate, then the civil liberties argument comes into play: we, as a liberal, western and freedom-loving society, do not want our police to be too efficient. That leads to police states, authoritarian government, corruption, suppression of dissent, prevention of change/improvement. It is better that a few criminals get away with it than that innocent people are persecuted for actions or opinions that are not in the interests of the powerful.

  6. Wolfclaw

    So here is a report for the government that the Home Office will quietly bury behind the sofa, so plod can continue their scanning regardless of legality, accuracy, or breaches of human rights - and don't get me started on GDPR compliance!

    1. Anonymous Coward

      Accuracy is important. Cameras are subject to certain problems that can be exploited to show you're somewhere you're not, via a holographic setup just outside the camera's line of sight. Next gen camera spoofing 4 u.

  7. Rajiv_Chaudri

    I fully agree that AI should be programmed to only target white criminals. It is wholly unfair that AI would identify any people of color as "criminals" at all. That is the definition of racism.

    1. Jimmy2Cows Silver badge

      Say wot? Surely you're trolling. Or off your meds.

      AI should only detect people of interest to the cops, and ignore anyone not of interest. Skin colour is completely irrelevant.

  8. cam
    Black Helicopters

    "Cops should only use facial recognition tech if it is proven to be effective at identifying people, can be used without bias and is the only method available"

    How does one take the bias out of something used to prosecute, as opposed to acquit?

    The Police never take down anything you say to be used 'for you'; only against you. They don't get paid to prove people innocent. Bias already present.

    Just saying.

  9. gypsythief

    London police commissioner Cressida Dick...

    ... a clear case of nominative determinism if ever I saw one.

    1. John Brown (no body) Silver badge

      Re: London police commissioner Cressida Dick...

      Can a woman be a dick?

      1. gypsythief

        Re: London police commissioner Cressida Dick...

        "Can a woman be a dick?"

        Maybe this depends upon whether you're left-pondian, right-pondian, or upsidepondownian, but going by the Urban Dictionary's first definition for "being a dick":

        "conducting oneself in an inappropriate manner to the annoyance of others"

        then yes, a woman most certainly can.

        (As can men, cats, and photocopier sales-people ;)
