Re: 98% false positive rate?
"Suppose the face recognition AI has a 1% false positive rate. I.e., given a 100 innocent mugs it will wrongly recognize only one of them as a criminal. Now conduct a "trial" on a set of 9,800 people coming out of a particular tube station during a given day. There may be 2 real criminals in the bunch, but the AI will flag 98 innocents in addition to them. Out of 100 people identified as criminals in the trial 98% will be false positives."
Surely those are different things? Accuracy and false positives.
I thought a 1% false positive rate meant that of 100 positives, one was in fact not a positive. So of 100 images flagged as crims, one is an innocent person.
Now, depending on the ratio of criminals to innocents, you'll get different results. Say 1 in 1000 people are criminal enough to make the database.
So say you sample 100,000 people, containing exactly 100 crooks. Assuming a 99% true positive rate (i.e. a 1% false negative rate) and a 1% false positive rate, the system should flag the following:
- 99 crims as crims (the 99% true positives)
- 1 crim as innocent
- 999 innocents as criminals (1% of 99,900)
Making the system roughly 9% accurate: only 99 of the 1,098 people it flags are actually crooks.
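If it helps, here's that same arithmetic as a few lines of Python (all the numbers are the made-up ones from above, not from any real system):

```
# Made-up example: 1 in 1000 people are in the database,
# the matcher catches 99% of them and falsely flags 1% of everyone else.
population = 100_000
prevalence = 1 / 1000
true_positive_rate = 0.99    # sensitivity
false_positive_rate = 0.01

crooks = population * prevalence                       # 100
innocents = population - crooks                        # 99,900

flagged_crooks = crooks * true_positive_rate           # 99
missed_crooks = crooks - flagged_crooks                # 1
flagged_innocents = innocents * false_positive_rate    # 999

precision = flagged_crooks / (flagged_crooks + flagged_innocents)
print(f"{flagged_crooks + flagged_innocents:.0f} flagged, "
      f"{flagged_crooks:.0f} of them real crooks ({precision:.1%})")
# 1098 flagged, 99 of them real crooks (9.0%)
```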
Mostly I have to explain this sort of thing in regard to medical tests, which (for obvious reasons) tend to have low false negative rates in exchange for high false positive rates. Better to catch everyone who actually has the disease, even if it means scaring the crap out of some healthy people, than to miss a correct diagnosis.
Generally the share of positives that are false only matches the false positive rate when about 50% of the population has whatever you're testing for.
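Quick sketch of that last point, using the same made-up 99%-sensitive, 1%-false-positive test: the share of positives that are false only drops to the headline false positive rate once about half the population actually has the thing.

```
# How the share of false alarms among positives depends on prevalence.
def false_discovery_rate(prevalence, tpr=0.99, fpr=0.01):
    true_pos = prevalence * tpr
    false_pos = (1 - prevalence) * fpr
    return false_pos / (true_pos + false_pos)

for p in (0.001, 0.01, 0.1, 0.5):
    print(f"prevalence {p:>5.1%}: {false_discovery_rate(p):.1%} of positives are false")
# prevalence  0.1%: 91.0% of positives are false
# prevalence  1.0%: 50.0% of positives are false
# prevalence 10.0%: 8.3% of positives are false
# prevalence 50.0%: 1.0% of positives are false  <- only here does it match the 1% FPR
```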