Well, that's a good way of getting politicians to actually take notice, certainly...
Politicians fume after Amazon's face-recog AI fingers dozens of them as suspected crooks
Amazon’s online facial recognition system incorrectly matched pictures of US Congress members to mugshots of suspected criminals in a study by the American Civil Liberties Union. As a result, the ACLU, a nonprofit headquartered in New York, has called for Congress to ban cops and Feds from using any sort of computer-powered …
COMMENTS
This post has been deleted by its author
Friday 27th July 2018 05:13 GMT wsm
Re: Poetic Justice?
They may be a criminal class according to the facts and figures, considering the high rate of those convicted of crimes or resigning just before being arrested.
We call them unindicted co-conspirators or persons of interest until such time as they are actually serving a sentence, but why quibble over details.
Friday 27th July 2018 19:13 GMT Michael Wojcik
Re: Poetic Justice?
Pity they can't give the POTUS the boot
It really, really isn't, as long as Mike Pence is next in line. You think things are bad now? See what happens if Pence gets the top job.
Fact is, we'd have to dig pretty deep into the line of succession to improve the situation measurably.
Saturday 28th July 2018 15:15 GMT Anonymous Coward
Re: Predictive
The other thing that happened in Minority Report was personalised targeted adverts (albeit based on a retina scan). That's the true goal; flagging people is just a side 'benefit'.
I'd better get myself some new ones. Anybody up for a swap? Sadly you'll be getting a pair of well-used, pr0n-scarred, exceptionally myopic eyeballs*.
*Maybe there's some truth in what they say about it being bad for your eyes.
Friday 27th July 2018 14:06 GMT Rocketist
Re: Predictive
Maybe the average member of Congress has a similar physiognomy to a certain class of criminals?
I seem to remember there was a study about a year ago claiming certain behavioral patterns could be predicted from an analysis of a person's facial features; something that was first suggested in the 19th century but has been vehemently (and rightly) criticized by most serious scientists ever since.
Thursday 26th July 2018 21:50 GMT Mark 85
Nope, not ready for prime time and yet they're trying to sell this junk. Given some of the police actions of late, I'm not sure how many false positives will die but it could be enough to raise a public outcry and that's too late for any innocent who's dead or injured.
Put it back in the shed, Amazon, and let the folks there tinker under the hood some more. Profit can wait until you get it right. And by "right" I mean 100%.
Disclaimer: It should be banished, buried, and burned. Facial recognition can't possibly come to a good end.
Friday 27th July 2018 19:18 GMT Michael Wojcik
Given some of the police actions of late, I'm not sure how many false positives will die but it could be enough to raise a public outcry and that's too late for any innocent who's dead or injured.
Yes. Combine half-assed automation that has abysmal accuracy with police militarization and you have a recipe for a sharp increase in trigger-happy assholes killing innocent civilians for Texting While Black and similar offenses.
Police departments need to get their house in order before adding any more automation, and vendors like Amazon need to make their products much, much better before peddling them to the police.
I'd like to hope Amazon catches some flak from investors (the only thing they care about) over this, but I'm not holding my breath.
Saturday 28th July 2018 15:17 GMT Anonymous Coward
Yes. Combine half-assed automation that has abysmal accuracy with police militarization and you have a recipe for a sharp increase in trigger-happy assholes killing innocent civilians for Texting While Black and similar offenses.
IIRC the majority of people killed by US police officers are white. But in general I agree with the point you're making.
Monday 30th July 2018 09:19 GMT 's water music
No idea of the reliability of the source or of the data available, but it appears that non-whites are statistically over-represented, demographically speaking.
Watching but not wearing the right spectacles?---->
Thursday 26th July 2018 22:11 GMT Anonymous Coward
Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition
A couple of years old but interesting research on misdirecting facial recognition software https://www.cs.cmu.edu/~sbhagava/papers/face-rec-ccs16.pdf.
I would think that it works, at least in part, because the software designers did not anticipate any attempt to defeat recognition other than by wearing a disguise.
Saturday 28th July 2018 15:20 GMT Anonymous Coward
Re: New training rule needed
And actually hugging or wrapping yourself in a flag is proof positive.
A good friend of mine got married a couple of years back. As Hindus, he and the bride were wrapped in a large and beautifully embroidered swastika cloth as part of the ceremony. I hate to think what "AI" would make of all the pictures on social media.
Thursday 26th July 2018 22:28 GMT Anonymous Coward
"It’s not totally clear why Amazon’s face recognition technology is so inaccurate."
Pretty simple, really. All facial recognition is horribly inaccurate. Studies done on humans show that even we're only good at recognizing the faces of people we already know; we've all had instances where we've mistaken a stranger for someone we know. Machine learning algorithms have been able to outdo humans, but that comes with huge caveats from the training sets. The ML algorithms tend to end up saying "all Asians look alike" or "all blacks look like apes" because they didn't have enough relevant training data (read: they are usually trained and tested with a heavy Caucasian bias, except in China).
ML algorithms require insane amounts of examples to generalize from. But facial recognition is really a memorization problem, so the goal is to use a huge number of examples to work out all of the ways faces can differ, determine where a specific face falls in that feature space, and then match "reasonably" similar faces. That's complicated by translation, rotation, lighting, and physical modifications to the face.
With 7.5 billion people to distinguish between, and low-resolution cameras, you're going to have a nasty trade-off between false positives and false negatives. I'd expect they would either try to split the errors evenly or err on the side of false positives, since those can be checked by a human.
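For the terminally curious, here's a minimal sketch of that matching step, assuming faces have already been boiled down to embedding vectors by some model. The function name, the numbers and the 0.6 cut-off are all illustrative; this is not how Rekognition actually does it internally.

import numpy as np

def best_match(probe, gallery, threshold=0.6):
    # probe: one face embedding; gallery: matrix of enrolled face embeddings.
    # Vectors are assumed L2-normalised, so a dot product is cosine similarity.
    similarities = gallery @ probe
    best = int(np.argmax(similarities))
    # The threshold is the dial between false positives and false negatives:
    # lower it and you flag more look-alike strangers, raise it and you miss
    # more genuine matches.
    if similarities[best] >= threshold:
        return best, float(similarities[best])
    return None

Run something like that over a big pile of mugshots and every member of Congress, and a sloppy cut-off will cheerfully hand you 28 "matches".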
Friday 27th July 2018 00:35 GMT Robert Helpmann??
Not properly House trained
The simplest explanation is that since the focus is on catching crims, the training data was mostly or completely composed of mugshots. This is based on the high false-positive rate that matches the incarceration rate in the US. Nothing like building in a self-perpetuating bias.
Friday 27th July 2018 06:19 GMT John Smith 19
"Those 28 mismatches therefore represent a five per cent error rate. "
Compared to the something like 97% false positive rate of the system the UK Metropolitan Police are trialing, that is actually quite good.
Still pretty s**t, though: in a population of 30 million adults (like the UK) that would be 1.5 million false positives.
Frankly you'd be better off issuing every officer with a fingerprint reader.
But then of course you'd need to actually do some real police work and they might start harassing officers who did this with too little evidence to begin with.
Thursday 26th July 2018 23:19 GMT Eddy Ito
In other news
TSA's mother TLA, the DHS, has asked Amazon how quickly they could roll out this technology at every airport in the US. The rationale is that this would do preliminary screening of travelers as they arrive at the curb, in order to select individuals for extended groping, er, enhanced screening as they pass through security on their way to their flight.
Friday 27th July 2018 19:44 GMT Michael Wojcik
Re: In other news
To be fair, it's no worse than the methods they currently use.
This is the TSA. Flipping a coin would be an improvement.
We're talking about an organization that cleared 73 people on their own terrorist list to work at airports, and granted PreCheck status to a "notorious convicted felon" (not on the friggin' list, which we all know is stupid and useless). (And, personally, I have no problem with Sara Jane Olson having PreCheck, except that it shows just how pointless PreCheck and indeed everything touched by the TSA is.)
We're talking about an organization that has managed, over nine years, to get their success rate in controlled tests from 0% to 4%.
(I'm using Underhill as my source here because he provides good citations, and more importantly funny comments to help soften the despair.)
And there are many, many other criticisms we might level against the TSA. Like the way their employees like to pretend to be Federal officers, even though they aren't. Or their penchant for stealing stuff from passenger luggage. Or their arbitrary invasions of many people's privacy. Or their recruitment of local law enforcement to assist in their bullying. Or how they funnel vast amounts of money to themselves and their accomplices in the fake-security industry.
And, yes, I'm sure there are plenty of decent, hard-working TSA employees. I've generally had perfectly cordial relations with them (but then I take the precaution of being a wealthy white male US citizen, which I heartily recommend if you're going to be using US airports). But the vast majority of the verified evidence shows the TSA is abysmal. It's the worst part of the terrible idea that is the DHS.
Saturday 28th July 2018 15:25 GMT Anonymous Coward
Re: In other news
And, yes, I'm sure there are plenty of decent, hard-working TSA employees. I've generally had perfectly cordial relations with them (but then I take the precaution of being a wealthy white male US citizen, which I heartily recommend if you're going to be using US airports).
If you're posting under your real name, then those "cordial relations" might need writing in the past tense. Individually, TSA staff won't be keeping lists of hated taxpayers, but as a bureaucracy, you can be sure it does. Somewhere there will be a "social media team" looking out for the TSA, and maybe your name is on their radar. Few forces in this world are as persistent and spiteful as a spurned bureaucracy.
Friday 27th July 2018 01:10 GMT Cpt Blue Bear
Re: The fact that Amazon is actually trying to sell this crap in its current state...
"Regarding police high-ups willing to buy this, I'd say they're split 50%-50% between sociopaths and morons."
Having met a few senior police I can tell you they are neither. They are the product of the prevailing police culture of the 1990s and 2000s, filtered through the following decades' management training. I find your suggestion less scary.
Friday 27th July 2018 00:43 GMT JeffyPoooh
"...a five per cent error rate."
This is where Bayesian probability analysis is supposed to be used. In short, most randomly-selected people are not criminals. So false positives can easily dominate the findings unless the error rate is infinitesimal (which it isn't).
A root cause of all these AI and Machine Learning fiascos is that the people involved clearly don't understand the basics. Thus, their Confidence/Competence Ratio dangerously exceeds unity. They need better managers to rein in the foolish expectations.
El Reg assists society by exposing such failures. Thank you.
Friday 27th July 2018 11:35 GMT JeffyPoooh
Re: "...a five per cent error rate."
It's occurred to me that legislation and regulations could make use of Bayesian probability analysis. For this example, the vendor is claiming a "5%" error rate. An analysis could be performed to adjust this "raw" error rate, taking into account the relative rarity of actual criminals, to calculate the expected false positive rate when deployed in the field.
In this case they have 28 false positives, and presumably essentially every match was false, a rate of nearly 100%. Big difference to the "5%" claim. That's the sort of gap you'd expect when the characteristic you're searching for is rare.
Legislation and regulations could incorporate this sort of non-naïve approach, mandating acceptable performance in Bayesian terms: real-world false positives below an extremely low threshold (e.g. 0.01%).
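To make that adjustment concrete, here's a back-of-the-envelope version with invented but plausible numbers: a watch list of 3,000 faces in an adult population of 30 million (as above), a generous 95% detection rate, and the claimed 5% false positive rate.

# Back-of-the-envelope Bayesian base-rate arithmetic; all numbers invented.
population = 30_000_000       # adults scanned, as in the UK example above
wanted = 3_000                # people genuinely on the watch list
detection_rate = 0.95         # chance a wanted face is correctly flagged
false_positive_rate = 0.05    # the vendor's claimed "5%" error rate

true_alerts = wanted * detection_rate                        # ~2,850
false_alerts = (population - wanted) * false_positive_rate   # ~1,499,850

# Chance that any given alert actually points at a wanted person:
precision = true_alerts / (true_alerts + false_alerts)
print(f"{precision:.2%} of alerts are real")                 # roughly 0.19%

In other words, with a rare characteristic even a system sold as "95% accurate" produces alerts that are wrong more than 99% of the time, which is exactly the gap between the headline figure and the field result.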
These systems are massively immature. Not ready for primetime, except in a police state context where bothering innocents is not really considered to be an issue.
Friday 27th July 2018 19:59 GMT Michael Wojcik
Re: "...a five per cent error rate."
It's occurred to me that legislation and regulations could make use of Bayesian probability analysis.
Yes. Yudkowsky's "intuitive" explanation of Bayesian statistics, particularly the extended example of the positive mammogram, is a good illustration of the problem - he cites studies showing the majority of experts (doctors, in this case) will wildly overestimate the probability of a hypothesis because the intuitive interpretation is so far from the actual Bayesian result.
Outcome-based regulation, like your suggestion of a mandatory low-false-positive (high-precision) rate, would help neutralize some of the marketing spin.
Friday 27th July 2018 07:20 GMT Destroy All Monsters
This is not Terminator identifying John Connor
Biases in training data are known to trickle through to machine learning systems. It could be that the Rekognition training data and the mugshot dataset contained a disproportionate number of men and people of color.
Normal people, as opposed to the "equitable outcome" (50% black / 50% white, no Asian) bunch of crazies, call that "bias" reality.
This is all bullshit anyway, larded with 2018 levels of completely irrelevant Twitter "construct-outrage".
We are talking here about the standard problem of any information retrieval algorithm since forever, served up so that the uncomprehending hoi polloi can get excited over it: Depending on the sensitivity level, you have a trade-off between false positives (wrong matches) and false negatives (wrong non-matches). Here, we are erring on the side of false positives. So tune that level.
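To put toy numbers on that, here's a five-minute sketch; the similarity scores are invented and have nothing to do with Rekognition's internals, they just show how moving the cut-off shuffles errors from one column to the other.

# Toy illustration of the false positive / false negative trade-off.
genuine_pairs  = [92, 88, 85, 71, 66]   # same person: should match
impostor_pairs = [83, 79, 64, 55, 41]   # different people: should not match

for threshold in (60, 70, 80, 90):
    false_positives = sum(score >= threshold for score in impostor_pairs)
    false_negatives = sum(score < threshold for score in genuine_pairs)
    print(f"threshold {threshold}: {false_positives} wrong matches, "
          f"{false_negatives} missed matches")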
And why is Amazon in the business of providing information retrieval algos to Blue Forces and probably US-linked dictators anyway?
Saturday 28th July 2018 15:28 GMT Anonymous Coward
Re: This is not Terminator identifying John Connor
And why is Amazon in the business of providing information retrieval algos to Blue Forces and probably US-linked dictators anyway?
Errr... money? Amazon's AI is allegedly the force behind its recommendations on its retail website. Clearly all of that investment hasn't paid off. So rather than write it off, why not repackage it and sell it to government?
A bit like IBM and tWatson.
Friday 27th July 2018 08:03 GMT John G Imrie
What happens to the crook when...
a computer spots you and wrongly thinks you're an arrested crook breaking their conditions of bail
The crook was keeping to their bail conditions and becoming a productive member of society, and now, because you walked past the store they robbed last year, they are back in jail.
Friday 27th July 2018 08:59 GMT caffeine addict
Find the racism...
The implied conclusion of the article is that the system can't correctly identify black faces. Which is quite possible considering how bad AI is at spotting black faces anyway.
But given that the prison system contains a disproportionate number of African Americans, it would be interesting to know if the results were skewed by the pool of criminal photos having a disproportionate number of black faces in it.
Not saying race wasn't a factor, just wondering where...
Friday 27th July 2018 13:22 GMT corbpm
RE: Find the racism...
Surely if the training dataset had a statistically significant set of one type of facial characteristic, like African American, it would have been BETTER at spotting those matches (assuming it works!).
So the interesting bit is whether the matches against the criminals have a greater congruity for African Americans due to the increased training data, and whether the system needs to be trained with more non-African Americans.
Looking at the matches, it does look like more Caucasians were matched than African Americans anyway.
I'm not looking forward to the day when they get this 95% right and I'm fined automatically for dropping litter 200 miles away from my actual location.
Friday 27th July 2018 14:05 GMT caffeine addict
Re: RE: Find the racism...
Looking at the matches, it does look like more Caucasians were matched than African Americans anyway.
I don't think that's disputed. The suggestion was that an African American face was more likely to trigger a false positive.
Surely if the training dataset had a statistically significant set of one type of facial characteristic, like African American, it would have been BETTER at spotting those matches (assuming it works!).
Depends on what the training dataset was. The training was probably done using all the faces Amazon could find, sure. But the matching was done against a small set with an unrepresentative racial balance...
Saturday 28th July 2018 23:46 GMT Muira
The face comparison algorithm yields results with a 'similarity' score, which is a measure (0-100%) of how confident the algorithm is that two faces match. The positive results they obtained had relatively low similarity (sub-90%); an educated individual would not consider these a good match. You can also ask the API to ignore matches that do not exceed a specific threshold, and this is common practice, but the people who did this experiment did not. Hence the misinformed results and the misplaced outrage.
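For anyone who wants to repeat the experiment properly, that threshold is just a parameter on the API call. A rough boto3 sketch follows; the bucket and file names are invented, 80 is (as I recall) the default for CompareFaces, and Amazon's response to the ACLU was reportedly that law enforcement should be using 99.

import boto3

rekognition = boto3.client("rekognition")

# Compare a probe photo against a mugshot, discarding weak matches.
# Bucket and object names are purely illustrative.
response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "some-bucket", "Name": "probe.jpg"}},
    TargetImage={"S3Object": {"Bucket": "some-bucket", "Name": "mugshot.jpg"}},
    SimilarityThreshold=99.0,  # default is 80; raise it to suppress marginal matches
)

for match in response["FaceMatches"]:
    print(f"Possible match, similarity {match['Similarity']:.1f}%")

Leave SimilarityThreshold at the default and, as the ACLU found, you get plenty of sub-90% 'matches' that no sensible analyst would act on.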