Using it wrongly
Facial recognition should be used to identify potential matches.
Humans should then be responsible for making decisions.
A Black teenager in the US was barred from entering a roller rink after a facial-recognition system wrongly identified her as a person who had been previously banned for starting a fight there. Lamya Robinson, 14, had been dropped off by her parents at Riverside Arena, an indoor rollerskating space in Livonia, Michigan, at the …
I understand what you are trying to say, but have to disagree, as this is exactly what happened here.
An individual was identified by the AI system as a potential known forbidden character, and the person responsible that day took the decision to accept the AI detection and forbade entry. The human element was definitely there: the computer system did not stop the person coming in, a human meatbag did.
The AI system might have a problem distinguishing objects that have specific colors, or a lack of contrast, etc., but shouldn't that be considered an algorithmic problem rather than a racial problem?
Well, yes. But in practice, that's what will happen most of the time. Especially if you employ minimum wage bouncers.
If the software says it's 97% sure, you'd have to be both pretty sure of your own perception and confident in your own authority to overrule it. I imagine people employed for this purpose are rarely either of those things.
“The software had her daughter at a 97 percent match. This is what we looked at ... if there was a mistake, we apologize for that."
The mistake is to use the software. If you work there you are likely (within the team of employees) to know the troublemakers. Why have software? Or is this yet another example of replacing costly humans with cheap technology?
A big problem here is the software foolishly exposing the 97% number in the UI. Your average layperson does not understand that this does not necessarily mean it is a match. In this case it probably means your training data is woefully deficient in non-matching black people, so your algorithm has learnt the wrong thing. Honestly, they should not be selling this kind of thing with this naïve a UI.
What software aimed at minimum wage staff should be doing is telling the staff what they should do next (customisable per company). It should be saying something like: "I think this is this person, please manually compare the images and check the person's ID to confirm the match".
This is quite a common phenomenon: a device being used to absolve its controller of responsibility.
AI is the new dog
In the past, the police used dogs to justify searching someone, when in reality a prejudiced officer would give the dog a command to act as if the suspect had aroused its suspicion.
By the same analogy, the operators of such systems can tweak them to underpin any door policy they want, where, in the absence of such a device, that policy would be unlikely to be legal. It was just a glitch... We got hacked... and so on...
When such systems are being used, a human should always be solely responsible for the decision, and the fact that the computer gave this or that suggestion should be irrelevant.
Absolutely. Human operators will be reluctant to override the software because that's where the majority of the risk lies for them. A false positive is an externality ("I was told to use the software and the software said X"). The only ways to fix that are to remove the externality (get rid of facial recognition) or shift the cost (penalize employees for not challenging the system).
I favor the former, by a long margin, and the latter is likely unworkable anyway.
Maine has the right approach. Ban it. Ban it for law enforcement, ban other government use, ban commercial use. We've survived for millennia without widespread use of automated facial recognition.
Making the company that trained and then sold the system financially liable, including, where appropriate, punitive damages, would likely go a long way towards getting the underlying problems associated with AI sorted out. Back in the late 1800s Darwin had some useful insights in this area.
The AI system might have a problem distinguishing objects that have specific colors, or a lack of contrast, etc., but shouldn't that be considered an algorithmic problem rather than a racial problem?
From a technology standpoint, sure; but that doesn't take into account the people who are using the system. If they have a bias against people that they know the FR system misidentifies, they might be just fine with it having those flaws, as it can provide cover for treating those people poorly. "I'm not a racist; the system said they were a criminal."
This would be a more convincing argument if the system bounced all Black girls, glasses or not. Instead it could exclude 97% of the previously identified troublemakers and 3% of those who resemble said troublemakers. The owners might or might not be racist, but if they are, then the software isn't doing a good job of being exclusive when there are hundreds or thousands more who could also be excluded.
In any large enough population there are errors made.
The question is: is it advantageous for a venue to allow 99.99% of previously banned troublemakers to be re-admitted, given that no amount of training will allow bouncers to recognize the potentially thousands of faces involved?
Is concern for the 99.999% of their customers, in trying to stop the troublemakers, any part of the calculation, or are those customers simply to accept potentially being victims?
If the number of troublemakers is low and the number of other visitors is high, then a 97% probability that this is a troublemaker is going to have a lot of false positives. That's basic statistics. Not all the false positives will have a cast iron alibi like never having been to the roller rink before - it's unlikely that this was the first person humiliated by being excluded.
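To put rough numbers on that basic statistics point, here is a back-of-the-envelope sketch with entirely made-up figures (the footfall, base rate and error rates are all assumptions) showing how a seemingly accurate matcher still flags mostly innocent people when genuine troublemakers are rare.

```python
# Back-of-the-envelope sketch with made-up numbers: when troublemakers are
# rare, most of the "matches" a system flags are false positives.

visitors_per_weekend = 1_000      # hypothetical footfall
banned_rate = 0.005               # assume 1 in 200 visitors is genuinely banned
true_positive_rate = 0.97         # matcher catches 97% of genuinely banned people
false_positive_rate = 0.03        # and wrongly flags 3% of everyone else

banned = visitors_per_weekend * banned_rate
innocent = visitors_per_weekend - banned

true_alerts = banned * true_positive_rate
false_alerts = innocent * false_positive_rate

print(f"Genuine alerts: {true_alerts:.1f}")
print(f"False alerts:   {false_alerts:.1f}")
# With these numbers, roughly 30 innocent visitors are flagged for every
# ~5 genuinely banned ones - most people shown to the door staff are innocent.
```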
In a once quaint English seaside town, there worked a bouncer at one of the resort's many nightclubs.
Based on the antics of those he was throwing out, he might on occasion resort to biting a chunk out of their ear.
This not only quietened them down a tad, but marked them as unwelcome if they attempted to gain entry again over the course of their hedonistic holiday.
By all accounts this worked a treat for several years, and the club gained popularity among those who liked to enjoy themselves on a night out without fear of the knob heads that always ruined it.
These days I tend to go to places frequented by bikers and other misjudged communities, as their ill-informed reputation tends to be an effective barrier to knob heads, who prefer a more cowardly fight with easier vulnerable targets.
Yes, it shouldn't be seen as a racial problem, as it's just a shit system, it would appear, and the idiots monitoring it should have then done a manual check.
The problem is, there has been facial-recognition software that is biased against black people for some fucked-up reason, and it might be the same issue here.
Humans should then be responsible for making decisions.
After (re)viewing the evidence (ie images) that the AI matched to the person. This might be hard for a human if the image quality is bad or the person has different make-up or ...
What happens if AI is not being used and a venue owner thinks that s/he recognises someone who caused problems some time back ? The owner may well be confusing an innocent person with someone else - this has likely happened many times.
The problem is one where the AI has been trained to have ingrained racism of the "they all look the same" type due to the training dataset and input data quality.
It also relies on cameras, which have also been shown to exhibit ingrained racist assumptions in their design: the cameras pick up white/pale skin traits quite well, but not darker skin, for which they are not very sensitive.
So a poor image + poor AI makes a decision, then shows the photos (on a likely poorly calibrated monitor that is also poor at showing dark skin tones) to an employee, who themselves may have their own ingrained racist opinions (or their employer will), and this will fix the previous two issues?
I'm not too sure what evidence they have, but if you look at racism as simply being "treating differently based on physical appearance", then the software most certainly does.
1. Can racism extend beyond the human element?
2. Is fixing the software based on race, not racism?
I feel if AI facial recognition is to ever have a chance, then it can't depend on cameras that depend on light. Which technology will finally nail it... I'm not sure, but I have a feeling that tech. will be much more costly (I'm envisioning a mass array of lasers throughout the walls, or something crazy like that).
Sensors, just like film before them, don't take a pure, even view across the entire spectrum and brightness range. They are designed and tuned, right down to the lowest level of image processing, to take absolutely any scene and make it as intelligible as possible to the viewer. What this means in practice is that flesh tones, which feature in many pictures, are enhanced. And by flesh tones, obviously I mean the pinky pixels in pictures. Similarly, detail is more readily recovered from the lighter sections, because people want the detail their eyes would also pick out; the darker parts of whatever random view the picture includes are more easily lost. Dark tones contain more noise, so they look better if they're evened out rather than having 'detail'/noise picked out. This approach means that of any million random photos you take, the majority will look better than they would from that even, pure, imaginary sensor - you're a winner. Except it means many specific circumstances will likely always end up doing worse, because they differ from some platonic ideal picture in ways this approach does not favour.
I think we can all understand the point you're trying to make here, but that statement is a bit of a stretch.
There's no racist assumptions in the design, it's just that dark anything reflects significantly less light than pale anything. Elementary physics. Cameras either need a longer exposure to properly image dark things, which will affect image quality in other ways, or need higher gain which can lead to image noise and overexposure of brighter image regions.
There's no easy answer. It's not like this is an already-solved problem whose solution is being deliberately ignored because racism.
Physics isn't racist, it's simply physics.
Do you seriously think the CCTV cameras used in these places are likely to be genuine HDR?
Shooting stills in HDR is relatively easy - the camera just takes a burst at multiple exposure settings and there's an (albeit computationally expensive) process to combine them, although the results will not be good if anyone moves during the burst.
Shooting video in HDR currently requires at least $1000 of camera, more usually $2000. I doubt those are capable of streaming the result easily, and running around with SD cards or SSDs doesn’t really work in this scenario.
I can’t imagine HDR hitting the CCTV/face recognition market for some time yet.
There's no racist assumptions in the design, it's just that dark anything reflects significantly less light than pale anything.
That's just an excuse. If the system cannot treat different races with equal quality of results, then by definition it is racist.
The point being made is less "the hardware/software is racist" and more "the development and testing were not conducted using subjects of varying skin tones, so the system is optimized to a fairly narrow band and falls down when analyzing people of color." With the implication that Systemic Racism is the reason why the system was not sufficiently tested/developed against darker skin tones.
"That's a seriously tainted point of view to hold...? It's very sad."
I'll see your very sad and raise you very irate.
The racist humans are very happy with their racist AI because they can make the racist decision they wanted to anyway and say it was just an algorithm.
I'm white. Just calling it like I see it.
Anger is misdirected. Issue: technology failure to handle varying light conditions and reflectivity. Does a control system exist to fix the technical incapacity? Yes. Is it applied? No. Therefore the system is unable to do what it is sold as doing. Conclusion: consumer protection legal action against vendors of misrepresented goods and services, and ban use until accuracy is demonstrated.
Never assume malevolence when stupidity is a sufficient cause, much as that is a popular attitude. Laziness is also a factor: a good-enough, get-it-out-the-door attitude at manglement levels. That's not to say that racism doesn't exist, but it seems in some countries and self-appointed cultural groups it is seen as a virtuous way of being irredeemably angry.
No, it only becomes racist if different quality of the results is not taken into account.
Dark faces are harder to recognise with light-based photos or video, that's just a fact. If you try to improve recognition of black faces, you will also improve recognition of white faces, the difference stays the same.
Assume that a black kid has a higher chance of a "97% match with a known black troublemaker" than a white kid has. So far, no racism. It stays non-racist if the software requires, say, a 98.2% match for black kids and a 97% match for white kids, or whatever the numbers are, so that all innocent kids have the same chance of being rejected. It becomes racist if you reject twice as many innocent black kids as white kids.
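A minimal sketch of what that would look like in practice, using entirely invented score distributions (the numbers, the group labels, and the assumption that one group's innocent scores run higher are all illustrative, not taken from any real system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Entirely invented similarity scores for *innocent* visitors in two groups.
# Assume, purely for illustration, that the matcher produces systematically
# higher scores for group B (e.g. because its faces are imaged less distinctly).
innocent_scores_a = rng.normal(0.80, 0.05, 100_000)
innocent_scores_b = rng.normal(0.84, 0.05, 100_000)

target_fpr = 0.001  # false-positive rate we are prepared to accept for everyone

# Per-group threshold: the score that only 0.1% of innocent people exceed.
threshold_a = np.quantile(innocent_scores_a, 1 - target_fpr)
threshold_b = np.quantile(innocent_scores_b, 1 - target_fpr)

print(f"Group A threshold: {threshold_a:.3f}")
print(f"Group B threshold: {threshold_b:.3f}")
# A single fixed threshold (say 0.97) would wrongly reject far more innocent
# people from the group whose scores run higher; equalising the false-positive
# rate means the thresholds differ, which is exactly the point made above.
```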
“ If the system cannot treat different races with equal quality of results, then by definition it is racist.”
So now light is racist because it cannot reflect off black skin with the same equal results as it does when it reflects off white skin?
So now light is racist because it cannot reflect off black skin with the same equal results as it does when it reflects off white skin?
No, the adjustment of the CAMERA/RECORDING MEDIUM favors resolution in lighter skin tones, to the detriment of darker skin tones. As another poster noted, white people would be overexposed pale blobs if the adjustment were set to more reasonably capture darker tones.
With modern electronics, is it beyond the wit of engineers to come up with a solution? Non-linear gain, for example? Or is it down to FR software being matched with cheap, nasty CCTV cameras instead of the expensive ones used when setting it up and/or calibrating it? Are these places buying a "system" or just installing some software onto an existing CCTV system?
After all, there are some damned good home security systems out there with some very good hi res night vision cameras. And yet, whenever the Police want to trace a suspect, the only CCTV "footage" they show on TV always seems to be grainy, blurry B&W images, often at 10FPS or less.
I'd be prepared to bet that the source image at this roller rink came from a "bubble cam" where the housing has never been cleaned, up on a wall or ceiling in less than ideal lighting conditions and a very wide angle lens.
It used the same camera to photograph the girl who was banned and the 14-year-old, with what looks like the same lighting.
It's a close-up of their face that is taken at the same time a temperature reading is taken for COVID reasons.
The software compared the two pictures.
"Physics isn't racist, it's simply physics."
Sort of. Physics isn't racist and nor are these cameras racist; they have no ability to change what they were programmed to do. Two things in this situation are racist though. First is the AI algorithms which have been inappropriately trained such that they're more likely to be incorrect about certain groups. That's not the program's fault as it's just performing mathematical operations, but it is a fundamental inaccuracy. The larger thing, however, is the use of all this stuff. If you use a camera you know won't capture people correctly and feed that information into a model which you know won't judge people correctly, that's a racist act. You are using tools which have the result of creating unjust circumstances, whether that was the explicit goal or not.
I was going to upvote you but then I started thinking. If it's racist, that implies action taken specifically to achieve the result of incorrectly identifying people of colour. A better explanation would be economics or laziness.
Think of those instances where "simple" abuse is categorised based on skin colour, sex or (these days) desired gender.
At some point, laziness is no excuse. Take a related thing. If I install equipment which is faulty and is likely to kill the people who use it, but I don't know that I've done so, that's negligence. Still a crime, but a lesser one. If I know that it's likely to kill but I leave it up, I've committed a larger crime. A thing which is known to cause injustice and is left in place is at least tacit acceptance of those consequences, especially when the alternative, removing the system, is such a cheap and easy action to take.
"
It also relies on cameras, that have also been shown to exhibit ingrained racist assumptions in the design - the cameras pick up white/pale skin traits quite well, but not darker skin where they are not very sensitive.
"
This is surely more to do with the fact that visible light is inherently racist, in that it refuses to be reflected as well from dark skin as it does from white skin?
Facial recognition works well for matching a variety of faces against a single face - as in a smartphone that unlocks with your face but not that of others. Apple claims Face ID has a 1 in 50,000 chance of matching someone else's face. You wouldn't trust it with nuclear secrets, but it is fine for its intended purpose.
The more faces a system is trying to match against, the greater the possibility of a false positive. It is simple math - if they had a system similar to Face ID, then the odds of a false match are roughly one in 50,000 divided by the number of faces in their banned database. If they have booted 100 people over the years, that means one out of every 500 visitors will be a false match. You might have that many people through the doors every weekend.
Imagine using a system like that at a sporting event where you have tens of thousands in attendance - you'd need a system with a single person accuracy of one in a billion to make it feasible to use on such a large scale!
The percentage confidence, rather than just a "yes" or "no" like Face ID gives, adds another wrinkle: it supposedly makes it a judgment call by humans, but most likely they have a policy like "if it's over 95% then don't let them in". No idea what percentage Face ID uses, but to reach 1 in 50,000 it very likely requires a higher confidence level than 97%.
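Following on from the arithmetic above, a quick sketch of how one-to-many matching inflates false matches. It assumes, purely for illustration, a Face ID-like 1-in-50,000 per-comparison false match rate and independent comparisons; neither assumption necessarily holds for whatever the rink is actually running.

```python
# How often an innocent visitor falsely matches *someone* on the banned list,
# assuming a 1-in-50,000 per-comparison false match rate and independence
# (illustrative assumptions only).

per_comparison_fpr = 1 / 50_000

for banned_faces in (1, 10, 100, 1_000, 10_000):
    p_any_false_match = 1 - (1 - per_comparison_fpr) ** banned_faces
    print(f"{banned_faces:>6} banned faces -> "
          f"roughly 1 in {1 / p_any_false_match:,.0f} innocent visitors flagged")
```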
FaceID uses distance measurement, basically analysing the detailed shape of your face. Skin colour has no effect on it at all. This was demonstrated using models wearing an amount of face paint that would have made them unrecognisable to any human; it didn't affect FaceID.
We don't know what the roller rink is using, but even if it is a different technology than Face ID, the exact same issue of one-to-one versus one-to-many matching still applies. I only referenced Face ID since it is well known and Apple has released information about its accuracy.
"FaceID uses distance measurement, basically analysing the detailed shape of your face. Skin colour has no effect on it at all."
Then why does current facial-recognition technology have so much trouble with non-white faces compared to white faces?
https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/
"But Idemia’s algorithms don’t always see all faces equally clearly. July test results from the National Institute of Standards and Technology indicated that two of Idemia’s latest algorithms were significantly more likely to mix up black women’s faces than those of white women, or black or white men.
The NIST test challenged algorithms to verify that two photos showed the same face, similar to how a border agent would check passports. At sensitivity settings where Idemia’s algorithms falsely matched different white women’s faces at a rate of one in 10,000, it falsely matched black women’s faces about once in 1,000—10 times more frequently. A one in 10,000 false match rate is often used to evaluate facial recognition systems."
The facial recognition that has more trouble with black faces isn't based on distance measurement; it's based on photos/video.
The latter is what the roller rink would likely be using, since I doubt they are asking everyone to stand still for a moment while they scan their face. They just have cameras on them as they walk in.
The problem with black faces is mostly due to a lack of training on black faces, but there's no indication that was the case here. Roller rinks are a black culture thing in the US; the overwhelming majority of people through the doors - especially in urban areas - are black. If whatever system they use is able to "learn" after deployment, it will quickly get a lot of black faces to train on and that particular bias would become less of an issue. The one-to-one vs one-to-many thing still would be, however.
worth suing, metaphorically, because they feel something's seriously fucked up with this society, or literally, calculating the investment to hire a lawyer and potential return, which proves there's something seriously fucked up with this society?
And how much damage to their life, job, housing, etc., when the other side's lawyer has access to any previous police records (thanks to some friends on the force) and the roller rink hires a PR company to make sure this all gets out on Fox News and social media?
“The software had her daughter at a 97 percent match. This is what we looked at ... if there was a mistake, we apologize for that."
Again, not actually accepting there was a mistake, that a young girl has been humiliated and denied an enjoyable experience with her friends because of a faulty system. I do hope the parents get appropriate help deciding whether or not to pursue this legally.
At some point there has to be a legally required standard for facial recognition systems to meet. Claiming that the AI system said it was a 97% match is meaningless without context. What was the AI system trained on? I bet (and this is extremely racist) that if you trained an AI facial recognition system on blond, blue-eyed, fair-skinned boys plus one baboon, and then showed it a picture of any black person, the match would be with the baboon. Conversely, train one on a load of black and Asian people and one polar bear, then show it a picture of a white person with white hair, and it would match with the bear. Unless these systems are trained on an appropriate demographic they should be banned.
OK rant over
Entirely this. Facial recognition is known for having problems distinguishing between non-whites. I don't know why - I don't know if anyone does - but knowing the system is badly flawed, yet continuing to use it, is inherently discriminatory against anyone who ain't white.
I wonder how well Chinese developed FR works outside China? Likewise any other countries with FR programmes. I'm sure this must be happening all over the world, possibly with similar levels of issues with their minorities or non-locals. After all, white people are the minority in much of the world.
It all depends on the size and quality of training data and the effort of the developers. If you start with a nonrepresentative sample that hasn't been cleaned up, pump it through the training process until the percentages are high, then sell the result, you'll get something inaccurate most of the time. If you're selling a product though, the number of photos and high test scores are all you're quoting, so many companies do that.
With rigorous attention to detail by data scientists and machine learning experts, you could get something which is significantly better. However, it would be a lot more expensive and it would still be wrong often due to unavoidable problems like poor cameras. At this rate, most people have either concluded that they don't want to do something that will never be even close to acceptable or that, if they're going to be inaccurate anyway, no use spending a lot of time trying to improve. And there we are today.
Actually, admitting they were wrong, genuinely apologising (not 'if there was an error we are sorry if anyone has misunderstood' Tory-style apology), and offering some sort of compensation, even if it's only free admission for the next decade, would be an incredibly WISE thing to do. Might even get some good publicity.
That and reviewing their procedures. Perhaps require the bod on the door to look at a photograph (of a banned person, flagged up by the system) and require them to decide if the customer is the same person. And take responsibility for their decision.
> And take responsibility for their decision
You're kidding, aren't you...
That would require capable bouncers (with a brain!), paid enough to take responsibility and not only tips. Besides, only 1 in 100 of those wrongly turned away will make any bigger fuss, so why bother?
"Sometime there has to be a legally required standard for facial recognition systems to meet."
No, we approach that in a different way. It's already illegal to act solely on the basis of a flawed (discriminatory) algorithm.
The problem is the users completely misunderstanding what the tech is saying. It isn't a 97pc chance of a match, it's a 97pc chance that a human should look and decide if this is the same person as in whatever stored image they're comparing to.
Yes, it is a major design flaw that the system tells the operator its assessment of the match. It should just display the top three matches from its database, with no further comment, for every visitor, and let a human judge if any of them are the same person (a minimal sketch of that workflow follows below).
The flawed design is so obvious that the creators of the machine should probably be prosecuted for racism.
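A minimal sketch of that "top three candidates, no percentage" workflow; the names, scores and helper function are all invented for illustration, not taken from any real product.

```python
# Minimal sketch of the UI suggested above: show the door staff the top few
# candidate photos from the banned list, deliberately without any percentage,
# and leave the final call (and the responsibility) with a human.

def top_candidates(similarities, k=3):
    """Return the k banned-list entries most similar to the visitor,
    stripping out the raw scores before anything is displayed."""
    ranked = sorted(similarities.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _score in ranked[:k]]

# Hypothetical similarity scores between tonight's visitor and the banned list.
similarities = {"banned_0041": 0.97, "banned_0013": 0.62,
                "banned_0029": 0.58, "banned_0007": 0.31}

for name in top_candidates(similarities):
    print(f"Compare the visitor with stored photo {name} and check their ID.")
```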
I doubt the system is in place to match every visitor and flash up possible matches for a human to check. It's far more likely that it's looking at every visitor and only alerting staff when it thinks there's a match. At which point sirens and red flashing lights start up and the suspect is ejected.
I think the big question here is - why does a roller-skating rink have facial-recognition AI? Are they actually just a cover for an underground nuclear missile silo? Or maybe, behind the pinball machines, is the entrance to the Pentagon's Michigan CnC bunker? Or perhaps the Navy keeps its most highly-classified results from the rail-gun trials in locker number 76. I mean, it's a roller-skating rink, not a national security site. Seems a bit paranoid to me (and I'm a bit paranoid myself).
Saving labor costs, plus management just hears the sales team say "Now you'll never accidentally let someone banned in to cause trouble again!" Of course, they know nothing about the tech, and sales knows practically nothing about the tech or what false positive means.
And note this kind of low-end, error-ridden AI is just a module for the security camera system, it's not like a whole new system installed just for this purpose. It's increasingly common for all the major premises security vendors to offer one.
The low tech way to do it is to have pictures of troublemakers, make your staff try to recognise them, and act on that basis.
Here, the computer system is supposed to be doing the first step of 'that looks a bit like this, we should check if they're the same'.
Turning someone away unless you're sure they're the troublemaker in question is very odd. Normally if you aren't certain you'd let someone in and keep a closer than normal eye on them to start with.
"Turning someone away unless you're sure they're the troublemaker in question is very odd. Normally if you aren't certain you'd let someone in and keep a closer than normal eye on them to start with."
But but but, won't someone think of the lawyers? If you let in a "known" troublemaker "by mistake" and they get in a fight and hurt someone, it's your fault. This is the Land of the Lawyer we are talking about.
One day a kid is going to get thrown out for fighting and they will want to get even, so they go home, grab a gun and 3-4 magazines with 20 rounds each, and come back. But it's a shift change and the management haven't told everyone, so the kid walks in and kills 10-20 people and wounds another dozen, some with lifelong agonizing damage.
Then you will ask - why didn't they have better security and shouldn't they have done something?
Guns - because in America there are more firearms in private hands than there are people, and casual insults or even getting cut off in line are used to justify killing.
The US has a mass shooting almost every day, too many to make the national news.
>Looking at the pictures in the linked story - the girls do look pretty similar
Similar in the sense they are both dark skinned and wearing glasses?
I hope this does go to court, but it needs a decent lawyer and legal team to force the vendor of the facial recognition system to attend and explain in full detail how their system arrived at the 97% match figure, ie. strip away the AI cloak mysticism.
Hmmm
Both lit well enough.
Both wearing glasses, totally different designs.
Victim girl has a thinner face.
A 10-second look at a not-very-good photo: definitely not the same person.
1) Badly trained AI
2) Idiot staff not checking and seeing two completely different girls.
Indeed. There is a book written by a black woman (can't remember what it's called, sorry) where the author told the story of when she won a scholarship to a very good all-girls high school, where she was one of the very few black girls there. (She'd previously been at a school which was mainly black kids.) She had no problems about the girls at her new school - they couldn't have been nicer and more welcoming. But she literally couldn't tell them apart - their faces all looked the same to her. She had to use unreliable visual cues like clothes and hair colour.
So, in order for this all singing all dancing wonderful AI technology (that is going to make all our lives so much better), to work - we need to re-introduce segregation!!! Yeah - go human advancement!
We can at least be grateful that the cameras were not, as yet, attached to mini-gun auto turrets - that will perhaps be a future update!
See: https://www.businessinsider.com/httpswwwbusinessinsideresdrones-reconocimiento-facial-cerca-ser-realidad-812285?r=US&IR=T
Also the MCU has already covered this. The nice Robert Redford played a character who wanted airborne aircraft carriers with facial recognition guided guns to 'take out' undesirable characters without trial, appeal or much consideration of collateral damage. (Spoiler alert - he gets shot.)
It's all very well embracing 'big data', but what you end up with is 'data quality' issues at 'big' scale.
I'm not a fan of AI/ML or facial recognition (I'm a people-person and in my 50s)... I should think the false positives generated outweigh the benefits of the bloody stuff. I suppose it has its uses, but until it gets a *whole lot better* there has to be human oversight to exert a degree of common sense on its decisions.
97% Match... just means a 97% match by a bad algorithm on crappy data. I can understand the staff apologizing, but the people who need 'outing' as guilty are the numpties who made the system in the first place. I don't know the details so can't really judge, but the phrase 'not fit for purpose' springs to mind.
Me and my mate holidayed with his elderly relative in California in the '80s. She'd moved there from Canada after leaving Scotland. She took us to the largest shopping mall we'd ever seen that had the largest ice rink we'd ever seen in it. There was only one person on it, an angelic seven-ish hispanic lass. She was captivatingly talented and me and my mate were close to tears watching her. At that point the old Scottish woman turned up, glanced at the child and snorted, "Bloody immigrants." No sense of irony or self-awareness.
She kept on trying to take us to her Scottish highland dancing club. We went once and were appalled at their racism. In California 'Scottish' or 'Irish' is shorthand for whites-only. They considered us 'bad Scots', and we didn't consider them Scots at all. Scottishness, it's more than a porridge thing.
“The software had her daughter at a 97 percent match [with absolutely no validity or credibility attached to this figure] . This is what we looked at..."
So...the software reported a 97% match, eh, and--by inference-- "...this is the ONLY THING we looked at. WE DID NOT, AT ANY POINT, EVER CONCEIVE OF ALLOWING A HUMAN TO BECOME INVOLVED IN THIS PROCESS..."
+++++++++++++++++++++++++
"...Juliea and Derrick, are now mulling whether it’s worth suing Riverside Arena or not...". That depends...
Seems to me that the Robinsons are on their way to a really fat pay-off, if only they will hire a lawyer with a modicum of understanding about statistics AND a real Expert Witness whose expertise is in the field of Statistics. This is one of the more egregious examples of that fact, and, sadly, not the last...by any means.
The entire field of Artificial Intelligence and Machine Learning is populated by charlatans.
As first poster said, human element is needed -- and NOT just to say "the computer said there was a match." The honest fact is, in a sense the system is racist -- you're probably going to have like a 60% match just for having 2 eyes, a nose, and a mouth (maybe 65 or 70% because of the glasses), +10% for a similar hairstyle, +10% for skin tone, maybe another 10% for having vaguely the same head size and shape (i.e. a girlish head), you're then at like 90% without it meaning much of anything.
If places are going to use an AI, they really MUST have it so a match like this gets the operator to pay attention, not just go on some result from the system. The system really needs to show a name and photo for the match, and the operator needs to be expected to use them (not rely on some percentage match). It would have been easy enough to either see the photos don't match (... maybe; I suppose it's possible they really are practically a doppelganger for the troublemaker), or to ask "hey, are you (name)?" or "could I have your name please?" and let them in when it's clear they aren't the same person.
edit: Looked at the photos in TFA. I can see why an AI may have thought they were similar (in particular, they have similar eyeshadow... or possibly some purplish-blue effect in the photos from how the camera and glasses interact... that stands out). But it takes a few seconds of human intervention to see they don't have the same head shape and are not the same person. The owner admitted they just look at the % match and not the photos; that's the issue I'd take up here. If they're going to use an AI, that's a bad way to do it.
Maybe a bye-law allowing any child so accused a phone call to their parents / guardian. The young man who called the police because he thought George Floyd was trying to pass a forged $20 bill has spoken of his guilty feeling that he is in some way responsible for Mr Floyd's death.
FRS systems have notorious problems with dark-skinned people. This has been widely reported. Part of the issue is the quality of the photos and the skill (or, more accurately, the lack of it) of the 'photographer'. High-quality portrait images require quality gear, good lighting, and a competent person behind the camera. Even if the first two conditions were met, I doubt the rink has a competent photographer on staff. I have my doubts about the gear and the lighting. Plus I have my doubts about the images used to 'train' the system, particularly those of dark-skinned people.
I have cats who have solid, dark brown fur. If the lighting is not good, the cat's facial details are sometimes hard to discern in a photo. I suspect a FRS system would struggle with an accurate identification. I have good gear and pretty good idea of what I am doing but I do not always have the best lighting.
While I dislike suing someone just because, this case seems to beg for a suit because the rink's methodology and (mis)use of the technology shows a complete lack of understanding of its limits. More accurately complete stupidity. So the girl resembles someone else, whatever happened to asking for her name?
So clearly what is needed is a supplemental palm-print database. Every customer gets a facial photo taken, and a palm imprint to accompany. Next time AI says "you look like that troublemaker we ejected last week", accused (would-be) customer presses palm onto a reader plate to see if there's a match. No problem there, right?
"The problem is not with the facial recognition software. The problem is with the bad hardware..."
WRONG !!.. "computers-can-do-anything"-breath (as Johnny Carson might have said).
THE problem is with all the mindless, room-temperature-IQ, mouth-breathing idiots in the world who have not one iota ("jot", in Britspeak) of critical thinking skills. These numbskulls think that computers and software are the solution to EVERYTHING, and fall head-over-heels for the latest "solution" -- panacea, if you will -- to any problem which presents itself, and for the snake-oil salesmen who are only too ready to take the money of these gullible rubes -- and who, sadly, are most often victims themselves of the same mentality and mind-set.
Does "Boeing 737 Max" ring a bell?
"The required techniques of effective reasoning are pretty formal, but as long as programming is done by people that don't master them, the software crisis will remain with us and will be considered an incurable disease. And you know what incurable diseases do: they invite the quacks and charlatans in, who in this case take the form of Software Engineering gurus.”--Edsger W. Dijkstra
“It is time to unmask the computing community as a Secret Society for the Creation and Preservation of Artificial Complexity."--Edsger W. Dijkstra
"Originality is no excuse for ignorance."--Fred Brooks
"I find that the reason a lot of people are interested in "artificial intelligence" is for the same reason that a lot of people are interested in artificial limbs: they are missing one."--David L. Parnas
.. that they even kept the miscreant’s (facial) data in a database.*
Is that allowed?
Either way it is pretty scary.
Even leaving aside that the enforcement authorities et al would lose no time in accessing the database - of kids - roller skating! - whenever they wanted to.
* and the innocent girl’s too.