Teen turned away from roller rink after AI wrongly identifies her as banned troublemaker

A Black teenager in the US was barred from entering a roller rink after a facial-recognition system wrongly identified her as a person who had been previously banned for starting a fight there. Lamya Robinson, 14, had been dropped off by her parents at Riverside Arena, an indoor rollerskating space in Livonia, Michigan, at the …

  1. Dr Scrum Master

    Using it wrongly

    Facial recognition should be used to identify potential matches.

    Humans should then be responsible for making decisions.

    1. Khaptain Silver badge

      Re: Using it wrongly

      I understand what you are trying to say, but I have to disagree, as this is exactly what happened here.

      An individual was identified by the AI system as a potential match for a known, forbidden character, and the person responsible that day took the decision to accept the AI's detection and forbade entry. The human element was definitely there: the computer system did not stop the person coming in, a human meatbag did.

      The AI system might have a problem distinguishing objects that have specific colors, or a lack of contrast, etc., but shouldn't that be considered an algorithmic problem rather than a racial problem?

      1. Pascal Monett Silver badge
        Thumb Down

        He accepted the AI's decision, but he did not check the result.

        So he just made himself part of the robot. No intelligence included.

        1. veti Silver badge

          Well, yes. But in practice, that's what will happen most of the time. Especially if you employ minimum wage bouncers.

          If the software says it's 97% sure, you'd have to be both pretty sure of your own perception and confident in your own authority to overrule it. I imagine people employed for this purpose are rarely either of those things.

          1. Dave314159ggggdffsdds Silver badge

            But the software doesn't say it's any percentage sure of anything; it tells you how much similarity the algorithm sees, and nothing else. 97pc is the probability that it's a good enough match to be worth flagging for humans to check, not the chance that they're the same person.

            1. RegGuy1 Silver badge

              “The software had her daughter at a 97 percent match. This is what we looked at ... if there was a mistake, we apologize for that."

              The mistake is to use the software. If you work there you are likely (within the team of employees) to know the troublemakers. Why have software? Or is this yet another example of replacing costly humans with cheap technology?

              1. tip pc Silver badge

                I agree, but sometimes you need your type of “use case” in use somewhere so you can sell it to others.

                Magic quadrants need real life use cases, regardless of how artificial those use cases start out.

          2. TechnicalVault

            The first mistake is UI

            A big problem here is the software foolishly exposing the 97% number in the UI. Your average layperson does not understand that this does not necessarily mean it is a match. In this case it probably means your training data is woefully deficient in non-matching black people, so your algorithm has learnt the wrong thing. Honestly, they should not be selling this kind of thing with this naïve a UI.

            What software aimed at minimum wage staff should be doing is telling the staff what they should do next (customisable per company). It should be saying something like: "I think this is this person, please manually compare the images and check the person's ID to confirm the match".
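            As a minimal sketch of that idea (entirely my own illustration, with made-up thresholds and wording, not the vendor's actual API or policy), the operator-facing output could look something like this in Python:

                def operator_instruction(similarity: float) -> str:
                    # Hypothetical cut-off; a real value would be set per vendor/venue
                    if similarity < 0.90:
                        return "No action needed."
                    return ("Possible match with a banned visitor. "
                            "Compare the two photos side by side and check the person's ID "
                            "before refusing entry. Do not rely on this alert alone.")

                print(operator_instruction(0.97))

            The raw score never reaches the door staff; all they ever see is the next step to carry out.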

      2. elsergiovolador Silver badge

        Re: Using it wrongly

        This is quite a common phenomenon, where a device is used to absolve the controller of responsibility.

        AI is the new dog

        In the past, the police used dogs to justify searching someone, when in reality a prejudiced officer would give the dog a command to act as if the suspect had raised suspicion.

        Using the same analogy, the operators of such systems can tweak them to underpin any door policy they want, where, in the absence of such a device, the policy would be unlikely to be legal. It was just a glitch... We got hacked... and so on...

        When such systems are used, a human should always be solely responsible for the decision, and the fact that the computer gave this or that suggestion should be irrelevant.

        1. Michael Wojcik Silver badge

          Re: Using it wrongly

          Absolutely. Human operators will be reluctant to override the software because that's where the majority of the risk lies for them. A false positive is an externality ("I was told to use the software and the software said X"). The only ways to fix that are to remove the externality (get rid of facial recognition) or shift the cost (penalize employees for not challenging the system).

          I favor the former, by a long margin, and the latter is likely unworkable anyway.

          Maine has the right approach. Ban it. Ban it for law enforcement, ban other government use, ban commercial use. We've survived for millennia without widespread use of automated facial recognition.

          1. Anonymous Coward
            Anonymous Coward

            Re: Using it wrongly

            Making the company that trained and then sold the system financially liable, including, where appropriate, punitive damages, would likely go a long way towards getting the underlying problems associated with AI sorted out - back in the late 1800s Darwin had some useful insights in this area.

      3. batt-geek

        Re: Using it wrongly

        I think Khaptain has nailed it - a flaw in the system (be it inadequate software / hardware / etc) but not racial profiling...

        The problem with "inadequate software / hardware / etc" is that it doesn't give someone a reason to lawyer up and make an easy few quid...

        1. slimshady76

          Re: Using it wrongly

          There's enough evidence out there about how facial recognition software is trained with a marked bias against non-Caucasian populations. If you live in a territory with a significant non-Caucasian population, I'd say it's an inadequate choice to use it.

      4. katrinab Silver badge
        Megaphone

        Re: Using it wrongly

        It is both an algorithmic problem and a racial problem.

      5. Snowy Silver badge
        Facepalm

        Re: Using it wrongly

        He is not saying what happened here, but what should have happened here.

      6. fidodogbreath

        Re: Using it wrongly

        The AI system might have a problem distinguishing objects that have specific colors, or a lack of contrast, etc., but shouldn't that be considered an algorithmic problem rather than a racial problem?

        From a technology standpoint, sure; but that doesn't take into account the people who are using the system. If they have a bias against people that they know the FR system misidentifies, they might be just fine with it having those flaws, as it can provide cover for treating those people poorly. "I'm not a racist; the system said they were a criminal."

        1. Anonymous Coward
          Anonymous Coward

          Re: Using it wrongly

          This would be a more convincing argument if the system bounced all Black girls, wearing glasses or not. Instead, it could exclude 97% of the previously identified troublemakers and 3% of those who resemble said troublemakers. The owners might or might not be racist, but if they are, then the software isn't doing a good job of being exclusive when there are hundreds or thousands more who could also be excluded.

          In any large enough population there are errors made.

          The question is: is it advantageous for a venue to allow 99.99% of previously banned troublemakers to be re-admitted, on the basis that no amount of training will allow bouncers to recognize the potentially thousands of faces?

          Is concern for the 99.999% of their customers, in trying to stop the troublemakers, any part of the calculation, or are those customers simply expected to accept potentially being victims?

          1. AVR Bronze badge

            Re: Using it wrongly

            If the number of troublemakers is low and the number of other visitors is high, then a 97% probability that this is a troublemaker is going to have a lot of false positives. That's basic statistics. Not all the false positives will have a cast iron alibi like never having been to the roller rink before - it's unlikely that this was the first person humiliated by being excluded.
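            A minimal back-of-envelope sketch of that base-rate point (the numbers below are my own assumptions, not figures from the article):

                banned_returning = 5       # banned people actually trying to get in, per 1,000 visitors (assumed)
                innocent = 995             # everyone else
                hit_rate = 0.97            # flags a real banned person 97% of the time (one assumed reading of "97%")
                false_alarm_rate = 0.03    # flags an innocent visitor 3% of the time (assumed)

                true_alarms = banned_returning * hit_rate
                false_alarms = innocent * false_alarm_rate
                print(f"true alarms: {true_alarms:.1f}, false alarms: {false_alarms:.1f}")
                print(f"chance a flagged visitor is actually banned: {true_alarms / (true_alarms + false_alarms):.0%}")

            Even with those generous assumptions, roughly six out of every seven flagged visitors are innocent.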

          2. Rol

            Re: Using it wrongly

            In a once quaint English seaside town, there worked a bouncer at one of the resort's many nightclubs.

            Based on the antics of those he was throwing out, he might on occasion resort to biting a chunk out of their ear.

            This not only quietened them down a tad, but marked them as unwelcome if they attempted to gain entry again over the course of their hedonistic holiday.

            By all accounts this worked a treat for several years, and the club gained popularity among those who liked to enjoy themselves on a night out without fear of the knob heads that always ruined it.

            These days I tend to go to places frequented by bikers and other misjudged communities, as their ill-informed reputation tends to be an effective barrier to knob heads, who prefer a more cowardly fight with easier vulnerable targets.

            1. jason_derp

              Re: Using it wrongly

              "In a once quaint English seaside town..."

              So, this girl should hang out at biker bars instead of roller rinks? I'm not sure I understand the parable. Is it a parable? Seems like it is.

              1. Rol

                Re: Using it wrongly

                I was suggesting a more rigorous approach by indelibly marking known idiots so that the rest of society can go about their business without risk of being confused as one.

      7. tip pc Silver badge
        WTF?

        Re: Using it wrongly

        The computer says no

        For the Americans, watch here:

        https://www.youtube.com/watch?v=0n_Ty_72Qds

      8. steviebuk Silver badge

        Re: Using it wrongly

        Yes, it shouldn't be seen as a racial problem, as it's just a shit system, it would appear, and the idiots monitoring it should have then done a manual check.

        The problem is, there has been facial recognition software that is biased against black people for some fucked-up reason, and it might be the same issue here.

      9. Cynic_999

        Re: Using it wrongly

        "

        ... shouldn't that be considered as an algorithmic problem rather than a racial problem ?

        "

        Perhaps it was a racist algorithm.

        1. Glenturret Single Malt

          Re: Using it wrongly

          I was wondering how many BME people were involved in constructing the algorithm.

    2. alain williams Silver badge

      Re: Using it wrongly

      Humans should then be responsible for making decisions.

      After (re)viewing the evidence (ie images) that the AI matched to the person. This might be hard for a human if the image quality is bad or the person has different make-up or ...

      What happens if AI is not being used and a venue owner thinks that s/he recognises someone who caused problems some time back? The owner may well be confusing an innocent person with someone else - this has likely happened many times.

      1. Anonymous Coward
        Anonymous Coward

        Re: Using it wrongly

        The odds are vastly different.

        Every entrant is scanned by the camera. So the probability of being picked up is higher.

        Then there is the probability that the image identification will be accepted just because it was offered.

        A 97% "match" is a vague measure.

    3. Warm Braw

      Re: Using it wrongly

      Humans are also responsible for the development of the technology.

      Which just goes to show how much you can rely on human "responsibility".

    4. Sykowasp

      Re: Using it wrongly

      The problem is one where the AI has been trained to have ingrained racism of the "they all look the same" type due to the training dataset and input data quality.

      It also relies on cameras, that have also been shown to exhibit ingrained racist assumptions in the design - the cameras pick up white/pale skin traits quite well, but not darker skin where they are not very sensitive.

      So a poor image + poor AI makes a decision, then shows the photos (on a likely poorly calibrated monitor that is also poor at showing dark skin tones) to an employee, who themselves may have their own ingrained racist opinions (or their employer will), and this will fix the previous two issues?

      1. Khaptain Silver badge

        Re: Using it wrongly

        "It also relies on cameras, that have also been shown to exhibit ingrained racist assumptions in the "

        design"

        That's an extremely serious assumption to make; can you please provide evidence of what you state?

        1. Anonymous Coward
          Anonymous Coward

          Re: Using it wrongly

          I'm not too sure what evidence they have, but if you look at racism as simply being "treating differently based on physical appearance", then the software most certainly does.

          1. Can racism extend beyond the human element?

          2. Is fixing the software based on race, not racism?

          I feel if AI facial recognition is to ever have a chance, then it can't depend on cameras that depend on light. Which technology will finally nail it... I'm not sure, but I have a feeling that tech. will be much more costly (I'm envisioning a mass array of lasers throughout the walls, or something crazy like that).

          1. Anonymous Coward
            Anonymous Coward

            Re: Using it wrongly

            (entering a club) Whoa, awesome laser light show!

            Ah, no, that's just the security system...

        2. the hatter

          Re: Using it wrongly

          Sensors, just like film before them, don't take a pure, level view across the entire spectrum and brightness range. They are designed and tuned, right down to the lowest level of image processing, to take absolutely any scene and make it as intelligible as possible to the viewer. What this means in practice is that flesh tones, which feature in many pictures, are enhanced - and by flesh tones, obviously, I mean the pinky pixels. Similarly, detail is more readily extracted by enhancing the lighter sections, because people want the detail their eyes would pick out; the darker parts of whatever random view the picture includes are more easily lost. Dark tones contain more noise, so they look better if they're evened out rather than having 'detail'/noise picked out. This approach means that for any million random photos you take, the majority will look better than they would from that even, pure, imaginary sensor - you're a winner. Except it means many specific circumstances will likely always end up doing worse, because they differ from some platonic ideal picture in ways this approach does not favour.

      2. Jimmy2Cows Silver badge

        Re: exhibit ingrained racist assumptions in the design

        I think we can all understand the point you're trying to make here, but that statement is a bit of a stretch.

        There's no racist assumptions in the design, it's just that dark anything reflects significantly less light than pale anything. Elementary physics. Cameras either need a longer exposure to properly image dark things, which will affect image quality in other ways, or need higher gain which can lead to image noise and overexposure of brighter image regions.

        There's no easy answer. It's not like this is an already-solved problem whose solution is being deliberately ignored because racism.

        Physics isn't racist, it's simply physics.
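        To put rough numbers on that trade-off (purely illustrative values I've assumed, not measurements), a quick Python sketch of signal-to-noise ratio versus reflectance shows why turning up the gain can't rescue a subject that simply reflects less light:

            import numpy as np

            def snr(reflectance, exposure=1000.0, gain=1.0, read_noise=5.0):
                photons = reflectance * exposure                       # light actually captured
                shot_noise = np.sqrt(photons)                          # Poisson photon noise
                signal = gain * photons
                noise = np.sqrt((gain * shot_noise) ** 2 + read_noise ** 2)
                return signal / noise

            for r in (0.6, 0.35, 0.1):                                 # assumed reflectance values, pale to dark
                print(f"reflectance {r:.2f}: SNR {snr(r):.1f}, at 4x gain {snr(r, gain=4.0):.1f}")

        The darker the subject, the lower the SNR; extra gain amplifies the noise along with the signal, so it does little to close the gap.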

        1. This post has been deleted by its author

        2. katrinab Silver badge
          Unhappy

          Re: exhibit ingrained racist assumptions in the design

          Yes, but when setting their exposure parameters and so on, they tune them to work best on white people.

          They could tune them to work best on black people, and white people would appear as an over-exposed white blob on the image.

          1. Roland6 Silver badge

            Re: exhibit ingrained racist assumptions in the design

            And they could use AI to detect the skin tone of the person being looked at and set the exposure parameters accordingly...

            1. Swarthy

              Re: exhibit ingrained racist assumptions in the design

              Or just use the HDR setting, which is exactly what it's designed for.

              1. Martin M

                Re: exhibit ingrained racist assumptions in the design

                Do you seriously think the CCTV cameras used in places like this are likely to be genuine HDR?

                Shooting stills in HDR is relatively easy - the camera just takes a burst at multiple aperture settings and there's an (albeit computationally expensive) process to combine them, although the results will not be good if anyone moves during that process.

                Shooting video in HDR currently requires at least $1000 of camera, more usually $2000. I doubt those are capable of streaming the result easily, and running around with SD cards or SSDs doesn’t really work in this scenario.

                I can’t imagine HDR hitting the CCTV/face recognition market for some time yet.

                1. Martin M

                  Re: exhibit ingrained racist assumptions in the design

                  No apertures, exposures. D’oh.
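                  For stills, the bracketing-and-merge step really is the easy bit; here is a minimal sketch using OpenCV's Mertens exposure fusion (file names are hypothetical, and this says nothing about what CCTV hardware actually does):

                      import cv2
                      import numpy as np

                      paths = ["under.jpg", "normal.jpg", "over.jpg"]     # hypothetical bracketed frames, dark to bright
                      frames = [cv2.imread(p) for p in paths]

                      fused = cv2.createMergeMertens().process(frames)    # merge; returns a float image in roughly [0, 1]
                      cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))

                  As noted above, this falls apart as soon as the subject moves between frames, which is exactly the situation at a busy entrance.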

        3. elsergiovolador Silver badge

          Re: exhibit ingrained racist assumptions in the design

          There's no racist assumptions in the design, it's just that dark anything reflects significantly less light than pale anything.

          That's just an excuse. If the system cannot treat different races with equal quality of results, then by definition it is racist.

          1. Khaptain Silver badge

            Re: exhibit ingrained racist assumptions in the design

            "That's just an excuse. If the system cannot treat different races with equal quality of results, then by definition it is racist."

            That's a seriously tainted point of view to hold... It's very sad.

            1. Swarthy

              Re: exhibit ingrained racist assumptions in the design

              The point being made is less "the hardware/software is racist" and more "the development and testing were not conducted using subjects of varying skin tones, and as such the system is optimized to a fairly narrow band and falls down when analyzing people of color", with the implication that Systemic Racism is the reason why the system was not sufficiently tested/developed against darker skin tones.

            2. yetanotheraoc Silver badge

              Re: exhibit ingrained racist assumptions in the design

              "That's a seriously tainted point of view to hold.... ? It very sad."

              I'll see your very sad and raise you very irate.

              The racist humans are very happy with their racist AI because they can make the racist decision they wanted to anyway and say it was just an algorithm.

              I'm white. Just calling it like I see it.

              1. Anonymous Coward
                Anonymous Coward

                Re: exhibit ingrained racist assumptions in the design

                Anger is misdirected. Issue: a technology failure to handle varying light conditions and reflectivity. Does a control exist to fix the technical incapacity? Yes. Is it applied? No. Therefore the system is unable to do what it is sold as doing. Conclusion: consumer protection legal action against vendors of misrepresented goods and services, and a ban on use until accuracy is demonstrated.

                Never assume malevolence when stupidity is a sufficient cause, much as that is a popular attitude. Laziness is also a factor: a "good enough, get it out the door" attitude at manglement levels. That's not to say that racism doesn't exist, but it seems in some countries and self-appointed cultural groups it is seen as a virtuous way of being irredeemably angry.

              2. LybsterRoy Silver badge

                Re: exhibit ingrained racist assumptions in the design

                I wonder what the opposite of rose tinted glasses is? I suspect you are wearing a pair.

                1. Cynic_999

                  Re: exhibit ingrained racist assumptions in the design

                  "

                  I wonder what the opposite of rose tinted glasses is?

                  "

                  Reality.

            3. Cynic_999

              Re: exhibit ingrained racist assumptions in the design

              Erm ------ whooooosh ----------

          2. Anonymous Coward
            Anonymous Coward

            Re: exhibit ingrained racist assumptions in the design

            That's just an excuse. If the system cannot treat different races with equal quality of results, then by definition the decision to deploy it is racist.

            FTFY

            1. gnasher729 Silver badge

              Re: exhibit ingrained racist assumptions in the design

              No, it only becomes racist if different quality of the results is not taken into account.

              Dark faces are harder to recognise with light-based photos or video; that's just a fact. If you try to improve recognition of black faces, you will also improve recognition of white faces, so the difference stays the same.

              Assume that a black kid has a higher chance of a "97% match with known black troublemaker" than a white kid has of a 97% match. So far, no racism. It becomes non-racist if the software requires, say, a 98.2% match for black kids and a 97% match for white kids, or whatever the numbers are, so that all innocent kids have the same chance of being rejected. It becomes racist if you reject twice as many black kids as white kids.
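              A quick sketch of how that per-group calibration could work in practice (synthetic scores and an assumed target rate, purely illustrative): take the similarity scores the system produced for visitors known not to be on the ban list, split them by group, and pick each group's threshold so the false-positive rate comes out the same.

                  import numpy as np

                  rng = np.random.default_rng(0)
                  # Synthetic non-match scores; one group's scores run hotter than the other's
                  nonmatch_scores = {
                      "group_a": rng.normal(0.80, 0.05, 10_000),
                      "group_b": rng.normal(0.70, 0.05, 10_000),
                  }

                  target_fpr = 0.001   # assumed policy: one innocent flag per 1,000 visitors
                  thresholds = {group: float(np.quantile(scores, 1 - target_fpr))
                                for group, scores in nonmatch_scores.items()}
                  print(thresholds)    # group_a ends up needing a higher cut-off for the same false-positive rate

              The same idea works whatever the real score distributions look like, provided you have enough known non-matches per group to estimate them.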

          3. Phones Sheridan Silver badge

            Re: exhibit ingrained racist assumptions in the design

            “ If the system cannot treat different races with equal quality of results, then by definition it is racist.”

            So now light is racist because it cannot reflect off black skin with the same equal results as it does when it reflects off white skin?

            1. Anonymous Coward
              Anonymous Coward

              Re: exhibit ingrained racist assumptions in the design

              So now light is racist because it cannot reflect off black skin with the same equal results as it does when it reflects off white skin?

              No, the adjustment of the CAMERA/RECORDING MEDIUM favors resolution in lighter skin tones, to the detriment of darker skin tones. As another poster noted, white people would be overexposed pale blobs if the adjustment were set to more reasonably capture darker tones.

          4. LybsterRoy Silver badge

            Re: exhibit ingrained racist assumptions in the design

            I think I detect a graduate of CRT

        4. John Brown (no body) Silver badge

          Re: exhibit ingrained racist assumptions in the design

          With modern electronics, is it beyond the wit of engineers to come up with a solution? Non-linear gain, for example? Or is it down to FR software matched with cheap, nasty CCTV cameras instead of the expensive ones used when setting up and/or calibrating it? Are these places buying a "system" or just installing some software onto an existing CCTV system?

          After all, there are some damned good home security systems out there with some very good hi res night vision cameras. And yet, whenever the Police want to trace a suspect, the only CCTV "footage" they show on TV always seems to be grainy, blurry B&W images, often at 10FPS or less.

          I'd be prepared to bet that the source image at this roller rink came from a "bubble cam" where the housing has never been cleaned, up on a wall or ceiling in less than ideal lighting conditions and a very wide angle lens.

          1. Anonymous Coward
            Anonymous Coward

            Re: exhibit ingrained racist assumptions in the design

            It used the same camera to photograph the girl who was banned and the 14-year-old, with what looks like the same lighting.

            It's a close-up of their face that is taken at the same time a temperature reading is taken for COVID reasons.

            The software compared the two pictures.

        5. doublelayer Silver badge

          Re: exhibit ingrained racist assumptions in the design

          "Physics isn't racist, it's simply physics."

          Sort of. Physics isn't racist and nor are these cameras racist; they have no ability to change what they were programmed to do. Two things in this situation are racist though. First is the AI algorithms which have been inappropriately trained such that they're more likely to be incorrect about certain groups. That's not the program's fault as it's just performing mathematical operations, but it is a fundamental inaccuracy. The larger thing, however, is the use of all this stuff. If you use a camera you know won't capture people correctly and feed that information into a model which you know won't judge people correctly, that's a racist act. You are using tools which have the result of creating unjust circumstances, whether that was the explicit goal or not.

          1. LybsterRoy Silver badge

            Re: exhibit ingrained racist assumptions in the design

            I was going to upvote you, but then I started thinking. If it's racist, that implies action taken specifically to achieve the result of incorrectly identifying people of colour. A better explanation would be economics or laziness.

            Think of those instances where "simple" abuse is categorised based on skin colour, sex or (these days) desired gender.

            1. doublelayer Silver badge

              Re: exhibit ingrained racist assumptions in the design

              At some point, laziness is no excuse. Take a related thing. If I install equipment which is faulty and is likely to kill the people who use it, but I don't know that I've done so, that's negligence. Still a crime, but a lesser one. If I know that it's likely to kill but I leave it up, I've committed a larger crime. A thing which is known to cause injustice and is left in place is at least tacit acceptance of those consequences, especially when the alternative, removing the system, is such a cheap and easy action to take.

        6. Cynic_999

          Re: exhibit ingrained racist assumptions in the design

          "

          Physics isn't racist, it's simply physics.

          "

          But can you prove that physics isn't racist?

      3. Cynic_999

        Re: Using it wrongly

        "

        It also relies on cameras, that have also been shown to exhibit ingrained racist assumptions in the design - the cameras pick up white/pale skin traits quite well, but not darker skin where they are not very sensitive.

        "

        This is surely more to do with the fact that visible light is inherently racist, in that it refuses to be reflected as well from dark skin as it does from white skin?

    5. DS999 Silver badge
      Stop

      That's not how they were using it wrongly

      Facial recognition works well for matching a variety of faces against a single face - as in a smartphone that unlocks with your face but not that of others. Apple claims Face ID has a 1 in 50,000 chance of matching someone else's face. You wouldn't trust it with nuclear secrets, but it is fine for its intended purpose.

      The more faces a system is trying to match against, the greater the possibility of a false positive. It is simple math - if they had a system similar to Face ID, then the odds of a false match are one in 50,000 divided by the number of faces in their banned database. If they have booted 100 people over the years, that means about one out of every 500 people will be a false match. You might have that many people through the doors every weekend.

      Imagine using a system like that at a sporting event where you have tens of thousands in attendance - you'd need a system with a single person accuracy of one in a billion to make it feasible to use on such a large scale!

      The percentage confidence, rather than just saying "yes" or "no" like Face ID, adds another wrinkle, as it makes it supposedly a judgment call by humans, but most likely they have a policy like "if it's over 95% then don't let them in". No idea what percentage Face ID uses, but to reach 1 in 50,000 it very likely requires a higher confidence level than 97%.
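      The many-to-one versus many-to-many arithmetic above fits in a couple of lines (the ban-list size is an assumption; the 1-in-50,000 figure is Apple's published Face ID claim):

          false_match_rate = 1 / 50_000   # per single comparison (Apple's quoted Face ID figure)
          banned_faces = 100              # assumed size of the rink's ban list

          # Probability an innocent visitor falsely matches at least one banned face
          p_flagged = 1 - (1 - false_match_rate) ** banned_faces
          print(f"{p_flagged:.4%} -> roughly 1 in {1 / p_flagged:.0f} visitors")

      With a 100-face ban list, that works out to roughly one innocent visitor in 500 being flagged, exactly the scaling described above.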

      1. gnasher729 Silver badge

        Re: That's not how they were using it wrongly

        Face ID uses distance measurement, basically analysing the detailed shape of your face. Skin colour has no effect on it at all. This was demonstrated using models wearing an amount of face paint that would have made them unrecognisable to any human; it didn't affect Face ID.

        1. DS999 Silver badge

          Re: That's not how they were using it wrongly

          We don't know what the roller rink is using, but even if it is a different technology than Face ID the exact same issue with many to 1 versus many to many still applies. I only referenced Face ID since it is well known and Apple has released information about its accuracy.

        2. Eclectic Man Silver badge

          Re: That's not how they were using it wrongly

          "FaceID uses distance measurement, basically analysing the detailed shape of your face. Skin colour has no effect on it at all."

          Then why does current facial recognition technology have so much trouble with non-white faces compared to white faces?

          https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/

          "But Idemia’s algorithms don’t always see all faces equally clearly. July test results from the National Institute of Standards and Technology indicated that two of Idemia’s latest algorithms were significantly more likely to mix up black women’s faces than those of white women, or black or white men.

          The NIST test challenged algorithms to verify that two photos showed the same face, similar to how a border agent would check passports. At sensitivity settings where Idemia’s algorithms falsely matched different white women’s faces at a rate of one in 10,000, it falsely matched black women’s faces about once in 1,000—10 times more frequently. A one in 10,000 false match rate is often used to evaluate facial recognition systems."

          1. DS999 Silver badge

            Re: That's not how they were using it wrongly

            The facial recognition that has more trouble with black faces isn't based on distance measurement; it's based on photos/video.

            The latter is what the roller rink would likely be using, since I doubt they are asking everyone to stand still for a moment while they scan their face. They just have cameras on them as they walk in.

            The problem with black faces is mostly due to lack of training on black faces, but there's no indication that was the case here. Roller rinks are a black culture thing in the US, the overwhelming majority of people through the doors - especially in urban areas - are black. If whatever system they use is able to "learn" after deployment it will quickly get a lot of black faces to train on and that particular bias would be less of an issue. The many to 1 vs many to many thing still would be, however.

  2. Anonymous Coward
    Anonymous Coward

    mulling whether it’s worth suing Riverside Arena or not

    worth suing, metaphorically, because they feel something's seriously fucked up with this society, or literally, calculating the investment to hire a lawyer and potential return, which proves there's something seriously fucked up with this society?

    1. Yet Another Anonymous coward Silver badge

      Re: mulling whether it’s worth suing Riverside Arena or not

      And how much damage to their life, job, housing etc. when the other lawyer has access to any previous police records (thanks to some friends on the force) and the rink hires a PR company to make sure this all gets out on Fox News and social media?

      1. Roland6 Silver badge

        Re: mulling whether it’s worth suing Riverside Arena or not

        Live in fear if you want; given the current Black Lives Matter climate, a lawyer/PR company who does this is being particularly stupid and deserves to have their life, job, housing etc. damaged.

  3. Eclectic Man Silver badge
    Flame

    "If" - Context

    “The software had her daughter at a 97 percent match. This is what we looked at ... if there was a mistake, we apologize for that."

    Again, not actually accepting there was a mistake, that a young girl has been humiliated and denied an enjoyable experience with her friends because of a faulty system. I do hope the parents get appropriate help deciding whether or not to pursue this legally.

    At some point there has to be a legally required standard for facial recognition systems to meet. Claiming that the AI system said it was a 97% match is meaningless without context. What was the AI system trained on? I bet (and this is extremely racist) that if you trained an AI facial recognition system on blond, blue-eyed, fair-skinned boys plus one baboon, and then showed it a picture of any black person, the match would be with the baboon. Conversely, train one on a load of black and Asian people and one polar bear, then show it a picture of a white person with white hair, and it would match with the bear. Unless these systems are trained on an appropriate demographic they should be banned.

    OK rant over

    1. Sam Therapy
      Thumb Up

      Re: "If" - Context

      Entirely this. Facial recognition is known for having problems distinguishing between non-whites. I don't know why - I don't know if anyone does - but knowing the system is badly flawed, yet continuing to use it, is inherently discriminatory against anyone who ain't white.

      1. John Brown (no body) Silver badge

        Re: "If" - Context

        I wonder how well Chinese developed FR works outside China? Likewise any other countries with FR programmes. I'm sure this must be happening all over the world, possibly with similar levels of issues with their minorities or non-locals. After all, white people are the minority in much of the world.

        1. doublelayer Silver badge

          Re: "If" - Context

          It all depends on the size and quality of training data and the effort of the developers. If you start with a nonrepresentative sample that hasn't been cleaned up, pump it through the training process until the percentages are high, then sell the result, you'll get something inaccurate most of the time. If you're selling a product though, the number of photos and high test scores are all you're quoting, so many companies do that.

          With rigorous attention to detail by data scientists and machine learning experts, you could get something which is significantly better. However, it would be a lot more expensive and it would still be wrong often due to unavoidable problems like poor cameras. At this rate, most people have either concluded that they don't want to do something that will never be even close to acceptable or that, if they're going to be inaccurate anyway, no use spending a lot of time trying to improve. And there we are today.

    2. Anonymous Coward
      Anonymous Coward

      Re: "If" - Context

      Again, not actually accepting there was a mistake...

      Well that's hardly surprising. As the parents are considering suing it would be incredibly unwise of the business to make a statement implicating themselves.

      1. Graham Cobb Silver badge

        Re: "If" - Context

        Or, they could consider their reputation as an entertainment venue in that community and just apologise.

        1. Anonymous Coward
          Anonymous Coward

          Re: "If" - Context

          Apologies will follow the lawsuit, should they be found guilty.

          1. Pen-y-gors

            Re: "If" - Context

            Genuine apologies in advance may well save the cost of the lawsuit and the need to apologise in cash afterwards.

      2. veti Silver badge

        Re: "If" - Context

        They could avoid the whole "being sued" experience by apologising and offering some appropriate compensation.

      3. Pen-y-gors

        Re: "If" - Context

        Actually, admitting they were wrong, genuinely apologising (not 'if there was an error we are sorry if anyone has misunderstood' Tory-style apology), and offering some sort of compensation, even if it's only free admission for the next decade, would be an incredibly WISE thing to do. Might even get some good publicity.

        That and reviewing their procedures. Perhaps require the bod on the door to look at a photograph (of a banned person, flagged up by the system) and require them to decide if the customer is the same person. And take responsibility for their decision.

        1. ThatOne Silver badge
          Devil

          Re: "If" - Context

          > And take responsibility for their decision

          You're kidding, aren't you...

          That would require capable bouncers (with a brain!), paid enough to take responsibility and not only tips. Besides, only 1 in 100 of those wrongly turned away will make any bigger fuss, so why bother?

    3. Dave314159ggggdffsdds Silver badge

      Re: "If" - Context

      "Sometime there has to be a legally required standard for facial recognition systems to meet."

      No, we approach that in a different way. It's already illegal to act solely on the basis of a flawed (discriminatory) algorithm.

      The problem is the users completely misunderstanding what the tech is saying. It isn't a 97pc chance of a match, it's a 97pc chance that a human should look and decide if this is the same person as in whatever stored image they're comparing to.

      1. Graham Cobb Silver badge

        Re: "If" - Context

        Yes, it is a major design flaw that the system tells the operator its assessment of the match. It should just display the top three matches from its database, with no further comment, for every visitor and let a human judge if any of them are the same.

        The flawed design is so obvious that the creators of the machine should probably be prosecuted for racism.

        1. John Brown (no body) Silver badge

          Re: "If" - Context

          I doubt the system is in place to match every visitor and flash up possible matches for a human to check. It's far more likely that it's looking at every visitor and only alerting staff when it thinks there's a match. At which point sirens and red flashing lights start up and the suspect is ejected.

        2. Dave314159ggggdffsdds Silver badge

          Re: "If" - Context

          How does displaying matches from the database help? Surely it should only show people who've been banned?

          It seems you're essentially suggesting the old school system of sticking up a row of photos of troublemakers and hoping staff recognise them.

    4. gnasher729 Silver badge

      Re: "If" - Context

      I assume “97% match” means that 3% of all Americans look at least as similar to the photo as she does.

      If they had 50 people banned then most people would be a 97% match to one of them.

  4. Pascal Monett Silver badge

    "that's not right"

    No it is not.

    But that's what you get in a society that accepts flawed results from a technology that has been proven time and time again to not be reliable.

  5. Pirate Dave Silver badge
    Pirate

    Jeez

    I think the big question here is - why does a roller-skating rink have facial-recognition AI? Are they actually just a cover for an underground nuclear missile silo? Or maybe, behind the pinball machines, is the entrance to the Pentagon's Michigan CnC bunker? Or perhaps the Navy keeps its most highly-classified results from the rail-gun trials in locker number 76. I mean, it's a roller-skating rink, not a national security site. Seems a bit paranoid to me (and I'm a bit paranoid myself).

    1. Yet Another Anonymous coward Silver badge

      Re: Jeez

      Because unlike your corner store, a sign saying "no more than 3 black teens" would be illegal. However 'computer says no' is a company policy and out of my hands.

      Ironically, it's to stop employees getting sued if they looked at the photos themselves and said "you look like her, you're banned".

    2. Anonymous Coward
      Anonymous Coward

      Re: Jeez

      I suspect because they decided that

      a) it's cheaper than hiring one or more humans

      b) it's a smart, future-proof move (somebody told them; let me guess who it might have been)

      c) "everybody's doing this!", and lookie here, 97%, it cannot go wrong!

      1. foxyshadis

        Re: Jeez

        Saving labor costs, plus management just hears the sales team say "Now you'll never accidentally let someone banned in to cause trouble again!" Of course, they know nothing about the tech, and sales knows practically nothing about the tech or what false positive means.

        And note this kind of low-end, error-ridden AI is just a module for the security camera system, it's not like a whole new system installed just for this purpose. It's increasingly common for all the major premises security vendors to offer one.

    3. Dave314159ggggdffsdds Silver badge

      Re: Jeez

      The low tech way to do it is to have pictures of troublemakers, make your staff try to recognise them, and act on that basis.

      Here, the computer system is supposed to be doing the first step of 'that looks a bit like this, we should check if they're the same'.

      Turning someone away unless you're sure they're the troublemaker in question is very odd. Normally if you aren't certain you'd let someone in and keep a closer than normal eye on them to start with.

      1. John Brown (no body) Silver badge

        Re: Jeez

        "Turning someone away unless you're sure they're the troublemaker in question is very odd. Normally if you aren't certain you'd let someone in and keep a closer than normal eye on them to start with."

        But but but, won't someone think of the lawyers? If you let in a "known" troublemaker "by mistake" and they get in a fight and hurt someone, it's your fault. This is the Land of the Lawyer we are talking about.

        1. Dave314159ggggdffsdds Silver badge

          Re: Jeez

          Even in the US, you'd have to be really seriously negligent for that to fly.

    4. John Brown (no body) Silver badge

      Re: Jeez

      "Seems a bit paranoid to me (and I'm a bit paranoid myself)."

      The system probably cost less than a security guard's annual salary. After 12 months or less, it's "free". Trebles all 'round, boys!

    5. Anonymous Coward
      Anonymous Coward

      Re: Jeez

      One day a kid is going to get thrown out for fighting and they will want to get even, so they go home, grab a gun and 3-4 magazines with 20 rounds each, and come back. But it's a shift change and the management haven't told everyone, so the kid walks in and kills 10-20 people, wounds another dozen, some with life-long agonizing damage.

      Then you will ask - why didn't they have better security and shouldn't they have done something?

      Guns - because in America there are more firearms in private hands than there are people, and casual insults or even getting cut off in line are used to justify killing.

      The US has a mass shooting almost every day, too many to make the national news.

      1. Anonymous Coward
        Anonymous Coward

        Re: Jeez

        You sound like a psycho.

  6. goldcd

    Bit hard to comment without more details

    Looking at the pictures in the linked story, the girls do look pretty similar, so I could see the same mistake being made without AI.

    (However, I'm white, and we're all notoriously poorer at differentiating people of different races than our own)

    1. katrinab Silver badge
      Paris Hilton

      Re: Bit hard to comment without more details

      They look completely different to me.

      Also white, and a much paler shade of white than the icon.

    2. Roland6 Silver badge

      Re: Bit hard to comment without more details

      >Looking at the pictures in the linked story, the girls do look pretty similar

      Similar in the sense they are both dark skinned and wearing glasses?

      I hope this does go to court, but it needs a decent lawyer and legal team to force the vendor of the facial recognition system to attend and explain in full detail how their system arrived at the 97% match figure, i.e. strip away the cloak of AI mysticism.

      1. MJI Silver badge

        Re: Bit hard to comment without more details

        Hmmm

        Both lit well enough.

        Both wearing glasses, totally different designs.

        Victim girl has a thinner face.

        A 10-second look at a not-very-good photo: definitely not the same person.

        1) Badly trained AI

        2) Idiot staff not checking and seeing two completely different girls.

    3. MJI Silver badge

      Re: Bit hard to comment without more details

      As similar as the two middle-aged, shaved-head-because-going-bald, ex-ginger chaps I know.

    4. Martin

      Re: Bit hard to comment without more details

      Indeed. There is a book written by a black woman (can't remember what it's called, sorry) where the author told the story of when she won a scholarship to a very good all-girls high school, where she was one of the very few black girls there. (She'd previously been at a school which was mainly black kids.) She had no problems about the girls at her new school - they couldn't have been nicer and more welcoming. But she literally couldn't tell them apart - their faces all looked the same to her. She had to use unreliable visual cues like clothes and hair colour.

    5. Gene Cash Silver badge

      Re: Bit hard to comment without more details

      Shoot... I have face blindness... EVERYONE looks the same.

      I've stood there waiting for a friend (of 35 years) to pick me up at the airport, while he's been bemusedly standing right there for 5 minutes.

  7. Potemkine! Silver badge

    AI everywhere, Justice nowhere

  8. Magani
    Unhappy

    Definition?

    AI - Artificial Ignorance.

  9. jpo234

    I'm pretty sure more people are mistakenly turned away every single day by humans than by computers. I get it: this sucks. But it's something that has happened since the first bouncer stood outside the first tavern.

  10. Scott Broukell

    AI - Farcical Recognition

    So, in order for this all singing all dancing wonderful AI technology (that is going to make all our lives so much better), to work - we need to re-introduce segregation!!! Yeah - go human advancement!

    We can at least be grateful that the cameras were not, as yet, attached to mini-gun auto turrets - that will perhaps be a future update!

    1. Eclectic Man Silver badge
      Black Helicopters

      Re: AI - Farcical Recognition

      See: https://www.businessinsider.com/httpswwwbusinessinsideresdrones-reconocimiento-facial-cerca-ser-realidad-812285?r=US&IR=T

      Also the MCU has already covered this. The nice Robert Redford played a character who wanted airborne aircraft carriers with facial recognition guided guns to 'take out' undesirable characters without trial, appeal or much consideration of collateral damage. (Spoiler alert - he gets shot.)

      1. Spacedinvader
        Terminator

        Re: AI - Farcical Recognition

        https://www.youtube.com/watch?v=Hzlt7IbTp6M

  11. vogon00

    Dodgy data quality...

    It's all very well embracing 'big data', but what you end up with is 'data quality' issues at 'big' scale.

    I'm not a fan of AI/ML or facial recognition (I'm a people person and in my 50s)... I should think the false positives generated outweigh the benefits of the bloody stuff. I suppose it has its uses, but until it gets a *whole lot better* there has to be human oversight to exert a degree of common sense on its decisions.

    97% match... just means a 97% match by a bad algorithm on crappy data. I can understand the staff apologizing, but the people who need 'outing' as guilty are the numpties who made the system in the first place. I don't know the details so can't really judge, but the phrase 'not fit for purpose' springs to mind.

  12. Danny 2

    Immigrants

    Me and my mate holidayed with his elderly relative in California in the '80s. She'd moved there from Canada after leaving Scotland. She took us to the largest shopping mall we'd ever seen that had the largest ice rink we'd ever seen in it. There was only one person on it, an angelic seven-ish hispanic lass. She was captivatingly talented and me and my mate were close to tears watching her. At that point the old Scottish woman turned up, glanced at the child and snorted, "Bloody immigrants." No sense of irony or self-awareness.

    She kept on trying to take us to her Scottish highland dancing club. We went once and were appalled at their racism. In California 'Scottish' or 'Irish' is shorthand for whites-only. They considered us 'bad Scots', and we didn't consider them Scots at all. Scottishness, it's more than a porridge thing.

    1. Dave314159ggggdffsdds Silver badge

      Re: Immigrants

      She was right about Scottish culture being an extremely racist one. Highest rate of hate crimes in Europe according to the stats - although of course places like Hungary won't even record them.

  13. Anonymous Coward
    Anonymous Coward

    Congress seems unlikely to act on the issue

    {shocked pikachu face}

  14. John Savard

    Obvious Question

    Since the individual in question was a 14-year-old, it's not as if he would be carrying any identification other than a school ID. So it's not as if he could just have shown them his driver's license to prove he wasn't the individual in question.

    1. Roland6 Silver badge

      Re: Obvious Question

      >it's not as if She...

      Need to retrain your reading and language processing filters; it's not as if the article and news reports weren't clear that the 14-year-old was a "her".

  15. Tron Silver badge

    Sue people in cases like this and they will stop lazily using this AI junkware.

    -Facial-recognition technology is controversial.

    Facial-recognition technology is bollocks.

    FTFY.

  16. Dr Scrum Master

    97%

    97% is like the 98% certainty I remember from an AWS facial recognition demo.

    There was a 98% certainty that the subject was wearing sunglasses. He wasn't. He was indoors with overhead lighting, which obviously led to his brow casting a shadow over his eyes.

  17. nautica Silver badge
    Boffin

    Wait...WHAT ‽ ‽ What's wrong with this picture? [pun is NOT intended]

    “The software had her daughter at a 97 percent match [with absolutely no validity or credibility attached to this figure] . This is what we looked at..."

    So...the software reported a 97% match, eh, and--by inference-- "...this is the ONLY THING we looked at. WE DID NOT, AT ANY POINT, EVER CONCEIVE OF ALLOWING A HUMAN TO BECOME INVOLVED IN THIS PROCESS..."

    +++++++++++++++++++++++++

    "...Juliea and Derrick, are now mulling whether it’s worth suing Riverside Arena or not...". That depends...

    Seems to me that the Robinsons are on their way to a really fat pay-off, if only they will hire a lawyer with a modicum of understanding about statistics AND a real Expert Witness whose expertise is in the field of Statistics. This is one of the more egregious examples of that fact, and, sadly, not the last...by any means.

    The entire field of Artificial Intelligence and Machine Learning is populated by charlatans.

  18. Anonymous Coward
    Anonymous Coward

    What a dystopian world

    Kids need to be scanned by a camera just to go skating.

    Let me off, I don’t like this ride anymore.

  19. Henry Wertz 1 Gold badge

    Human element needed

    As the first poster said, a human element is needed -- and NOT just to say "the computer said there was a match." The honest fact is, in a sense the system is racist -- you're probably going to have something like a 60% match just for having 2 eyes, a nose, and a mouth (maybe 65 or 70% because of the glasses), +10% for a similar hairstyle, +10% for skin tone, maybe another 10% for having vaguely the same head size and shape (i.e. a girlish head); you're then at something like 90% without it meaning much of anything.

    If places are going to use an AI, they really MUST have it so that a match like this makes the operator pay attention, not just go on some result from the system. The system really needs to show a name and photo for the match, and the operator needs to be expected to use them (not rely on some percentage match). It would have been easy enough either to see the photos don't match (... maybe; I suppose it's possible they really are practically a doppelganger for the troublemaker), or to ask "hey, are you (name)?" or "could I have your name please?" and let them in when it's clear they aren't the same person.

    edit: Looked at the photos in TFA. I can see why an AI may have thought they were similar (in particular, they have similar eyeshadow... or possibly some purplish-blue effect in the photos from how the camera and glasses interact... that stands out). But it takes a few seconds of human intervention to see they don't have the same head shape and are not the same person. The owner admitted they just look at the % match and not the photos; that's the issue I'd take up here. If they're going to use an AI, that's a bad way to do it.

  20. 9Rune5

    Good thing kids come equipped with mobile phones these days

    I don't know the area in question, and the kid was relatively old, but if something had happened to the kid because she was unable to reach her parents after being rudely ejected from that place of evil, then...

    1. Eclectic Man Silver badge

      Re: Good thing kids come equipped with mobile phones these days

      Maybe a bye-law allowing any child so accused a phone call to their parents / guardian. The young man who called the police because he thought George Floyd was trying to pass a forged $20 bill has spoken of his guilty feeling that he is in some way responsible for Mr Floyd's death.

  21. chivo243 Silver badge
    WTF?

    Mr. Mackey

    This is bad M'kay... Roller rinks now use facial recog? That's bad M'kay!

    1. werdsmith Silver badge

      Re: Mr. Mackey

      Why bring Milton Keynes into it?

      Oh, because of the Cliff Richard Wired for Sound video?

  22. a_yank_lurker

    Not Ready For Prime Time

    FRS systems have notorious problems with dark-skinned people. This has been widely reported. Part of the issue is the quality of the photos and the skill (or, more accurately, lack thereof) of the 'photographer'. High-quality portrait images require quality gear, good lighting, and a competent person behind the camera. Even if the first two conditions were met, I doubt the rink has a competent photographer on staff. I have my doubts about the gear and the lighting. Plus I have my doubts about the images used to 'train' the system, particularly those of dark-skinned people.

    I have cats who have solid, dark brown fur. If the lighting is not good, the cats' facial details are sometimes hard to discern in a photo. I suspect an FRS system would struggle with an accurate identification. I have good gear and a pretty good idea of what I am doing, but I do not always have the best lighting.

    While I dislike suing someone just because, this case seems to beg for a suit, because the rink's methodology and (mis)use of the technology show a complete lack of understanding of its limits. More accurately, complete stupidity. So the girl resembles someone else - whatever happened to asking for her name?

    1. Anonymous Coward
      Anonymous Coward

      Re: Not Ready For Prime Time

      So clearly what is needed is a supplemental palm-print database. Every customer gets a facial photo taken, and a palm imprint to accompany. Next time AI says "you look like that troublemaker we ejected last week", accused (would-be) customer presses palm onto a reader plate to see if there's a match. No problem there, right?

  23. herman Silver badge
    Black Helicopters

    Multispectral Cameras

    The problem is not with the facial recognition software. The problem is with the bad hardware. A half decent multispectral camera system will work much better.

    1. nautica Silver badge
      Boffin

      Re: Multispectral Cameras

      "The problem is not with the facial recognition software. The problem is with the bad hardware..."

      WRONG !!.. "computers-can-do-anything"-breath (as Johnny Carson might have said).

      THE problem is with all the mindless, room-temperature-IQ, mouth-breathing idiots in the world who have not one iota ("jot", in Britspeak) of critical thinking skills. These numbskulls think that computers and software are the solution to EVERYTHING, and fall head-over-heels for the latest "solution" -- panacea, if you will -- to any problem which presents itself, and for the snake-oil salesmen who are only too ready to take the money of these gullible rubes -- and who, sadly, are most often victims themselves of the same mentality and mind-set.

      Does "Boeing 737 Max" ring a bell?

      "The required techniques of effective reasoning are pretty formal, but as long as programming is done by people that don't master them, the software crisis will remain with us and will be considered an incurable disease. And you know what incurable diseases do: they invite the quacks and charlatans in, who in this case take the form of Software Engineering gurus.”--Edsger W. Dijkstra

      “It is time to unmask the computing community as a Secret Society for the Creation and Preservation of Artificial Complexity."--Edsger W. Dijkstra

      "Originality is no excuse for ignorance."--Fred Brooks

      "I find that the reason a lot of people are interested in "artificial intelligence" is for the same reason that a lot of people are interested in artificial limbs: they are missing one."--David L. Parnas

  24. Anonymous Coward
    Anonymous Coward

    Just stop scanning people!

    OK?

  25. PhilipN Silver badge

    Missing the point ….

    .. that they even kept the miscreant’s (facial) data in a database.*

    Is that allowed?

    Either way it is pretty scary.

    Even leaving aside that the enforcement authorities et al would lose no time in accessing the database - of kids - roller skating! - whenever they wanted to.

    * and the innocent girl’s too.

  26. Sherrie Ludwig

    doppelgangers exist

    ...and this is why no AI face recognition should ever be trusted.

    https://www.youtube.com/watch?v=i1nz5Bkpmcw

    One woman, looking at two of the unrelated look-alikes, said, "the one could do a murder, and the other get nicked for it".
