Top facial recognition algo joins the dots and sees pretend people

How much like a face does an image have to be, to trick the standard Voila-Jones facial recognition algorithm? Not very much, it turns out. Two researchers from the University of California, Berkeley, have spoofed the algorithm into recognising a handful of dots, barely recognisable as an image, as a human face. Another image …

COMMENTS

This topic is closed for new posts.
  1. frank ly

    So

    What is it that I do, very quickly, as a human, which tells me that those are not faces? Maybe it's that those images have no similarity to any face I've ever seen, have no standard facial features, etc.

    1. Brewster's Angle Grinder Silver badge

      Re: So

      The right one of the four looks like a face.

  2. Magani
    Coat

    Daydreaming AI?

    So does this algorithm sit back in a deck chair and look at the clouds to see what shapes it can find?

    Also, is "Voila-Jones" the French cut-down version of "Alas Smith and Jones"?

  3. heyrick Silver badge

    Mmm

    So maybe this helps explain the bizarre random things blurred in Google Street View that are not signs and not faces, but more like "half a bush" or "front wheel of motorbike"?

  4. brotherelf
    Trollface

    And then, of course,

    there was the bloke who got the EURion constellation tattooed on his face and then couldn't get a passport any more, since the digital imagery in the production process didn't like it.

    (I made it up, but I wouldn't be surprised.)

    1. caffeine addict

      Re: And then, of course,

      This sounds like it needs to be tested. EURion is detected by scanners, printers and maybe things like FB algos, yes?

      I wonder if I can get it on a t-shirt...

  5. John Smith 19 Gold badge
    WTF?

    So is this a facial recognition algorithm or *the* state-of-the-art (SoA) algorithm?

    Pretty clear why the FBI shut down its facial recog programme after decades if this is the top score, is it not?

  6. Al fazed
    Unhappy

    Hmmm

    I wonder if this algorithm is any relation to the one the Brit plod are using to find illegal child porn images once deleted from a suspect's hard drive? A 12% failure rate has been observed. One image that had been accepted as being in the illegal category just looked like a piece of sackcloth, but then so does the Turin Shroud.

    That's not Alan Turin by the way.

    What with bad algorithms in the MRI software, the police forensics software, network security software and financial services software, and scientists being constantly abused by Excel spreadsheets and Windows 10, it's a good job that all our ..............

  7. JimmyPage
    Stop

    Like the self-driving cars story yesterday - we need to be BETTER than humans

    Humans also make mistakes in recognition - we are pre-programmed to seek out familiar shapes in randomness, which is why people have car accidents when they "think" they saw a person or animal where there was only a shadow on the road.

    Seems people are finally realising that it's not good enough to *replicate* human abilities.

    We need to *exceed* them.

    By quite a margin it would seem.

  8. Cuddles

    Worse or just different?

    Humans are very well known for seeing patterns where none exist, with faces being one of the most common things for us to see - in bits of cloth, toast, the Moon, blurry photos, and so on. So the question is not whether this algorithm has a huge problem because it can be fooled by images that don't actually contain faces, but rather whether the algorithm is worse overall than a human who would be fooled by different images that don't actually contain faces. It may be obvious to a human that this random pattern of dots isn't a face, but it may be equally obvious to the algorithm that Jesus doesn't appear on toast.

    As for spoofing security systems, that may be the stated goal but it doesn't even seem to be a related problem. The Viola-Jones algorithm is not a facial recognition algorithm as stated in the article, it is a face detection algorithm, as is clearly stated in the first link the article provides. It uses a few very simple filters to determine whether there is something present that looks a bit like a face. While it is surprisingly good at doing so given its simplicity, it's not possible even in principle for it to distinguish between different faces - either it says there's a face present or not, and that's it. Spoofing a security system would require looking at the actual facial recognition algorithms used in them, and this research isn't likely to help with that at all.
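    For anyone curious, the "few very simple filters" are rectangle ("Haar-like") features evaluated cheaply via an integral image. A minimal toy sketch of the idea follows - this is illustrative only, and omits the trained thresholds and cascade structure of the real algorithm:

    ```python
    # Toy sketch of a Viola-Jones-style rectangle feature.
    # Not the real trained detector - just the integral-image trick
    # that lets any rectangle sum be computed in four lookups.

    def integral_image(img):
        """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
        h, w = len(img), len(img[0])
        ii = [[0] * w for _ in range(h)]
        for y in range(h):
            row_sum = 0
            for x in range(w):
                row_sum += img[y][x]
                ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
        return ii

    def rect_sum(ii, x, y, w, h):
        """Sum of pixels in the rectangle with top-left (x, y), width w, height h."""
        a = ii[y + h - 1][x + w - 1]
        b = ii[y - 1][x + w - 1] if y > 0 else 0
        c = ii[y + h - 1][x - 1] if x > 0 else 0
        d = ii[y - 1][x - 1] if x > 0 and y > 0 else 0
        return a - b + d - c

    def two_rect_feature(ii, x, y, w, h):
        """Bottom half minus top half: large positive value means the
        top band is darker, e.g. an eye region above brighter cheeks."""
        top = rect_sum(ii, x, y, w, h // 2)
        bottom = rect_sum(ii, x, y + h // 2, w, h // 2)
        return bottom - top
    ```

    The real detector chains thousands of such features (with learned thresholds) into a cascade, but each one is this crude - which is exactly why a carefully placed handful of dots can satisfy them all. Note the output is only "face here or not"; nothing in this machinery identifies *whose* face it is.
    
    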

    Finally, on this point:

    "These images only worked if fed directly to the algorithm; if they were printed out and scanned into the algorithm, they failed."

    The images in question were never printed. This should be obvious from the statement noting ways the physical world changes them - printing and scanning does not brighten the centre of an image. The physical test was to display the images on a tablet with a webcam pointed at it. Sadly, it appears to be the paper that is misleading here; the abstract claims that

    "we show that it is possible to construct images that fool facial detection even when they are printed and then photographed"

    but in fact they do no such thing. The only references to printing in the body of the paper are either hypothetical or a reference to other works. The only thing this paper addresses is the "simulated physical world" using tablet and webcam. There is not even any attempt to address the similarities or differences between this method and printing. Since this is just a pre-print, hopefully this will be caught by peer review before actual publication. It seems likely it's not being deliberately misleading, but simply that the scope of the final paper didn't quite match what was originally intended when the abstract was written.

    1. Jason Bloomberg Silver badge

      Re: Worse or just different?

      "Humans are very well known as seeing patterns where none exist"

      In the "Look closely" example I could clearly 'see' two eyes and a nose in the left image when I first looked. We take what clues there are and extrapolate from there, and I expect that is also what 'fools the algorithm'. I can, however, check further; an algorithm cannot, unless designed to do so.
