Deepfake attacks can easily trick live facial recognition systems online

Miscreants can easily steal someone else's identity by tricking live facial recognition software using deepfakes, according to a new report. Sensity AI, a startup focused on tackling identity fraud, carried out a series of pretend attacks. Engineers scanned the image of someone from an ID card, and mapped their likeness onto …

  1. cyberdemon Silver badge
    Holmes

    Artificial Mimickry

    Honestly I think the term "AI" should be banned. It misleads the public into thinking that there is some kind of intelligence in the machine.

    But as anyone with a clue knows, these systems are nothing more than statistical regression (multi-dimensional, yes, with lots of fancy optimisation over an enormous dataset to make it good).

    But fundamentally, all they do is try to copy and extrapolate decisions made by actual intelligent beings (humans) based on a big pile of data that represents (what are assumed to be) correct decisions.

    There is no logic behind them. So-called "AI" does not have the power to form IF/THEN/ELSE logical constructs, because it has no cognition. It is simply a guessing machine, and they should be called that: Guessing machines.
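    To make the "guessing machine" point concrete, here is a minimal, hand-rolled sketch (my own illustration, nothing from the article): a one-variable logistic regression that learns to copy human-supplied labels. Note that it never forms an IF/THEN rule about *why* something is fake; it only fits weights and emits a probability.

```python
import math

# Toy "guessing machine": a one-variable logistic regression trained by
# per-sample gradient descent. It has no logic, only weights fitted to
# agree with a pile of labelled examples.
def train(samples, labels, lr=0.5, epochs=2000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # current guess
            w -= lr * (p - y) * x                      # nudge weights toward
            b -= lr * (p - y)                          # the "correct" answers
    return w, b

def guess(w, b, x):
    """Return a probability, not a reason: the machine only guesses."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Labelled pile of data: inputs below 0.5 were judged "fake" (0) by humans,
# the rest "real" (1). The model merely extrapolates those judgements.
xs = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = train(xs, ys)
print(guess(w, b, 0.2))  # low probability of "real"
print(guess(w, b, 0.9))  # high probability of "real"
```

    Flip the labels and the same code will happily "learn" the opposite rule; the machine has no notion of why either labelling is correct.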

    Sure, you could take ten thousand real humans and have them do a hundred thousand Turing Tests, 50% against each other and 50% against deepfakes, and try to make a Turing-tester machine.

    It might (initially) perform very well. But it would still be a guessing machine. One of the humans might have said that she caught out the deepfake because it made some statement that wasn't logically consistent with her question. Even if she could input her insight into the analysis, how does an LSTM-RNN solve for that? All it can do is say "this subject looks somewhat like some of the deepfakes", and it is dead-easy to make a new deepfake that fools it.

    1. cyberdemon Silver badge
      Big Brother

      Re: Artificial Mimickry

      > and it is dead-easy to make a new deepfake that fools it.

      And I will add: At that point it becomes just an endless arms race, where the only way to get ahead is to collect and analyse more and more data, from every human being on the planet, just to make better fakes and better fake-spotters, and better fakes...

      Until the only way to prove that you are human is to authenticate with your cryptographically-secure Human ID issued to you at birth (which may be rescinded at any time for naughtiness, at which point you would be considered a fake by the machines).

      And at that point we will have stepped into George Orwell's most famous dystopia.

      Please authenticate as Human before you can watch your "Sky Glass" Telescreen. (It is still watching you and it already knows exactly who you are, but you must authenticate anyway, in case you have been replaced by a fake)

      Oops, ID check failed. See you in Room 101

      1. jmch Silver badge
        Terminator

        Re: Artificial Mimickry

        "... the only way to prove that you are human is to..."

        Provide a DNA sample?

        (And then where does that leave us with replicants or humanoid robots?)

    2. Martin Gregorie

      Re: Artificial Mimickry

      Correct. All "AI" means at present is 'pattern matcher': some device or program that can report a result as 'matches requirement', 'doesn't match requirement' or, more rarely, 'similar to the required answer', and that cannot explain how it arrived at the answer it provided.
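      A minimal sketch of such a three-verdict pattern matcher (entirely illustrative: the cosine-similarity measure and the thresholds are my assumptions, not taken from any real system). It reports one of the three outcomes above and carries no explanation of why:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return dot(a, b) / ((dot(a, a) ** 0.5) * (dot(b, b) ** 0.5))

def pattern_match(embedding, reference, hi=0.9, lo=0.6):
    """Report a bare verdict with no reasoning attached.
    The hi/lo thresholds are invented for illustration."""
    score = cosine(embedding, reference)
    if score >= hi:
        return "matches requirement"
    if score >= lo:
        return "similar to required answer"
    return "doesn't match requirement"

print(pattern_match([1.0, 0.0], [1.0, 0.05]))
print(pattern_match([1.0, 0.0], [1.0, 0.7]))
print(pattern_match([1.0, 0.0], [0.0, 1.0]))
```

      Ask it *why* two faces "match" and there is nothing to ask: the verdict is just a number crossing an arbitrary threshold.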

      As a result, the "AI" tag is essentially meaningless.

      The 'Artificial Intelligence' designation should only be applied to systems that CAN give a meaningful explanation of why they came to a particular conclusion or recommended a procedure to be carried out.

      However, fat chance of THAT ever happening, thanks to the money being made by selling the current fallible pattern-matching systems to the gullible as 'AI' or, worse, claiming them to be reliable ways to give definitive answers that affect people or control autonomous vehicles, factories and the like.

      1. Anonymous Coward
        Anonymous Coward

        Re: Artificial Mimickry

        "The 'Artificial Intelligence' designation should only be applied to systems that CAN give a meaningful explanation of why they came to a particular conclusion or recommended a procedure to be carried out."

        Easy there, buddy. More than 70% of the global population CAN'T give a meaningful explanation of why they came to a particular conclusion.

        1. cyberdemon Silver badge
          Alien

          Re: Artificial Mimickry

          I'm sure your 70% figure is hyperbole, and the argument still stands: if someone really cannot give a meaningful explanation of why they came to a particular conclusion, then they should not be included in important decisions (such as voting in a referendum or general election, forming part of a jury, or deciding whether a caller is genuine or a fraud), nor, perhaps, should they be included in the definition of "intelligent life".

          1. Flocke Kroes Silver badge

            Re: Artificial Mimickry

            On the other hand isn't this the criterion for being appointed a judge on the US Court of Appeals for the Fifth Circuit?

          2. jmch Silver badge

            Re: Artificial Mimickry

            "If someone really cannot give a meaningful explanation of why they came to a particular conclusion, then they should not be included in important decisions"

            Not all human information processing is done in the brain, nor is all of it done in a way that we can understand. Indeed emotions and 'gut feeling', while much derided from a logical point of view, are ways in which our body is giving us feedback about life situations.

            For example if I get a 'bad feeling' about whether a politician, caller etc can be trusted, I might not know exactly why, but it's highly likely that my body has subconsciously picked up on some signals that do, in fact, indicate that the person is untrustworthy. While it's true that in-depth psychological study could result in untrustworthy humans displaying trustworthy behaviour to fool the 'gut instinct', it's equally true that in many real-life situations we don't have all the relevant information available to be able to make a decision on logical grounds.

            So most certainly, many humans can make decisions without *being able to explain* why, but internally they themselves "know" exactly why. Not being able to explain the 'knowing' doesn't invalidate the decision.

            With 'AI' systems it's different: they cannot explain their decisions, and there is no internal 'knowing' behind them either.

            1. Cav Bronze badge

              Re: Artificial Mimickry

              "Indeed emotions and 'gut feeling'" both take place in the brain.

              "Not all human information processing is done in the brain". Yes, it is.

              "nor is all of it done in a way that we can understand".

              The individual may not understand but "we", as in science, do.

              1. yetanotheraoc Silver badge

                Re: Artificial Mimickry

                "Indeed emotions and 'gut feeling'" both take place in the brain.

                "Not all human information processing is done in the brain". Yes, it is.

                "nor is all of it done in a way that we can understand".

                The individual may not understand but "we", as in science, do.

                Hey there, nice fact-filled rebuttal.

                I thought (sic) at a minimum that some of the information processing involved the senses, so your rebuttal only holds if we think of the nerves in the fingers, to take one example, as being in the brain. In particular, the gut feeling mentioned often takes place precisely in the gut (butterflies, sickness, and other gut feelings). Humans can withdraw their hand from danger before the nerve impulses have had time to make the round trip from the hand to the brain and back to the arm muscles. I vaguely recall a study showing the signal made a shorter trip from the hand to the spine and back to the muscles, in parallel to the trip to the brain. Now if you want to exclude this type of thing from information processing, then please feel free to elaborate.

              2. jmch Silver badge

                Re: Artificial Mimickry

                "Not all human information processing is done in the brain". Yes, it is.

                Erm... citation needed? Maybe it depends on your exact definition of "information processing" - but a whole lot of brain processes are triggered or modulated by hormones and other chemical signals that different parts of the body send in response to external stimuli or other internal nerve or chemical triggers. The brain does an awful lot of stuff, not only logical thought. So maybe 'logical thought' does happen only in the brain, but for me "information processing" is a lot more than logical thought.

                The individual may not understand but "we", as in science, do.

                Absolutely wrong. "We" as in science are very, very far from really understanding at a detailed level how thoughts are constructed and processed in the brain. We have a decent map of which parts of the brain do what, and we can very vaguely (at about an 80% success rate) map a brainwave pattern to a small subset of very specific thoughts (that's what allows, for example, brain control of a very simple computer interface).

          3. Falmari Silver badge
            Devil

            I am not intelligent life ;)

            @cyberdemon: "70% figure is hyperbole". I would call 70% conservative. Who has not, on occasion, known the answer to a question without knowing how they got to the answer?

            I remember doing a maths evening class prior to my degree. When it came to matrix maths, I was able to see the answer. The lecturer opened with a simple 2-by-2 matrix addition and the answer. Then a second was put on the board and we were asked if we knew the answer.

            I knew the answer but could not explain how I got there; I could just see it. After two more simple questions, to check it was not a fluke, the lecturer increased the difficulty of the questions: subtraction, multiplication. All I asked was whether I could have negatives and fractions. After about ten questions I got one wrong.

            How was I doing it? I haven't a clue; my lecturer thought I was able to see a pattern. I think it more likely I was subconsciously remembering maths from school 15 years previously. But however I did it, I cannot explain how I came to the answers. Therefore, I must not be intelligent life.

    3. ThatOne Silver badge
      Megaphone

      Re: Artificial Mimickry

      > Honestly I think the term "AI" should be banned. It misleads the public into thinking that there is some kind of intelligence in the machine.

      Agree, but in this specific case one should also ban "facial recognition" and other biometrics software from holding any important (much less critical) role.

      Biometrics are not a secure way to determine that it is you and nobody else. All right, it might spot that you were supposed to be a little old lady and not a young 7-foot bearded guy, but even then, if the guy puts a picture of you in front of his face he's in, even if the rest of him is still pretty much "not you".

      But well, it's a lost battle: Biometrics are "cool", extremely easy to use, very cheap to implement, and as the article states, "We told them 'look you're vulnerable to this kind of attack,' and they said 'we do not care,'". Sure, why would they? It's cool, man! The suckers love it, because it relieves them from making any effort, it's not like you can forget your face.

      Now obviously someone will jump in to ask what the difference is between using "123456" as a password and using your face. Well, the difference is that in the first case *I* can choose to use a more secure password, while in the second case I can't choose a more secure face. Simple.

      1. Tom 7

        Re: Artificial Mimickry

        As someone who has had his face rearranged by a pavement after a few drinks, and who knows a few people who have had facial injuries through car accidents and muggings, I'd be disinclined to use facial recognition for anything more important than a piece of paper.

      2. cyberdemon Silver badge
        Devil

        Re: Artificial Mimickry

        "AI"-based authentication is also a great way to embed deniable backdoors in all kinds of software, as reported in an earlier Reg article.

      3. Alumoi Silver badge
        Joke

        Re: Artificial Mimickry

        Ever heard of plastic surgery?

      4. Version 1.0 Silver badge

        Re: Artificial Mimickry

        I always see the term "AI" as meaning Artificial Idiot; to say it's Artificial Intelligence effectively misleads the public into thinking that there is no stupidity in the machine. Let's move to reality and call it Artificial, Realistic, Stupidly Efficient Intelligence.

        No, I'm not joking. These days we just call the programming results "AI" (Windows, Android, etc.) and are busy updating everything all the time, with each version claiming to be a bug-free enhancement, yet it's updated again every week or two. So our AI is just a guess, based on other guesses.

    4. vtcodger Silver badge

      Re: Artificial Mimickry

      "Artificial Stupidity" might well be more accurate, but I guarantee you that the Artificial Stupidity label will never make it past the folks in marketing. At least not until they are replaced by AI agents.

    5. Cuddles

      Re: Artificial Mimickry

      "these systems are nothing more than statistical regression"

      Indeed. Which makes this news read a little oddly. I was wondering how exactly they were planning to generate all this fake data, and the linked WSJ article provides the answer: "Anil Bhatt said the plan is to use algorithms and statistical models to generate approximately 1.5 to 2 petabytes of synthetic data". So they're going to use algorithms and statistical models to generate the data with which to train their algorithms and statistical models. A big old circle-jerk with no hint of reality allowed to intrude at any point. They go on to say that this is how they expect everyone to do "AI" in the future.
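      The circularity is easy to demonstrate with a toy sketch (my own illustration, not the pipeline described in the WSJ article; all numbers are invented): fit a statistical model to a handful of real points, sample "synthetic data" from it, then train the next model on that.

```python
import random
import statistics

random.seed(42)

# Model #1: fit a Gaussian to a small (and therefore biased) real sample.
real = [1.0, 1.2, 0.8, 1.1, 0.9]
mu = statistics.mean(real)
sigma = statistics.stdev(real)

# Generate "synthetic data" from model #1.
synthetic = [random.gauss(mu, sigma) for _ in range(10_000)]

# Model #2 is trained only on model #1's output...
mu2 = statistics.mean(synthetic)
sigma2 = statistics.stdev(synthetic)

# ...so it can only ever recover model #1's view of the world: whatever
# bias or blind spot the first fit had is now baked in as ground truth.
print(mu, mu2)
print(sigma, sigma2)
```

      No matter how many petabytes of synthetic samples you draw, model #2 converges on model #1, not on reality.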

  2. vtcodger Silver badge

    The Shaman was Right

    Those damn camera thingees can in fact steal -- if not your soul -- your wealth.

    BTW, anyone remember the MythBusters' successful attack on fingerprint scanners? https://en.wikipedia.org/wiki/MythBusters_%282006_season%29#Fingerprint_Lock My guess is that fooling facial recognition is going to be easier than fooling a fingerprint sensor, at least for the foreseeable future.

    And it's not like many, probably most, smartphone users haven't cleverly posted pictures of their face online.

    1. cyberdemon Silver badge
      Pirate

      Re: The Shaman was Right

      Many banks already use "voice recognition" for authentication of their telephone banking system.

      Does anyone remember the 2001 computer game "Uplink"?

      All you had to do was call up the victim, record their voice saying "Hello?.. Hello?? Who is this?" and that was enough to generate a voice-print response to authenticate with their bank and rob them.

      1. Tom 7

        Re: The Shaman was Right

        I recently got given steroids for a lung condition. I had to retrain a voice translation system I'd been playing with.

        "These things don't change", except when they do.

  3. Anonymous Coward
    Anonymous Coward

    This is the best biometric news I've heard. Now "they" won't need to cut our fingers off to steal our cars and crypto.

    1. Anonymous Coward
      Anonymous Coward

      facial recognition

      Thug 1: "Dude! Why'd you cut off his face?!!"

      Thug 2: "So we can steal his money and his car."

      Thug 1: "You could have just taken his picture."

      Thug 2: "Oh. Din't think of that."

    2. ThatOne Silver badge
      Devil

      "Stolen phone, sold with finger to unlock it"
