FYI: You can trick image-recog AI into, say, mixing up cats and dogs – by abusing scaling code to poison training data

Boffins in Germany have devised a technique to subvert neural network frameworks so they misidentify images without any telltale signs of tampering. Erwin Quiring, David Klein, Daniel Arp, Martin Johns, and Konrad Rieck, computer scientists at TU Braunschweig, describe their attack in a pair of papers, slated for presentation …

  1. Anonymous Coward

    Kudos to the boffins

    But why do I get the feeling that this won't be a priority for the companies involved because it costs money and their customers don't understand it?

    1. Pascal Monett Silver badge

      Re: Kudos to the boffins

      Because, down the line, the results will not be optimal and customers will be unhappy ?

      Maybe ?

  2. Richard 12 Silver badge

    We can fix this!

    All we need is some machine learnings to check that the scaling isn't changing the image too much!

    AI to the rescue, something something
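Joking aside, a "did the scaling change the image too much?" check doesn't need machine learning. One simple version (a sketch only — the function names and the threshold are made up for illustration) is to downscale with two unrelated kernels and flag the image if the results disagree badly, since a scaling attack has to target one specific kernel:

```python
import numpy as np

def nearest_downscale(img, out):
    """Downscale by keeping one source pixel per output pixel."""
    h, w = img.shape
    return img[np.ix_(np.arange(out) * h // out, np.arange(out) * w // out)]

def area_downscale(img, out):
    """Downscale by averaging each (h/out x w/out) block of pixels."""
    h, w = img.shape
    return img.reshape(out, h // out, out, w // out).mean(axis=(1, 3))

def looks_tampered(img, out=32, threshold=30.0):
    """Flag images where the two scalers disagree badly (threshold is arbitrary)."""
    a = nearest_downscale(img, out).astype(float)
    b = area_downscale(img.astype(float), out)
    return float(np.abs(a - b).mean()) > threshold
```

On a smooth benign image the two scalers land close together; an image whose sampled pixels have been rewritten makes them diverge and trips the check.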

    1. Anonymous Coward

      Re: We can fix this!

      Probably but at least this explains how the good guys in Person of Interest were able to hide from the bad AI....

  3. potatohead

    So basically using a bad scaling algorithm that introduces tonnes of aliasing can be abused. Answer - don't use a bad scaling algorithm

    1. Anonymous Coward

      Not really. It's worse than that. You can use any scaling algorithm, and as long as the originator of the image knows (or can test) which one, they can craft an image that turns into something different once scaled. It's a product of the mechanism itself: scaling will always give a different result.

      If you don't scale, but instead sample random pixels, that's harder. So that might be a better option: while attackers can still edit groups of pixels or individual pixels, the probability of you landing on a pixel they edited is much, much smaller.
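The AC's point can be sketched in a few lines of NumPy. This is a toy nearest-neighbour scaler, not the code from the paper (which also handles bilinear and bicubic kernels); all names and sizes here are illustrative:

```python
import numpy as np

def nearest_downscale(img, out):
    """Downscale by keeping one source pixel per output pixel."""
    h, w = img.shape
    ys = np.arange(out) * h // out
    xs = np.arange(out) * w // out
    return img[np.ix_(ys, xs)]

rng = np.random.default_rng(0)
source = rng.integers(0, 256, (256, 256), dtype=np.uint8)  # stands in for a cat photo
target = rng.integers(0, 256, (32, 32), dtype=np.uint8)    # stands in for a dog photo

# An attacker who knows the scaler edits ONLY the pixels it will sample:
poisoned = source.copy()
ys = np.arange(32) * 256 // 32
poisoned[np.ix_(ys, ys)] = target

# At most ~1.5% of pixels changed, yet after downscaling the network
# sees exactly the attacker's target image.
changed_fraction = np.mean(poisoned != source)
downscaled = nearest_downscale(poisoned, 32)
```

This is also why the random-sampling idea helps: in this 8x example each output pixel could come from any of 64 source pixels, only one of which the attacker edited, so a randomly sampled pixel hits an edited one with probability roughly 1/64.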

  4. Nuno trancoso


    You know, this s**t is so old news that only people who entered the ML field yesterday would even bother reading it. It's NOT like GANs are anything new, or unknown, or... Long story short: either you're taking your inputs from a known, very strict and narrow, controlled source, or... you train your model to deal with fakes, and then it's an arms race between two models. Nothing to be seen, move along...

  5. lvm

    how they want some attention...

    So they found a bug in an image scaling library, why not call it just that? Noooo, they have to involve AI, ML and other trendy acronyms.

    1. Pier Reviewer

      Re: how they want some attention...

      It’s not a bug. It’s intentional. When you’re scaling an image to a smaller size you lose data, as you are only able to represent a fraction of the original data. You need to decide which parts of the data are more important and which can be thrown away.

      The side effect of this is that in this very particular use case, the classifier can be tricked into classifying an input incorrectly, and human auditing is less likely to detect it (“hey, who flagged this cat as a traffic light?!”).

      Yes, it has limited use at the moment, but when people start selling data sets on a larger scale, and for sensitive use cases, it could become a more significant issue.

      1. Anonymous Coward

        Re: how they want some attention...

        This could already be used as an attack. Existing products offer remote "send your picture for a check" services. We can now assume an attacker could submit a "photo ID" based on their real photo, edited so that when it is checked by AI against a criminal database, it comes back as "John Smith/someone else". Then, once it's checked by a human, would they go with the computer or their own judgement?

  6. harmjschoonhoven

    Does that mean

    a self-driving car can be trained to recognize my face as a stop sign, or is it as likely to recognize it as an end-of-speed-limit sign, as a car with a "gentleman" behind the wheel did when I crossed the street the other day?

    1. Anonymous Coward

      Re: Does that mean

      Yes. Defcon already had a video of this. They put smiley faces in training data and the car's AI read them as stop signs. They suggested making T-shirts with these on, so cars would not bother you when you went for a walk.

  7. J27

    The training data is part of the algorithm... So this doesn't matter much. You don't deploy non-crystallised neural networks anyway. Any neural net will produce bad results if you don't use the right training data.

  8. aks


    Surely this is simple garbage in, garbage out

    1. Andy Non Silver badge

      Re: GIGO

      Exactly what I was going to say.

  9. Roger Kynaston

    classic GIGO

    Or, at least, it looks like that.

  10. Anonymous Coward

    "Will you notice the edits?"

    Yes. For a little while now, my biological neural network has been trained to spot these AI-generated/edited images. However, once you get to well-hidden steganography (the sample images on the site have obvious dithering), there is little the human eye can do to see the edits. Something like Google Deep Dream? Trippy and weird and obvious.
