
Kudos to the boffins
But why do I get the feeling that this won't be a priority for the companies involved, because fixing it costs money and their customers don't understand it?
Boffins in Germany have devised a technique to subvert neural network frameworks so they misidentify images without any telltale signs of tampering. Erwin Quiring, David Klein, Daniel Arp, Martin Johns, and Konrad Rieck, computer scientists at TU Braunschweig, describe their attack in a pair of papers, slated for presentation …
Not really. It's worse than that. You can use any scaling algorithm, and in theory, as long as the originator of the image knows (or can test) which one is used, they can create an image that is different once scaled. It's a direct product of the mechanism: scaling will give a different result.
If you don't scale, but instead sample random pixels, that's harder. So that might be a better option: while attackers can still edit groups of pixels or individual ones, the probability of you sampling exactly the pixels they edited is much, much smaller (a rough sketch of both points follows below).
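To make that concrete, here's a minimal sketch in plain numpy. Assumptions: a toy nearest-neighbour downscaler stands in for whatever resize call a real pipeline uses, the attacker knows the exact scale factor, and the random-sampling variant is the mitigation suggested above, not anything taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

SRC, DST = 1024, 64          # source / target resolution
STEP = SRC // DST            # nearest-neighbour stride (here 16)

benign  = rng.integers(0, 256, size=(SRC, SRC), dtype=np.uint8)  # what a human reviewer sees
payload = rng.integers(0, 256, size=(DST, DST), dtype=np.uint8)  # what the attacker wants the model to see

def nearest_downscale(img, dst):
    """Keep only one source pixel per output pixel (fixed sampling grid)."""
    step = img.shape[0] // dst
    return img[::step, ::step]

# Attack: overwrite ONLY the pixels the scaler will actually sample,
# i.e. 1 in every STEP*STEP = 256 pixels of the full-resolution image.
attacked = benign.copy()
attacked[::STEP, ::STEP] = payload

print("fraction of pixels modified:", np.mean(attacked != benign))   # ~0.004
print("model sees the payload:",
      np.array_equal(nearest_downscale(attacked, DST), payload))     # True

# Mitigation sketched above: sample a *random* pixel inside each block
# instead of a fixed one, so the attacker cannot predict which pixels
# will survive downscaling.
def random_downscale(img, dst, rng):
    step = img.shape[0] // dst
    row_off = rng.integers(0, step, size=dst)
    col_off = rng.integers(0, step, size=dst)
    rows = np.arange(dst) * step + row_off
    cols = np.arange(dst) * step + col_off
    return img[np.ix_(rows, cols)], row_off, col_off

_, row_off, col_off = random_downscale(attacked, DST, rng)
# An output pixel only lands on an attacked source pixel if both offsets are 0.
hit = np.outer(row_off == 0, col_off == 0)
print("attacked pixels that survive random sampling:", hit.mean())   # ~1/256 in expectation
```

The point being: with a fixed sampling grid the attacker wins by touching roughly 0.4% of the pixels, while with a randomised grid almost none of those edits make it through to the model.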
You know, this s**t is such old news that only people who entered the ML field yesterday would bother reading it. It's NOT like GANs are anything new, or unknown, or... Long story short, either you take your inputs from a known, very strict and narrow, controlled source, or you train your model to deal with fakes, and then it's an arms race between two models. Nothing to see here, move along...
It’s not a bug. It’s intentional. When you’re scaling an image to a smaller size you lose data, as you are only able to represent a fraction of the original data. You need to decide which parts of the data are more important and which can be thrown away.
The side effect of this is that in this very particular use case, the classifier can be tricked into classifying an input incorrectly, and human auditing is less likely to detect it (“hey, who flagged this cat as a traffic light?!”).
Yes, it has limited use at the moment, but when people start selling data sets on a larger scale, and for sensitive use cases, it could become a more significant issue.
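A toy illustration of that "decide what to throw away" point (again plain numpy standing in for a real resize call, so treat it as a sketch, not the paper's method): nearest-neighbour scaling keeps only one pixel per block, which is exactly what makes the attack cheap, while an area/box average folds every source pixel into the output, so the same sparse edits barely register.

```python
import numpy as np

rng = np.random.default_rng(1)
SRC, DST = 1024, 64
STEP = SRC // DST            # 16x16 source block per output pixel

img = rng.integers(0, 256, size=(SRC, SRC)).astype(np.float64)

def nearest(img, dst):
    """One source pixel per output pixel."""
    step = img.shape[0] // dst
    return img[::step, ::step]

def box_average(img, dst):
    """Average every source pixel in each block into the output pixel."""
    step = img.shape[0] // dst
    return img.reshape(dst, step, dst, step).mean(axis=(1, 3))

print("source pixels influencing nearest output:", 1 / STEP**2)   # ~0.4%
print("source pixels influencing box-average output:", 1.0)       # 100%

# Edit only the pixels a nearest-neighbour scaler would sample.
edited = img.copy()
edited[::STEP, ::STEP] = 255.0

print("nearest output shifted by:",
      np.abs(nearest(edited, DST) - nearest(img, DST)).mean())          # ~127 grey levels
print("box-average output shifted by:",
      np.abs(box_average(edited, DST) - box_average(img, DST)).mean())  # ~0.5 grey levels
```

Which scaling algorithm a pipeline uses is therefore a security decision, not just a quality one.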
This could already be used as an attack. Existing products offer remote "send your picture for a check" services. We can now assume an attacker could submit a "photo ID" with their real photo, edited so that when it's checked by AI against a criminal database, it comes back as "John Smith"/someone else. Then, once it's checked by a human, would they go with the computer or with their own judgement?
Yes. For a little while now, my biological neural network has been trained to spot these AI-generated/edited images. However, once you get to well-hidden steganography (the sample images on the site have obvious dithering), there is little the human eye can do to see the edits. Something like Google Deep Dream? Trippy and weird and obvious.