MIT bods offer PhotoGuard gadget to thwart AI deepfakes

Computer scientists claim they've come up with a way to thwart machine learning systems' attempts to digitally manipulate images and create deepfakes. These days you can not only create images from scratch using AI models – just give them a written description, and they'll churn out corresponding pics – you can also manipulate …

  1. Will Godfrey Silver badge

    MIT Bods?

    Have they been downgraded from Boffin status, or is this a regional thing?

    Oh, and how long will it be before the artificial stupidity folk find a way of defeating this?

    1. Doctor Syntax Silver badge

      Re: MIT Bods?

      "Oh, and how long will it be before the artificial stupidity folk find a way of defeating this?"

      No doubt it's the beginning of an arms race.

    2. Snowy Silver badge
      Coat

      Re: MIT Bods?

Trivial to defeat this: just use photos that are not PhotoGuarded. Currently no image is protected, and even if every image released from now on were, the current unprotected ones are not going away.

      1. Anonymous Coward
        Anonymous Coward

        Weak attack on a weak defense then?

I feel that with another swing you could do better. But yeah, there are plenty of images floating around these days, and once something has been published, that ship has sailed.

This plan won't work, though: the adversary can just recompress the source file and reprocess it. Unless your "protection" scheme degrades the image so much that it's useless, and unless that degraded version is the only one ever released, someone will just pre-process a clean enough copy and make their edits.
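The recompression point is easy to demonstrate in miniature. The sketch below is my own toy illustration, not PhotoGuard's actual pipeline: it adds a tiny per-pixel perturbation to a stand-in "image" and then applies coarse quantization as a proxy for a lossy codec such as JPEG. Most of the perturbation does not survive the lossy pass.

```python
import random

# Toy illustration (not PhotoGuard itself): an adversarial "immunization"
# adds tiny pixel perturbations, and a lossy transform such as JPEG
# recompression quantizes pixel values, wiping most of that signal out.
# Coarse quantization stands in for the lossy codec here.

random.seed(0)

clean = [random.randrange(0, 256) for _ in range(1000)]           # original pixels
perturbed = [min(255, max(0, p + random.choice((-2, -1, 1, 2))))  # tiny tweak
             for p in clean]

def recompress(pixels, step=8):
    """Stand-in for lossy recompression: snap values to a coarse grid."""
    return [round(p / step) * step for p in pixels]

# After the lossy pass, most perturbed pixels collapse onto the same
# quantized values as the recompressed clean image.
surviving = sum(a != b for a, b in zip(recompress(clean), recompress(perturbed)))
print(f"perturbed pixels surviving recompression: {surviving} / 1000")
```

The quantization step and perturbation magnitudes are invented for illustration; the general point is just that a perturbation smaller than the codec's quantization grid mostly vanishes.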

Image signing can tackle this, but it will take some software updates to start red-flagging altered images and warning on unsigned content.
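The signing idea can be sketched in a few lines. This is a hypothetical, dependency-free illustration using a symmetric HMAC over a hash of the image bytes; a real provenance scheme such as C2PA would use public-key signatures so anyone can verify without holding the secret.

```python
import hashlib
import hmac
import os

# Sketch only: the publisher signs a hash of the image bytes, and a viewer
# re-hashes and verifies. HMAC (symmetric) keeps this self-contained; a real
# deployment would use asymmetric signatures plus a trusted key registry.

SECRET = os.urandom(32)  # publisher's key (stand-in for a private key)

def sign_image(image_bytes: bytes) -> bytes:
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(SECRET, digest, hashlib.sha256).digest()

def verify_image(image_bytes: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign_image(image_bytes), signature)

original = b"\x89PNG...raw image bytes..."
sig = sign_image(original)

print(verify_image(original, sig))            # True: untouched image
print(verify_image(original + b"\x00", sig))  # False: altered image
```

Any edit to the bytes breaks verification, which is exactly the red-flagging behaviour the viewer software would surface.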

        Using a GAN to poison files to trick other GANs is never going to be a stable solution in the real world, where any adversary will adapt to your adversarial attack on their system.

  2. Anonymous Coward
    Anonymous Coward

Authority propagation tracking is necessary

The most likely scenario is that the Internet will be flooded with deepfakes soon. It will not be scalable to check and verify all of them.

It is not the images themselves but the authority of the disseminating accounts/sites that really matters. Who said what, who reposted the information, who linked to it, etc.

    PageRank for Search was meant to mimic scientific paper references for authority. But it failed miserably, because manipulated links distort the truth. The manipulation started with SEO, then turned to social media marketing, and ended up with troll farms. In social networks the authority is being proactively falsified with fake accounts, fake likes, reviews, bot traffic etc.

    One could say that the Truth is a network of statements and authoritative references. It is probably futile to approach disinformation through analysis of every piece of information without knowing its origin.

    1. Anonymous Coward
      Anonymous Coward

      On the right track

I'd amend that to Trust, if only because it is a useful and slightly lower bar. Trust and reputation are the key: a signing authority will need to be right and honest an overwhelming majority of the time, and the longer a signer has been honest, the more they have to lose from being wrong.

The signature just validates who signed it; a separate stack needs to track (and penalize) the reputation of the signers for it to work. Otherwise Fox News, or the Weekly World News, will count as a valid signing authority.
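A minimal sketch of what that separate reputation stack might look like; the class name and weighting are invented for illustration, and a real system would need far more (decay over time, dispute resolution, and so on).

```python
# Hypothetical sketch of the "separate stack" for signer reputation:
# trust accrues slowly with verified-correct signatures and collapses
# quickly when a signature is later shown to be false.

class SignerReputation:
    def __init__(self, name: str):
        self.name = name
        self.correct = 0
        self.wrong = 0

    def record(self, was_correct: bool) -> None:
        if was_correct:
            self.correct += 1
        else:
            self.wrong += 1

    @property
    def trust(self) -> float:
        # Wrong calls are weighted far more heavily than right ones, so a
        # long honest record is cheap to keep and expensive to lose.
        total = self.correct + 10 * self.wrong
        return self.correct / total if total else 0.0

signer = SignerReputation("Example Newswire")
for _ in range(100):
    signer.record(True)
print(round(signer.trust, 3))   # 1.0 after a clean record
signer.record(False)
print(round(signer.trust, 3))   # drops sharply after one bad call
```

The asymmetric penalty is the point of the comment above: being wrong once should cost more than being right once earns.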
