Watermarking AI images to fight misinfo and deepfakes may be pretty pointless

In July, the White House announced that seven large tech players have committed to AI safety measures, including the deployment of watermarking to ensure that algorithmically-generated content can be distinguished from the work of actual people. Among those giants, Amazon, Google, and OpenAI have all specifically cited …

  1. FF22

    A stupid idea

    Marking anything AI-generated, regardless of how it's done, is a stupid idea to begin with: it's the equivalent of the evil bit (RFC 3514). Besides, anybody can run a generative AI on their own computer (political parties and large corporations even more so, and they can even build their models from scratch) and circumvent the flagging at the source in the first place.

    If anything, what could work is the opposite: marking reliable and verified content as such. And not with some kind of fingerprint, watermark or flag, but by establishing something like a chain of custody for evidence. That would let you trace information back to its original source, compare how it was altered or preserved as it passed through various channels or people on the way to you, and identify the culprit where manipulation occurred (whether intentional or unintentional). This could be done with digital signatures over digital signatures. The technology has been available and well established for decades; you'd only need to slap a very thin application-specific layer on top.
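    A minimal sketch of that signatures-over-signatures idea, with HMAC standing in for the asymmetric signatures a real custody chain would use (all keys, names and the message layout below are made up for illustration, not any real protocol):

```python
import hashlib
import hmac

def sign(key: bytes, payload: bytes) -> bytes:
    # Stand-in for a real public-key signature (e.g. Ed25519).
    return hmac.new(key, payload, hashlib.sha256).digest()

def add_hop(chain: list, key: bytes, content: bytes) -> list:
    # Each handler signs the content *and* the previous signature,
    # binding the hops together into an ordered custody chain.
    prev_sig = chain[-1][1] if chain else b""
    return chain + [(content, sign(key, content + prev_sig))]

def verify(chain: list, keys: list) -> bool:
    prev_sig = b""
    for (content, sig), key in zip(chain, keys):
        if not hmac.compare_digest(sig, sign(key, content + prev_sig)):
            return False  # tampering at (or before) this hop
        prev_sig = sig
    return True

photo = b"original image bytes"
k_photographer, k_editor = b"key-A", b"key-B"

chain = add_hop([], k_photographer, photo)
chain = add_hop(chain, k_editor, photo)  # editor passes it on unchanged
assert verify(chain, [k_photographer, k_editor])

# Any alteration after signing breaks verification at that hop,
# which is exactly what exposes the culprit:
tampered = chain[:1] + [(b"doctored image bytes", chain[1][1])]
assert not verify(tampered, [k_photographer, k_editor])
```

    The point is that each hop's signature covers the previous one, so the chain identifies not just the origin but the first link where content and signature stop matching.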

    That being said, the real problem is that the Average Joe generally doesn't care whether some info is from a reliable source, or whether it's true. He cares more about whether it fits his world view, and anything that doesn't fit, his mind will reject as some kind of conspiracy or "fake" information, even when it's actually the verified truth and objective reality. So in the end this is more a problem with the human psyche, which unfortunately cannot be fixed by technological means, only by education and training.

    Then again, politics will not allow the latter, because then politicians would also lose the ability to manipulate their voters and public opinion in general, which is the very and only thing that keeps their sorry, incompetent and corrupt asses in power.

    1. Dinanziame Silver badge

      Re: A stupid idea

      I think that's a bit negative. At the very least, it should be possible to prove in some cases that an image was AI generated by a specific system. I'm unconvinced by the authors' claim that they can modify an innocent image to make it seem AI generated. It seems easy enough for watermarks to be cryptographically signed.

      1. StillBill
        FAIL

        Re: A stupid idea

        So where is the line drawn? The editing features in Photoshop tend to be labelled as "AI". I can add, remove, move and edit things in Photoshop through an AI interface. Does that mean the result needs to be watermarked? At what point do I go from a human edit to an AI edit? What if I use AI image creation, then a human editor adjusts and modifies it - changes lighting levels and colour balance, crops it, rotates it - at what point does it become a human edit of an AI creation?

        Now for the fun of cryptographically signing the image. The first question is: with what key, and why should I trust it anyway?

      2. Michael Wojcik Silver badge

        Re: A stupid idea

        it should be possible to prove in some cases that the image was AI generated by specific systems

        OK, so propose a system for doing so. Is it robust against digital image manipulation, including noise addition and removal, blurring and sharpening, cropping, etc? Is it robust against the classic point-a-camera-at-the-screen technique? Remember that the same approaches used to create imperceptible watermarks are often useful for invalidating imperceptible watermarks, and removing perceptible watermarks by interpolating a close-enough match to the original image data is something machine-learning approaches are already good at.

        A cryptographically signed watermark is easily defeated by trivially altering it. If the watermark detector ignores watermarks that fail signature verification, then you just alter the watermark in a genuinely watermarked image and, presto, it's no longer detected as watermarked.

        You can't make the watermark harder to spoof without making it easier to break. That's the whole point of the paper.
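        The alter-the-watermark bypass described above fits in a few lines. In this sketch HMAC stands in for a public-key signature and a plain byte suffix stands in for an imperceptible watermark; the names and layout are illustrative assumptions, not any vendor's scheme:

```python
import hashlib
import hmac

KEY = b"generator-signing-key"  # made-up key for illustration

def watermark(image: bytes) -> bytes:
    # Append a signed "AI" mark: 2-byte tag + 32-byte MAC over the image.
    mark = b"AI" + hmac.new(KEY, image, hashlib.sha256).digest()
    return image + mark

def detector_says_ai(blob: bytes) -> bool:
    image, mark = blob[:-34], blob[-34:]
    if not mark.startswith(b"AI"):
        return False
    expected = hmac.new(KEY, image, hashlib.sha256).digest()
    # Marks that fail signature verification are ignored...
    return hmac.compare_digest(mark[2:], expected)

fake = watermark(b"deepfake pixels")
assert detector_says_ai(fake)

# ...so flipping a single bit of the mark makes the fake look "clean":
broken = bytearray(fake)
broken[-1] ^= 1
assert not detector_says_ai(bytes(broken))
```

        The signature stops anyone *forging* a valid watermark, but does nothing to stop anyone *destroying* one, which is the asymmetry the comment describes.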

      3. Donn Bly

        Re: A stupid idea

        Sure, you can cryptographically sign a watermark. All that would establish is that the image was watermarked by a specific entity, at a specific time and place. Like an SSL certificate, you would rely on the authority and reputation of the signer. But there isn't just one entity that would be doing the signing, or even dozens. Because of the proliferation of the technology you have MILLIONS of potential generators, and thus millions of potential signers. Relying on a cryptographically signed watermark would be like relying on a self-signed certificate - it would prove that the image is watermarked but would NOT prove whether the source was legitimate, or whether it was AI generated.

        If you can inject a detectable watermark, then I can build something that detects it. If I can detect it, then I can make subtle changes to the source to corrupt, obscure, or entirely remove that watermark until it is no longer detectable. That completely negates the idea that an image without a watermark wasn't generated by AI. Even a visible watermark, like those on the comp images from any stock photography outlet, can be obscured so that you don't know the source of the image. Invisible watermarks are even easier.

        Conversely, I can take an existing image and watermark it. As mentioned above, you have millions of potential generators and signers. My camera doesn't watermark its images, so the existence of a watermark, or the lack of one, on an image I publish does not in any way change the underlying fact of whether my original photo was created by me. A watermark just attests to the claim of whoever is signing it.
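        The detect-then-remove point above can be illustrated with a toy least-significant-bit watermark (the embedding scheme here is an assumption chosen purely for illustration; real imperceptible watermarks are more sophisticated, but the same logic applies):

```python
import random

def embed(pixels: list, mark_bits: list) -> list:
    # Hide one watermark bit in the LSB of each leading pixel.
    marked = [(p & ~1) | b for p, b in zip(pixels, mark_bits)]
    return marked + pixels[len(mark_bits):]

def detect(pixels: list, mark_bits: list) -> bool:
    return [p & 1 for p in pixels[:len(mark_bits)]] == mark_bits

random.seed(0)
image = [random.randrange(256) for _ in range(64)]  # toy 8-bit pixels
mark = [1, 0, 1, 1, 0, 0, 1, 0]

marked = embed(image, mark)
assert detect(marked, mark)

# A "subtle change" -- shifting every pixel by +/-1, far below anything
# the eye would notice -- wipes the watermark completely:
noisy = [p + 1 if p < 255 else p - 1 for p in marked]
assert not detect(noisy, mark)
```

        Anything robust enough to survive that kind of perturbation tends to become detectable (and hence forgeable) by the same techniques, which is the trade-off the paper formalises.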

      4. garwhale

        Re: A stupid idea

        What's to stop people changing or generating an image and then (re-) signing it? Either signing will be automatic (and thus useless) or it will not be used. If my phone automatically signs all photos, what's to stop me photographing an AI-generated image?

    2. veti Silver badge

      Re: A stupid idea

      So, in the end this is more a problem with the human psyche, which unfortunately can not be fixed by technological means, only by education and training

      People have been trying to "fix the human psyche by education and training" for thousands of years, and no-one has cracked it yet. How do you suggest we try to improve on that track record?

      1. Michael Wojcik Silver badge

        Re: A stupid idea

        Specifically, people will believe what they want to believe. A perceptibly-watermarked deepfake video that "proves" some conspiracy theory will be hailed by believers regardless of the watermark. They'll just say the watermark was added afterward by the Forces of Evil.

        It's a small minority who even explicitly attempt to evaluate evidence and account for their own biases when considering arguments, and all the evidence from methodologically-sound psychological studies (and history) supports the claim that no human can be perfectly vigilant, or even mostly vigilant, when it comes to doing so.

  2. StrangerHereMyself Silver badge

    Guarantee

    The best guarantee is to deny copyright on all works made by AI. Authors and artists will shun AI if they know their created works are public domain the moment someone discovers them to be generated.

    1. Falmari Silver badge

      Re: Guarantee

      @StrangerHereMyself "The best guarantee is to deny copyright on all works made by AI. Authors and artists will shun AI if they know their created works are public domain the moment someone discovers them to be generated."

      Guarantee of what? How would denying copyright on all works made by AI fight misinfo and deepfakes? The creators of misinfo and deepfakes are not going to be claiming authorship and copyright.

      1. StrangerHereMyself Silver badge

        Re: Guarantee

        Because this isn't about misinfo and deepfakes. This is about artists and creators flooding the world with cheap generated artworks and selling them.

        1. Falmari Silver badge

          Re: Guarantee

          But the article is about misinfo and deepfakes, it is not about artists and creators generating artworks with AI. So you can see my confusion.

        2. that one in the corner Silver badge

          Re: Guarantee

          > This is about artists and creators flooding the world with cheap generated artworks and selling them.

          The article is about people who are NOT artists generating fakes and about "creators" creating deepfakes and misinformation.

          Feel free to change the subject (something that never happens around here, cough cough) but it's best to announce that that is what you are doing, instead of just flatly denying the thrust of the original article!

          PS: dunno why everyone keeps calling these things "cheap". I am barely artistic, but it is still easier (and a lot more fun) doing the art bit for real than spending all that time trying to get the prompts to work! It may be easier than learning drawing techniques, but it still isn't "cheap" - and you still just end up with a digital image, not a physical drawing or painting - which is all the more relevant to disputing the "artistic knockoffs" claims!

        3. TheMaskedMan Silver badge

          Re: Guarantee

          "This is about artists and creators flooding the world with cheap generated artworks and selling them."

          And this is a problem, why? Personally, I quite like making pretty pictures with midjourney etc. I have the artistic talent of a brick, and it's quite nice to be able to make the kind of pictures that I would make by hand if only I wasn't incompetent. It's also quite fun to twiddle with prompts and learn how to do things.

          Are you suggesting that people shouldn't be allowed to do that because they're not "artists"? Or that they shouldn't be allowed to sell a pretty picture they made that way because they're not "artists"? That's like saying that people shouldn't be allowed to use a word processor unless they're a qualified typist, much less profit from their two-fingered key pecking. If the "artist's" work is indistinguishable in terms of quality from that of generative AI, perhaps it's an opportunity for the "artist" to up their game and show us what they can really do.

    2. chuckufarley Silver badge
      Headmaster

      Re: Guarantee

      I am a little bit of an artist. I don't get paid for it, I'm not very good at it, but I really enjoy it. I think AI "art" generation is here to stay. It is not only fun (when it's not frustrating, as you learn what to do and what not to do) but it also provides a unique form of inspiration because of the so-called emergent qualities of AI. The AI can do math with artistic styles. I can take Norman Rockwell and subtract the impressionism, multiply it by 0.47 Rembrandt, and add a touch of CGI to see an artistic style no human has ever created before. Just doing that is simple compared to what is possible with the current state of the art, especially if you are willing to take the surprisingly small amount of time required to learn how to train one of the many "helper AIs" that SD supports.
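      That "math with artistic styles" is, loosely, weighted vector arithmetic on style embeddings, which is roughly how diffusion models combine prompt concepts internally. The vectors, dimensions and weights below are made-up assumptions purely to illustrate the idea:

```python
# Pretend each style is a 3-dimensional embedding, e.g.
# (realism, warmth, texture) -- entirely invented numbers.
rockwell      = [0.9, 0.2, 0.1]
impressionism = [0.1, 0.3, 0.8]
rembrandt     = [0.7, 0.6, 0.4]
cgi           = [0.5, 0.0, 0.0]

# "Rockwell minus the impressionism, times 0.47 Rembrandt,
# plus a touch of CGI" as elementwise arithmetic:
blend = [r - i + 0.47 * rb + 0.1 * c
         for r, i, rb, c in zip(rockwell, impressionism, rembrandt, cgi)]

# The result is a new point in style space that no single
# artist's work ever occupied.
assert len(blend) == 3
```

      Real systems do this in hundreds of dimensions with learned embeddings rather than hand-picked numbers, but the arithmetic is the same shape.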

      I am not too worried about starving artists. What worries me is that AI is a tool, and all tools can be weaponized. I feel the greatest threat from AI is the untested and heretofore unthought-of applications of it. So in my opinion the best thing the AI devs could do for the world is spend every other day thinking of all the ways to abuse what they are writing. They shouldn't try to build in safeguards but instead try to build models that don't need safeguards. While that may be impossible, it is at least more worthwhile than the relentless pursuit of profits.

      1. vtcodger Silver badge

        Re: Guarantee

        "I am a little bit of an artist. I don't get paid for it, I'm not very good at it, but I really enjoy it ..."

        Thanks, I've been looking for even one example of a use for AI that isn't on the ongoing criminal activity side of shady. Yours is the first I've seen.

        1. garwhale

          Re: Guarantee

          AI used to find medicines more quickly, AI used to help analyse medical imaging, AI for earthquake prediction, ...

    3. Michael Wojcik Silver badge

      Re: Guarantee

      Yes, that will work. People consistently avoid doing anything fraudulent. Students never cheat. No one lies on their tax returns, or exceeds the speed limit when driving. It's impossible to find artists who engage in any sort of malfeasance.

      I'm also curious how this dictum would be enforced. How does someone "discover" that work X was machine-generated? How much of the creation process has to involve "AI" for it to fall foul of this rule? What counts as "AI"? What evidence does the offended party present to the Copyright Police? How does this move through the courts? Does some member of the work's audience even have standing to bring an action? How were they harmed? What recourse is available to an artist falsely accused?

  3. chuckufarley Silver badge
    Coat

    As a point of order...

    ...You should know that the most popular Stable Diffusion clients offer a toggle for watermarking their AI generated images. Toggling this off is a violation of Stability AI's EULA but most users think "Hey, it's only a EULA."

    1. stiine Silver badge
      Unhappy

      Re: As a point of order...

      So, if you're generating an image to be used as wall art, do you really want the grayscale equivalent of GETTY IMAGES patterned across the art?

      1. chuckufarley Silver badge

        Re: As a point of order...

        Did you or did you not agree to the EULA? If you aren't going to honor your word for something as trivial as wall art, then how can I trust you with the important things?

  4. johnrobyclayton

    Only trust what you know, Know who signs

    Deepfakes are an issue only for the stupid who believe what they see, and for those affected by the consequences of the stupid believing what they see.

    Deepfakes are just visual candy. There is no need to go around believing them.

    It is a choice to only believe images that have been digitally signed with a modern, robust encryption scheme.

    It is a choice to only make decisions based on data whose provenance can be proven to come from a known, reliable source.

    Digital signatures all the way.

    If AI can defeat modern robust encryption then we have bigger problems than deepfakes.

    1. munnoch Bronze badge

      Re: Only trust what you know, Know who signs

      I'm not a particularly arty person, but in my view art is more than just a pleasing arrangement of colours and patterns; it has to be an expression of some sort of inner feelings. The machines can for sure produce something that has the aesthetic quality of 'art', but it will be devoid of the emotion.

      Now, granted there are plenty of 'jobbing' artists who are happy to churn out any number of variations on the same old theme and they probably find enough customers to put bread on the table. The artistic equivalent of web developers if you like... I'm no more interested in paying them for their efforts than I am the AI's.

      Anyway, putting existential debate aside, any control that relies on voluntarily adding metadata clearly isn't going to work and isn't worth even considering.

      "Is that a genuine Ming vase?"

      "Why certainly it is, sir, as you can plainly see it DOESN'T have FAKE stamped on the bottom."

    2. AVR Bronze badge

      Re: Only trust what you know, Know who signs

      Your decisions aren't binding on other people; they may well be fooled by deepfakes. You may be affected if that results in someone you consider to be the 'wrong' politician being elected, or an entertainer you like getting shitcanned, or a friend or relative getting faked porn put up about them even if you wouldn't be bothered by that happening to you. And for that matter if most images aren't signed (likely), you may call something a fake when it's actually real.

      Basically digital signatures can only ever be a partial solution.

  5. Tim 11

    human = good, AI = bad?

    I'm a bit baffled by most of the comments on here - they seem to be based on an underlying assumption that stuff generated by humans is genuine/trustworthy/high quality and what's created by AI is fake/low quality. This is obviously wrong - people have been producing fake/low quality content for years (whether deliberately or through inability) and will continue to do so.

    Granted, at the moment a lot of AI output is low quality, but in time that will improve, and when we get to the point where AI can reliably generate higher quality content than people in a specific area, then I'll be happy to switch.

    I don't see why we need to distinguish between human and AI at all. There's a lot of content out there, of varying quality, some generated by people and an increasing amount by machines. It's up to us as consumers to navigate that minefield and select appropriate content to consume. Whether it's produced by a human or a machine or a combination will become increasingly irrelevant.

  6. mark l 2 Silver badge

    Surely any watermarks added to AI generated images could be defeated by running the resulting image through filters in Photoshop or scaling the image down to a lower resolution and then having AI upscale it again etc?

    It sounds to me like their statements of "you will be able to detect AI deepfakes from watermarks" are just a marketing gimmick that they hope will keep the regulators away for now.

  7. mpi Silver badge

    Excuse me, why are we having this discussion anyway?

    The idea of watermarking dies the moment the first model goes open source.

    We have open source diffusion models.

    We have open source large language models.

    The people who have a vested interest in producing and disseminating fake info also have the expertise, or the depth of coffers to pay for expertise, to use these models.

    So why are we having this discussion again?

    1. Anonymous Coward
      Anonymous Coward

      Re: Excuse me, why are we having this discussion anyway?

      Something to talk about while meeting the other Big Guys and POTUS over beers?

  8. probgoblin
    Flame

    The Important Bit

    Okay, but are we just going to ignore that "Fundamental Limits and Practical Attacks" is the best possible name for an album?
