Content Credentials Cloud
That is going to be one hell of a repository of data, if everyone uses it. Can it support that size and bandwidth? Or will the data held there be of an abstracted nature, which is easier to circumvent?
Microsoft, Adobe, and other big names this week pledged to add metadata to their AI-generated images so that future compatible apps will flag them up as machine-made using a special symbol. You may have seen some reports describing this as some kind of AI watermark. We took a closer look. The symbol – described …
Exactly. I don't see Adobe maintaining that Cloud out of the goodness of its heart - it doesn't have one.
Plus, a verification system that you have to pay for wouldn't be used, meaning the check itself has to be free.
Adobe doesn't do free.
So how's it going to make the millions it's expecting from this?
"Pascal Monett is the author of this photo, as verified by Adobe ProfitProtect with support from our featured sponsors"
(Meanwhile, the auto-play video superimposed on the top right of your image starts extolling the virtues of Clorets mints for fresh breath when you're on your next date).
It's a start. Every concern the article brings up is valid. But we're in a better situation with AI-generated images marked, and someone having to know to remove the metadata if they want to pass off an image as genuine, than the current situation of having no metadata indicating this at all. Given this spec just came out, I'm sure more apps will gain support for it over time.
What we need are digital signatures on digital photos that can mark them as original. So I take a photo with my iPhone, and it embeds a digital signature from Apple that says "this photo was taken by an iPhone", plus some other metadata like the date and the camera settings used. You'd want some things off by default that you could turn on, like embedding your name and location for proof of copyright and so forth. Samsung phones would have Samsung's signature, professional cameras Canon's or whatever. So long as they keep their private key secure (you have it operate out of Apple's Secure Element / ARM's TrustZone / etc.) it would be pretty hard to dispute.
Now, sure, you'll say photos are altered before publication, so what good does it do if an original photograph taken by an iPhone in Ukraine, for example, is modified before it goes on a website - even if all you change is scaling it down so it isn't huge and converting it from HEIC to JPG so everyone's browser can view it? But so long as they kept the original photo, then in the event of a dispute - someone claims the photo is faked, photoshopped, AI-generated, or from 2014 - it could be made available, and the digital signature would check out: same scene, proving it hadn't been altered, when it was taken, where it was taken, and who took it (if it was a press photo, rather than from someone who feared retaliation and wouldn't have that identifier enabled).
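A minimal sketch of that capture-and-sign idea, using the `cryptography` library. The key is generated inline purely for illustration (a real device would keep it locked inside the Secure Element), and the filename and metadata format are made up:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Illustration only: in reality this key lives in the device's secure
# hardware and never leaves it.
device_key = Ed25519PrivateKey.generate()
manufacturer_pubkey = device_key.public_key()

photo = open("IMG_0001.heic", "rb").read()          # hypothetical capture
metadata = b"2024-02-10T14:03:11Z|iPhone 15|f/1.8"  # date + camera settings

# Sign the pixels and the metadata together at capture time
signature = device_key.sign(photo + metadata)

# Anyone holding the manufacturer's public key can later check the original
try:
    manufacturer_pubkey.verify(signature, photo + metadata)
    print("Photo and metadata verified as originally captured")
except InvalidSignature:
    print("Photo or metadata altered since capture")
```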
There's simply no way to tag all the AI-generated photos, because, if nothing else, you can't control all the AI clusters in the world and make them obey such rules. So you have to attack the problem from the other end.
If they crop the image, adjust the exposure to make it more visible, then create multiple compressed versions of it at different resolutions on their image server, that isn't faking it any more than taking text that someone has written and changing the fonts etc. to your publication's house style. But it is probably enough for a computer to think it is a completely different image.
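A quick way to see this effect is with a perceptual hash: a plain resize barely moves it, while a crop or exposure change can push it well away from the original. A sketch with the `imagehash` library (filename is a placeholder):

```python
import imagehash
from PIL import Image, ImageEnhance

original = Image.open("press_photo.jpg")  # placeholder file

h_orig = imagehash.phash(original)
h_scaled = imagehash.phash(original.resize((800, 533)))
h_cropped = imagehash.phash(original.crop((100, 100, 900, 633)))
h_brighter = imagehash.phash(ImageEnhance.Brightness(original).enhance(1.4))

# '-' gives the Hamming distance between hashes (0 = identical)
print("scaled:  ", h_orig - h_scaled)    # typically small
print("cropped: ", h_orig - h_cropped)   # often big enough to miss a match
print("brighter:", h_orig - h_brighter)
```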
So, if I create a deep fake porn image of a politician, I need to register that fake (with the CR) on the Adobe Cloud. Then when I share it online, obviously with the metadata removed, it can be detected. That sounds like it's going to work, and I can't think of any way around that :-(
Also, if I grab a copy of an old photo of the Mona Lisa and upload it to the Adobe Cloud today, will that mark ALL global copies of the Mona Lisa as dodgy copies?
This is going to be so much fun.
> you upload your image files' metadata to Adobe's cloud; if one of your files is later shared by someone without its identifying metadata, whatever they are using to distribute the snap could run the image by Adobe's cloud and recover the metadata if there is a visual match
So, unless you're willing and happy to go along with doing things Adobe's way and on their terms, someone else can be "first" to upload *your* image to Adobe's servers, claiming credit for themselves (*), so that you can then be accused of plagiarising your own photo by some automated corporate system that Facebook et al are using?
Nice.
(*) Don't worry, it's not like a big player in the market would allow this sort of thing to happen as long as there's money in it for them, right?
It's obvious to any technically literate person that this is a load of nonsense that can never work.
But the object of the exercise here is to create some wool to pull over the politicians' eyes, to prevent them from trying to regulate this technology.
"Don't worry, we have a cunning plan that allows us to detect all fake images."
Baldrick: I, too, have a cunning plan to catch the spy, sir.
Blackadder: Do you, Baldrick, do you…
Baldrick: You go round the hostipal and ask everyone, “Are you a German spy?”
Blackadder: Yes, I must say, Baldrick, I appreciate your involvement on the creative side.
Baldrick: If it was me, I’d own up.
Blackadder: Of course you would. But, sadly, the enemy have not added to the German Army Entrance Form the requirement “Must have intellectual capacity of a boiled potato.”
The process of issuing the cards also captured biometric data for the subject of the card, including fingerprints and facial-recognition data.
This meant that even if they threw the card away, they could still be identified as a non-naturalized person.
I don't believe that this was ever abandoned. It's still used, and as far as I am aware, asylum seekers still get issued with these 'entitlement' cards.
WTF makes "the coalition" believe that deepfake producers won't immediately try to game this, or that the sort of people who willingly believe deepfakes are real without question will even give a damn?
Oh right... the revenue this coalition thinks this vast repository will bring them. They don't actually care about deepfakes. It's just a land grab coupled with an extortion racket... "Nice images you got there... be a shame if someone deepfaked them because you didn't take advantage of our generous repository terms..."
I've just finished going through our photo/video collection from the last .... 28 years or so.
Canon is stunningly reliable in the metadata it incorporates in the image; admittedly, timestamps can be off, and GPS data didn't show up until the T series we have (2006? I think). The BlackBerry stuff is solid: the timestamps seem to come from the cell network, and location data is down to 5 decimal places. Other cellphones are pretty good, the exception being an LG SWMBO had that seems to have had a permanent lateral shift of about 8 or 9 degrees applied to its GPS positions. I suspect it was clock settings not being correctly translated.
There are *TONS* of metadata options for all imaging systems. And there is a *cough* standard for inserting it and modifying it. And there are dozens of applications that ignore it. After that, my friends, image metadata is gonna fall down the "we already have 5 standards, and they're all incompatible" wormhole.
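For anyone curious what their own files carry, a few lines of Python with Pillow will dump the standard EXIF tags (just one of those competing standards, of course; the filename is a placeholder):

```python
from PIL import Image, ExifTags

img = Image.open("photo.jpg")  # any JPEG from your collection
exif = img.getexif()

for tag_id, value in exif.items():
    # Translate numeric tag IDs to human-readable names where known
    name = ExifTags.TAGS.get(tag_id, tag_id)
    print(f"{name}: {value}")
```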
The open-source community actually has a pair of tools specifically for removing metadata from images and ensuring they cannot be traced or tracked back to the device with which they were taken or modified, as much for journalists as for the other folks who need that.
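The same effect those tools achieve is only a few lines in Python: re-encode nothing but the pixel data, so the EXIF and anything else embedded are never copied across. A minimal sketch (the dedicated tools handle far more formats and edge cases):

```python
from PIL import Image

img = Image.open("original.jpg")  # placeholder filename

# Rebuild the image from raw pixels only; EXIF, XMP and any embedded
# credentials are simply never carried over to the new file.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("stripped.jpg")
```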
That there are a bunch of Mega(money)Corp entities trying to engender yet another controlling process on the General UnWashed means one thing and one thing only: it's gonna cost you yet another monthly service charge to use it. And since it is so valuable a service, the companies will be allowed to accept the income and *not* pay taxes on that income.
For years I have often removed some metadata (if present) from photos, e.g. lat/long data: I don't really want a photo recording where it was* if it's my home address or nearby, so I strip that data out unless it's a "holiday" visit (& nowhere near the places I usually visit).
As a coding exercise I wrote myself a little app to edit metadata, but there's plenty of software out there that allows you to edit image metadata, so it's trivial for people to remove it.
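For the curious, the core of such an app is only a few lines with the `piexif` library. A sketch (placeholder filename, and not the actual app above) that wipes just the GPS block while keeping the rest:

```python
import piexif

FILENAME = "holiday_snap.jpg"  # placeholder

exif_dict = piexif.load(FILENAME)
exif_dict["GPS"] = {}  # drop lat/long and the rest of the GPS IFD

# Write the edited EXIF back into the file in place
piexif.insert(piexif.dump(exif_dict), FILENAME)
```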
And how is the Content Credentials Cloud going to work? For an image search (to see if a cheeky metadata-lacking version of an image exists) it's going to need the image itself (or rely on very flawed image hashing) - so potentially massive storage requirements, and so likely to be a bit unworkable or a huge money pit (or they start charging people - which won't go down well).
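To make the matching problem concrete: the lookup side would presumably be a nearest-neighbour search over perceptual hashes, with a distance threshold trading false matches against misses. A toy sketch (hypothetical registry, naive linear scan; a real service would need an index over billions of entries):

```python
import imagehash
from PIL import Image

# Hypothetical registry of (hash, credential) pairs built at upload time
registry = [
    (imagehash.phash(Image.open("registered.jpg")), {"author": "Pascal Monett"}),
]

def lookup(path, max_distance=8):
    """Return credential records whose hash is within max_distance bits."""
    h = imagehash.phash(Image.open(path))
    return [record for known, record in registry if h - known <= max_distance]

# A stripped-metadata copy may still match; a crop of it may not.
print(lookup("stripped_copy.jpg"))
```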
There's also the question of how similar an image match has to be... e.g. from the same LLM I have had similar (but very slightly different) image outputs for various prompts - so quite possibly there could be multiple very similar images but with different "credentials" metadata. When essentially (almost) the same image can be created multiple times by people giving similar (or even the same) prompts to "AI", you can also likely get some "ownership" disputes.
Can cause even more chaos, as many "AI" image tools let you supply a starting image and then refine the outputs, so using various methods you can end up with chains of near-identical images, each with its own "credentials".
* Obviously some photos can be IDed if they include a recognisable geographical feature (but that does not really apply to my snaps of family, friends & pets at home, where location data defo needs removing for privacy reasons).