Half a Think
This seems to be, sort of, the start of a useful think that got abandoned before it was thought through properly. Given that the problem is 'how do I tell what has been mass-produced by an AI disinformation farm?', what's being proposed doesn't look like much of a solution to it. Mulling it over, there are three broad categories I can think of that apply here:
1. Information that is generated by AI and the producer/relevant authority wants it to be labelled as such.
2. Stuff that just anyone wants to post without being particularly bothered about whether anyone pays any attention to it or not.
3. Information that the producer wants to unambiguously tag as produced by them.
For the first point, "relevant authority" was put in there to cover scenarios where local law requires that AI content be labelled as such. It would work for people who are interested in following local law but would obviously fall flat on its face otherwise. A producer may also want to label something to prove that they have ridden the prompt dragon skilfully enough to get it to cough up something worthwhile, much like a visual artist signing a painting.
The second really, really doesn't need, nor should it require, any kind of crypto-authentication shenanigans. If I want to post utter dross like this on the internet, that's between me and my own foolishness, and people should probably treat it with all the respect that deserves. Not a lot, for those not used to British sarcasm.
For the third, that might actually be useful. If I want to be sure that something I'm reading really has been produced by the organisation it's attributed to, and hasn't been altered in any way, a handy way of checking would be helpful. I know that's been possible for decades; it just hasn't spread beyond the niches where the techies find it useful to become a thing that everyone uses.
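To make the "possible for decades" bit concrete: the usual machinery is a cryptographic tag published alongside the content, which any reader can check. Here's a minimal sketch in Python using only the standard library; the key name and content are made up for illustration, and real producer-to-public attribution would use public-key signatures (e.g. GPG or Ed25519) rather than the HMAC stand-in used here, since an HMAC key has to be shared with verifiers.

```python
import hashlib
import hmac

# Hypothetical setup: the producer holds a signing key and publishes
# a tag alongside each piece of content. NOTE: HMAC is a stand-in for
# a real public-key signature scheme; with HMAC anyone who can verify
# can also forge, which is why GPG-style signatures are used in practice.
SIGNING_KEY = b"example-org-signing-key"  # made-up key for illustration

def sign(content: bytes) -> str:
    """Produce a tag committing to the content."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Check the content matches the published tag (constant-time compare)."""
    return hmac.compare_digest(sign(content), tag)

article = b"Official statement from Example Org."
tag = sign(article)

# An unmodified copy checks out; any alteration fails the check.
assert verify(article, tag)
assert not verify(article + b" (sneaky edit)", tag)
```

The point is just that tamper-detection and attribution are old, boring, solved problems at the crypto level; the unsolved part is getting everyone outside the techie niches to actually use the tooling.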
Anywho, random ramblings - feel free to point out any and all silliness in the above.
Rosie