A stupid idea
Marking anything as AI-generated is, regardless of how it's done, a stupid idea to begin with: it's the evil bit (RFC 3514) all over again. Anybody can run a generative AI on their own computer (political parties and large corporations even more so; they can even build their models from scratch), circumventing the flagging right at the source.
If anything, what could work is the opposite: marking reliable, verified content as such. And not with some kind of fingerprint, watermark, or flag, but by establishing something like a chain of custody for evidence. That would let you trace a piece of information not only back to its original source, but also see how it was altered or preserved as it passed through the various channels and people until it reached you, making the culprit behind any manipulation evident (whether it was intentional or not). This could be done with digital signatures layered over digital signatures: each handler signs the content together with the previous handler's signature. The technology for this has been available and well established for decades; you'd only need to put a very thin application-specific layer on top.
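A minimal sketch of such a chain might look like the following. This is purely illustrative: HMAC-SHA256 stands in for real public-key signatures (in practice you'd use something like Ed25519 with per-handler key pairs), and the handler names and key table are made up for the example. The point is only the chaining: each link signs the content plus the previous link's signature, so any tampering breaks the chain exactly at the handler where it happened.

```python
import hmac
import hashlib

def sign(key: bytes, data: bytes) -> bytes:
    # Stand-in for a real digital signature (e.g. Ed25519).
    return hmac.new(key, data, hashlib.sha256).digest()

def add_link(chain: list, handler: str, key: bytes, content: bytes) -> list:
    # Each handler signs the content concatenated with the previous
    # signature, binding this link to the entire history before it.
    prev_sig = chain[-1]["sig"] if chain else b""
    sig = sign(key, content + prev_sig)
    return chain + [{"handler": handler, "content": content, "sig": sig}]

def verify_chain(chain: list, keys: dict):
    """Walk the chain from the start; return the first handler whose
    link fails verification, or None if the chain is intact."""
    prev_sig = b""
    for link in chain:
        expected = sign(keys[link["handler"]], link["content"] + prev_sig)
        if not hmac.compare_digest(expected, link["sig"]):
            return link["handler"]
        prev_sig = link["sig"]
    return None

# Hypothetical handlers and keys, for illustration only.
keys = {"source": b"source-secret", "relay": b"relay-secret"}
chain = add_link([], "source", keys["source"], b"original report")
chain = add_link(chain, "relay", keys["relay"], b"original report")
```

An intact chain verifies cleanly (`verify_chain(chain, keys)` returns `None`); if the relay silently swaps in altered content, verification fails at that exact link and names the relay as the culprit.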
That being said, the real problem is that the average Joe generally doesn't care whether a piece of information comes from a reliable source, or whether it's true at all. He cares whether it fits his worldview, and anything that doesn't, his mind will reject as some kind of conspiracy or "fake" information, even when it's the verified truth and objective reality. So in the end this is a problem of human psychology, which unfortunately cannot be fixed by technological means, only by education and training.
Then again, politics will never allow the latter, because politicians would then also lose the ability to manipulate their voters and public opinion in general, which is the one and only thing that keeps their sorry, incompetent, and corrupt asses in power.