Re: The opposite of what anyone should do
>So a way of reliably marking realistic fakes is essential, at least until society accepts that the phrase 'the camera never lies' is well and truly dead and buried.
I agree with that in principle. However, let's try to imagine exactly how the marking would work.
1) The person who produces the fake marks it. This is easy, but if the point is defending against hostile fakes, then the person making them obviously won't mark them. You can ban them when you catch them, but catching them won't be trivial, so you won't be able to do it at scale, and they'll just make a new account.
2) Have AI service providers mark everything they generate. This is easy, but a hostile actor can circumvent it by running the model himself. Running your own model, even for video, is already doable and will be outright trivial by the time any sort of agreement between AI service providers is finalized.
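To make the fragility concrete, here's a toy sketch of the simplest kind of provider-side mark: a tag hidden in the least significant bits of the media samples. This is a hypothetical illustration, not any real provider's scheme, but it shows the general problem: embedding is trivial, and so is stripping the mark with a single lossy re-encoding pass.

```python
# Toy LSB watermark (hypothetical scheme, for illustration only):
# embed a tag in the low bit of each sample, then show that one
# re-quantization pass destroys it.

MARK = b"AI"  # tag to hide in the low bits

def bits(data: bytes):
    """Yield the bits of `data`, most significant bit first."""
    for byte in data:
        for i in range(8):
            yield (byte >> (7 - i)) & 1

def embed(samples: list[int], mark: bytes = MARK) -> list[int]:
    """Hide `mark` in the least significant bit of the samples."""
    out = list(samples)
    for pos, bit in enumerate(bits(mark)):
        out[pos] = (out[pos] & ~1) | bit
    return out

def extract(samples: list[int], length: int = len(MARK)) -> bytes:
    """Read `length` bytes back out of the low bits."""
    result = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (samples[i * 8 + j] & 1)
        result.append(byte)
    return bytes(result)

def requantize(samples: list[int]) -> list[int]:
    """One lossy pass (think: re-encoding) that zeroes the low bit."""
    return [s & ~1 for s in samples]

pixels = list(range(50, 100))        # stand-in for real media samples
marked = embed(pixels)
print(extract(marked))               # the mark survives: b'AI'
print(extract(requantize(marked)))   # one lossy pass and it's gone
```

Real schemes are more robust than this, of course, but the asymmetry is the same: the defender has to survive every transformation, while the attacker only needs to find one that works.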
3) Bake the marking into the model during training. This is difficult, I'm not even sure it's possible, but let's assume it is. The hostile actor can circumvent this by using an unmarked model.
3a) So let's assume we've banned the development of unmarked models. This is also difficult and probably legally infeasible, but let's assume we somehow manage. If all of that holds, then we have a marking system that works. For a few years. After that, computing will be cheap enough that hostile actors can train their own models, circumventing the solution.
4) Develop software that spots and marks fakes (and/or hire an army of people specifically trained to look at videos and decide whether they are fake). This leaves you in a race with hostile actors that nobody can ever win outright: the system will always have a non-zero rate of false positives and false negatives. Worse, the hostile actors are at an advantage, because the delta between real videos and fakes is only going to grow narrower over time, not just shift around. Eventually, the fakes will be indistinguishable from real footage, and that will be it.
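The narrowing-delta point can be quantified with a standard detection-theory toy model (my numbers, not anything from a real detector): if real and fake videos produce detector scores drawn from two unit-variance Gaussians whose means are `delta` apart, then even the optimal threshold has a balanced error rate of Phi(-delta/2), which climbs toward 50% (a coin flip) as the delta shrinks to zero.

```python
import math

def min_error_rate(delta: float) -> float:
    """Best achievable balanced error rate for distinguishing two
    unit-variance Gaussian score distributions (real vs. fake) whose
    means are `delta` apart. The optimal threshold is the midpoint,
    giving error = Phi(-delta / 2), where Phi is the standard normal CDF."""
    return 0.5 * (1 + math.erf((-delta / 2) / math.sqrt(2)))

# As the gap between real and fake narrows, even the best possible
# detector degrades toward guessing.
for delta in (4.0, 2.0, 1.0, 0.5, 0.0):
    print(f"delta={delta:>3}: error rate = {min_error_rate(delta):.1%}")
```

At delta=4 the best detector errs about 2% of the time; at delta=0 it's 50%, i.e. no better than a coin flip. The race in (4) is a fight over where on that curve we sit, and the trend line only moves one way.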
I'm out of ideas. Any others?
So, even in the best-case scenario, we simply can't mark fakes "until society adapts". The best we can do is buy society a few years, after which the fakes will be everywhere regardless of whether society is ready. And even that assumes a lot of things go well. More likely, there is nothing we can do to mark fakes reliably.
I would humbly suggest that it's better to focus our energies on figuring out how society can adapt to a world where anything you don't see with your own eyeballs might be fake. We'll have to do it soon anyway. Figuring out how to restore the concept of "trusted sources" could be a good first step.