>Generative AI is prone to producing false information that could be manipulated to spread misinformation.
And they propose to fix that how, exactly? At the moment, we don't have the ability to prevent LLMs from producing false information, and we don't even have a theoretical model of how this could possibly be done.
>The US government wants Big Tech to develop watermarking techniques that can identify AI-generated content.
I can run LLMs offline on my own PC, and it's all open source. What's going to stop a malicious actor from just disabling whatever watermarks or other mitigations Big Tech puts in place? The only way this could work is if the mitigations are baked into the model weights, but (see above) we don't know how to reliably do that, and we're not even sure it's possible.
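To make the point concrete: published watermarking schemes for LLMs (e.g. "green-list" token watermarks) work by biasing the *sampling loop*, not the weights. Here's a toy sketch of that idea — every name, number, and the fake "model" are illustrative, not any real scheme's implementation. Note where the watermark lives: a single `if watermark:` branch that a local user simply doesn't run.

```python
import hashlib
import random

VOCAB_SIZE = 1000       # toy vocabulary (real models use ~50k+ tokens)
GREEN_FRACTION = 0.5    # half the vocabulary is "green" at each step

def green_list(prev_token: int) -> set[int]:
    """Derive this step's green list by seeding an RNG with the previous token."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % 2**32
    rng = random.Random(seed)
    return set(rng.sample(range(VOCAB_SIZE), int(VOCAB_SIZE * GREEN_FRACTION)))

def generate(length: int, watermark: bool, seed: int = 0) -> list[int]:
    """Toy 'LLM': samples uniformly random tokens. With watermark=True the
    sampler is biased toward the green list; with watermark=False (what
    anyone running the model locally can trivially choose) no bias is applied."""
    rng = random.Random(seed)
    tokens = [rng.randrange(VOCAB_SIZE)]
    for _ in range(length - 1):
        greens = green_list(tokens[-1])
        if watermark and rng.random() < 0.9:  # strong bias toward green tokens
            tokens.append(rng.choice(sorted(greens)))
        else:
            tokens.append(rng.randrange(VOCAB_SIZE))
    return tokens

def green_rate(tokens: list[int]) -> float:
    """Detector: fraction of tokens falling in their step's green list.
    ~0.5 is chance level; well above 0.5 flags the watermark."""
    hits = sum(t in green_list(p) for p, t in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

marked = generate(500, watermark=True)
unmarked = generate(500, watermark=False)
print(round(green_rate(marked), 2), round(green_rate(unmarked), 2))
```

The detector only sees a strong signal (~0.95 green rate vs ~0.5 chance) when the generator cooperated. An open-weights model gives the user the sampling loop itself, so the mitigation is one flag away from gone.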
Even that would be a temporary measure, given that the number of actors capable of training an LLM from scratch is already almost too large to regulate effectively, and it will only grow.
Face it, this genie is not getting back in the bottle.