As a statement it is very light: light on definitions of harm, of what constitutes a violation of privacy or misuse of private information, and even of what constitutes private information.
What is private information in this statement?
What protocols need to be used to determine if private information has been used in the generation of content?
How is harm defined?
Is it how it makes a possible victim feel?
Is it how it would make some hypothetical victim feel, as evaluated by some hypothetical reasonable person?
Is it how it makes some random stranger feel on behalf of a possible or hypothetical victim?
What actions constitute harm?
Is there some objective measure that can be represented as points of law?
This reads more like a group of elected officials, or officials appointed by them, trying to make as many people as possible feel safe by creating a broad, non-specific, general platitude, so that they can be elected by as many people as possible.
As it stands, a model trained on publicly available information, containing no private information by any definition, with labels that can be created by anyone using whatever labeling they desire, can be prompted to generate anything that the labeling and training allow. This might include a prompt like:
Generate an image of someone who looks like [whatever] in a situation with a description of [whatever] that makes me feel like [whatever] and that might make the person depicted feel like [whatever].
Given enough publicly available information, enough labeling, and enough training, the previous prompt could generate pretty much anything.
The art world has a legal concept called provenance: a well-understood, legally tested concept that could be of use here.
What is the provenance of an AI-generated piece of content?
Does it derive from information generated by an individual, whether as author, artist, or subject?
Has its inclusion in a dataset used in the training, prompting, or as a readable data source for a generative AI model been authorised by that individual?
Has the generated content been used in any way that the individual does not authorise?
Does the individual have any objections to the distribution or use of the generated information?
These are the sorts of points around which a policy on the use of generative AI can be formed.
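As a minimal sketch of how that provenance checklist could be made concrete, consider a record attached to each generated piece of content. All names and fields here are hypothetical, invented for illustration; no standard or existing system is implied.

```python
# Hypothetical sketch of a provenance record for AI-generated content.
# Field names and structure are illustrative assumptions, not a standard.
from dataclasses import dataclass, field

@dataclass
class SourceWork:
    """A piece of source information contributed by an individual."""
    individual: str                 # the author, artist, or subject
    training_authorised: bool       # did they authorise inclusion in training data?
    distribution_authorised: bool   # did they authorise distribution of derived output?

@dataclass
class ProvenanceRecord:
    """Provenance attached to one piece of AI-generated content."""
    sources: list = field(default_factory=list)

    def fully_authorised(self) -> bool:
        # Content is cleanly authorised only if every contributing
        # individual authorised both training use and distribution.
        return all(s.training_authorised and s.distribution_authorised
                   for s in self.sources)

record = ProvenanceRecord(sources=[
    SourceWork("artist A", training_authorised=True, distribution_authorised=True),
    SourceWork("subject B", training_authorised=True, distribution_authorised=False),
])
print(record.fully_authorised())  # False: subject B objects to distribution
```

The point of the sketch is that each of the questions above maps to a checkable field, which is the kind of objective measure that can be represented as points of law.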
Policy developed to make as many people as possible happy enough to elect you is unlikely to create effective policy.
Focusing your policy on goals that are achievable and robustly resistant to legal challenge will result in much better policy.