Training data and regulation
If providers have to disclose how their AI was trained, does that mean all those LLM makers will have to fess up to which copyrighted works they used, and then pay the authors or their estates?
There is also, I think, some hypocrisy in the argument 'how can you promote AI with one hand and regulate it with the other?' Think of a surgeon with a scalpel: if the operation is a success, the patient recovers, but if the surgeon cuts the wrong thing(s), he could rightly be charged with malpractice. I recall one surgeon who was very good, but who had the unfortunate tendency to leave his initials literally burned into his patients' livers. No one would suggest, I hope, that the medical profession should either be denied research funds or go unregulated.
AI implementations have been shown to embody unrealised racism, sexism and classism: favouring the 'types' of people already doing those jobs, and reflecting the people who wrote the code or chose the training data (often young, white, 'western', middle-class males). There are innumerable cases reported on The Register of facial 'recognition' software getting things wrong, being misused, or simply being believed because it suited the bias of the people using it, or because they could not be bothered to try harder. It is a wonderfully effective way of implementing unrealised bias.
I am not against AI per se, but those promoting it have to realise that those of us unto whom AI will be done should have a say in how it is built and how it is used, and its creators and users need to be far more aware of its limitations and dangers than they seem to be.