But AI isn't alive, so you can kill it and start over
If AI models gradually become poisoned by training on their own output, there is a time-tested solution to this problem...
The same one that gets used when farm animals get too inbred...
Cull and start over...
If the process of making AI is refined enough...
And if it can be nailed down well enough how many 'generations' can still produce useful data... then whichever one crosses the line from useful to too poisoned simply doesn't get built, the accumulated training data gets wiped, and you start over...
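Roughly, that policy might look something like the sketch below. Everything in it is invented for illustration: the quality floor, the generation limit, and the decay stub all stand in for whatever real training and evaluation pipeline would exist...

import random

QUALITY_FLOOR = 0.8   # assumed: below this, a generation counts as poisoned
MAX_GENERATIONS = 5   # assumed: how many generations are expected to stay useful

def train_and_score(generation, drift):
    """Stub for a real train-and-evaluate step: quality decays each generation."""
    return 1.0 - drift * generation

def run_lineage():
    """Train successive generations; cull the whole lineage once one turns poisoned."""
    attempt = 0
    while True:
        attempt += 1
        drift = random.uniform(0.02, 0.08)  # how fast this particular lineage degrades
        for generation in range(MAX_GENERATIONS):
            if train_and_score(generation, drift) < QUALITY_FLOOR:
                # Crossed the line: this generation doesn't get built, the
                # accumulated data gets wiped, and the whole run starts over.
                print(f"attempt {attempt}: poisoned at generation {generation}, culling")
                break
        else:
            print(f"attempt {attempt}: all {MAX_GENERATIONS} generations stayed useful")
            return

run_lineage()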
Either that, or AI watermarking becomes really important as a means of excluding AI-generated material from the training data pool...
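If watermarking did become the filter, the data-collection side could be as simple as the sketch below. The detector here is a toy placeholder, not any real watermark scheme, and both function names are made up...

def looks_ai_generated(text):
    """Hypothetical stand-in for a real watermark detector."""
    return "[AI-WATERMARK]" in text  # toy marker, not an actual watermarking scheme

def filter_training_pool(documents):
    """Keep only documents that don't carry the watermark."""
    return [doc for doc in documents if not looks_ai_generated(doc)]

pool = [
    "A human-written forum post about livestock breeding.",
    "[AI-WATERMARK] A model-generated rewrite of the same post.",
]
print(filter_training_pool(pool))  # only the human-written post survives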