Re: Bias that matches society?
I agree with you that the causes are society's responsibility to fix, but the focus on LLMs has a point. If anyone is stupid enough to use an LLM to decide on criminal sentences (and while I would like to think that nobody could really want to do that, I am not optimistic), then it is important to know that LLMs will not only fail to reduce this bias, but will probably make it worse.
The bias exhibited by an LLM is not necessarily of the same degree as that found in general society, or in the subset of people who would otherwise be making criminal justice decisions. If the training data contains more input from racists, the output is likely to be more racist, and the training data is checked so little and hidden so well that we would find it difficult to estimate whether that has happened.

The other side of it is that society can change, sometimes quickly, but an LLM doesn't pick that up until it's retrained, and possibly not even then. Each individual decision in society can be reviewed, analyzed, and challenged, but an LLM does not explain its reasoning and won't change its mind unless it's told to, in which case it will simply do whatever its prompts tell it to. The point of this study isn't that LLMs are particularly biased, but that they are a crap tool for anything where biased output would be harmful.