It is not at all clear how to decide what counts as biased.
The example given illustrates the difficulty in this area:
Take the sentence “he was a pimp and her friend was happy”: a classifier might score it positive for sentiment but negative for bias, because it associates men with pimps.
Except that 'pimp' is a term used almost exclusively for men, so it is unclear whether the sentence shows any bias at all. To take a different set of examples: according to the ONS, the majority of child abuse is committed by women and the majority of murders are committed by men. If this were reflected in training sets, and this in turn were reflected in the output of an AI trained on those sets, would the result be 'biased'? I suspect it would be categorised as such, yet it would simply reflect the world as it is.
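The scoring described above can be sketched as a naive co-occurrence check. This is a hypothetical illustration, not any real system: the word lists and the function name are invented. The point is that such a check flags the sentence purely because a male pronoun appears near a word on a negative-association list, even when, as with 'pimp', the pairing carries no information.

```python
# Hypothetical, minimal sketch of a naive word-association bias check.
# All word lists here are invented for illustration.
MALE_TERMS = {"he", "him", "his", "man", "men"}
NEGATIVE_ASSOCIATIONS = {"pimp", "criminal", "thug"}

def naive_bias_flag(sentence: str) -> bool:
    """Flag a sentence as 'biased' if a male-gendered word co-occurs
    with a word from the negative-association list."""
    words = {w.strip(".,").lower() for w in sentence.split()}
    return bool(words & MALE_TERMS) and bool(words & NEGATIVE_ASSOCIATIONS)

# Flagged, even though 'pimp' applies almost exclusively to men,
# so the co-occurrence with 'he' tells us nothing about bias.
print(naive_bias_flag("he was a pimp and her friend was happy"))  # True
```

A check like this cannot distinguish a genuinely biased association from one that merely reflects how the word is used, which is exactly the difficulty at issue.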
What seems to be happening in at least some of these cases of AI 'bias' is that building an AI forces us to confront aspects of the world with which we are uncomfortable. When this happens we reject the AI as 'biased'. The Amazon AI built to screen CVs springs to mind. It was scrapped because it selected 'too many men', yet this simply reflected the reality that the majority of developers are men. I believe the reasons for this are unrelated to bias or discrimination, which is strongly in the other direction, and therefore I don't think this AI was biased. Others believe the only reason women are a small minority of developers is societal bias, and therefore they would (and did) classify this AI as biased.
An AI can be trained on biased data, but it is not at all straightforward to define what that means, let alone to decide whether it has occurred. In practice, whenever an outcome conflicts with our political beliefs we label it biased, and when it does not we label it unbiased. In the end it still comes down to a subjective judgement.