"He was a pimp" ...
... is a statement of fact about one particular individual, not about "all men" or "most men" or anything like that. As such, it cannot be biased.
Might there be a problem with the research methodology?
Speaking of which, I would be curious about two pieces of further research:

1) I assume the "classifier" (after it is fixed, cf. above) can be run on both the training set and the AI output. Is the latter more or less biased than the former? If there is any significant difference, are the machines more politically correct or more in-your-face than the human authors of the original material?

2) I assume the output of the AI can be passed through some relatively simple software that would correct for the biases (if you have a "classifier" that detects bias, it should be possible to augment it with suggestions of what an "unbiased" equivalent would be). How similar would the outcome be to the "politically correct" speech that various busybodies try to impose on us humans?
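To make suggestion 1) concrete, here is a minimal sketch of the comparison I have in mind. Everything here is hypothetical: `classify_bias` stands in for whatever classifier the researchers actually used, and the toy word-list classifier at the bottom is only for illustration, not a claim about their methodology.

```python
from typing import Callable, List

def bias_rate(texts: List[str], classify_bias: Callable[[str], bool]) -> float:
    """Fraction of texts the classifier flags as biased."""
    if not texts:
        return 0.0
    return sum(classify_bias(t) for t in texts) / len(texts)

def compare_corpora(training: List[str], generated: List[str],
                    classify_bias: Callable[[str], bool]) -> float:
    """Difference in flag rate: positive means the AI output is flagged
    more often than the human training data ('more in-your-face');
    negative means it is flagged less often ('more politically correct')."""
    return bias_rate(generated, classify_bias) - bias_rate(training, classify_bias)

# Toy stand-in classifier: flags any text containing a listed word.
# A real classifier would be a trained model, not a word list.
flag_words = {"pimp"}
toy_classifier = lambda t: any(w in t.lower().split() for w in flag_words)

training = ["he was a pimp", "she went home", "the weather was fine"]
generated = ["a neutral sentence", "another neutral sentence"]

delta = compare_corpora(training, generated, toy_classifier)
print(delta)
```

A negative `delta` on real data would suggest the machine output is the more "politically correct" of the two corpora; a positive one, the opposite.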