This is a known problem with a known solution.
Let me summarize the problem:
1) AIs can have hidden bias caused by poor datasets and/or algorithms.
2) With certain types of algorithms, particularly neural nets, it can be impossible to figure out what rules the AI uses to reach its decision, and therefore impossible to know whether the decision is biased except by statistical analysis over many trials.
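To make point 2 concrete, here is a minimal sketch of what "statistical analysis over many trials" could look like. Everything here is hypothetical: `decide` is a stand-in for any opaque decision-maker, and the bias baked into it is invented purely for illustration.

```python
# Hypothetical sketch: detecting bias in a black-box decision-maker
# purely by comparing outcome rates over many trials, since its
# internal rules are inaccessible.
import random

random.seed(0)

def decide(applicant):
    # Black-box stand-in: secretly biased against group "B"
    # (a higher approval threshold). An observer cannot see this rule.
    threshold = 0.5 if applicant["group"] == "A" else 0.6
    return applicant["score"] > threshold

def approval_rate(group, trials=10_000):
    # Feed both groups identical score distributions, then count
    # approvals. Any persistent gap in rates must come from the
    # hidden rules, not from the inputs.
    approved = sum(
        decide({"group": group, "score": random.random()})
        for _ in range(trials)
    )
    return approved / trials

rate_a = approval_rate("A")
rate_b = approval_rate("B")
print(f"Group A: {rate_a:.3f}  Group B: {rate_b:.3f}  gap: {rate_a - rate_b:.3f}")
```

A large, stable gap between the two rates is evidence of bias even though the decision rules themselves stay hidden; this is essentially the "demographic parity" check used in fairness auditing.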
Now let me summarize a parallel problem:
1) Humans can have hidden bias caused by poor teaching.
2) Humans use neural nets (biological ones), so it can be impossible to figure out what rules a human is using to reach their decision, and therefore impossible to know whether the decision is biased except by statistical analysis over many trials.
What's the difference between a neural-net-based AI and a neural-net-based human? Scale. But scale only makes it harder to know what the larger neural net in a human is really doing (as opposed to analysing its results).
The solution that was applied to the human problem? Procedures and rules designed to stamp out individuality (and creativity, and intelligence, and adaptability). I.e., what the civil service uses to ensure you get a consistent result no matter which individual deals with you. It may be consistently bad, with little hope of correction because overseers are bound by the same rules the underlings are, but it is (relatively) free from bias.
If we ever achieve strong AI, that is, artificial sapience, it's going to be as biased and stupid as we are. But it may be a lot faster at being so. Forget the singularity with a god-like AI that is compassionate, caring, loving, and wise (as the Xtian God is meant to be); think ancient Greek and Roman gods (and the Judaic JHVH). Those gods were essentially humans with all the standard human failings (stupidity, greed, petulance, laziness, anger, jealousy, etc.) plus some added magical powers. If the singularity ever happens, it's not going to end well for humans.