Many AI systems exhibit unintentional racial bias. Facial recognition systems trained predominantly on white faces are very poor at recognising non-white people, and have led to miscarriages of justice in the USA (as documented on El Reg and elsewhere). AI systems that predict re-offending likelihood have been shown to be biased because they take a person's home location and associate their chance of re-offending with the prevalence of recorded crime in that area, without taking into account whether police target that area and are therefore likely to record more crime there.
In states where law enforcement has a history of being more punitive towards black people, AI often serves to reinforce that discrimination. We are currently seeing the result of faulty programming and institutional authoritarianism in the ongoing Post Office Horizon scandal, where 'the computer says so' was taken as proof of criminal activity. Hopefully the AI industry will learn from that disaster, but if history is anything to go by it won't.