If AI/ML is going to have to overcome bias, wouldn't it need to be developed to do so by people who have their own biases? Who determines what is acceptable, desirable, or required?
This could be exceedingly dangerous and move us much closer to Huxley's world. After all, somebody has to decide what counts as an acceptable point of view. And how will it deal with inconsistencies? Criticizing the current US head of state is automatically considered a positive thing; criticize the previous president and you'll be labeled a racist. To complicate matters, the technology will likely be used globally. Criticize the Thai head of state and you'll have committed a crime. The same goes for hot-button issues like gay rights: in the Western world any negative statements are considered 'homophobic' and could in some cases lead to prosecution, while in many countries (e.g. Russia, Iran) any positive statements on the subject are punishable by law.