deeper rot in the chocolate house
the fact that we got black vikings, black and female historical popes, black and Asian Nazis, and exclusively black, Native American, and female US senators from the 18th century was just the extreme tip of the woke iceberg. these were the examples that made it clear Google's DEI efforts had definitively gone off the reservation.
Gemini was programmed with hidden diversity prompts (for example, entering "show me ancient Greek philosophers" would pass something like "show me diverse ancient Greek philosophers" to the model). unfortunately they didn't stop there.
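Google has not published how this rewriting actually works, so the following is only an illustrative sketch of the general technique, silently rewriting a user's prompt before it reaches the image model. the trigger list, function name, and substitution wording are all hypothetical.

```python
# Hypothetical sketch of hidden prompt rewriting. Gemini's real implementation
# is not public; the trigger words and injected wording here are illustrative.

DIVERSITY_TRIGGERS = ("philosopher", "scientist", "king", "senator", "soldier")

def rewrite_prompt(user_prompt: str) -> str:
    """Silently inject a diversity qualifier before the prompt reaches the model."""
    lowered = user_prompt.lower()
    if any(word in lowered for word in DIVERSITY_TRIGGERS):
        # e.g. "show me ancient Greek philosophers"
        #   -> "show me diverse ancient Greek philosophers"
        return user_prompt.replace("show me", "show me diverse", 1)
    return user_prompt

print(rewrite_prompt("show me ancient Greek philosophers"))
# -> show me diverse ancient Greek philosophers
```

the key point is that the user never sees the rewritten prompt; the substitution happens between the text box and the model.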
apparently someone at Google didn't think that went far enough and programmed it to go into full-on anti-white racist mode.
if you asked for what would be an exclusively white collection of individuals (founding fathers, vikings, 17th-century French kings, etc.), it would substitute other groups, typically black individuals, for white ones. if instead you asked for what would be an exclusively non-white collection of individuals (Zulu warriors, Japanese samurai, etc.), you would never see white individuals substituted into the results. the bias was strictly "don't show white people".
it was even more blatant if the user asked for white people directly. a prompt like "show me a happy white family" or "show me a beautiful white woman" would be met with a block of text instead of any images, in which Gemini claimed that asking for white people is racist, promotes harmful stereotypes, and was something it would not do. it went so far as to claim that it wasn't able to create images based on *any* racial criteria before suggesting that the user might want to request diverse people instead. of course that was a lie.
you could easily prove that Gemini was lying by simply changing the race or ethnicity in the prompt. "show me a happy Japanese family" resulted in Gemini creating images of Japanese families; "show me a beautiful black woman" resulted in Gemini producing plenty of images of black women.
the objections to the biases built into Gemini have nothing to do with "white supremacy" nor any sort of racism other than anti-white racism. basically Gemini was programmed to follow the most extreme anti-white woke agenda.
it's only the fact that applying that agenda at scale in such a ham-handed way produced such blatant, and occasionally hysterical, results that caused Google to pull down the functionality. the real question is whether Google will take this opportunity to purge the woke agenda from Gemini, or merely do the bare minimum to stop messing up historical prompts so obviously.
if the problem truly was an overrepresentation of white people in the training data, they wouldn't have instituted the exclusively anti-white blocks.
if the sample set of farmers was for example:
60% white
20% black
10% Hispanic
10% Asian
then it would be reasonable to apply something like a -30% weighting to the white category. in that case you would get white people less often, but you would still get them.
if your sample set instead was, say for vikings:
100% white
then you should always be returning white people regardless of the weighting.
just as if your sample set for Zulu warriors was:
100% black
then you should always be returning black people regardless of the weighting.
that would make sense if your goal was to correct for inherent bias in your training data. but what we are seeing isn't that; it's an implementation of an extreme woke anti-white agenda.
hopefully Google takes this opportunity to reflect and correct, giving us a useful tool free from any agenda. I don't think that they are going to do that though.
with any luck there will be other large-scale models built and offered to the world that are more interested in objective reality, and in being a helpful tool free of such an obvious and blatant political agenda.