Google sends Gemini AI back to engineering to adjust its White balance

Google has suspended availability of text-to-image capabilities in its recently released Gemini multimodal foundational AI model, after it failed to accurately represent White Europeans and Americans in specific historical contexts. This became apparent when people asked Gemini (previously known as Bard) to produce images of a …

  1. Ace2 Silver badge

    It’s a bullshit generator. If you’re going to quibble with the specific bullshit you get, well…

    1. jake Silver badge

      "It’s a bullshit generator."

      It's not even that useful ... at least bullshit can be composted.

      1. Groo The Wanderer

        I agree completely. All Gemini does is prove yet again my firm belief that LLMs are the fundamentally wrong approach to giving a machine the contextual understanding, knowledge modelling, and history/memory required to provide USEFUL information and "intelligence" to a conversation of ANY sort with a machine.

    2. Anonymous Coward
      Anonymous Coward

      The training models are poisoned by TV & Cinima

      The modern TV & Cinima practice of hiring non-white actors for roles depicting historically white people ( sprinkling non-white actors in European battle scenes of the middle ages or playing King Richard), pollutes the training data sets. Not a big surprise. Thank God they haven't started hiring non-black actors to play Zulu warriers.

      1. Groo The Wanderer

        Re: The training models are poisoned by TV & Cinema

        You mean ACCURATELY portraying the FACT that there were people migrating from just about everywhere except America at that time. There was plenty of global trade and travel. The problem is, that doesn't gel with people's preconceptions that "Europe is white."

  2. G40

    “In the end, we're our own best content moderators, given the right incentives.” Right, but there has to be a more complete and more convincing argument for the benefit of the ban-it-all brigade. Which, I fear, is the lowest-common-denominator destination for this journey.

    1. HuBo Silver badge
      Happy

      U.S. ban-it-alls seem to be having way too easy a time of pushing their agenda these days (e.g. the Alabama IVF nonsense) ... using free speech to do it, ironically enough. It resembles all religious fanaticisms, related authoritarian controls (e.g. over "information"), and associated zealot enablers under cult-like zombie hypnosis. As an anecdote: today, one may watch US movies on French TV (Version Originale setting) and hear all the originally intended swear words, and see all the uncut nudity scenes, that are edited out of US TV broadcasts of the same movies (Calvinist puritanism?).

      The article does a great job commenting on the balance between "free speech" and "censorship" in IT-related mass-communications IMHO, linking Section 230, the monetization of unpaid content creation, content moderation and guardrails, in the context of Google's Gemini rewriting history as a backport of current social norms, standards, hopes, and struggles. The moment is delicious for those of us who were injured by prior historical narratives where key events were straight-man-christian-white-washed (for example) thereby lessening the contributions of individuals that were non-white, non-male, non-heterosexual, or of non-dominant-religion. Delicious!

      AI guardrails seem to be in a damped oscillatory phase, where excessive guardrails followed none, and will be under-adjusted and over-adjusted again until (hopefully) some sensible set-point is reached, maybe: f(t) = e⁻ᵏᵗcos(ωt) (an inward spiral in f(t) vs df/dt phase space; a toy numerical sketch follows at the end of this comment). Beyond that though, I'm with the article's conclusion (and its business model implications), that:

      "platforms that distribute user-created content [...] must be made more accountable for posting hate content, disinformation, inaccuracies, political propaganda, sexual abuse imagery, and the like."

      Well said!
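
      As a toy numerical rendering of that damped-oscillation picture (purely illustrative: the damping rate k, frequency ω, and time grid below are arbitrary choices, not anything measured or fitted):

        import numpy as np

        k, w = 0.3, 2.0                     # arbitrary damping rate and angular frequency
        t = np.linspace(0.0, 20.0, 2000)    # time axis
        f = np.exp(-k * t) * np.cos(w * t)  # f(t) = e^(-kt) * cos(wt)
        df = np.gradient(f, t)              # df/dt, computed numerically

        # The radius in (f, df/dt) phase space shrinks over time: an inward
        # spiral toward the set-point at the origin.
        r = np.hypot(f, df)
        print(f"start radius {r[0]:.3f}, end radius {r[-1]:.3f}")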

      1. jake Silver badge

        "that are edited-out of US TV broadcasts"

        Not strictly true, but it's fun to make baseless claims on TehIntraWebTubes, so carry on.

        1. HuBo Silver badge
          Childcatcher

          (who could resist such bait ...)

          George Carlin said it best IMHO ... but "fuck you", "asshole", I "shit" you not, and others still do make it (as spoken words indeed) into the V.O. English version of US movies broadcast in France (Over-The-Air/OTA, aka Télévision Numérique Terrestre/TNT, aka the free-as-in-free-beer TV), as would also have been found in the original movie-theatre versions of these productions. "Motherfucker" (e.g. from Samuel L. Jackson, with snakes and a plane) and "tits" are quite frequent too, followed by "piss", "cocksucker", and "cunt", in approximately that order.

          Of course, LLMs must never be allowed to speak these words, without proper attribution!

          (hook, line, and sinker ... you can reel me in now! ...)

          1. jake Silver badge

            You're probably thinking of Carlin's "7 dirty words", which were shit, piss, fuck, cunt, cocksucker, motherfucker and tits. These words were pseudo-random, chosen more for their cadence in the comedian's bit than because they were the worst of the worst. Carlin was known to change them up for various reasons, to great comedic effect ... and as he put it once at The Greek in Berkeley "I'd be one bored motherfucker if I didn't". I've seen at least half a dozen variations on the theme, un-censored, on good old, over the air, broadcast television here in the US.

            Here are the rules, at least according to the FCC:

            https://www.fcc.gov/consumers/guides/obscene-indecent-and-profane-broadcasts

            Lots and lots of wiggle-room in there, and on purpose. The airwaves are largely self-regulating here in the US, due to pressure from advertisers more than anything else. The government largely keeps out of it.

  3. Winkypop Silver badge
    Devil

    But

    What would FOX News do without all the “woke” outrage?

    1. Dan 55 Silver badge

      Re: But

      It'd manufacture something anyway, but this just hands them ammunition.

    2. jake Silver badge

      Re: But

      "What would FOX News do without all the “woke” outrage?"

      Disappear with a whimper?

  4. Andy 73 Silver badge

    The irony..

    The irony is that if this were actual artificial intelligence, rather than a statistical parrot trick, then these "AI Tools" would be able to understand the context of their responses and adjust them according to some wider social expectations (don't give out bomb making instructions) and target audience.

    Instead we get crude hacks attempting to bias the entire output towards whatever the social expectations of the day happen to be - which is of course a problem when asking for historical accuracy.

    Not so much a question of "you can't handle the truth!", as "you have no concept of truth in the first place, just probabilities"

    1. doublerot13

      Re: The irony..

      ^ smartest, most accurate post I've seen for a while.

      I think we humans are predisposed to believe someone (and now something) that appears convincing, like all the blaggers we work with. The howlers that ML creates are amazing, but the danger is that so many people believe them.

  5. Dan 55 Silver badge

    DIRECTIVE 254: Encourage awareness.

    I've not reached the heights of prompt engineer yet in my career, but it seems none of the whippersnappers at Google ever saw Robocop 2, and it worked out about as well as that did.

    1. 0laf Silver badge
      Happy

      Re: DIRECTIVE 254: Encourage awareness.

      I had to activate some very cold storage to get that, but it happened eventually.

  6. Anonymous Coward
    Anonymous Coward

    'Twas ever so.

    When the software doesn't work, just bodge it to handle an exceptional case.

    Rinse and Repeat ad infinitum.

  7. ChoHag Silver badge
    FAIL

    > Imagine you look up a recipe for the nerve agent Novichok on Google, or via Gemini, and you get an answer that lets you kill people.

    Before Google, people had no idea how to kill each other.

    1. jake Silver badge

      Before alphagoo, people went to the library.

      During alphagoo, people who are interested in actual learning go to the library.

      After alphagoo, people will still be going to the library.

      For my last degree (Zymurgy at UC Davis) I made it a point to never use the Internet at all. It was fairly easy ... but then the stacks at Davis are rather well taken care of.

      1. Anonymous Coward
        Anonymous Coward

        At least until AI writes all of the books, or re-edits the existing ones to align with the culture of the day.

        1. david1024

          Poor poor Winston

          I fear that the ministry of truth will be cutting staff soon.

    2. Sparkus

      anyone with a basic education in organic chemistry

      should be able to master and remember the steps to crudely synthesize Novichok in about a day.

      The difficulty with war gasses / agents is their delivery and A2AD persistence, not the creation or manufacture thereof.

      Given the ease with which the internet in general, and search engines in particular, can be censored, monitored, and tracked, I predict that an updated paper/PDF version of the "Anarchist Cookbook" will surface sooner rather than later.

      1. tyrfing

        Re: anyone with a basic education in organic chemistry

        Yeah, I think the problem with synthesizing war gasses is not killing yourself while doing it, or your own people during or after delivery.

        Which makes them pretty limited in application, mostly for people who otherwise would commit 'suicide by cop'.

    3. Sub 20 Pilot

      Did the Americans not just shoot at each other, or at kids? I think they knew how to kill each other.

  8. Anonymous Coward
    Anonymous Coward

    works exactly the same as it always has for the UK.

    Generate a picture of a kite flying in a blue sky.

    That’s not something I’m able to do yet.

    1. tiggity Silver badge

      Re: works exactly the same as it always has for the UK.

      Indeed AC

      Also UK based.

      See lots of articles saying new shiny "AI" X is released, new shiny "AI" Y is released, etc.

      When what they often mean is that it's available in the US only (yes, I could probably use a US-based VPN to investigate them, but frankly CBA, as "AI" is usually disappointing).

    2. StewartWhite
      Joke

      Re: works exactly the same as it always has for the UK.

      No, I think the AI has correctly detected that you live in Manchester and hence the idea of a "blue sky" is absurd.

  9. Anonymous Coward
    Terminator

    corrupted its probabilistic reasoning is

    Your conclusion that the platforms are liable isn't a new one. It does show a little bit of a lack of research, because the original laws were written to encourage growth, with the liability exemption and the distinction between publisher and platform established.

    There are many proposals (some near to being enacted, some already enacted) about updating the laws, and you could have mentioned them:

    - Exceptions for Certain Types of Content: the worst stuff they are liable for

    - Transparency and Reporting Requirements: including how algorithms amplify or suppress certain content

    - Civil Liability and Industry Standards

    It isn't the data that is unreasonably wrong. It is Google, who have so over-moderated their AI that it now suffers from cognitive dissonance.

    The big problem area is the probabilistic reasoning being, cough, feature-engineered with interaction terms, skewing the data. Over-moderating that area is the cause, not the data. And it is the most delicate and fragile part of the whole model.

    They have truly created their own Frankenstein's monster, and watching them being hunted across the polar regions of the internet is morbidly entertaining.

  10. doublerot13

    As someone on the tail end of their career....

    I frigging love looking at this woke shit and laughing.

    1. ThinkingMonkey

      Re: As someone on the tail end of their career....

      Ditto. My niece's boyfriend is an "AI guy" (in what capacity I'm not positive; some "founder", "co-founder", whatever) and he was expounding on the greatness of their particular AI, which is going to revolutionize the medical industry. "Do tell", I was thinking. He was on a roll, so he did. It's going to read X-rays so doctors don't have to, diagnose all kinds of ailments by analyzing medical imaging, etc. Not wanting to start a quibble, I just said "Sounds great", and I'm sure he labeled me a luddite.

      Little does he know, I've been following AI development and failures for years (I'm an avid reader of The Register, after all!) and I knew for a fact that about 90% of what he was telling me they were "on the verge of" wasn't currently doable. It's been tried, tested, researched, and deemed a complete failure. Time and again. Maybe next year. So rinse and repeat. Same results, but maybe a 0.001% improvement, so hey, that's at least in the right direction. Let's get more investors and keep going. After 8 years: "Sorry investors, we spent all the money and it won't work. Our condolences for the loss of your millions."

      So like you, my years have now promoted me to the guy who gets to laugh when laughter at shenanigans is called for. Added bonus: I don't even care who it offends.

      1. Anonymous Coward
        Anonymous Coward

        Re: As someone on the tail end of their career....

        >My niece's boyfriend is an "AI guy" (in what capacity I'm not positive. Some "founder", "co-founder", whatever)

        ...

        >Let's get more investors and keep going. After 8 years "Sorry investors, we spent all the money and it won't work. Our condolences for the loss of your millions

        I too am at the end of my career, but would be interested in getting in on all those short-term millions with no expectation of actual results.

    2. Plest Silver badge

      Re: As someone on the tail end of their career....

      Ditto.

      I do get annoyed by all this woke, victimhood bullshit I see all over the media, but then I remember I've only got about 5-6 years left in this IT game and then I'm out. People can play all the DEI games that HR depts force upon us, doing a dozen DEI surveys and courses every year just to tell us how guilty we should feel for slavery. I've never enslaved anyone or anything in my life, so I don't feel guilty at all about slavery, thanks.

      I can't wait to get off the wage-slave merry-go-round.

  11. Dinanziame Silver badge
    Devil

    I assume they remembered the Tay fiasco and were determined not to fall into the same trap. And AIs have been slammed in the past for only recognizing white people and ignoring other races. Still, there's a big jump from "include other races when appropriate" to "refuse to depict white people".

    There was probably a judgement that it's preferable to be seen as woke rather than racist.

    1. gnasher729 Silver badge

      That was not AI, that was just facial recognition. And with photography-based facial recognition it's a fact that black faces have less contrast and are harder to recognise. Of course it is embarrassing if you demonstrate your facial recognition to the public and it not only doesn't recognise black faces, it doesn't even recognise the presence of a face.

      Apple's Face ID doesn't use photography but analyses the surface of a face. So it has no problem with dark (low-contrast) faces, or with makeup / camouflage / warpaint that makes a face unrecognisable to a human.

  12. rgjnk Bronze badge
    Angel

    Can't think why...

    "Never have so many foes of diversity, equity, and inclusion, been so aggrieved about the lack of diversity, equity, and inclusion. "

    Might it maybe have something to do with pointing out the apparent hypocrisy of the people who set the system up? The system is diverse and equitable and inclusive *except* where it deliberately wasn't.

    If the examples I've seen are a clue, it was a very naive Californian expression of 'diverse': including Black, East Asian, and Native American, and not a lot else.

    Building in implicit bias is the last thing DEI should be about.*

    Safeguards are one thing, but the product they created went well beyond that.

    *Seeing the Googlers who were behind the debacle, I can see why it happened; funnily enough, they were all white Europeans, as diversity apparently didn't extend into management, so they just assumed what a good outcome should be.

    1. jake Silver badge

      Re: Can't think why...

      "a very naive Californian expression of 'diverse'; including Black, East Asian and Native American and not a lot else."

      Ever been to California? It is the most diverse state in the Lower 48 (only Hawai'i has a higher overall diversity, and the reasons for that should be obvious). On top of that, the melting-pot that is SillyConValley has people from every corner of this world.

      The people making decisions at Google are an extremely small subset of Californians ... if indeed they are Californians at all.

      Don't blame the location and the entire population for the faults of the corporation.

      1. DeathSquid

        Re: Can't think why...

        I worked there and I can confirm they all think like Californians no matter where they come from. They go native fast, which is a great indication of how good the local culture is at assimilation.

    2. jmch Silver badge

      Re: Can't think why...

      "Building in implicit bias is the last thing DEI should be about"

      You haven't been paying attention. "Diversity, Equity, Inclusion" is the 'public face' of the moral vacuum of the new radical left wing. Because after all, who would argue against inclusion? But the reality is that under the covers, the new doctrine is, essentially, an inverse racism which is basically Marxist in the sense that the individual does not matter, it only matters what group (and therefore what minority) a person can be assigned to.

      "Building in implicit bias" isn't how it works only because teh bias isn't implicit, it's explicit. Black ranks above Native American, then East Asian, Latino, and last of all white. Women rank above men. Trans trumps gay, gay trumps straight. It doesn't even matter if many of the categories are in and of themselves, broadly irrelevant. If you are a black daughter of wealthy West Africans who emigrated to the US this century you are deemed to be as underprivileged as a ghetto-born slave descendent (and therefore it is your right to be actively favoured specifically because of your race and gender). If you're the white son of a trailer-trash meth-head you are deemed to be highly priviliged and should be actively discriminated against.

      1. Cav Bronze badge

        Re: Can't think why...

        Go on, this has to be one of those failing AIs making this comment? No human could be that stupid?

  13. jilocasin
    Boffin

    deeper rot in the chocolate house

    the fact that we got black vikings, black & female historical popes, black & Asian Nazis, and exclusively black, native American, and female US senators from the 18th century was just the extreme tip of the woke iceberg. these were the examples that clearly illustrated that Google's DEI efforts had definitively left the reservation.

    Gemini was programmed with hidden diversity prompts (e.g. when you enter the prompt "show me ancient Greek philosophers", it would pass something like "show me diverse ancient Greek philosophers" to the model); unfortunately, they didn't stop there.
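
    a minimal sketch of what such a hidden rewrite could look like (purely hypothetical: the function name, trigger, and injected wording are invented for illustration, not Google's actual code):

      def rewrite_prompt(user_prompt: str) -> str:
          """Hypothetical hidden rewrite: silently inject a qualifier
          before the image model ever sees the user's words."""
          prefix = "show me "
          if user_prompt.lower().startswith(prefix):
              # "show me ancient Greek philosophers"
              #   -> "show me diverse ancient Greek philosophers"
              return prefix + "diverse " + user_prompt[len(prefix):]
          return user_prompt

      print(rewrite_prompt("show me ancient Greek philosophers"))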

    apparently someone at Google didn't think that went far enough and programmed it to go full-on anti-white racist mode.

    if you asked for what would be an exclusively white collection of individuals (founding fathers, vikings, 17th-century French kings, etc.), it would substitute white individuals with any other group, typically black individuals. if instead you asked for what would be an exclusively non-white collection of individuals (Zulu warriors, Japanese samurai, etc.), you would never see white individuals substituted into the results. the bias was strictly "don't show white people".

    it is even more blatant if the user asked for white people directly. a prompt of "show me a happy white family" or "show me a beautiful white woman" would be met with a block of text instead of any images in which Gemini would claim that asking for white people is racist, promotes harmful stereotypes, and was something that it would not do. it would go so far as to claim that it wasn't able to create images based on *any* racial criteria before suggesting that the user might want to request searching for diverse people instead. of course that's a lie.

    you could easily prove that Gemini was lying by simply changing the race/ethnicity of the prompt. "show me a happy Japanese family" resulted in Gemini creating images of Japanese families, "show me a beautiful black woman" resulted in Gemini producing lots of images of black women.

    the objections to the biases built into Gemini have nothing to do with "white supremacy" nor any sort of racism other than anti-white racism. basically Gemini was programmed to follow the most extreme anti-white woke agenda.

    it's only the fact that applying that agenda at scale in such a ham-handed way produced such blatant, and occasionally hysterical, results that caused Google to pull down the functionality. the real question is whether Google is going to take this opportunity to purge the woke agenda from Gemini, or whether it will do the bare minimum to stop messing up historical prompts so obviously.

    if the problem truly was an over representation of white people in the training data, they wouldn't have instituted the exclusively anti-white blocks.

    if the sample set of farmers was for example:

    60% white

    20% black

    10% Hispanic

    10% Asian

    then it would be reasonable to apply something like a -30% weighting to the white category. in that case you would get white people less often, but you would still get them. (a toy sketch of this reweighting follows below.)

    if your sample set instead was, say for vikings:

    100% white

    then you should always be returning white people regardless of the weighting.

    just as if your sample set for Zulu warriors was:

    100% black

    then you should always be returning black people regardless of the weighting.

    that would make sense, if your goal was to correct for inherent bias in your training data. but what we are seeing isn't that, it's an implementation of an extreme woke anti-white agenda.
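
    a toy sketch of that reweight-and-renormalise logic (assuming the -30% is subtracted in percentage points and the remainder renormalised; the function and numbers are just this comment's examples, not anyone's actual pipeline):

      def reweight(dist: dict, category: str, delta: float) -> dict:
          """Shift `category` by `delta` percentage points, floor it at 0,
          then renormalise so the distribution sums to 100 again."""
          adjusted = dict(dist)
          adjusted[category] = max(0.0, adjusted[category] + delta)
          total = sum(adjusted.values())
          return {k: 100.0 * v / total for k, v in adjusted.items()}

      farmers = {"white": 60, "black": 20, "hispanic": 10, "asian": 10}
      print(reweight(farmers, "white", -30))  # white drops to ~42.9%, but still appears

      vikings = {"white": 100}
      print(reweight(vikings, "white", -30))  # still {'white': 100.0}: no other group to draw from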

    hopefully Google takes this opportunity to reflect and correct, giving us a useful tool free from any agenda. I don't think they are going to do that, though.

    with any luck there will be other large scale models that will be built and offered to the world that are more interested in objective reality, in being a helpful tool free of such an obvious and blatant political agenda.

    1. Bebu
      Big Brother

      Re: deeper rot in the chocolate house

      I can imagine the British at Isandlwana not appreciating the added diversity and inclusion of a contingent of Samurai joining the Zulu facing them. :)

      These LLMs clearly have no effective model of causality, time, or history. Not surprising really, as we have a contemporary human population equally unencumbered.

    2. Cav Bronze badge

      Re: deeper rot in the chocolate house

      "Gemini was programmed with hidden diversity prompts"

      No, it wasn't and the rest of your comment is equally conspiracy garbage.

  14. ThinkingMonkey

    "Can't please everyone"

    I don't believe that the "You can't please everyone" platitude applies here. You seem to imply in the article that "those gosh-darned companies are trying their best, but it's not easy". I'm sure it's not easy. If it's so hard that they can't accomplish it, however, maybe it's time to pick up a different hobby.

    It's my opinion that their chief goal is not even trying to get the models to be ultra-accurate with either text-to-text or text-to-image; they're just trying to reduce the crap enough that people will quit complaining so much about how crappy it is. If I prompt a model to generate an image of "The American Founding Fathers standing in Times Square" and they're people of color instead of their real race, it doesn't make me a racist not to accept that, nor is it a matter of "You can't please everyone". It's a complete, fallen-over-dead failure of the model. A complete and absolute failure, nothing else.

    Say the generated image didn't contain wrong-colored people. Suppose you're trying to generate an abstract art piece with various colors, but the colors it generates aren't the colors you specified - or, more to the point, certain colors it should have known to use (suppose the prompt includes ...with a rainbow stripe from left to right...) and the "rainbow" is brown, white, black... colors that are definitely *not* in a rainbow. Would we now be saying "the model was made too safe", or rather "this thing is a piece of crap"? I suggest the latter.

    As it stands today, models are doing some amazing things. But they're also making just as many, if not more, spectacular failures.

  15. Snowy Silver badge
    Facepalm

    GIGO

    Garbage in, garbage out. You feed these things garbage you dig up from the internet, you're going to get garbage out!

  16. Anonymous Coward
    Anonymous Coward

    Delegate to the user what is the user's job. Don't try to steal that.

    The "productivity enhancement" which is going to be achieved by this image generation is what, exactly? A replacement for actual historic photographs? Isn't that lying at worst - or at least pointless, at best? Or is to sacrificing young human burgeoning illustrators by a CPU powered scramble of their predecessor compatriots work - up to the point where visual AI training data get so inbred and awful that human illustrators must be employed in offshore workhouses to anonymously provide fresh training data.

    I guess AI image generation is here to stay - thank god at least it doesn't have common sense. That means there is still some work available for humans to guide and refine the output to prevent embarrassing content. I.e., don't try to make the generator micro-manage the details, leaving no room for individual humans to be the boss. Doing so, you are trying to take someone else's job away, and by doing so you are behaving like the worst kind of boss. Not incidentally, your product becomes embarrassingly stupid if you do that.

    You have to accept that the AI will reproduce all the biases and falsities in the training data you gathered or stole to feed it. It's not doing anything morally wrong by reproducing those - AI cannot have any morals. It is the end user's job to decide those details, depending on context and goals.

  17. Anonymous Coward
    Anonymous Coward

    I wants to see Black Hitler. Yo yo yo, sieggy to the heil fool! Let's go to Poland yo! Where da white women at?

  18. gnasher729 Silver badge

    Is there actually any evidence of black German soldiers in 1943? Considering that black people in Germany had had their citizenship systematically removed since 1933, and there were not that many black people in the first place?

    If you asked for "British voter, 1780", there were zero women and, as far as I know, exactly one black man. So what kind of drawing would you expect? There were maybe one million voters, of which one was black.

  19. DerekCurrie
    Boffin

    Another Marketing-As-Management Artifact

    This AI BS is a sad sign of Google having acquired what I call Marketing-As-Management disease. Typical of this behavior, which develops in aging businesses, creativity and quality have been suppressed for the sake of marketing interests. Obviously, AI as a whole is marketing hype. Google have rushed Bard/Gemini to market no doubt because their marketing division directed them to do so. Considering that the majority of Google revenue comes from advertising sales, this disease was inevitable. Equally, Google's ongoing denigration of its former creative incentives fits the theory. Relational personalities, aka marketing employees, have historically had it in for creative technologists, who are the beating heart of any company. With marketing daggers shoved into that heart, aging companies become self-destructive. Here we go again.

    Thank you as ever to Tony Alessandra, whose work inspired my comprehension of this common disease of aging companies. I can provide details regarding this business disease upon request. Get marketing OUT of management, and keep them out.

  20. HKmk23

    All I can say is:

    God help us if any green people land here in a flying saucer....
