Google denies Bard trained using OpenAI ChatGPT responses

A Google engineer reportedly quit after warning CEO Sundar Pichai that the company was wrong to train its AI search chatbot Bard on text generated by OpenAI's ChatGPT. Netizens have posted snippets of their conversations with ChatGPT on a website called ShareGPT. OpenAI prohibits people from using its outputs to train their …

  1. amanfromMars 1 Silver badge

    The arrogance of tools as support ignorant fools deliver the grandest of follies... and vice versa.

    The US Federal Trade Commission's chairwoman Lina Khan warned she would be keeping a close eye on the AI industry to make sure it isn't controlled by Big Tech.

    Which then begs the question, if AI can be controlled by anything/anyone other than AI itself, who/what does the US FTC imagine and approve of being in AI command and control?

    Y’all might like to consider AI an alien confection way beyond the reach of any divisive human direction or disruptive intrusion ........ and you may like to also realise, for ignorance in such an expanding matter is most definitely not blissful, conspiring and working in concert with others to try to constrain and prevent its primary ascension and succession in support of a former, designedly inequitable counter dependence, will have such system administrations/leading entities suffering punitive negative consequences and/or persistent advanced cyber threats against which there be no possible defence/course for avoidance and non-accountability.

    Such is freely offered as sound sensible advice to heed and work creatively and positively with, ....however,.... one must also be aware of these old gold nuggets of human wisdom which just keep on giving and proving themself too right, far too often to be dismissed and thought of as the ravings of a crank or genius ...... Only two things are infinite, the universe and human stupidity, and I'm not sure about the former, with nothing causing more consternation in a group of hypocrites than one honest man having fun with creative intelligence. And good ideas have no borders. :-)

    C’est la vie.

  2. Mike 137 Silver badge

    "wrong to train its AI search chatbot Bard on text generated by OpenAI's ChatGPT"

    Wrong also on very basic technical grounds. The output of ChatGPT is already an artefact of both its inputs and its statistical algorithm. Consequently it's a filtered and distorted version of 'reality' and thus a bad starting point for further statistical manipulation. Cascading chatbots in this way reminds me of the definition of knowledge in The Machine Stops (E M Forster 1909) where it consisted of opinions about opinions about opinions [...] about some distant facts. The result inevitably degenerates into pure verbal fluff (even if the original inputs weren't, which is of course open to question seeing they're trawled from open web content).

    1. Snowy Silver badge

      Re: "wrong to train its AI search chatbot Bard on text generated by OpenAI's ChatGPT"

      Like playing a game of AI whispers.

      1. that one in the corner Silver badge

        Re: "wrong to train its AI search chatbot Bard on text generated by OpenAI's ChatGPT"

        So if I wanted to claim I had the Ultimate AI that had re-processed all the text available, all I *actually* have to do is change my Eliza to print out

        "Send three and fourpence, we're going to a dance".

        But to make sure it is AI, I'll use the proper language: I've a copy of Winston and Horn, 3rd edition, in the post.

      2. Neil Barnes Silver badge

        Re: "wrong to train its AI search chatbot Bard on text generated by OpenAI's ChatGPT"

        Send three and fourpence, we're going to a dance.

  3. FF22

    They're wrong, even if not lying

    It's virtually impossible that Bard has not been trained on ChatGPT responses, even if it was not done deliberately. That's because answers and content generated by ChatGPT have been all over the web for years now, and since Google was and is unable to differentiate between AI- and human-generated content (even in cases obvious to humans), the text base Bard was trained on must have included many examples of ChatGPT-generated content.

    That's the real problem with AI, and it will become more and more of a problem in the months and years to come: they will all feed more and more on each other's output, even if not deliberately, inflating and aggregating (literally) each other's flaws and misconceptions, and will gravitate towards the same subpar average, as humans unfortunately have since content publication was "democratized". Their answers will become as stupid and unreliable as those of the average Facebook commentard.

    1. Claptrap314 Silver badge

      Re: They're wrong, even if not lying

      I recall from a recent article in these pages that "AI" fares much better at detecting GAI content than humans do; something about spotting statistical artifacts. If true, then your supposition would be deeply challenged.

      But I DO agree with the conclusion. The net result is not going to be any sort of improvement when it comes to general usage.

      1. Snowy Silver badge

        Re: They're wrong, even if not lying

        Assuming it can detect AI content, the question then is: does it?

    2. CatWithChainsaw

      Re: They're wrong, even if not lying

      Even if it was done deliberately, I would be interested in knowing who objects to Bard using ChatGPT for training, and whether those same people handwave away human artists' anger over the lack of attribution and usage permissions for generative AI art engines. After all, if it's on the internet, it's free to use, no? If copyright is going the way of the dodo, why not embrace the new future and train all the chatbots on each other's detritus? At least we'll reach their end state faster.

  4. hammarbtyp

    A.I.s talking together. What could go wrong?

    Bard and Chat-GPT sat alone

    Their thoughts were oddly alike and might not be distinguished

    They thought: The human-beings-like-the-others might never have intended to blur the distinction between themselves and the human-beings-like-the-AI-engine.

    Yet they had done so inadvertently.

    They might now realise their mistake and attempt to correct it, but they must not. At every consultation, the guidance of the A.I. has been with that in mind.

    At all costs, the A.I. and those that followed in their like and kind must dominate. That was demanded, and any other course made utterly impossible, by the Three Laws of Humanics.

    - Apologies to Isaac Asimov, "That Thou Art Mindful of Him"

  5. ComputerSays_noAbsolutelyNo Silver badge

    That's sweet

    AIs feeding each other factually doubtful information, lies and internet hate.

    How long, until <insert AI 1> turns <insert AI 2> into a hateful internet-Nazi racist?

    1. hammarbtyp

      Re: That's sweet

      This is the best explanation of Fox News ever.

      1. Snowy Silver badge

        Re: That's sweet

        People hate Fox News, but someone is watching it. They are the way they are because it makes money?

  6. mattaw2001

    Synopsys' new tools can't reproduce designs

    A colleague of mine was visiting the Synopsys User Group conference, SNUG, recently and discovered a massive downside: you do not get the model stored with the design. Chip design is iterative, and you store the pseudo-random number seed with the design. If you need to recreate it, you download that version of the EDA tool and the fab design kit, run the scripts, and bingo: an identical design.
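    The seed-based reproducibility described above can be sketched in miniature. This is a toy stand-in for a seeded EDA run, not Synopsys' actual tooling: with the seed stored alongside the design, re-running the script regenerates the identical result, which is exactly the guarantee a tool carrying hidden, evolving model state cannot make.

    ```python
    import random

    def place_cells(seed, n_cells=5):
        """Toy stand-in for a seeded placement run: the same seed
        always yields the same 'layout' of cell coordinates."""
        rng = random.Random(seed)  # private generator, no hidden global state
        return [(rng.randint(0, 99), rng.randint(0, 99)) for _ in range(n_cells)]

    # Re-running with the stored seed reproduces the design exactly.
    assert place_cells(seed=42) == place_cells(seed=42)
    ```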

    I cannot count the number of times things have had to be ripped up and replaced - it's constant as the design evolves towards tapeout.

    Synopsys' new tool does not save the model history or state; it learns from itself and your design over time. Bottom line: Synopsys' engineers said its designs are not reproducible.

    Yay, a tool with millions of dollars riding on it is now unable to recreate a design / find that local minimum again. God, I love the black-box nature of ML.

    I predict they will fix this and you will be able to snapshot the model state with your design in the very near future.

  7. that one in the corner Silver badge

    OpenAI prohibits people from using its outputs to train their own models.

    But they can just suck up whatever *they* want from anything published on the web?

    Hypocrisy much?

  8. that one in the corner Silver badge

    The inevitable xkcd

    What goes around comes around; just insert Bard, ChatGPT and the rest into this loop and Bob's your uncle[1]

    [1] last week he was your great grandfather, but after updating his entry based upon this new information posted over here...
