Google talks up its 540-billion-parameter text-generating AI system

AI models are proving more and more powerful as they increase in size, and performance improvements from scale have not yet plateaued, according to researchers at Google. But while neural networks have grown, are they really any smarter? Companies are building larger and larger language-processing systems, though they still …

  1. b0llchit Silver badge
    Coat

    Slightly obvious flaw

    I found a significant error in the model:

    In fact, 50 percent of its training data come from conversations on social media websites.

    1. Tom 7 Silver badge

      Re: Slightly obvious flaw

      So basically right biased AI bots are training the next AI bot. We've already got Nadine Dorries thanks!

    2. Charlie Clark Silver badge

      Re: Slightly obvious flaw

      That's great if you want a bot to join in the conversation

      It's not so good for the next Walter Cronkite, Brian Redhead, Bertrand Russell, Max Headroom, etc.

    3. gerryg

      Re: Slightly obvious flaw

      Thirty years ago I was involved in an early natural language processing research project.

      A children's book was parsed both by the algorithm and by a bunch of first-year undergraduates.

      On comparing the results there was much dismay that the algorithm seemed to suggest the book was all about mud (IIRC) and the undergrads thought it was all about dinosaurs.

      Subsequent analysis suggested the dinosaurs caught the eye of the undergrads but that the majority topic of the book was about mud.

      In another project an advertising agency was working on getting the unrevealed answers from huge corpus(es?) rather than the answers consumers thought researchers wanted to hear.

      I mention all this because there is a possibility that the AI project is producing high-quality but unacceptable results.

      1. This post has been deleted by its author

  2. anthonyhegedus Silver badge

    Able to explain jokes?

    Really? I mean are we at the stage where a machine can explain jokes? The mind shudders!

    1. Primus Secundus Tertius

      Re: Able to explain jokes?

      Can it explain itself, then?

    2. Peter D

      Re: Able to explain jokes?

      A machine that can write jokes would be of great help to comediennes.

      1. Mike 137 Silver badge

        Re: Able to explain jokes?

        "A machine that can write jokes would be of great help to comediennes"

        To comedians of both genders, probably.

        Way back in the early '80s I knew a guy who wrote jokes for radio comedians. We built a database of intros, transitions and punch lines that allowed him to generate humour more quickly.
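        [A database of joke components like the one described above could be sketched, purely illustratively, as below. All the phrase lists and the `generate_joke` helper are invented for this example, not taken from the original system.]

        ```python
        import random

        # Hypothetical pools of joke components, standing in for the
        # database of intros, transitions and punch lines described above.
        INTROS = [
            "A man walks into a bar",
            "My doctor told me",
            "I asked my boss for a raise",
        ]
        TRANSITIONS = [
            "and you won't believe what happened next:",
            "so naturally,",
            "but here's the thing:",
        ]
        PUNCHLINES = [
            "the bar was set too low.",
            "now I work from home. Permanently.",
            "turns out it was a metaphor.",
        ]

        def generate_joke(rng=random):
            """Assemble one intro, one transition and one punch line at random."""
            return " ".join([
                rng.choice(INTROS),
                rng.choice(TRANSITIONS),
                rng.choice(PUNCHLINES),
            ])

        print(generate_joke())
        ```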

        1. Peter D

          Re: Able to explain jokes?

          "To comedians of both genders, probably."

          Indeed but such a high proportion of comediennes are unfunny I feel more good could be done targeting that group first before starting on John Bishop et al.

    3. Doctor Syntax Silver badge

      Re: Able to explain jokes?

      Does it understand when jokes are explained to it?

    4. b0llchit Silver badge
      Coat

      Re: Able to explain jokes?

      No, no, no, you misunderstand.

      It is able to explain while it is the center of all jokes. This is a model with programmed self-awareness that can react consistently when it becomes the center of attention. It can explain how it arrives at an answer to your question, where the question is the significant form of attention. That, in turn, amounts to a real joke among the informed; thus the system explains the joke that it itself is, by answering with its learned social media replies.

      See, completely logical. Ain't that a joke?

    5. FeepingCreature Bronze badge

      Re: Able to explain jokes?

      That is actually huge, because understanding a setup-punchline joke requires understanding that an agent can have mistaken beliefs.

  3. Doctor Syntax Silver badge

    I can't help thinking that to teach an infant human language doesn't require 540 billion "parameters" (does a "parameter" maybe correspond to a neurone?). But the results "still suffer from the same weaknesses: they all generate toxic, biased, and inaccurate text."

    1. Charlie Clark Silver badge
      Coat

      I also think the term parameter is misleading here. It's obviously ludicrous in the normal sense of a variable that, when changed, changes some behaviour (gear ratio on a car, for example): there's no way operators can deal with that number of parameters.

      You forgot to add that apart from "toxic text", babies are able to produce other toxic items…

      Mine's the one with the weird stain on the front and baby wipes in the pocket…

    2. LionelB Bronze badge
      Childcatcher

      I can't help thinking that to teach an infant human language doesn't require 540 billion "parameters" (does a "parameter" maybe correspond to a neurone....

      More likely to a synapse - they mediate the flow of information between neurons. The human brain has ≈1000 trillion synapses.

      But the results "still suffer from the same weaknesses: they all generate toxic, biased, and inaccurate text."

      Weaknesses? Sounds spot-on human to me.

  4. spold Silver badge

    A script for Shakespeare

    ....an infinite number of monkeys (or AIs?). OK, the monkeys won.

  5. Pascal Monett Silver badge

    So the number of billions of parameters is the new megapixel count for AI ?

    We recently read about a 176-billion-parameter pseudo-AI, now we have a 540-billion-parameter pseudo-AI. I'm guessing it's supposed to be better. It's also likely to need all of SAP's engineers to set those parameters.

    So, who's going to invent the trillion-parameter AI ? And which country is going to devote all of its population to configuring it ?

    Shouldn't be long now . . .

    1. LionelB Bronze badge

      Re: So the number of billions of parameters is the new megapixel count for AI ?

      Errm, isn't it the training which is supposed to "set" the parameters?

      It'll still take a fair number of (human*) bods to curate the training data sets, mind.

      *I know, let's get another AI to curate the training data. What could possibly go wrong?

  6. Chris Gray 1
    FAIL

    GIGO

    What these "AI" things can do is what we should expect based on what they are. Feed them a mountain of stuff produced by humans and they will eventually be able to respond and act much like humans. Feed them biased stuff, and of course they will be biased.

    But, is that what we want? Wouldn't we rather have machine intelligence that is factually and scientifically correct? You get that by specific algorithms and carefully selected reference data. Lots of work, but in theory the world only has to do it once.

    GIGO - Garbage In, Garbage Out

    1. LionelB Bronze badge

      Re: GIGO

      "But, is that what we want? Wouldn't we rather have machine intelligence that is factually and scientifically correct? You get that by specific algorithms and carefully selected reference data."

      We used to call those "expert systems". Sometimes useful, but "intelligent" they were/are not.

      "Lots of work, but in theory the world only has to do it once."

      Seriously?!

      1. Chris Gray 1

        Re: GIGO

        And you think the current crop of AI stuff is "intelligent"? Seriously?!

        My point is that I question whether we want to emulate how humans act "intelligently", or try for something that to a greater extent we *know* makes correct choices. Humans often make correct choices, but they often make incorrect choices as well. Can computer systems with more resources do better? Probably not much if we design them to emulate how we believe humans make choices. I want them to do better.

        1. LionelB Bronze badge

          Re: GIGO

          "And you think the current crop of AI stuff is "intelligent"? Seriously?!"

          Um, no, how did you get that impression? Clearly not from what I wrote.

          "My point is..."

          I didn't and don't dispute that; my point was that that approach has been around for many decades (under the banner of "expert systems" or "knowledge-based systems"), and even described them as (potentially) useful; but also that no-one would be inclined to describe them as particularly "intelligent" in the sense that people generally associate with the term "artificial intelligence".

          I might also have mentioned that such systems tend to scale badly, succumbing to combinatorial explosions; the only way around that is reliance on non-exact heuristics (arguably like humans!), which potentially undermines their ability to make "correct choices".

          A possibly more pertinent point is that we live in a messy, noisy, dynamic world; what even constitutes a "correct choice" in the real world (outside of very constrained problem spaces) may be unclear and/or subjective.

          1. LionelB Bronze badge

            Re: GIGO

            To expand on this a little - a question that doesn't seem to be discussed much is: What do we want "artificial intelligence" to mean or encompass?

            Many commentators appear to assume (often implicitly) that AI must mean human-like intelligence. Chris Gray 1's post that I originally replied to clearly (and reasonably) suggests that AI may potentially include intelligence that is in some respect(s) or context(s) superior to human intelligence. But what about other forms of animal intelligence (e.g., the ability of some insects to perform complex navigation and manoeuvring tasks, far beyond anything we have yet achieved with autonomous vehicles)? And what about "alien" forms of intelligence, which don't seem to resemble anything seen in nature?

            My feeling is that future AI might end up closer to that last one, on the grounds that we really don't know very much at all about the organisational principles (as opposed to mechanisms) underlying human and other animal intelligences. Not surprising, perhaps, as those "principles" are the product of aeons of opaque evolutionary hacks. Future AI, it seems, is likely to end up as a product of an amalgam of human design hacks (with bits and pieces borrowed from nature). The results may not look much like human, or other animal intelligence at all.

    2. hoola Silver badge

      Re: GIGO

      There is also the issue of who decides what stuff to feed them and the source.

      If a large quantity is from social media, then by its very nature that probably skews the balance of the material…

      And that takes us back to the previous statement:

      GIGO - Garbage In, Garbage Out

  7. This post has been deleted by its author

    1. Anonymous Coward
      Anonymous Coward

      Biased

      Variables
