Startups competing with OpenAI's GPT-3 all need to solve the same problems

Text-generating language models are difficult to control. These systems have no sense of morality: they can spew hate speech and misinformation. Despite this, numerous companies believe this kind of software is good enough to sell. OpenAI launched its powerful GPT-3 to the masses in 2020; it also has an exclusive licensing …

  1. pip25
    Meh

    "We really shouldn't have a world where every single company is training their own GPT-3"

    Yet, on a smaller scale, this is pretty much what is happening, right? Thank goodness we have OpenAI; who knows how terrible things would be if they were called ClosedAI or something. </sarcasm>

    1. Walt Dismal

      Re: "We really shouldn't have a world where every single company is training their own GPT-3"

      There is a fundamental flaw in GPT-3 (one that Google has always had, too).

      It is that languages provide base vocabularies, but AI training does not differentiate these by culture or sub-culture. In other words, within English, for example, there are many users in distinct cultures, each having their own specialized meanings and even separate words. The AI language engine must have a means of maintaining separate dictionaries but joining them when suitable. For example, there should be a general-population dictionary, but then also a doctor's medical dictionary, and a lawyer's, and a chef's, and a gangster's, etc. GPT-3 can't do that. Therefore there IS a need for someone(s) to step up and do augmented trained vocabularies. However, there is no need for training on words like 'shiznit', 'potrzebie', or 'sploodge', though they might be used in banking and finance, in my nightmares.
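
      For what it's worth, that "separate dictionaries, joined when suitable" idea can be sketched as a fallback lookup. This is a toy illustration with invented names and entries, not anything GPT-3 actually does:

      ```python
      # Toy sketch of culture-layered vocabularies: look a term up in the most
      # specific dictionary first, then fall back to the general-population one.
      # Real models encode meaning in learned vectors, not dictionaries.

      GENERAL = {"bar": "a place that serves drinks"}

      DOMAIN = {
          "lawyer": {"bar": "the legal profession, as in passing the bar"},
          "doctor": {"shunt": "a tube diverting bodily fluid"},
      }

      def lookup(term, domain=None):
          """Return the most specific sense: the domain dictionary wins,
          joined with the general dictionary as the fallback."""
          if domain in DOMAIN and term in DOMAIN[domain]:
              return DOMAIN[domain][term]
          return GENERAL.get(term, "<unknown>")

      print(lookup("bar"))            # a place that serves drinks
      print(lookup("bar", "lawyer"))  # the legal profession, as in passing the bar
      ```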

      1. Warm Braw

        Re: "We really shouldn't have a world where every single company is training their own GPT-3"

        That, I think, is part of a greater problem: in order to "train" these models, you need a huge amount of raw material. There likely isn't enough if you subdivide it into domain-specific chunks. And equally, owing to the volume, it's infeasible to filter out on the input side the "unsuitable" material that results in the need for low-pass (in the quality sense) profanity filters on the output side.

        My uninformed guess is that the systems might need to know a little more about language so they can ingest their training data more efficiently, as neither approach on its own seems to be free of fundamental limitations.
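
        In its crudest form, the output-side filter described above is just a blocklist applied after generation. A toy sketch, borrowing a couple of words from the comment further up (everything else invented):

        ```python
        # Crude output-side "profanity filter": since the training corpus is too
        # big to clean, suppress unwanted words after generation instead.
        # Blocklists are famously easy to evade; this only makes the
        # input-side/output-side asymmetry concrete.

        BLOCKLIST = {"shiznit", "sploodge"}

        def filter_output(text):
            """Redact blocklisted words in generated text, post hoc."""
            return " ".join("[redacted]" if word.lower() in BLOCKLIST else word
                            for word in text.split())

        print(filter_output("pure shiznit from the model"))
        # pure [redacted] from the model
        ```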

    2. doublelayer Silver badge

      Re: "We really shouldn't have a world where every single company is training their own GPT-3"

      On a different level, it's just a hypocritical statement. They say that people shouldn't develop their own model because "it would be massively environmentally costly, compute costly, and we should be trying to share resources as much as possible", but they can develop their own model despite those costs. Basically, what their argument boils down to is "We can build our model, and you should pay us for ours instead of doing one of your own because you will be harming the environment".

      This shallow attempt to put down competition might bother me, except that the product's so obviously useless that I really don't want people wasting resources making more of it.

  2. Filippo Silver badge

    "lack of common sense and inability to be accurate "

    I strongly doubt that those problems can be solved, within the current paradigms.

    A language model just knows what bits of text tend to go together. The reason it can answer correctly "how many teeth humans have" is just that those words tend to go together with "32" in the training set.

    The same is not true for "how many teeth math teachers have". The "math teacher" bit there is not connected to "teeth" at all. It would have to go through "humans" in order to arrive at "32", but "humans" doesn't appear in the question at all, and it's not a word that's so strongly connected to either "math teacher" or "teeth" that the model will give it a high weight all by itself.

    The reason "math teacher" is not strongly connected to "human" is that that connection, for people, comes from non-textual experience in a very obvious fashion, so we don't write it very much at all. But a language model only has text, so that connection is very weak.

    So: I don't think those problems can be solved within the paradigm of language models. You'd need something that's also trained on real-world input somehow.
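
    To make that concrete, here's a toy co-occurrence counter. Real language models use learned vector representations rather than raw counts, and this three-sentence corpus is invented, but the gap is the same: nothing ties "teacher" to "32" unless the bridge word appears in the text.

    ```python
    # Toy model of "bits of text that go together": count word pairs per sentence.
    # "teeth" and "32" co-occur, so they're linked; "teacher" and "32" never do.
    from collections import Counter
    from itertools import combinations

    corpus = [
        "humans have 32 teeth",
        "adult humans have 32 teeth",
        "the math teacher wrote on the board",
    ]

    cooccur = Counter()
    for sentence in corpus:
        for a, b in combinations(sentence.split(), 2):
            cooccur[frozenset((a, b))] += 1

    print(cooccur[frozenset(("teeth", "32"))])    # 2 -> linked in the training text
    print(cooccur[frozenset(("teacher", "32"))])  # 0 -> no path without "humans"
    ```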

    1. Pascal Monett Silver badge

      Re: "lack of common sense and inability to be accurate "

      Indeed, and worse, any example purporting to prove you wrong will just be a tweak of a statistical nature.

      There is no AI.

      It's statistics all the way down.

      1. Filippo Silver badge

        Re: "lack of common sense and inability to be accurate "

        Yes, exactly.

        Ultimately, we don't even have a decent definition of what "true intelligence" is, never mind how it works. Who knows - it might turn out to be fundamentally statistics too! If it isn't, then the current wave of AI research is fundamentally wrong and won't ever work really well.

        But even if it is, it's still mostly not statistics on text. Most of our intellect works properly before we learn to read, and people who can't read are usually not mentally deficient either.

        So, I feel fairly confident in saying that we are never going to get "common sense" or reliable correctness out of text models alone, no matter how many teraflops we throw at them, or how many clever tweaks we apply.

      2. Not Irrelevant

        Re: "lack of common sense and inability to be accurate "

        How do you know that that's not how you think? Maybe you're statistics all the way down.

        Realistically, we're simulating intelligence the best way we know how, but whether it's related to the way WE think is rather tangential: a reasonably accurate simulation of something is good enough to be perceived as that thing, and once you get to that point it doesn't matter much. We haven't got there yet, but that doesn't make the approach bad.

        1. doublelayer Silver badge

          Re: "lack of common sense and inability to be accurate "

          I mostly agree with you. However, I take exception to this part: "Realistically, we're simulating intelligence the best way we know how".

          No, we're not. Collecting terabytes of text someone else wrote and writing a search algorithm isn't simulating intelligence. We have that already. It's called the "I'm feeling lucky" button on Google, and how many people use that? It's obvious what a text prediction model isn't doing, and if you want to simulate intelligence, you'll need to do at least some of those things. A child can make the connection that teachers are usually human, and that if they teach mathematics they're still teachers, and can therefore understand the question even without knowing the answer. This model can't do that. There's a way to check whether it understands the question: ask a five-year-old how many teeth a math teacher has, and they will likely say "I don't know" because they're unaware of the number 32. Or maybe they'll guess, but they will use intonation to indicate that they're doing that. GPT won't do that.

          1. Charles 9

            Re: "lack of common sense and inability to be accurate "

            Which seems to indicate a lack of context, is all. Just more data to feed into the system to create the association that teacher = human.

            Try something no AI could figure out that a human can do easily, even as a child in the sticks.

      3. Michael Wojcik Silver badge

        Re: "lack of common sense and inability to be accurate "

        Only for an extremely reductive and unhelpful definition of "statistics".

        Now show us evidence the human CNS is doing something fundamentally different.

    2. Tom 7

      Re: "lack of common sense and inability to be accurate "

      Language models will only work well when the contextual model is there to back them up. And most of the other human intelligence quirks too. And probably sex hormones too, judging from watching teenagers!

    3. Greybearded old scrote Silver badge
      Joke

      Re: "lack of common sense and inability to be accurate "

      'The reason "math teacher" is not strongly connected to "human"'

      Because it's met some?

    4. FeepingCreature Bronze badge

      Re: "lack of common sense and inability to be accurate "

      There's a "feeling of making a right argument" and a "feeling of making something up" in humans. So we can exert pressure via this mechanism and notice that we're talking nonsense. But I don't know that the system underneath that, the system that generates the broad strokes of "Well, a math teacher is a ... " " ... human, so they would ... " "... have 32 teeth" in humans is fundamentally different from a text predictor.

      I've noticed myself saying things that are utter nonsense, just because they're words that were historically associated. *Usually* I catch myself before actually vocalizing them, or at least notice in hindsight. But GPT has no module that could notice that. That said, there are systems like that: generative adversarial networks. It seems possible that a transformer set up like a GAN, with a babbler and a nonsense-noticer, could approach the human tier or even surpass it.

      (Why surpass it? For "just the part of my brain that makes up things that I could possibly say", GPT-3 is extremely well informed. It would not surprise me if it already has human-level "overhang".)
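
      A sketch of that babbler-plus-nonsense-noticer control flow. A real GAN trains both parts jointly against each other, whereas this toy only generates and rejects, and every name in it is invented:

      ```python
      # Two components: a "babbler" that strings associated words together, and
      # a "nonsense-noticer" that scores candidates and vetoes the worst. A real
      # GAN would train the babbler on the noticer's feedback; here we only filter.
      import random

      random.seed(0)
      WORDS = ["math", "teacher", "humans", "have", "32", "teeth", "purple", "sideways"]

      def babbler():
          """Propose a candidate utterance, plausible-sounding or not."""
          return " ".join(random.choices(WORDS, k=5))

      def nonsense_noticer(sentence):
          """Toy critic: approve only candidates containing a known-good pattern."""
          return 1.0 if "have 32 teeth" in sentence else 0.0

      candidates = [babbler() for _ in range(5000)]
      kept = [s for s in candidates if nonsense_noticer(s) > 0.5]
      print(len(kept), "of", len(candidates), "survived the noticer")
      print(kept[:2])
      ```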

  3. nintendoeats

    Three words to ruin your day:

    Machine Learning Compiler

  4. Anonymous Coward
    Anonymous Coward

    We're safe for decades yet.

    The total inability of Google to understand that a search for "ticking clocks" is not somehow a mistaken search for "non-ticking" clocks rather shines a light on the holes in "AI".

    1. Ace2 Silver badge

      Re: We're safe for decades yet.

      Remember when you used to be able to search for something specific? Now most of what they return is “we think you were looking for” crap.

  5. anthonyhegedus Silver badge

    None of it is intelligence. Artificial, yes, but it's a simulation of intelligence. It's intelligence in the same way that a waxwork is a person: it looks like a person, and can be used as a person in lots of simple use-cases, but it falls down (literally) when pressed.

    1. Charles 9

      Then what IS intelligence, if it's not a case of knowing it when one sees it?

  6. Anonymous Coward
    Trollface

    Magic Sales Bot

    "But Doyle stopped using it, he told us earlier this month, due to the model's tendency to just make up information"

    But isn't that what sales are all about?

    1. innominatus

      Re: Magic Sales Bot

      Exactly! But then again perhaps replacing (human) sales droids who lie through their teeth to get their commission (and leave devs to build their fantasy) with straightforward robo-BS might not be so bad after all?

    2. Michael Wojcik Silver badge

      Re: Magic Sales Bot

      On a more serious note, machine prose generation is already widely used commercially, for example in sports and financial reporting, and in the niche non-fiction market. Philip Parker's system is the best-known example of the last; his Icon Group International claims to have published over a million titles, either in electronic form or print-on-demand.

      The first documented commercially-published machine-generated novel seems to have been True Love from SPb, in 2008. It wasn't very novel, actually, being a style-transformed pastiche of Anna Karenina [1], but it was produced by software. There appear to be a number of commercial projects in this area, so it's likely there are quite a few more commercially-published machine-generated novels by now.

      And of course there is extensive academic and commercial research in the field of machine-generated prose, and related fields such as machine-assisted prose generation [2]. Doyle's efforts are not representative of the state of the art. A few years back CACM had a cover story on computational journalism, for example.

      (Note, too, that in some other fields computer-generated "creative" work has been around longer and had greater success. Computer-generated classical music was winning praise from critics in the 1990s.)

      [1] If you're going to steal, steal from the best.

      [2] I've done some work in computational rhetoric, for example. Computational narratology, computational adaptations of formalist and structuralist literary analysis, computational folklore, computational psychology ... there are many cognate fields of research.

  7. 42656e4d203239 Silver badge

    where is this Internet of which they speak?

    It seems to me that if you scrape Reddit & Twatter for your training text the "AI" will be generating an awful lot of 'interesting' language.

    Project Gutenberg might be more useful, but the generated patterns/word sequences would resemble turn of last century (1900s) language....

    1. SCP

      Re: where is this Internet of which they speak?

      "the generated patterns/word sequences would resemble turn of last century (1900s) language"

      Thou say'st that as tho' t'would be an unsavoury thing, innit.

  8. Fruit and Nutcase Silver badge
    Joke

    "four watermelons"

    AI is when the system does not generate a false positive but retorts: "Four watermelons? Are you absolutely sure? Even two would be a handful."

  9. Il'Geller

    There is no neutral AI; any AI has its own unique bias. This phenomenon should be considered and used. Then all problems will be solved.
