Startups competing with OpenAI's GPT-3 all need to solve the same problems

FeepingCreature Bronze badge

Re: "lack of common sense and inability to be accurate "

There's a "feeling of making a right argument" and a "feeling of making something up" in humans. So we can exert pressure via this mechanism and notice that we're talking nonsense. But I don't know that the system underneath that, the system that generates the broad strokes of "Well, a math teacher is a ... " " ... human, so they would ... " "... have 32 teeth" in humans is fundamentally different from a text predictor.

I've noticed myself saying things that are utter nonsense, just because they're words that were historically associated. *Usually* I catch myself before actually vocalizing them, or at least notice in hindsight. But GPT has no module that could notice that. That said, systems like that do exist: generative adversarial networks (GANs). It seems possible that a transformer set up like a GAN, with a babbler and a nonsense-noticer, could approach the human tier or even surpass it.

(Why surpass it? For "just the part of my brain that makes up things that I could possibly say", GPT-3 is extremely well informed. It would not surprise me if it already has human-level "overhang".)
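To make that concrete - not claiming this is *the* way to build it - here's a minimal toy sketch of the babbler/nonsense-noticer split in PyTorch. All the names (Babbler, NonsenseNoticer) and sizes are made up for illustration, and it dodges the hard part of adversarial training on discrete tokens by just using the noticer as a rejection filter over sampled babble, i.e. the "catch myself before vocalizing" step:

import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, SEQ_LEN, DIM = 50, 12, 32  # toy sizes, picked arbitrarily

class Babbler(nn.Module):
    # The "text predictor": an autoregressive toy language model.
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.head = nn.Linear(DIM, VOCAB)

    def sample(self, batch):
        # Babble SEQ_LEN tokens, one at a time, starting from token 0.
        tok = torch.zeros(batch, 1, dtype=torch.long)
        h, out = None, []
        for _ in range(SEQ_LEN):
            y, h = self.rnn(self.emb(tok), h)
            probs = torch.softmax(self.head(y[:, -1]), dim=-1)
            tok = torch.multinomial(probs, 1)  # sample the next token
            out.append(tok)
        return torch.cat(out, dim=1)  # (batch, SEQ_LEN)

class NonsenseNoticer(nn.Module):
    # The "discriminator": scores how plausible a sequence looks.
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.head = nn.Linear(DIM, 1)

    def forward(self, seq):
        y, _ = self.rnn(self.emb(seq))
        return torch.sigmoid(self.head(y[:, -1])).squeeze(-1)

babbler, noticer = Babbler(), NonsenseNoticer()

# Both nets are untrained here, so the scores are arbitrary; in a real
# setup the noticer would be trained to tell human text from babble.
# "Catch it before vocalizing": sample several candidate utterances,
# keep only the one the noticer finds most plausible.
candidates = babbler.sample(8)
scores = noticer(candidates)
print("best-scoring babble:", candidates[scores.argmax()].tolist())

A proper GAN would feed the noticer's verdict back into the babbler's weights; since sampling discrete tokens isn't differentiable, that needs a policy-gradient trick or a Gumbel-softmax relaxation, which is exactly why GANs for text are harder than GANs for images.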
