Why OpenAI recruited human contractors to improve GPT-3

It turns out the machines still need us after all, at least for now. And while the largest systems get the most attention, truly useful, fair AI is best served small and with plenty of human input. The quality of text created by neural networks has improved over time as models scale with ever-increasing training …

  1. Anonymous Coward
    Thumb Up

    A shot in the dark

    On the surface this appears to be a good start. It is akin to creating a knowledge base of how humans respond to queries.

    But in the context of OpenAI's InstructGPT, it seems to be a one shot effort.

    While successful, I feel that continual or recurring human input will be needed to prevent degradation (or stasis) over time.

    Justice Stewart once said of pornography "I know it when I see it" and the problem AI has is that it doesn't know it without a human to see it.

    1. Alan Brown Silver badge

      Re: A shot in the dark

      A human needs to be taught what pornography is in the first place too. The thing is that after a while a human can infer it all by themselves, whilst the poor AI is still guessing.

  2. John D'oh!

    It's just a mirror of society. Pretty sad really.

  3. amanfromMars 1 Silver badge

    IT just has to say of that & those following trails & tails wearing blinkers or blinders*

    My gast is flabbered, and unfortunately for humanity, because of the stagnant ruts IT and AI confines them to, not at all surprised that it is a persistent human suffering, that advancing alternative astute intelligence presented in novel plain text output for simple understanding and wider general universal acceptance from a colossus of a source/titanic collaborative effort is then best thought to be modified/altered/moderated/adulterated/corrupted/perverted to more native input for stagnant rut acceptance and provision/onward supply.

    Oh, and the persistent human suffering ‽ An endemic intrinsic systemic lack of wider general universal intelligence. And that one simple single inescapable fact renders the Earth catastrophically vulnerable to remote alienating intervention and extensive invasive exploitation by SMARTR Advancing IntelAIgent Sources, which is freely shared here with y’all in the form of another question for you to deny and claim it is impossible to be true ..... thus further proving the redoubtable valid honesty of the proposition???

    * .....https://dictionary.cambridge.org/dictionary/english/be-wearing-blinkers

    1. RobLang

      Re: IT just has to say of that & those following trails & tails wearing blinkers or blinders*

      Nice to see GPT joining in the conversation.

  4. nematoad

    Eh?

    "Aligning language models like GPT-3 may make them less likely to generate text that is less toxic, biased, and more accurate,"

    Now my parsing of this sentence may be wrong but surely this is an example of a double negative.

    If the idea is to try and stop the AI from producing offensive text then anything that stops it from being "less toxic, biased, and more accurate" is not going in the right direction.

    1. Warm Braw

      Re: Eh?

      Maybe AI takes over the subs desk at the weekend?

  5. jeffrey funk

    Why do the answers change by the day? The article doesn't explain why Smith and Gelman experience this. Maybe the system only uses humans for training purposes, but then why do the answers change? Is it because Smith and Gelman's questions prompt the system to be trained on their questions and the proper responses? Even if that is the case, there are still humans behind this magic, humans who will need to train the system on millions of edge cases.
