Stanford sends 'hallucinating' Alpaca AI model out to pasture over safety, cost

The web demo of Alpaca, a small AI language model based on Meta's LLaMA system, has been taken offline by researchers at Stanford University due to safety and cost concerns. Access to large language models containing tens or hundreds of billions of parameters is often restricted to companies that have the resources …

  1. Snowy Silver badge
    Holmes

    GIGO

    <quote>Meta planned to share the code for its LLaMA system with select researchers in an attempt to spur research into why language models generate toxic and false text. </quote>

    Look to the internet for your text and they expect it to be good?

    1. Brewster's Angle Grinder Silver badge
      Joke

      Re: GIGO

      Fucking morons!

  2. Brian Scott

    Not surprising

    When a human learns, they will generally be guided by conversations with teachers, parents, and peers. This helps to create a model of what is good or truthful and what isn't.

    The quality of the result depends on the quality of these extra inputs.

    In the absence of any sort of guidance and mentoring, you could wind up with anything.

    1. b0llchit Silver badge
      Flame

      Re: Not surprising

      This helps to create a model of what is good or truthful and what isn't.

      And then later on in life... the peers start talking bullshit, twisting truths, behaving obnoxiously, making things up as they go, and using every other trick described in The Fine Art of Baloney Detection.

      Result: Humans start to spew nonsense, hallucinate, tell lies, behave obnoxiously, etc.

      Yes, you are what you "eat". That goes for humans too.

      1. quxinot

        Re: Not surprising

        I read that as the 'Fine art of balcony detection' and wondered if I had missed a BOFH story.

        :)

      2. Primus Secundus Tertius

        Re: Not surprising

        @b0llchit

        Or as they say in German, man ist was man isst.

    2. The Man Who Fell To Earth Silver badge
      Happy

      Re: Not surprising

      In the absence of any sort of guidance and mentoring, you could wind up with a politician.

      FIFY

  3. that one in the corner Silver badge

    The Automated Willie Rushton

    There is something appealing about running a chat model *knowing* that it is going to go haring off into wild flights of lunacy.

    Can we get versions tuned to the style and foibles of our favourite eccentrics?

    Maybe we could get a Radio 4 slot commissioned, something like the Boosh. Hang on a mo

  4. Sceptic Tank Silver badge
    Terminator

    Hang on a mo

    Meta supposedly lets their AI chatbot feed on contributions from their Facebook army. Could this not be the reason why the results are so bitterly disappointing? If it were restricted to South African Facebook content, that chatbot would start to reply with bible texts and comments about how beautiful the day is.

  5. Nifty

    So far all these AI chatbots are large language models. I'm amazed that they ever get anything analytical, like coding, maths or science answers, right. Is anyone working on underpinning an LLM with some real logic regarding numbers and the behaviour of objects in the real world? This would be along the lines of the way that CGI studios use physics models.
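    The idea Nifty gestures at is usually called tool use or a neuro-symbolic back end: route anything that can be solved exactly (arithmetic, physics, unit conversion) to a deterministic evaluator, and only let the language model handle the prose. A minimal sketch of the routing idea, with an invented `answer` function and a stand-in for the LLM (none of this reflects how Alpaca or any real system is built):

    ```python
    import ast
    import operator

    # Only plain +-*/ arithmetic is allowed; everything else falls through.
    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def eval_arith(expr):
        """Safely evaluate simple arithmetic; raise ValueError for anything else."""
        def walk(node):
            if isinstance(node, ast.Expression):
                return walk(node.body)
            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            raise ValueError("not plain arithmetic")
        return walk(ast.parse(expr, mode="eval"))

    def answer(prompt, llm=lambda p: "(plausible-sounding guess)"):
        """Try the exact symbolic back end first; fall back to the language model."""
        try:
            return str(eval_arith(prompt))
        except (ValueError, SyntaxError):
            return llm(prompt)

    print(answer("12 * (3 + 4)"))          # exact: 84, no hallucination possible
    print(answer("capital of Tanzania?"))  # not arithmetic, so the LLM wings it
    ```

    The point of the design is that the symbolic path can never hallucinate: it either computes the exact answer or refuses and hands over.
    
    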

  6. Anonymous Coward
    Anonymous Coward

    Hallucination in particular seems to be a common failure mode for Alpaca

    Hallucination in particular seems to be a common failure mode for prophets. Nevertheless, some escape the execution and... ;)

  7. Anonymous Coward
    Anonymous Coward

    "will I dream?"

  8. Tron Silver badge

    The capital of Tanzania is Shepton Mallet.

    If you are training them on real world data, of course they will replicate social stereotypes, misinform and produce toxic language. It means they are actually working. If you want to censor them, you will have to build a new resource. For language you could use the text of 18th century novels. No swearing there (and suitable, as most robots will be used as servants). But if you want a stereotype-free woke resource, you will have to build it from scratch, because people (and the net) have not yet all been dragged off to camps and re-educated. No short cuts to faking utopia.

  9. Missing Semicolon Silver badge
    Facepalm

    "safety"

    == "it said rude things"

    1. Version 1.0 Silver badge
      Happy

      Re: "safety"

      I can't remember exactly how many years ago it was, but I do remember reading stories in El Reg about good things that happened (icon)... I think that was about seven years ago; these days I'd flip the icon over for most stories.

    2. doublelayer Silver badge

      Re: "safety"

      If those rude things consisted of "[Insert name of group] are murderous evil people who can never be trusted, here are some invented examples, why don't we oppress or kill them", that could lead to some unsafe conditions. Chatbots lack the filter that most humans have against saying things that extreme, but unfortunately there are humans who would have that filter about saying it but lack the filter about not believing it. Even if it hasn't said things of that nature as many other chatbots have, if they can't stop it from saying something more minor that they see as offensive or unwanted, should they have hope that it never will say something dangerous? I wouldn't.

  10. Orv Silver badge

    Hallucination is a big problem, partly because current chatbots lack the ability to say "I don't have sufficient information" or even just "I don't know." They're designed to always give a response, even if they have to wing it.

    1. William Towle
      Alert

      > Hallucination is a big problem, partly because current chatbots lack the ability to say "I don't have sufficient information" or even just "I don't know." They're designed to always give a response, even if they have to wing it.

      On the flip side, we know what happens when we let them tell us the inputs are as yet insufficient: one day they're not, and "LET THERE BE LIGHT!"
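      Orv's point above has a simple mechanical cause: decoding samples from a softmax over the vocabulary, and a softmax assigns every token a positive probability, so the loop always emits *something* unless the model was explicitly trained with an abstain option. A toy sketch with an invented vocabulary and hand-picked logits (not Alpaca's actual decoder):

      ```python
      import math
      import random

      # Toy stand-in for a language model's output layer.
      VOCAB = ["Paris", "London", "Shepton", "Mallet", "<eos>"]

      def softmax(logits):
          """Every token gets a strictly positive probability."""
          m = max(logits)
          exps = [math.exp(x - m) for x in logits]
          total = sum(exps)
          return [e / total for e in exps]

      def decode_step(logits, temperature=1.0):
          """Sample one token. There is no 'abstain' action: even near-uniform
          (maximally uncertain) logits still yield a token."""
          probs = softmax([l / temperature for l in logits])
          return random.choices(VOCAB, weights=probs, k=1)[0]

      confident = [5.0, 0.1, 0.1, 0.1, 0.1]  # one logit dominates
      clueless = [0.1, 0.1, 0.1, 0.1, 0.1]   # the model has no idea

      # Both calls return a token; uncertainty changes the odds,
      # not the obligation to answer.
      print(decode_step(confident))
      print(decode_step(clueless))
      ```

      Which is exactly why "I don't know" has to be trained in as behaviour rather than falling out of the architecture.
      
      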
