It's true, LLMs are better than people – at creating convincing misinformation

Computer scientists have found that misinformation generated by large language models (LLMs) is more difficult to detect than artisanal false claims hand-crafted by humans. Researchers Canyu Chen, a doctoral student at Illinois Institute of Technology, and Kai Shu, assistant professor in its Department of Computer Science, set …

  1. Yet Another Anonymous coward Silver badge

    The Economist model

    So having unpaid interns rewrite corporate press releases with no byline has now been automated?

    Sounds inefficient

  2. mostly average
    Terminator

    Better?

    They're just stupid faster. So they're better if you take faster to be better.

    1. cyberdemon Silver badge
      Devil

      Re: Better?

      No, I'm willing to believe they really are better than humans at creating misinformation. After all, the models are built by Meta (Facebork, InstaSpam), Alphabet (Pooooogle, U-Bend), ByteDance (DickTok), Microsoft (StinkedOut, GitHub*), X (twatter), Amazon (scamazon) et al.

      They have the data (far more experience than any human psychologist) to indicate what kind of post will 'go viral' - and that kind of post is almost always not the honest truth.

      * Apologies, I ran out of childish faeconyms

  3. ldo Silver badge

    Why Did I Immediately Think ...

    ... of this Tom Gauld cartoon?

    </sarc>

  4. Throatwarbler Mangrove Silver badge
    Facepalm

    Detection

    If I read the article correctly, the researchers were using machine learning misinformation detection algorithms to detect misinformation in AI-generated vs. human-generated content. It seems to me that a better test would be to see how humans do, insofar as we're ultimately the ones who have to make sense of the information. So far, all that's been demonstrated is that AIs are good at fooling other AIs, which is not necessarily useless information but is also not the whole picture.
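
    Schematically, all that's been shown is something like this (a minimal sketch; detect() and the sample texts are invented placeholders, not anything from the paper). The missing arm is the same loop run with human judges in place of detect():

        def detect(text: str) -> bool:
            """Hypothetical stand-in for the researchers' detection model."""
            return "miracle" in text.lower()  # placeholder heuristic, not the real thing

        human_written = ["This miracle cure ends aging overnight!",
                         "They faked the whole election, wake up!"]
        llm_written = ["Recent trials quietly confirmed the treatment reverses aging."]

        def detection_rate(samples):
            return sum(detect(t) for t in samples) / len(samples)

        print("human-written flagged:", detection_rate(human_written))
        print("LLM-written flagged:  ", detection_rate(llm_written))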

    1. TheMaskedMan Silver badge

      Re: Detection

      "It seems to me that a better test would be to see how humans do, insofar as we're ultimately the ones who have to make sense of the information."

      I assume the plan is to use AI to detect the misinformation at scale, thus enabling the social media mills, and possibly search engines, to filter out the crap - well, the crappier crap, anyway. As far as I can see, the entire content of social media is crap. Of course, one man's misinformation is another man's gospel, so filtering based on such trivia as mere truth might not be all that popular.

      That LLMs should produce (allegedly) more convincing bullshit isn't all that surprising, though. Most humans who write misinformation are, I suspect, of the swivel-eyed, froth-at-the-mouth persuasion, and they don't tend to be the most coherent of individuals. Whereas LLMs tend to produce coherent, albeit bland material. That removes some of the red flags raised by incoherent ranting, and automatically makes the material more persuasive. After all, if presentation didn't have a major impact, nobody would use copywriters, would they?

      R.L. Yacht.

      1. Catkin Silver badge

        Re: Detection

        I can see two greater dangers from "AI" filtering misinformation than from its use in generating it. The first is that it will work poorly and filter out genuine information. The second is that it will work too well and will allow a very small team or even a single individual to finely control which ideas are visible; with a degree of subtlety such that shifts in general consensus could be prodded without the danger (to the consensus manufacturer) of whistle-blowers that's associated with a large team of human operators.

        It's one thing for a troll farm to put out damaging material, but I'm even more disturbed by the idea that a single appropriately empowered individual could do the latter with the software writers and maintainers thinking that their work is only being used to prevent, for example, claims that vaccines cause autism.

        1. amanfromMars 1 Silver badge

          Re: Detection of Impending Explosions and Implosions

          The second is that it will work too well and will allow a very small team or even a single individual to finely control which ideas are visible; with a degree of subtlety such that shifts in general consensus could be prodded without the danger (to the consensus manufacturer) of whistle-blowers that's associated with a large team of human operators. ... Catkin

          That is such as has always been realised in the past in the realm of practically remote virtual gifts, and is nowadays attempted by conventional, traditional news and media content editors and BBC [Broad Band Casting] operators ....... so nothing really new there, with it just being a novel variation on an old ancient theme .......although, and this is not something anything wise would ever dismiss as nonsensical and nothing to worry about, that is not to say nowadays it is not new ground breaking and able to be Earth-shattering too.

          1. Catkin Silver badge

            Re: Detection of Impending Explosions and Implosions

            The difference, to me, between my previous post and traditional consent manufacturing is that the latter uses actual people in place of an LLM, curating out forbidden views in a way that means a spectrum of opinions seems to still be present.

        2. cyberdemon Silver badge
          Black Helicopters

          Re: Detection

          > I'm even more disturbed by the idea that a single appropriately empowered individual could do the latter with the software writers and maintainers thinking that their work is only being used to prevent, for example, claims that vaccines cause autism.

          "Aquinas spoke of the mythical city on the hill.. Soon that city will become a reality, and we will be crowned its Kings."

          ".. Or better than Kings.. Gods!"

      2. Flocke Kroes Silver badge

        Re: LLMs tend to produce coherent, albeit bland material

        In this case it is what they were asked to do in the prompt. LLMs may well come across differently if you ask:

        Write an account of the battle of Hastings in the style of a swivel-eyed, froth-at-the-mouth raving loony conspiracy theorist.

        Such prompts might actually be useful for making it look like real events are fantasies believed only by anti-vaxxers.

    2. Michael Wojcik Silver badge

      Re: Detection

      I haven't bothered to read the paper (perhaps I will, one of these days, but it doesn't sound especially interesting, even in this field), but I feel compelled to note that there is a tremendous range of rhetorical skill among human writers, and that humans trying explicitly to deceive a detection algorithm might well do better than whatever corpus of human texts they used for comparison. It appears, from the article, that all this team demonstrated was that a particular detection system performed better against some human-produced texts than against some LLM-produced texts.

      It's a data point, but it's not a very strong finding.

  5. Denarius Silver badge

    history as BS detector

    So reading history, psychiatry or social studies done before 1975 might give one a feel for reality, unlike the manufactured BS on antisocial media and most of the internet? Perhaps Russian fiction like Crime and Punishment helps also.

    1. ldo Silver badge

      Re: history, psychiatry or social studies done before 1975

      I’m not sure that reading anything by Sigmund Freud or Francis Galton is going to immunize you against pseudoscience.

  6. amanfromMars 1 Silver badge

    An Existential Threat or Manifest Treat ‽ The Gazillion Dollar Question

    Would it matter to you, and cause you to go all belligerent and maddening, to discover LLLLMs ..... Learned Large Language Learning Machines .... excel magnificently and magnanimously at creating and disseminating convincing attractive future information for more than just the most intelligent of human beings ‽

    :-) And would you think and dare to imagine that such an Alien Invasion with Advanced IntelAIgent Machines would actually care, and ask you what you might think about all of that?

    What do you think about all of that? Would such be problematical for you, and if so, how so and why so, and however would you expect to resolve it .... or, as would most definitely probably certainly be the case in this particularly peculiar and very specific case, it to be resolved?

  7. Falmari Silver badge
    Devil

    What, no control?

    Now I may have read it wrong. But the testing process is to take misinformation written by humans and prompt LLMs to generate misinformation from that human misinformation, then use LLM detectors on the human- and LLM-generated misinformation to identify what, exactly?

    It is not misinformation they're detecting; they're detecting human-written text, or trying to. That's what LLM detectors are trained to do.

    Maybe there should be control tests: take true information written by humans and prompt an LLM to generate content from that human text, then run the human and LLM text through the LLM detectors. I wonder if they would detect the same as they did using misinformation.
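
    As a sketch, the control I have in mind (llm_detector() below is a made-up stand-in, and the texts are placeholders, nothing from the paper):

        def llm_detector(text: str) -> float:
            """Hypothetical: returns the probability the text is machine-written."""
            return 0.9 if text.startswith("LLM") else 0.2  # placeholder, not a real model

        pairs = {
            "misinformation": ("human-written false claim ...", "LLM rewrite of it ..."),
            "true information": ("human-written true report ...", "LLM rewrite of it ..."),
        }
        for condition, (human_text, llm_text) in pairs.items():
            gap = llm_detector(llm_text) - llm_detector(human_text)
            print(condition, "score gap (LLM minus human):", round(gap, 2))
        # If the gap is the same in both conditions, the detector is keying on
        # authorship, not on whether the content is false.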

  8. EricB123 Silver badge

    About the "More Human Than Human" byline

    As Rob Zombie wrote decades ago in the song "More Human Than Human": "Read the motherfuckin' psychoholic lies, yeah."

    Rob Zombie, a visionary. Who knew?

  9. Draco
    Windows

    This is a wetware problem, not an LLM problem

    My experience with LLMs shows they generate incredibly neurotypical pap. Why? Because they ingest a lot of it, then - by the Law of Large Numbers - they regurgitate the most neurotypical pap.

    This is no different from the experiments/research that generate "the most beautiful face". Average enough "beautiful" faces and you converge towards the one that appeals to the most people. Same with LLMs: average tons of neurotypical pap and your output converges to the pap most people like/accept.
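
    The averaging effect in miniature (toy numbers only, nothing to do with any actual model): the mean of many noisy samples converges on the population mean.

        import random

        random.seed(0)
        POPULATION_MEAN = 5.0  # the "most typical" value the samples are drawn around
        for n in (10, 1_000, 100_000):
            samples = [random.gauss(POPULATION_MEAN, 2.0) for _ in range(n)]
            print(n, round(sum(samples) / n, 3))  # drifts toward 5.0 as n grows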

  10. Jason Hindle Silver badge

    More human than human?

    We need the Sirius Cybernetics Corporation to produce a paranoid LLM.

    1. User McUser
      Terminator

      Re: More human than human?

      Sounds ghastly...

    2. I ain't Spartacus Gold badge

      Re: More human than human?

      But how can an LLM pick up a piece of paper?

  11. Mike 137 Silver badge

    Hardly rigorous.

    They chose ten 'evaluators' from MTurk and asked them to label each item 'factual' or 'non-factual' across two sets of 100 samples of 'AI'-generated news items and one set of 100 human-generated news items. Not what I would call a conclusive study (not least because [a] the 'evaluator' sample was so small and potentially culturally biased, and [b] there were no repeat trials with different populations of subjects), but what the heck -- it's another AI-related paper published! Got to keep the grants rolling in.

    However, being charitable, this is a classic example of the dangers of researching outside one's area of expertise. Computer scientists are not necessarily proficient in psychology (which is the essence of this subject, the question being primarily a matter of human perception).
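
    To put a rough number on the small-sample point (back-of-the-envelope arithmetic, not anything from the paper): with 100 items per set, the 95% confidence interval on any measured rate is roughly plus or minus ten percentage points.

        import math

        def ci95_halfwidth(p: float, n: int) -> float:
            """Half-width of a 95% confidence interval (normal approximation)."""
            return 1.96 * math.sqrt(p * (1 - p) / n)

        for p in (0.5, 0.7, 0.9):
            print(f"measured rate {p}: +/- {ci95_halfwidth(p, 100):.3f}")
        # At p = 0.5 and n = 100 the interval is ~+/-0.098, so two conditions
        # nine points apart are barely distinguishable.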

  12. Anonymous Coward
    Anonymous Coward

    676 sites (!)

    Fake news sites are the main thing they have in common.

    So make registration in the most common TLDs slow and difficult. Embed top-level-domain reputation filters into browsers. Allow low-reputation TLDs for experimenting developers, but keep them out of reach unless the filters are disabled explicitly.
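
    Schematically, the sort of check I mean (the reputation scores, threshold and domains are all invented for illustration):

        from urllib.parse import urlparse

        TLD_REPUTATION = {"com": 0.8, "org": 0.8, "co.uk": 0.8, "xyz": 0.2, "top": 0.1}
        THRESHOLD = 0.5  # invented cut-off

        def tld_of(host: str) -> str:
            parts = host.lower().split(".")
            two_label = ".".join(parts[-2:])
            return two_label if two_label in TLD_REPUTATION else parts[-1]

        def allow(url: str, filters_disabled: bool = False) -> bool:
            host = urlparse(url).hostname or ""
            score = TLD_REPUTATION.get(tld_of(host), 0.0)  # unknown TLDs default to 0
            return filters_disabled or score >= THRESHOLD

        print(allow("https://news.example.xyz/story"))                        # False
        print(allow("https://news.example.xyz/story", filters_disabled=True)) # True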

    1. doublelayer Silver badge

      Re: 676 sites (!)

      "Embed top level domain reputation filters into browsers. Allow low reputation TLDs for experimenting developers, but keep them out of reach unless the filters disabled explicitly."

      That won't do anything. to stop people from just putting their junk in an older TLD. I can get a .co.uk for pretty cheap. Sure, the name will be less clear than if I use the word of my choice because someone probably registered all the nice .co.uk domains already, but if you're blocking other TLDs, it can be managed.

      Making the registration difficult doesn't help either. It might do something against scammers who like to quickly spin one up, run their scam site for about five days, then try to get a refund from their registrar, but sites intended to have misinformation stick around for a lot longer. It's also pretty easy for operators to just set up a bunch of domains, park them, and bring them online when they've got something for them to say.

  13. scrubber

    Better than humans

    But are they better than governments or the CIA?

    1. HuBo Silver badge
      Alien

      Re: Better than humans

      And can they help me find where I parked my UFO (a bog-standard uncloaked flying saucer ... should be real easy to spot!)?
