Scientists tricked into believing fake abstracts written by ChatGPT were real

Academics can be fooled into believing bogus scientific abstracts generated by ChatGPT are from real medical papers published in top research journals, according to the latest research. A team of researchers led by Northwestern University used the text-generation tool, developed by OpenAI, to produce 50 abstracts based on the …

  1. John 110

    Peer reviewers eh? What are they like?

    I think that says more about the quality of peer review in modern academe than it says about how good ChatGPT is at generating papers.

    (Alternatively, I worked as IT support in a joint NHS/University department for years and watched the quality of writing in the papers drop year by year. I blame the quality of teaching in schools, now that they don't teach grammar and stuff any more...)

    1. gobaskof

      Re: Peer reviewers eh? What are they like?

      Don't blame the quality of writing on the schools. Academic publishing has pushed towards new boundaries of fast-turnaround, formulaic, vacuous crap. During my PhD about a decade ago I remember trying to write a simple MATLAB script to create crappy-but-believable abstracts. I never quite got it to work in the couple of wasted afternoons I spent trying. Personally, I am surprised that the AI only managed to fool people 32% of the time.

      Peer review is not what so much of the public think it is: academics reproducing the work. You are given 2-4 weeks to read and comment on a long paper draft, as extra, unpaid, uncredited work on top of a busy schedule. You skim-read it looking for obvious errors and omissions. After that you make a snap judgment about how interesting it is. The more surprising the result, the more interesting it is. The more interesting, the better the paper will tend to do. Of course, the more surprising results are more likely to turn out to be wrong, so never trust anything published in a Nature journal. Still, the highlight of my career... getting something published in a Nature journal.

      The only way out of the endless cycle is for everyone to stand up to the journals, funders, and promotion boards that push bibliometrics based on publication statistics. Doing it alone is the death of your career. Getting academics to do it in unison is like herding cats into a bath.

      1. LionelB Silver badge

        Re: Peer reviewers eh? What are they like?

        > ... so never trust anything published in a Nature Journal.

        Bit harsh, but point taken :-)

        > Still, the highlight of my career.... getting something published in a Nature journal.

        Heh, heh. Luckily, my top-ranking journal is PNAS (a.k.a. "Paper Not Accepted for Nature or Science"). The peer review there wasn't great.

        The most hardcore and honest peer review I've personally encountered (on both sides) is Physical Review Letters - which happens to be the top physics journal - and also Biometrika - a top statistics journal; so, happily, your thesis does not apply universally.

    2. vekkq

      Re: Peer reviewers eh? What are they like?

      They were given abstracts to read, not entire papers.

      It says absolutely nothing about the quality of peer review.

  2. LionelB Silver badge

    To be fair...

    I'm not sure fooling scientists with fake abstracts is setting the bar that high - and I say this as a research scientist (and reviewer) of 20 years' experience. This is simply down to the degree of specialisation in science. Put an abstract for a genuine article outside my field in front of me, and chances are I'll not understand what they're talking about, let alone whether it's genuine or not. If it were in my area of expertise, I'd like to think I'd spot a fake easily.

    We already know that ChatGPT is capable of generating impressively grammatically-correct and articulate text, so the challenge basically becomes all about content. The article omits to say how close the presented abstracts were to the scientists' areas of expertise.

    1. Yet Another Anonymous coward Silver badge

      Re: To be fair...

      Added to which, abstracts are specifically written to a formula, packed with searchable terms, so that they get found by readers and turn up in the right abstract databases and indexes.

      It's like giving an accounting statement for a fictional company and saying an accountant couldn't tell it was made up.

    2. John Brown (no body) Silver badge

      Re: To be fair...

      "impressively grammatically-correct and articulate text,"

      Isn't that the clue it wasn't written by a real scientist?

  3. IlGeller

    They are real: OpenAI has a database of structured texts and alters them to fit the request. Thus OpenAI's ChatGPT parasitizes on texts written by humans.

    1. Ken Hagan Gold badge

      This doesn't sound terribly different from what real authors do when it comes to writing the abstract for their paper.

      1. IlGeller

        No one believed me when I won the NIST TREC QA. Now you see that I was telling the truth.

  4. Winkypop Silver badge

    Kids these days

    They have it so easy.

    In my day, we had to convince a smart kid to show you their work while you quickly scribbled something similar right before class.

  5. Tams

    Sorry, but I don't feel sorry for them.

    Is it not the point of the journals to check the veracity of work submitted to them, as in it's their job? Is that not what their obscene fees are for?

    I have some sympathy for small journals/publishers that don't rip people off, but then again they are often part of close-knit communities that are more likely to catch a fraud. And they can simply keep submissions to people they trust, and make it a rigorous test to be accepted into their circles.

    1. Little Mouse Silver badge

      "Is it not the point of the journals...?"

      Hah - It is, but only in the same sense that it's the point of estate agents to provide "quality" housing.

      With enough people providing a product, and enough people queuing up to buy it, they can just sit back and let the fees roll in from both sides. Minimal effort required.

  6. Rikki Tikki

    If the GPT-2 Output Detector detected 66% of machine-generated texts, while humans detected 68%, I can't see how humans are "worse" at detecting. That looks like no significant difference to me.
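    [Ed.: the commenter's intuition is easy to check with a two-proportion z-test. The per-group sample size of 50 below is an assumption based on the study's 50 abstracts; the actual denominators aren't given in the article, but at anything near this scale a two-point gap is nowhere close to significant.]

    ```python
    # Sketch: two-sided two-proportion z-test for 66% vs 68% detection rates.
    # Sample sizes are assumed (50 per group, matching the study's 50 abstracts).
    from math import sqrt, erfc

    def two_prop_z(x1, n1, x2, n2):
        """Return (z statistic, two-sided p-value) for comparing x1/n1 vs x2/n2."""
        p1, p2 = x1 / n1, x2 / n2
        p_pool = (x1 + x2) / (n1 + n2)            # pooled proportion under H0
        se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        return z, erfc(abs(z) / sqrt(2))          # normal-tail p-value

    # Detector: 33/50 = 66%; humans: 34/50 = 68%
    z, p = two_prop_z(33, 50, 34, 50)
    # |z| is tiny and p is far above 0.05, so the difference is not significant.
    ```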

  7. tiggity Silver badge

    Abstracts are a combo of a plug for the paper and an executive summary.

    No surprise convincing ones can be generated as they can often be a scientific equivalent of buzzword bingo, depending on the subject area. Relatively low bar.

    Would be more impressed if "AI" could actually produce a full "paper" that was deemed OK by reviewers.

  8. Forget It


    (Sorry don't seem to be able to embed image here)

  9. TRT Silver badge

    At some point...

    Someone is going to have to train an AI to detect AI generated fakes.

    1. LionelB Silver badge

      Re: At some point...

      Careful what you wish for (or joke about, if you were).

  10. ciaran

    So how many Register articles?

    Does ChatGPT bite the hand that feeds IT?

    And will aManFromMars ever learn?

    1. yetanotheraoc Silver badge

      Re: So how many Register articles?

      How hard would it be to train one of these things exclusively on aManFromMars' posts? El Reg must have quite a batch of those. Sign it up as anAiFromMars and let it comment on random articles.

      1. John Brown (no body) Silver badge

        Re: So how many Register articles?

        I thought that was what had been happening for years :-)

  11. Ken Hagan Gold badge

    I don't see the problem.

    This is not a new risk. If you want to produce a fake paper, that has been possible for a couple of hundred years. Deliberate and careful fraud (including the whole paper, results, etc., not just the abstract) is basically impossible to detect in the short term.
