Some scientists can't stop using AI to write research papers

Linguistic and statistical analyses of scientific articles suggest that generative AI may have been used to write an increasing share of the scientific literature. Two academic papers assert that analyzing word choice across the corpus of science publications reveals increasing use of AI in the writing of research papers. One study, …

  1. Mike 137 Silver badge

    A solution maybe?

    "Their paper suggests that a general lack of time and a need to write as much as possible encourages the use of LLMs, which can help increase output."

    There you have it -- production line 'science', where publication productivity is key to keeping your job. It's been going on for several decades, reducing the proportion of published output that represents real scientific progress. The use of LLMs is just the next logical stage in the process, and might actually assist in addressing it by generating clearly identifiable nonsense that can be filtered out as these papers suggest.

    1. Herring` Silver badge

      Re: A solution maybe?

      Like a lot of things, it's "what can we measure?". Well it's easy to measure number of papers published so we'll make that a target. Which also skews what papers get written and, in turn, what research gets done. I blame MBA types for wanting everything measured all the time.

    2. The Man Who Fell To Earth Silver badge
      WTF?

      Re: A solution maybe?

I'd be more interested in what constitutes "using AI to write a paper". Does using Grammarly for spell checking count? Does occasionally accepting its suggestion for a phrase rewrite count? How about when it suggests a sentence rewrite?

      1. doublelayer Silver badge

        Re: A solution maybe?

I'm not sure it matters. The problems with AI are not things that a well-meaning user hits. If you are writing a paper, use prompts to an LLM to aid in the writing, and then go back and meticulously* analyze every sentence to make sure it never says anything your research didn't show, then you're fine. The problems come when the LLM is used because the writer no longer cares about accuracy and just wants something that looks convincing. It doesn't matter whether they're using an LLM, a ghost writer, or their own effort to write about fictional research. The only difference is that faking papers to that extent used to be hard and now it's trivial, so you'll see a lot more of it. Blunt rules like banning the use of AI to write papers are trying to make this point: what is misconduct now is the same kind of thing that was misconduct before.

        * This comment had no LLM input, meticulously nonetheless.

        1. Anonymous Coward
          Anonymous Coward

          Re: A solution maybe?

"... then go back and meticulously* analyze every sentence to make sure it never says anything your research didn't show, then you're fine."

Would that be faster, or the result better, than writing it yourself to begin with?

          1. doublelayer Silver badge

            Re: A solution maybe?

            For me, it would be much slower. I don't think there are many people who are using LLMs in this way. However, if there are people doing that, there's nothing wrong with their resulting work. I could conceive of someone who is not confident in their writing skills trying this intending to do as I've described, though I expect that even those people will often give up in disgust when trying to edit properly and go back to writing themselves.

            I think those who have made the rules calling all use of AI misconduct probably feel similarly, but I also think they're not trying to enforce it at that level. Their regulation is likely aimed to make sure that someone who is exposed as having submitted a flawed paper can't get away with blaming the AI for the problems in it. I'm not convinced that they ever could get away with it, but formalizing that isn't surprising.

      2. Fruit and Nutcase Silver badge
        Mushroom

        Re: A solution maybe?

That's exactly what the current version of Edge at my workplace is hounding me with every time I highlight something...

        https://www.microsoft.com/en-us/edge/features/rewrite?form=MA13FJ

        Yet another thing to disable now

      3. katrinab Silver badge

        Re: A solution maybe?

"Does using Grammarly for spell checking count?"

I would say no.

"Does occasionally accepting its suggestion for a phrase rewrite count?"

Generally no.

"How about when it suggests a sentence rewrite?"

Provided you check that the suggested rewrite has the same meaning, no.

      4. Anonymous Coward
        Anonymous Coward

        Creation or translation?

I did not read the original two papers, as they are lengthy, but I did search them for the word "translation". Both had zero hits. That indicates that the authors of those two papers didn't consider the use of machine translation of human-written papers.

Contrary to what many people in the Anglo-American world tend to think, English is the native language of only a small part of the world's population, and of the world's scientists as well. Many do try to learn to read and write English well enough, but not all are as good at languages as they are at science. For those whose first language is English, it's easy to say the rest just have to put more effort in. Yet, because of the prevalence of English, many of them never feel the need to learn another language and have no real clue how difficult it is to reach the level needed to write quality scientific papers.

So in reality, many non-English-speaking scientists are held back in their careers by their lack of in-depth knowledge of English. Being able to read a well-written paper and learn its science is a lot easier than being able to write one. So maybe there should be an opening for this: provide and use quality domain-specific translation software, mark the English paper as a translation, and make both the original non-English paper and the translated English paper available for double-checking?

        1. Michael Wojcik Silver badge

          Re: Creation or translation?

We've had machine translation for quite a while now. The change in word frequencies suggests LLMs are being used to create the papers as submitted. If authors are using (the current generation of generative) LLMs for translation, that's a risky practice; those models were not designed or trained primarily for translation.

        2. jfm

          Re: Creation or translation?

Which is why there are English-improving services (generally using suitably qualified piecework editors). Some of these run an initial, pre-human-eyes machine edit to try to tidy up the English; some editors (I hear) find that their own first pass largely involves correcting that pre-edit, including places where it has entirely changed the meaning. (I'm being a little cagey because this part of the academic publishing process is often done on the quiet, even though it's entirely respectable.)

      5. Androgynous Cow Herd

        Re: A solution maybe?

        No, Splelchick is not AI.

A grammar correction, like inserting a comma before or after a subclause, is not AI.

A phrase *rewrite* is still not "Artificial intelligence".

The difference is that "AI", as the term is currently abused, means generative... and none of those examples is generative; they are editorial.

        "AI" (as it seems to be used and misused currently) is another way to say "Computers making sh!t up".

Now, if an AI generates any portion of a scientific paper, it should get just as much credit as any other grad student who assisted in the creation and collation of the paper.

  2. Len
    WTF?

    Or the explosion of the use of the word "delve" since the release of ChatGPT

    There's also the outright explosion of the occurrence of the word "delve" in PubMed articles that coincides neatly with the release of ChatGPT
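
The claim is easy to sanity-check against PubMed's public E-utilities API by counting hits for "delve" per publication year. A minimal sketch in Python; the year range is illustrative and nothing here comes from the papers themselves:

import requests

# Count PubMed records whose title/abstract contains "delve", per year,
# via NCBI's public E-utilities esearch endpoint.
URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

for year in range(2019, 2025):
    term = f'delve[Title/Abstract] AND ("{year}/01/01"[PDAT] : "{year}/12/31"[PDAT])'
    resp = requests.get(URL, params={"db": "pubmed", "term": term,
                                     "retmax": 0, "retmode": "json"})
    resp.raise_for_status()
    print(year, resp.json()["esearchresult"]["count"])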

    1. HuBo Silver badge
      WTF?

      Re: Or the explosion of the use of the word "delve" since the release of ChatGPT

      ... not to delve into the pivotal explosion of intricate rats meticulously showcasing commendably huuuge penises!

      (some journals are surely peer-reviewed by AI itself)

      1. Evil Auditor Silver badge

        Re: Or the explosion of the use of the word "delve" since the release of ChatGPT

        (some journals are surely peer-reviewed by AI itself)

Not sure about the I in AI. And neither about the A - rubbish was being published in "peer-reviewed" journals long before the latest AI hype.

    2. Antron Argaiv Silver badge
      Thumb Up

      Re: Or the explosion of the use of the word "delve" since the release of ChatGPT

There's something about the phrasing and cadence of LLM-written paragraphs. The wording is often overly ornate, and praise of positive attributes is overly effusive, to give a couple of examples. And if there's a conclusion, it's often out of alignment with the preceding paragraphs. Like pornography, I'll know it when I see it :-)

Amazon and eBay are both using LLM-written descriptive paragraphs. They remind me of the letters I used to get from Nigerian princes, PhDs, widows, and bankers with wads of extra cash they wanted to give me.

  3. Dr. G. Freeman

    Thought things were becoming more readable.

  4. perkele

    I asked Google Gemini to comment.

    While large language models can be helpful for generating ideas or summarizing information, using them to write entire academic research reports is generally discouraged for a few reasons:

    Lack of Critical Thinking and Originality: These models rely on existing data and might not provide new insights or critical analysis expected in academic research.

    Potential for Factual Errors: The accuracy of the generated text depends on the quality of the training data. There's a risk of including factual errors or biases present in the data.

    Misrepresentation of Authorship: Research reports require clear ownership of ideas and arguments. Using a large language model can blur the lines between human and machine generated content.

    It's important for academic research to be transparent, well-sourced, and demonstrate critical thinking. Large language models are better suited for assisting researchers, not replacing them entirely.

    :)

  5. m4r35n357 Silver badge

    Lazy fuckers

    Having written a (very) few in my time I can say that with a clear conscience.

    1. Pascal Monett Silver badge
      Trollface

      Re: Lazy fuckers

      Welcome to the Future.

    2. ITMA Silver badge
      Devil

      Re: Lazy fuckers

      That - "lazy fuckers" - is a view I hold generally about the use of generative AI.

If someone is being paid to do a job and they are using LLM-based AI to do a significant proportion of it, then what are they being paid for? The AI is doing the "work", not them.

      They are being paid to use their brain to do a job.

Besides, since LLM-based AI virtually always carries the caveat "can give inaccurate and misleading results which should be checked carefully", you MUST spend an awful lot of time checking what it has done.

      So here is an idea - save the time and do the effing work yourself in the first place as you are being paid to....

The same goes for using it to summarise a document.

If you are asked to summarise something, that means you've been asked to READ IT and UNDERSTAND IT. If you are using AI to do it, you have done neither.

  6. heyrick Silver badge
    Meh

    Shrug

    Far too much stuff that should be openly available is buried behind paywalls. If AI pollutes that, I'm afraid I might be able to manage a shrug at best...

    Icon: meh, whatever.

    1. m4r35n357 Silver badge

      Re: Shrug

Ironically, there is a huge corpus of low-grade ML articles paywalled at the usual suspects (touted by Science Direct, Springer et al. at typically $30 a pop), specifically on the subjects of ML (and optimization for training ML). Here is an excellent arXiv link that gives a good overview of the problem: https://arxiv.org/abs/2301.01984 (The Evolutionary Computation Methods No One Should Use). A lot of this stuff predates LLMs, but just wait ;)

    2. Anonymous Coward
      Anonymous Coward

      Re: Shrug

The bigger problem is that a lot of the quality information and journalism is increasingly behind a login and/or paywall to prevent LLMs from scraping it, because it's usually not cheap to gather quality information. That leaves a lot of nonsense and poor-quality information that is easily scrapable. That deluge of crap is increasingly being written by LLMs and hoovered up again by LLMs creating some kind of information death spiral.

      1. Androgynous Cow Herd

        Re: Shrug

        Upvoted for this:

        "That deluge of crap is increasingly being written by LLMs and hoovered up again by LLMs creating some kind of information death spiral."

    3. Anonymous Coward
      Anonymous Coward

      Re: Shrug

Luckily, it is very rare that a paper on ML/AI is not published first on arXiv. The lead time of the big journals is just too long. Everybody is afraid of being scooped if they wait for the journal version to go online.

      I would say, if you cannot find it on arXiv, it is not worth reading.

  7. Red Eyes

    Meticulous Applications

Having just sifted through 30+ job applications, I can confirm there is a rise in the use of the word "meticulously".

    1. Anonymous Coward
      Anonymous Coward

      Re: Meticulous Applications

      I do hope your efforts will be considered -- and indeed were -- commendably meticulous by all concerned. Calling the task "intricate" would be an understatement.

    2. katrinab Silver badge
      Mushroom

      Re: Meticulous Applications

      Yes. I binned a CV recently that was quite obviously written by ChatGPT.

Their half-page description of their responsibilities and duties for their summer internship was simply not, in any way, what interns actually do.

  8. Mike 137 Silver badge

    Suggestive finding

From the UCL paper (by far the more readable of the two cited), the test adjectives and adverbs[1] (particularly the latter) are for the most part terms we might expect in marketing copy, being in general somewhat self-congratulatory. The finding that such terms are on the increase in supposedly scientific literature has two possible (and not mutually exclusive) implications: firstly, that the LLMs are primarily trained on commercial bullshit rather than on scientific papers; and secondly, that scientific writing is getting less objective. The first is quite expected and may actually assist in identifying AI-generated texts, but the second, if real, bodes badly for scientific progress.

[1] Adjectives: commendable, innovative, meticulous, intricate, notable, versatile, noteworthy, invaluable, pivotal, potent, fresh, ingenious. Adverbs: meticulously, reportedly, lucidly, innovatively, aptly, methodically, excellently, compellingly, impressively, undoubtedly, scholarly, strategically.
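
For illustration, the kind of frequency test these papers run can be sketched in a few lines of Python. The corpus file name and format (one "year<TAB>abstract" record per line) are assumptions for the sketch, not anything specified in the papers:

import re
from collections import Counter

# The UCL paper's marker terms, from the footnote above.
MARKERS = {
    "commendable", "innovative", "meticulous", "intricate", "notable",
    "versatile", "noteworthy", "invaluable", "pivotal", "potent",
    "fresh", "ingenious", "meticulously", "reportedly", "lucidly",
    "innovatively", "aptly", "methodically", "excellently",
    "compellingly", "impressively", "undoubtedly", "scholarly",
    "strategically",
}

hits, totals = Counter(), Counter()
with open("abstracts.tsv", encoding="utf-8") as f:  # assumed corpus file
    for line in f:
        year, _, text = line.partition("\t")
        words = re.findall(r"[a-z]+", text.lower())
        totals[year] += len(words)
        hits[year] += sum(w in MARKERS for w in words)

for year in sorted(totals):
    print(year, f"{10000 * hits[year] / totals[year]:.2f} markers per 10k words")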


    1. Paul Kinsler

      Re: Suggestive finding

Or, perhaps, that new entrants to scientific fields are more likely to use the new and exciting words from the set considered; especially if they have had less contact with established researchers who take a more traditional, less cutting-edge approach to adverbary and adjectivisation. It might be correlation, but causation? Even then, the wordage might just be a hangover from an initial, but later thoroughly reworked and checked, LLM draft. Perhaps.

    2. Anonymous Coward
      Anonymous Coward

      Re: Suggestive finding

      Academics are pushed by the Research Councils and by their institutions to engage outside of academia - "outreach". They are positively encouraged, and often explicitly trained, to use these non-scientific puff words when doing so. Seems like this is now back-contaminating the "learned journal" material.

    3. Ken Shabby Bronze badge
      Coat

      Re: Suggestive finding

      Bingo!

  9. Fruit and Nutcase Silver badge
    Alert

    amanfromMars 1

He has been commenting here far longer than these LLMs - still, it would be an interesting exercise to analyse his posts. One of these days, his missives may start appearing in text generated by LLMs.

    1. Anonymous Coward
      Anonymous Coward

      Re: amanfromMars 1

amanfromMars 1 texts are by definition an easy 'stop' condition for the LLMs.

If the text appears to be from amanfromMars 1 then you have optimised your text too much !!!

Roll back 2 levels and try again .... although it would aid in identifying 'AI'-generated texts .... so maybe not !!!

      :)

  10. MachDiamond Silver badge

    Word salad bingo

Anybody who works in the scientific publication production field is likely reading all sorts of other papers all the time, so word choice will often narrow and authors will use many of the same words and phrases. I have to admit guilt on that score. When I was writing theory-of-operation documents for subsystems, I first went through plenty of other documents to get a feel for the format and language.

One thing you learn if you try to break out of that mold is that management can throw it back at you with "rewrite it properly this time" if you aren't high enough on the ladder to escape that sort of scrutiny. While word usage might be a useful analysis tool for detecting AI-written prose, there are other, much more mundane explanations.

    1. Richard 12 Silver badge
      Headmaster

      Re: Word salad bingo

Natural linguistic drift is expected, but it is also very slow, for the reasons you give.

The article describes a very large step change, which is therefore very unlikely to be 'natural'.
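
A minimal sketch of that distinction, using made-up numbers purely for illustration: natural drift produces small year-on-year changes, while a step change shows up as a jump far outside that baseline:

import numpy as np

# Made-up yearly frequencies (per 10,000 words) of a marker word.
years = np.arange(2015, 2025)
freq = np.array([1.1, 1.0, 1.2, 1.1, 1.3, 1.2, 1.4, 1.3, 9.8, 11.2])

diffs = np.diff(freq)                 # year-on-year changes
sigma = diffs[:7].std(ddof=1)         # spread of the pre-2022 drift
for y, d in zip(years[1:], diffs):
    if abs(d) > 5 * sigma:            # flag jumps far outside natural drift
        print(f"{y}: step change of {d:+.1f} ({abs(d) / sigma:.0f} sigma)")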

  11. Anonymous Coward
    Anonymous Coward

In my sector at least, every other paper I'm asked to review comes from China, and of those, maybe 80-odd percent are 'here's my finite-element model of a thing, and it tells me stuff'.

So far, so good. But far too many of them cannot be considered research. They are just a case of: import mesh, attach a physics model from a standard library, and run. The actual original and hard work was done in the development of those libraries.

For comparison, one of my colleagues at the Uni. of Southampton is about to release a paper that highlights an error in equations that have been in use for 80 years. An ML model would NEVER have spotted that, because dozens of other derivative works all carry the same error.

    1. Fruit and Nutcase Silver badge

Indeed. ML is a closed loop, without original thought or inference about the unknown.

      As Private Frazer would say, "We're doomed!"

  12. Version 1.0 Silver badge
    Joke

    When you are an AI user with writing problems

    After reading this story in El Reg I asked AI to update "I am a drinker with writing problems" ...

    The immediate AI recommendation was "Finding Inspiration in a Glass"

    1. Anonymous Coward
      Anonymous Coward

      Re: When you are an AI user with writing problems

Thanks, it's very educational to pick a classic quote and then ask AI to make a recommendation. I just gave AI a classic Brendan Behan quote that I've used many times in responses at work. The complex AI-rewritten version was depressing and would have made nobody laugh:

      "The analogy of critics being like eunuchs in a harem is a poignant and evocative one. Eunuchs, by virtue of their castration, are denied the physical capacity to engage in sexual acts, despite their intimate knowledge of the harem's erotic affairs. Similarly, critics are often perceived as possessing a deep understanding of art, music, literature, or other creative endeavors, yet they lack the ability to produce works of their own. This perceived inability stems from the fact that critics are primarily observers and analysts, rather than practitioners. They may possess a refined eye for detail, an acute sense of aesthetics, and a vast knowledge of the history and theory of their chosen field, but they may not have the innate talent or the practical experience necessary to create original works. As a result, critics are often accused of being overly critical, nitpicking, or even destructive. They may be seen as lacking the empathy or the emotional depth to truly appreciate the creative process. Their focus on technical flaws or adherence to conventions can sometimes overshadow their appreciation for the beauty, originality, or emotional impact of a work. However, it is important to recognize that criticism, when it is constructive and well-informed, can play a vital role in the development of art and culture. By pointing out weaknesses, identifying areas for improvement, and challenging prevailing norms, critics can help artists refine their craft, push boundaries, and produce works that are more meaningful and enduring."

  13. Bebu
    Headmaster

    The slippery slope...

    just steepened.

The lazy performance and quality metrics adopted by institutions and granting bodies decades ago led to any number of hacks and rorts (an explosion of dubious journals, high publication rates of low-quality papers, downright fraud, etc.).

So now it's not surprising that a tool (the LLM), almost designed for the mass production of what is mostly dross, has been embraced by even reasonably honest researchers.

As a student in the 1970s-80s I learnt fairly quickly to skim the abstract, conclusion and discussion, and jump to the materials, methods and actual data (usually tabular results) - the graphs were often fucked up: mislabeled, incorrectly plotted....

In at least one instance the (sole) author completely misinterpreted his* results, as his claims were inarguably contradicted by his own experimental data. (And no, I'm not thinking of the cold fusion fiasco.)

I am not sure whether it's the loss of the gray cells, but it seems to me that the readability of papers has decreased over the last three decades. Once I could pick up the latest copy of Nature and profitably read an article or two over a coffee, but now I find it difficult to make much out of the abstracts, let alone the main text.

I have enormous sympathy for researchers whose first language is not English when they must face this. Perhaps Latin should have remained the lingua franca of science. But then one would risk the likes of Boris Johnson bollocking on.

* more probably some poor uncredited postdoc, graduate student or professorial assistant.

    1. Anonymous Coward
      Anonymous Coward

      Re: The slippery slope...

As I noted above, loads of the papers I am pestered to review now originate in China. Many demonstrate little to no original thinking, relying instead on "here's my fancy finite element model" bought (pirated…) from COMSOL or Ansys. Attach mesh and physics, run.

It is an affront to science that mentors are encouraging this behaviour to boost the numbers of PhD grads; original thinking is anything but what such papers produce. It's obviously a useful technique, but someone else already did the hard work of writing your library, and as such there is rarely anything in there that a European university would consider PhD material.

      Peer review is obviously a deeply flawed process in itself, as there is some utter rubbish in circulation.

I do enjoy the informal competition to cite the earliest possible references in anything one releases.
