AI cannot be credited as authors in papers, top academic journals rule

Science and Springer Nature, two leading academic journal publishers, on Thursday introduced new rules into their editorial policies addressing the use of generative AI tools to write papers. The updated policies tackle the rise of academics experimenting with OpenAI's latest product, ChatGPT. The large language model (LLM) can …

  1. bigtimehustler

    Good luck to them is all I can say. The AI will continually improve and their detection software will always be just behind it. Eventually, the AI simply won't be detectable. A new approach is really required, rather than shying away from it.

    1. The Man Who Fell To Earth Silver badge
      FAIL

      Goebbels Engine

      Because at best the AI is rearranging and correlating what's in the training data, often taken from the Internet, AI should be called a Goebbels Engine: having a lie appear a thousand times in the training data makes it the AI's truth.

      1. Jonathan Richards 1
        Stop

        Re: Goebbels Engine

        Has anyone considered that there is a regressive loop in the making? It won't be very long before ChatGPT spew (or similar) starts appearing in the training data for ChatGPT (or similar), especially if supposedly creative writers are passing off the outputs as their own. ChatGPT cannot be capable of original thought, although it's more likely than a million monkeys with typewriters to come up with something that *seems* coherent.
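
        Here's a toy sketch of that loop (an illustration only, not a claim about any real LLM's training pipeline): each "generation" is a model fitted to samples drawn from the previous generation, and the fitted distribution drifts and narrows. The Gaussian stand-in and sample size are invented for the demo.

            # Toy regressive loop: generation N is a Gaussian fitted to
            # samples drawn from generation N-1. With finite samples the
            # fitted spread drifts and tends to shrink, so later
            # generations see an ever narrower version of the real data.
            import random
            import statistics

            mean, stdev = 0.0, 1.0  # generation 0: the "real" data
            for generation in range(1, 11):
                samples = [random.gauss(mean, stdev) for _ in range(50)]
                mean, stdev = statistics.mean(samples), statistics.stdev(samples)
                print(f"gen {generation:2d}: mean {mean:+.3f}, stdev {stdev:.3f}")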

        1. The Man Who Fell To Earth Silver badge

          Re: Goebbels Engine

          Prof Gary Smith's example of asking ChatGPT how many bears the Soviets launched into space, and its answer of 52, complete with bear names and launch dates, is an example of ChatGPT producing garbage. The more this example gets brought up in discussions, the more it potentially feeds the Goebbels Engine if those discussions get scraped into the training data.

          Other garbage already in the Goebbels Engine of ChatGPT is all the superstitious dogma. Its ability to distinguish science from rubbish like Creationism or Witchcraft is zero.

          1. Tom 7

            Re: Goebbels Engine

            > Its ability to distinguish science from rubbish like Creationism or Witchcraft is zero.

            So like most of the USA!

    2. Timop

      Yeah, let's just wait for the fourth generation, which is trained on output from the third generation, which is trained on output from the second generation, which is trained on output from the first generation, which is trained on a miscellaneous data dump scraped from the internet and other undisclosed sources.

  2. that one in the corner Silver badge

    Citations needed

    If a desire to fool the journals leads to these models being able to generate accurate citations, then that capability could hopefully be added to GitHub's Copilot, which would fill the gaping hole of missing attribution for the code it regurgitates.

    Sadly, this won't happen, because the goal of the LLMs would be to generate citations that just *look* right but aren't necessarily accurate[1], the same as the rest of the text.

    [1] Bloggs A., Chandra P., et al., 1981, pp. 12-17.

    1. Jonathan Richards 1

      Re: Citations needed

      Just so. And this would be so trivial to check that it would be a dead giveaway for LLM output.
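
      For instance, a rough sketch of the check (Crossref's public works API is real; the "no match means fabricated" shortcut and the example query are simplifications, since Crossref only covers works with registered DOIs):

          # Look a free-text citation up against Crossref's public works
          # API. No plausible hit is a strong hint -- not proof -- that
          # the citation was invented.
          import json
          import urllib.parse
          import urllib.request

          def lookup(citation):
              query = urllib.parse.urlencode(
                  {"query.bibliographic": citation, "rows": "1"})
              url = "https://api.crossref.org/works?" + query
              with urllib.request.urlopen(url) as resp:
                  items = json.load(resp)["message"]["items"]
              return items[0] if items else None

          hit = lookup("Bloggs A., Chandra P. et al, 1981")
          print(hit.get("title") if hit else "no match -- check by hand")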

  3. Anonymous Coward
    Anonymous Coward

    What are they worried about when they have working peer review? /s

    1. Anonymous Coward
      Anonymous Coward

      "Peer Review of Scientific Papers"

      Peer review is an extremely politicized process. If you want your paper to be published by a "reputable" journal, don't step on anyone's toes, and don't rock the boat.

      1. Peter2 Silver badge

        Re: "Peer Review of Scientific Papers"

        This depends upon the journal. Particular subjects have serious issues with peer reviewers not wanting criticism, largely because they are living in a house of cards and are a bit concerned about the slightest touch bringing the whole edifice down around them.

        Others don't. While not being a professional in a particular subject, I do have a lively interest in it, enough to be a member of the foremost professional body, which comes with a quarterly paper copy of the internationally ranked (A1) journal. I have seen somebody attempt to roll over toes with a steamroller and overturn the boat. Their paper had a short response which succinctly yet systematically pointed out the considerable errors in it.

  4. Pascal Monett Silver badge

    "all paper submissions must be the original work of authors"

    Let's stop tiptoeing around the subject, shall we?

    All paper submissions must be the original work of Humans.

    When monkeys are capable of submitting a scientific paper we can amend that but, until AI actually means what the letters are supposed to stand for in the real world, machines must remain where they are: useful tools and indispensable support for the actual scientific brain.

    1. Jonathan Richards 1
      Thumb Up

      Re: "all paper submissions must be the original work of authors"

      > All paper submissions must be the original work of Humans

      Klaatu has left the chat.

    2. FeepingCreature Bronze badge

      Re: "all paper submissions must be the original work of authors"

      But then how is it plagiarism *of ChatGPT* if ChatGPT is just used as a tool?

      Is my code plagiarized from vim?

      1. Fred Daggy Silver badge
        Pint

        Re: "all paper submissions must be the original work of authors"

        Agreed; I could argue that it could be an author.

        As always, diligent research to test (verify or refute) the claims and *investigate the material in the citations* is required. Just like in the old days. If the machine got it right, great; if not, then one would need to question whether AI was a good idea for that paper or topic.

        A better middle ground could be to acknowledge which portions were obtained from an AI source, and when and where they were obtained. Academic convention already requires that citations of web sources include a date and time of access, because their content is not static. Even books require a date of publication and/or an edition number. That's honest, and it in no way diminishes the work of the human or near-human authors.

        (Beer, because anything academic makes me think of University, which, like Pavlov's dogs, leads me to think of the Union Bar).

        1. Peter2 Silver badge

          Re: "all paper submissions must be the original work of authors"

          The purpose of a research paper is to present research, the thinking behind research tests, as well as the methods used in the tests and the results.

          "AI" in the form of the GPT library does not do research, or the thinking behind research. It doesn't do tests or think about results. All it does is spit out a bunch of text, on the basis of being given an answer first and trying to justify it. From a point of view of writing a research paper it's a bit worthless, isn't it?

          The purpose of peer review is twofold: it attempts to pick up on failures in scientists' methods by having their peers point out the errors, and it gives feedback for improvement.

          Now, if a "paper" is presented with a bunch of text which starts with an answer and then tries to justify it's position without research or thinking then it has zero value from a research point of view. Even a poorly written paper which is reviewed might have something useful in it; "AI" written versions will certainly not. There is also no point in giving feedback to a bunch of text outputted from an "AI" as it's incapable of understanding the criticism or learning from the experience, so any time so expended is completely wasted. This becomes more serious when the reviewer is an expert within their field, making their time more valuable because if they are flooded with crap then it's stealing their time from doing something productive, such as reviewing a humans paper who genuinely wants their feedback to improve their work and understanding.

          Anybody should be able to grasp this without serious thought, and anybody submitting research papers should have figured it out for themselves before wasting the reviewers' time.

          1. FeepingCreature Bronze badge

            Re: "all paper submissions must be the original work of authors"

            I could see ChatGPT as a valuable assistant for getting formalisms correct and improving readability. It's definitely worthless for research, and probably worthless for analysis, though it may help generate ideas.

      2. Crypto Monad Silver badge

        Re: "all paper submissions must be the original work of authors"

        > But so how is it plagiarism *of ChatGPT* if ChatGPT is just used as a tool?

        More accurately, it is plagiarism of the input text used to train ChatGPT - which it regurgitates in fragments strung together.

        1. FeepingCreature Bronze badge

          Re: "all paper submissions must be the original work of authors"

          (It does not do that.)

          Seriously, I've seen that bandied about, and it's simply not how LLMs work.

          When a NN regurgitates some of its inputs unchanged, it's called mode collapse and generally considered a bad thing. The whole point of NNs is that they can *interpolate*, rather than regurgitate - and they do so in a highly abstracted feature space rather than between samples.
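
          A toy picture of the difference (the four-dimensional "embeddings" here are invented for the demo): interpolation happens between points in a learned feature space, producing a new point that matches neither original sample, rather than splicing stored samples together.

              # Blend two points in a made-up feature space. The midpoint
              # is a new point that is neither sample A nor sample B --
              # the sense in which NNs interpolate rather than regurgitate.
              def lerp(a, b, t):
                  return [ai + t * (bi - ai) for ai, bi in zip(a, b)]

              sample_a = [0.9, 0.1, 0.3, 0.0]  # hypothetical features of A
              sample_b = [0.2, 0.8, 0.3, 0.5]  # hypothetical features of B

              for t in (0.0, 0.5, 1.0):
                  print(t, [round(x, 2) for x in lerp(sample_a, sample_b, t)])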

  5. lglethal Silver badge
    Go

    Can I suggest the following - any paper found to have been submitted having used one of these chatbots to write part or all of it, will see all of its authors banned from ever submitting another paper to these major journals. Have the journals swap ban lists, so that a ban really does mean a ban from all major journals.

    Being banned from submitting papers to major journals would kill an academic's career, since 99% of universities work on the principle that the number of articles published in major journals is the only way to tell whether an academic is any good or not. (The other 1% prefer nepotism.)

    Such a ban existing would kill the use of Chatbots because no senior academic would allow the risk to their careers just to get a paper out. They've got far too high an opinion of themselves and their own importance...

    1. Anonymous Coward
      Anonymous Coward

      Good luck with that. As another commentard noted "Eventually, the AI simply won't be detectable".

      Going nuclear on scientists' careers would also potentially have a chilling effect on submissions.

      Not to mention that mere accusation could be sufficient to terminate careers, without any proof of guilt. Especially in jurisdictions where "guilty unless proved innocent" is the norm.

      1. lglethal Silver badge
        Go

        I don't agree with the comment "AI simply won't be detectable." It might not be detectable at the time of publication, but the tests to detect it will follow behind the AI improvements and become able to detect the usage. To give you a good analogy: when athletes dope, there are some doping drugs which can't be detected by current testing. But wait 5-10 years, and those drugs CAN be detected. We've seen how many medal winners from the London games (especially from Russia and their friends) have been stripped of their medals since. They passed the tests then, but the samples are kept, and when new and better tests become available, the samples are retested, and suddenly the dopers are discovered.

        Why should it be any different with a text journal? The text doesn't go away, doesn't degrade (like a medical sample does), and doesn't need special equipment to keep it frozen and in good condition. It will be much easier and quicker to retest journal papers for AI usage than it currently is to re-test for drug cheats in sport. But we do it anyway in sport, despite the costs, because having a fair playing field is all that keeps people playing. Well, guess what: the same applies in academia. Being first to publication is a big boost to getting fame and funding. So stamping out the use of AI to artificially speed up getting a paper out for publication is definitely something that should be done.

        I don't want an accusation culture either, but someone cheating to get a paper out first, or falsifying data using an AI in order to secure funding, is taking potential funding from someone who's following the rules. And that should be punished. Finding the right balance is important, but there have to be long-term consequences if you choose to break the rules for short-term gain...

        1. Anonymous Coward
          Anonymous Coward

          You're trying to conflate drug taking with using AI.

          A better analogy would be the use of calculators in exams back in the 1970s - some people / authorities regarded them as cheating.

          Using AI as a tool is no different than using a calculator today - it's not "cheating".

          However, if scientists try and falsify data or fraudulently claim funding, they should be held to account, whether they use AI or not.

          1. Jonathan Richards 1
            FAIL

            Cheating

            Having taken exams in the 1970s, I can confirm that using electronic calculators was regarded as cheating, for the good and sufficient reason that the use of electronic calculators was banned, and so by definition it was cheating. If you want the rules changed, then address the rule-makers.

            Also, I don't think that using "AI as a tool is no different than using a calculator". A calculator, properly coded and handled, is more precise and less error-prone than other ways of doing arithmetic. The right input generates the right output. As a tool, an LLM regurgitates words in grammatically correct chunks which are not reproducible. If scientists have a point to make in scientific writing, they should be able to formulate the idea precisely in a written report. This is especially true when reporting a novel result, which is the very essence of a scientific paper. Even a monograph drawing on published results is supposed to draw novel insights from the collection.
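
            The reproducibility gap in one toy contrast (the word list and weights are invented; real LLM decoding samples from a learned distribution over tokens in the same spirit):

                # A calculator-style computation is a pure function: the
                # same input gives the same output on every run. Sampling
                # words with nonzero randomness does not.
                import random

                print(6 * 7)  # deterministic: always 42

                vocabulary = ["precise", "plausible", "confident", "novel"]
                weights = [0.1, 0.6, 0.2, 0.1]
                print(random.choices(vocabulary, weights=weights, k=5))  # varies per run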

        2. doublelayer Silver badge

          I don't really have a problem with banning the tools and, having banned them, punishing those who violate such bans. When those tools have been used to generate false data, it's probably easier to prove the lies and punish the author. However, if the tool is just used to produce text, I'm less confident that a test will eventually become available that detects it well.

          The problem is that AI's text looks generally like humans' text, even if it's written differently. All a test can do is suggest that some words look like they could have been automatically generated, but it can't prove that. It can use a bunch of methods to distinguish between human-written and machine-written text, but many of those are prone to mistakes that could be damaging if people are assumed to be guilty if the computer says it's suspicious. For example, if it builds a model of typical text written by a person and compares writing to that, passages written by other contributors, passages written in a group, or passages that have been significantly modified in response to critiques would probably look quite different from the norm. For that matter, I know that my writing style can vary quite a lot depending on whether I have an interesting point to make, whether I am tired or active, and whether I've successfully switched my brain from informal to formal writing modes. If it's about grammatical or vocabulary patterns, people who are less familiar with the language are more likely to have patterns they're more comfortable with and use more often.
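
          A naive version of that "model of typical text" check (a sketch; the texts and the 0.3 threshold are invented) makes the flimsiness concrete: everything hinges on an arbitrary cut-off applied to noisy features.

              # Profile an author's past prose as a word-frequency vector
              # and flag new text that sits far from it. The threshold is
              # an arbitrary knob -- exactly the unreliability described.
              import math
              from collections import Counter

              def profile(text):
                  return Counter(text.lower().split())

              def cosine(p, q):
                  dot = sum(p[w] * q[w] for w in p.keys() & q.keys())
                  norm = (math.sqrt(sum(v * v for v in p.values()))
                          * math.sqrt(sum(v * v for v in q.values())))
                  return dot / norm if norm else 0.0

              known = "results were consistent with the measurements we reported earlier"
              candidate = "the findings robustly demonstrate an unprecedented paradigm"

              score = cosine(profile(known), profile(candidate))
              print("suspicious" if score < 0.3 else "consistent with the author")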

          If five years from now I can present you with a computer-provided report which shades a bunch of text in various warning colors for possible use of AI, how much do you have to see before you're willing to use the punishments you describe? If you wait for extreme confidence, a lot of people aren't going to get caught and they may think that using AI is safe because you'll never catch them. If you use low confidence answers, a lot of innocent people are going to get punished. Even if your test is right, you can't prove it and the person who received the punishment can probe the tool you used to point out its problems and claim you attacked them on faulty evidence.

    2. Rafael #872397

      > Being banned from submitting papers to major journals would kill an academic career, since 99% of universities work on the principle that the number of articles published in major journals is the only way to tell if an academic is any good or not. (The other 1% prefer nepotism.)

      Depends on the university. Predatory publishing and vanity publishing arose precisely because, for many universities, quantity is better than quality.

      I get a lot of spam from "publishers" asking for my "contributions" in fields I know nothing about, or asking me to resubmit a conference paper ("as is, no need for new material") to their journals, all for a tiny little contribution to keep the journals open access.

  6. John H Woods Silver badge

    Goodhart's Law

    This was an almost inevitable consequence of having people who don't understand the writings of scientists determine whether those scientists remain employed, on the basis of the number of things they have published.

  7. Anonymous Coward
    Anonymous Coward

    Quis custodiet

    I once came across a pseudoscience journal published as genuine science by the respected academic publisher Elsevier. They had presumably been blagged by a select group of fruitloops who had gained respectable qualifications in related subjects, so they appeared kosher. The fruitloops appointed themselves to the editorial board, wrote papers and peer-reviewed each other's, recruited their students to the fruitloop cause, and everybody cited each other's papers in an apparent display of scientific orthodoxy. I have no idea how alert Elsevier are to this sort of thing; one trusts that it was rumbled and they shut it down.

    So, when are academic publishers going to recruit AIs to do their journal audits, peer reviewing and, eventually, editing? So much easier and quicker than herding academic cats.

  8. Dr. G. Freeman

    I think it's mostly because, within a week, ChatGPT would be the most published author in all scientific disciplines, by such a wide margin as to make any sort of metric on papers published (the h-index and the like) completely meaningless.
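
    For reference, the h-index mentioned above is simple arithmetic (the standard definition; the citation counts below are made up): the largest h such that the author has h papers with at least h citations each.

        # h-index: sort citation counts descending and count how many
        # papers have at least as many citations as their rank.
        def h_index(citations):
            ranked = sorted(citations, reverse=True)
            return sum(1 for rank, cites in enumerate(ranked, start=1)
                       if cites >= rank)

        print(h_index([10, 8, 5, 4, 3]))   # 4
        print(h_index([25, 8, 5, 3, 3]))   # 3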

  9. Eclectic Man Silver badge
    Unhappy

    Acknowledgements vs co-Authorship

    I don't see any problem with acknowledging ChatGPT for providing support with the written text, but a co-author should have contributed to the creative or scientific parts of the paper.

    An acquaintance of mine asked me once about someone with whom he had discussed an idea.* A few months later he discovered that this person had written up and published a paper on it, without his knowledge, listing him as co-author. Now, many people might be grateful for this extra publication, but my acquaintance was a full Oxford Professor and FRS, and somewhat peeved by this, as he had not even been asked.

    Generally my advice to scientists is to read some good books and learn how to write a grammatically correct sentence, then you can write your own papers, and even publish the pearls of your wisdom on sites such as The Register.

    *No names, no pack drill, they are both, AFAIK, still alive.

  10. Anonymous Coward
    Anonymous Coward

    Seems to be a backward decision

    By explicitly banning the crediting of AI contributions, the likely outcome is that authors will hide any AI involvement and publish regardless, with the rest of the planet being none the wiser (assuming AI detection tools aren't successful).

    The opposite of what the publishers appear to be trying to achieve.

    AI / ML isn't going away - the genie is out of the bottle.

    1. Jonathan Richards 1

      Re: Seems to be a backward decision

      Did you read the article? Nature at least has not banned the crediting of AI^W^W LLM contributions, they have explicitly said that such contributions must be acknowledged in the text. That doesn't mean putting "ChatGPT, 1.1.117" down as one of the authors.

      1. Anonymous Coward
        Anonymous Coward

        Re: Seems to be a backward decision

        The point being that they have zero incentive for doing that.

  11. Ian Johnston Silver badge

    > Although tools like ChatGPT produce text free of grammatical errors, they tend to get facts wrong. They can cite gibberish studies containing false numbers, but sound convincing enough to trick humans.

    Just like psychology, then.

    1. Tom 7

      A bit like the output from an MBA

      or privately educated person going into industry. Vast reams of documents to show they're busy and important but when you actually have time to read them....

      I've often wondered if the purpose of learning to write essays and papers is to hide lack of knowledge.

  12. Anonymous Coward
    Anonymous Coward

    Doesn't ChatGPT log what it outputs?

    Surely ChatGPT, provided as it is as a service rather than a locally installable program, has a log of everything it has ever generated. If so, they could simply offer a service to search that log for similarity, and you would then know almost definitively whether something was written by it. It wouldn't help with other LLMs, but it would be a start.
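
    One way such a lookup could work (pure speculation about the mechanism; nothing here reflects how OpenAI actually stores or searches anything): shingle texts into word n-grams and compare the overlap, a standard near-duplicate-detection trick.

        # Near-duplicate check via word 5-gram shingles and Jaccard
        # overlap between a (hypothetical) logged output and a submission.
        def shingles(text, n=5):
            words = text.lower().replace(".", "").split()
            return {" ".join(words[i:i + n])
                    for i in range(len(words) - n + 1)}

        def jaccard(a, b):
            sa, sb = shingles(a), shingles(b)
            return len(sa & sb) / len(sa | sb) if sa and sb else 0.0

        logged = "between 1957 and 1969 the soviets launched 52 bears into orbit"
        submitted = "The Soviets launched 52 bears into orbit between 1957 and 1969."
        print(f"overlap: {jaccard(logged, submitted):.2f}")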

  13. DrSunshine0104

    What about editing?

    I know the answer is certainly 'in the future'.

    But can ChatGPT actually make an edit to a section of a text without having to rewrite the entire document? Can ChatGPT rewrite a paragraph in isolation, making it flow without changing the voice? I feel this would be an obvious sign of generated text for now, if it is even a hindrance.

    I do ponder how many students actually use this. Is this a lot of noise over a handful of bad actors, or is it actually a problem?

    To those who have used it in school... good luck. As an apprentice right out of school, you'd better hope you can fake knowing this stuff. ChatGPT isn't going to sit your interview for you, and you'll likely not have the experience to bullshit your way through the interview questions from the expert across the table.

    1. Anonymous Coward
      Anonymous Coward

      Re: What about editing?

      "the expert across the table"?!?

      You don't have much experience of interviews.
