Good luck to them is all I say. The AI will continually improve and their detection software will always be just behind it. Eventually, the AI simply won't be detectable. A new answer is really needed, rather than shying away from the problem.
AI cannot be credited as authors in papers, top academic journals rule
Science and Springer Nature, two leading academic journal publishers, introduced new rules addressing the use of generative AI tools to write papers in their editorial policies on Thursday. The updated policies tackle the rise of academics experimenting with OpenAI's latest product, ChatGPT. The large language model (LLM) can …
COMMENTS
-
-
-
Saturday 28th January 2023 22:07 GMT Jonathan Richards 1
Re: Gobbels Engine
Has anyone considered that there is a regressive loop in the making? It won't be very long before ChatGPT spew (or similar) starts appearing in the training data for ChatGPT (or similar), especially if supposedly creative writers are passing off the outputs as their own. ChatGPT cannot be capable of original thought, although it's more likely than a million monkeys with typewriters to come up with something that *seems* coherent.
-
Sunday 29th January 2023 13:24 GMT The Man Who Fell To Earth
Re: Gobbels Engine
Prof Gary Smith's example of asking ChatGPT how many bears the Soviets launched into space, its answer of 52, and its naming of the bears and giving launch dates, is often cited as an example of ChatGPT producing garbage. The more this example gets brought up in discussions, the more it potentially feeds the Gobbels Engine if those discussions get scraped into the training data.
Other garbage already in the Gobbels Engine of ChatGPT is all the superstition and dogma. Its ability to distinguish science from rubbish like Creationism or Witchcraft is zero.
-
-
-
Sunday 29th January 2023 12:54 GMT Timop
Yeah, let's just wait for the fourth generation, which is trained on output from the third generation, which is trained on output from the second generation, which is trained on output from the first generation, which is trained on a miscellaneous data dump scraped from the internet and other undisclosed sources.
-
-
Friday 27th January 2023 23:39 GMT that one in the corner
Citations needed
If a desire to fool the journals leads to these models being able to generate accurate citations, then that could hopefully be added to GitHub's Copilot, which would fill the gaping hole of missing attribution for the code it regurgitates.
Sadly, this won't happen, because the goal of the LLMs would be to generate citations that just *look* right but aren't necessarily accurate[1], the same as the rest of the text.
[1] Bloggs A., Chandra P. et al, 1981, p12 - 17.
-
-
-
Monday 30th January 2023 10:42 GMT Peter2
Re: "Peer Review of Scientific Papers"
This would depend upon the journal. Particular subjects have serious issues with peer reviewers not wanting criticism, largely because they are living in a house of cards and are a bit concerned about the slightest touch bringing the whole edifice down around them.
Others don't. While not a professional in a particular subject, I do have a lively interest in it, enough to be a member of the foremost professional body, which comes with a paper copy of its internationally ranked (A1) journal quarterly. I have seen somebody attempt to roll over toes with a steamroller and overturn the boat. Their paper got a short response which succinctly yet systematically pointed out the considerable errors in it.
-
-
-
Saturday 28th January 2023 06:37 GMT Pascal Monett
"all paper submissions must be the original work of authors"
Let's stop tiptoeing around the subject, shall we?
All paper submissions must be the original work of Humans.
When monkeys are capable of submitting a scientific paper we can amend that, but until AI actually means what the letters are supposed to stand for in the real world, machines must remain where they are: useful tools and indispensable support for the actual scientific brain.
-
-
Monday 30th January 2023 10:21 GMT Fred Daggy
Re: "all paper submissions must be the original work of authors"
Agreed, though I could argue that it could be an author.
As always, diligent research to test (verify or refute) the claims and *investigate the material in the citations* is required. Just like in the old days. If the machine got it right, great, if not then one would need to question if AI was a good idea for that paper or topic.
A better middle ground could be to acknowledge the portions that were obtained from an AI source, and when and where they occurred. Most academic papers already require that acknowledgements of web sources include a date and time of access, because their content is not static. Even books require a date of publication and/or edition number. That's honest, and in no way diminishes the work of the human or near-human authors.
(Beer, because anything academic makes me think of University, which, like Pavlov's dogs, leads me to think of the Union Bar).
-
Monday 30th January 2023 11:27 GMT Peter2
Re: "all paper submissions must be the original work of authors"
The purpose of a research paper is to present research, the thinking behind research tests, as well as the methods used in the tests and the results.
"AI" in the form of the GPT library does not do research, or the thinking behind research. It doesn't do tests or think about results. All it does is spit out a bunch of text, on the basis of being given an answer first and trying to justify it. From a point of view of writing a research paper it's a bit worthless, isn't it?
The purpose of peer review is twofold: it attempts to pick up on failures of method, with scientists' peers pointing out their errors, and it gives feedback to improve.
Now, if a "paper" is presented as a bunch of text which starts with an answer and then tries to justify its position without research or thinking, then it has zero value from a research point of view. Even a poorly written paper which is reviewed might have something useful in it; "AI"-written versions certainly will not. There is also no point in giving feedback on a bunch of text outputted from an "AI", as it's incapable of understanding the criticism or learning from the experience, so any time expended on it is completely wasted. This becomes more serious when the reviewer is an expert within their field, making their time more valuable: if they are flooded with crap, it steals their time from doing something productive, such as reviewing the paper of a human who genuinely wants feedback to improve their work and understanding.
Anybody should be able to grasp this without serious thought, and anybody submitting research papers should have figured it out for themselves before wasting the reviewer's time.
-
-
-
Tuesday 31st January 2023 07:24 GMT FeepingCreature
Re: "all paper submissions must be the original work of authors"
(It does not do that.)
Seriously, I've seen that bandied about, and it's simply not how LLMs work.
When a NN regurgitates some of its inputs unchanged, it's called mode collapse and generally considered a bad thing. The whole point of NNs is that they can *interpolate*, rather than regurgitate - and they do so in a highly abstracted feature space rather than between samples.
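To illustrate the distinction, here's a toy sketch in Python. It's purely illustrative (real networks interpolate in learned feature spaces of thousands of dimensions, not three), but it shows the geometric idea: interpolation produces outputs that equal neither memorised sample.

```python
# Toy illustration of interpolation vs regurgitation - not a real
# neural network, just the geometric idea in a 3-dimensional space.
sample_a = [1.0, 0.0, 0.0]  # one "memorised" training sample
sample_b = [0.0, 1.0, 0.0]  # another

def interpolate(a, b, t):
    """Linear blend in feature space: t=0 returns a, t=1 returns b."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

blended = interpolate(sample_a, sample_b, 0.5)
# The blend equals neither training sample - mode collapse
# (regurgitation) would mean only ever emitting a or b exactly.
```

Mode collapse is the degenerate case where the model only ever lands on the endpoints.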
-
-
-
Saturday 28th January 2023 10:01 GMT lglethal
Can I suggest the following: any paper found to have been written, in part or in whole, by one of these chatbots will see all of its authors banned from ever submitting another paper to these major journals. Have the journals swap ban lists, so that a ban really does mean a ban from all major journals.
Being banned from submitting papers to major journals would kill an academic's career, since 99% of universities work on the principle that the number of articles published in major journals is the only way to tell if an academic is any good or not. (The other 1% prefer nepotism.)
Such a ban existing would kill the use of Chatbots because no senior academic would allow the risk to their careers just to get a paper out. They've got far too high an opinion of themselves and their own importance...
-
Saturday 28th January 2023 11:46 GMT Anonymous Coward
Good luck with that. As another commentard noted "Eventually, the AI simply won't be detectable".
Going nuclear on scientists' careers would also potentially have a chilling effect on submissions.
Not to mention that mere accusation could be sufficient to terminate careers, without any proof of guilt. Especially in jurisdictions where "guilty unless proved innocent" is the norm.
-
Saturday 28th January 2023 12:43 GMT lglethal
I don't agree with the comment "AI simply won't be detectable". It might not be detectable at the time of publication, but the tests will follow behind the AI improvements and become able to detect the usage. To give you a good analogy: when athletes dope, there are some doping drugs which can't be detected by current testing. But wait 5-10 years, and those drugs CAN be detected. We've seen how many medal winners from the London games (especially from Russia and their friends) have been stripped of their medals since. They passed the tests then, but the samples are kept, and when new and better tests become available, the samples are retested, and suddenly the dopers are discovered.
Why should it be any different with a text journal? The text doesn't go away, doesn't degrade (like a medical sample), and doesn't need special equipment to keep it frozen and in good condition. It will be much easier and quicker to retest journal papers for AI usage than it currently is to retest for drug cheats in sport. But we do it anyway in sport, despite the costs, because having a fair playing field is all that keeps people playing. Well, guess what, the same applies in academia. Being first to publication is a big boost to getting fame and funding, so stamping out the use of AI to artificially speed up the process of getting a paper out for publication is definitely something that should be done.
I don't want an accusation culture either, but someone cheating to get a paper out first, or falsifying data using an AI in order to secure funding, is taking potential funding from someone who's following the rules. And that should be punished. Finding the right balance is important, but there have to be long-term consequences if you choose to break the rules for short-term gain...
-
Saturday 28th January 2023 17:42 GMT Anonymous Coward
You're trying to conflate drug taking with using AI.
A better analogy would be the use of calculators in exams back in the 1970s - some people / authorities regarded them as cheating.
Using AI as a tool is no different than using a calculator today - it's not "cheating".
However, if scientists try and falsify data or fraudulently claim funding, they should be held to account, whether they use AI or not.
-
Saturday 28th January 2023 22:24 GMT Jonathan Richards 1
Cheating
Having taken exams in the 1970s, I can confirm that using electronic calculators was regarded as cheating, for the good and sufficient reason that their use was banned, and so by definition it was cheating. If you want the rules changed, then address the rule-makers.
Also, I don't think that "AI as a tool is no different than using a calculator". A calculator, properly coded and handled, is more precise and less error-prone than other ways of doing arithmetic: the right input generates the right output. As a tool, an LLM regurgitates words in grammatically correct chunks which are not reproducible. If scientists have a point to make in scientific writing, they should be able to formulate the idea precisely in a written report. This is especially true when reporting a novel result, which is the very essence of a scientific paper. Even a monograph drawing on published results is supposed to draw novel insights from the collection.
-
-
Sunday 29th January 2023 07:58 GMT doublelayer
I don't really have a problem with banning the tools, or, having banned them, with punishing those who violate the bans. When the tools have been used to generate false data, it's probably easy enough to prove the lies and punish the author. However, if the tool is just used to produce text, I'm less confident that a test will eventually become available that detects it well.
The problem is that AI's text looks generally like humans' text, even if it's written differently. All a test can do is suggest that some words look like they could have been automatically generated, but it can't prove that. It can use a bunch of methods to distinguish between human-written and machine-written text, but many of those are prone to mistakes that could be damaging if people are assumed to be guilty if the computer says it's suspicious. For example, if it builds a model of typical text written by a person and compares writing to that, passages written by other contributors, passages written in a group, or passages that have been significantly modified in response to critiques would probably look quite different from the norm. For that matter, I know that my writing style can vary quite a lot depending on whether I have an interesting point to make, whether I am tired or active, and whether I've successfully switched my brain from informal to formal writing modes. If it's about grammatical or vocabulary patterns, people who are less familiar with the language are more likely to have patterns they're more comfortable with and use more often.
If five years from now I can present you with a computer-provided report which shades a bunch of text in various warning colors for possible use of AI, how much do you have to see before you're willing to use the punishments you describe? If you wait for extreme confidence, a lot of people aren't going to get caught and they may think that using AI is safe because you'll never catch them. If you use low confidence answers, a lot of innocent people are going to get punished. Even if your test is right, you can't prove it and the person who received the punishment can probe the tool you used to point out its problems and claim you attacked them on faulty evidence.
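As a rough sketch of why such detectors are shaky, here's a crude stylometric comparison in Python. This is an illustrative toy (no real detection tool works this simply): it fingerprints a text by its relative word frequencies and compares fingerprints with cosine similarity.

```python
from collections import Counter
import math

def style_vector(text):
    """Crude stylometric fingerprint: relative word frequencies."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    return {w: c / total for w, c in counts.items()}

def cosine(u, v):
    """Cosine similarity between two sparse frequency vectors."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical writing samples from the same author on different days.
known_human = "the results were surprising and we repeated the experiment twice"
same_author_tired = "experiment repeated results surprising see appendix for details"

score = cosine(style_vector(known_human), style_vector(same_author_tired))
```

A text scores 1.0 against itself, but the same author writing on a different day, or after heavy editing, can score much lower, which is exactly the false-positive risk described above.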
-
-
-
Monday 30th January 2023 10:58 GMT Rafael #872397
Being banned from submitting papers to major journals would kill an academic career, since 99% of universities work on the principle that the number of articles produced in major papers is the only way to tell if an academic is any good or not. (The other 1% prefer nepotism).
Depends on the university. Predatory publishing and vanity publishing arose precisely because, for many universities, quantity beats quality.
I get a lot of spam from "publishers" asking for my "contributions" on fields I know nothing about, or asking to resubmit a paper ("as is, no need for new material") to a conference to their journals, all for a tiny little contribution to keep the journals open access.
-
-
Saturday 28th January 2023 14:19 GMT Anonymous Coward
Quis custodiet
I once came across a pseudoscience journal published as genuine science by respected academic publisher Elsevier. They had presumably been blagged by a select group of fruitloops who had gained respectable qualifications in related subjects, so they appeared kosher. The fruitloops appointed themselves to the editorial board, wrote papers and peer-reviewed each other's, recruited their students to the fruitloop cause, and everybody cited each other's papers in an apparent display of scientific orthodoxy. I have no idea how alert Elsevier are to this sort of thing; one trusts that it was rumbled and they shut it down.
So, when are academic publishers going to recruit AIs to do their journal audits, peer reviewing and, eventually, editing? So much easier and quicker than herding academic cats.
-
Saturday 28th January 2023 16:41 GMT Eclectic Man
Acknowledgements vs co-Authorship
I don't see any problem with acknowledging ChatGPT for providing support with the written text, but a co-author should have contributed to the creative or scientific parts of the paper.
An acquaintance of mine asked me once about someone with whom he had discussed an idea.* A few months later he discovered that this person had written up and published a paper on it, without his knowledge, listing him as co-author. Now, many people might be grateful for this extra publication, but my acquaintance was a full Oxford Professor and FRS, and somewhat peeved by this, as he had not even been asked.
Generally my advice to scientists is to read some good books and learn how to write a grammatically correct sentence, then you can write your own papers, and even publish the pearls of your wisdom on sites such as The Register.
*No names, no pack drill, they are both, AFAIK, still alive.
-
Saturday 28th January 2023 17:52 GMT Anonymous Coward
Seems to be a backward decision
By explicitly banning the crediting of AI contributions, the likely outcome is that authors will hide any AI involvement and publish regardless, with the rest of the planet being none the wiser (assuming AI detection tools aren't successful).
The opposite of what the publishers appear to be trying to achieve.
AI / ML isn't going away - the genie is out of the bottle.
-
Saturday 28th January 2023 22:30 GMT Jonathan Richards 1
Re: Seems to be a backward decision
Did you read the article? Nature at least has not banned the crediting of AI^W^W LLM contributions, they have explicitly said that such contributions must be acknowledged in the text. That doesn't mean putting "ChatGPT, 1.1.117" down as one of the authors.
-
-
-
Monday 30th January 2023 09:19 GMT Tom 7
A bit like the output from an MBA
or privately educated person going into industry. Vast reams of documents to show they're busy and important but when you actually have time to read them....
I've often wondered if the purpose of learning to write essays and papers is to hide lack of knowledge.
-
-
Monday 30th January 2023 10:26 GMT Anonymous Coward
Doesn't ChatGPT log what it output?
Surely ChatGPT, provided as it is as a service rather than a locally installable program, has a log of everything it has ever generated. If so, they could simply offer a service to search that log for similarity, and you would then almost definitively know whether something was written by it. It wouldn't help with other LLMs, but it would be a start.
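A minimal sketch of what such a similarity search might look like, in Python. Everything here is hypothetical (OpenAI has published no such service, and the `generation_log` entries are made up): it shingles texts into word n-grams and scores overlap with Jaccard similarity, a standard near-duplicate detection trick.

```python
def ngrams(text, n=3):
    """Word n-grams (shingles) used for near-duplicate detection."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Overlap of two shingle sets: 1.0 means identical, 0.0 disjoint."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical log of previously generated outputs.
generation_log = [
    "the soviets launched fifty two bears into orbit between 1957 and 1969",
    "photosynthesis converts light energy into chemical energy in plants",
]

def most_similar(submission, log, n=3):
    """Best Jaccard score between a submission and any logged output."""
    sub = ngrams(submission, n)
    return max((jaccard(sub, ngrams(entry, n)) for entry in log), default=0.0)
```

A verbatim resubmission scores 1.0; the catch is that even light paraphrasing drops the score quickly, so exact-log matching only catches the laziest copy-pasters.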
-
Wednesday 1st February 2023 16:48 GMT DrSunshine0104
What about editing?
I know the answer is certainly 'in the future'.
But can ChatGPT actually make an edit to a section of a text without having to rewrite the entire document? Can ChatGPT rewrite a paragraph in isolation, making it flow without changing the voice? I feel this would be an obvious sign of generated text for now, if it is even a hindrance.
I do ponder how many students actually use this. Is it a lot of noise over a handful of bad actors, or is it actually a problem?
To those who have used it in school... good luck. As an apprentice right out of school, you'd better hope you can fake knowing this stuff. ChatGPT isn't going to sit your interview for you, and you'll likely not have the experience to bullshit your way around the interview questions from the expert across the table.