Professor freezes student grades after ChatGPT claimed AI wrote their papers

An instructor has accused students taking his agriculture science class at Texas A&M University-Commerce of cheating by using AI software to write their essays. As detailed in a now-viral Reddit thread this week, Jared Mumm, a coordinator at the American university's department of agricultural sciences and natural …

  1. martinusher Silver badge

    Self Referential

    The fundamental problem this instructor has is that he's using software to tell him which students are using software to write their papers. The operation of the detection software is as opaque as the software that's used to write the papers so the whole thing becomes a festering mess of confusion.

    I'm so glad I'm not a student these days. Learning isn't learning so much as a regimented exercise in swallowing and regurgitating facts as fast as possible, where adherence to schedules and work formatting carries as much weight in grading as actual content. It almost demands cheating of the student just as a way of getting through the course (... especially as courses and coursework aren't coordinated; each subject and instructor is its own fiefdom, and if assignments and deadlines clash or overlap, that's not their problem).

    1. Anonymous Coward
      Anonymous Coward

      Re: Self Referential

      The university probably "prohibits using AI for examinations", and as such must fire the lecturer.

      1. Doctor Syntax Silver badge

        Re: Self Referential

        He should certainly mark himself X.

        1. MOH

          Re: Self Referential

          Professor Xavier??!!

    2. msknight
      Joke

      Re: Self Referential

      Here's a thought... why doesn't he just ask AI which students asked it to write their paper for them!

    3. LionelB Silver badge

      Re: Self Referential

      > Learning isn't learning so much as a regimented exercise in swallowing and regurgitating facts as fast as possible, where adherence to schedules and work formatting carries as much weight in grading as actual content.

      YMMV. My son has just submitted his final-year assignments (English and Philosophy) at a UK university, and that most certainly wasn't the case.

      Agree with the first part of your comment, though.

      1. Michael Wojcik Silver badge

        Re: Self Referential

        It wasn't the case in any of the courses I took while earning my three degrees, either. Or in any of the courses I taught.

        As usual, though, while there are plenty of people here who will moan about the state of academia, there are few who cite any reputable sources or demonstrate any actual knowledge about it.

    4. Binraider Silver badge

      Re: Self Referential

      Which is why the teaching standards of one university over another still really stand out in the student / postgrad rankings. Today's somewhat standardised course content is often mangled together by copying questions from textbooks. That the accompanying answer books to those textbooks sometimes contain errors in their solutions is not an unusual thing to find.

      Standout lecturers that write their own course material are a cut above the rest, and, by default, theirs are the universities I would recommend over others. It is no coincidence that my own highest marks were in areas where the lecturers spoke and wrote in their own words rather than rambling through someone else's obscure textbook.

      You cannot tell me that the lazier end of the lecturer spectrum have not at least experimented with asking ChatGPT to generate course materials themselves at this point.

      The reliability of using an AI to tell if something is AI-generated or not is basically zero, and any student in this position has reasonable recourse to ask the lecturer to prove that it is AI-generated - because they cannot.

      In-person interviews and tests of ability are far better gauges of capability than coursework: if you want to test whether someone knows a subject, simply ask them about it directly. Show your workings on a whiteboard, etc. (and where a computer is required, again, show your workings). This isn't perfect for grading, but it will immediately filter out those who have, or have not, actually studied the material - with or without the aid of a chatbot.

      One other point to note. A physics degree in 1880 could be completed in one to two years, because the sum total of human knowledge on the subject could be covered in that sort of time frame. Today, even a four-year sandwich degree is not nearly enough time to scratch the surface of what is out there. Education is increasingly rushed, especially in the 14-21 age group, for this reason. Grads leaving without "business" skills are a symptom of condensed and ever-growing total knowledge required.

      Of course, I can sit on the sidelines as a recruiter and complain, but the solutions are much harder to find. ChatGPT's maker asking governments to regulate it is both laudable and laughable at the same time.

  2. Blackjack Silver badge

    Why not go back to oral exams for the Finals? Sure, they take longer, but they make it way harder to cheat.

    1. Simian Surprise

      Yes, surely the best way to tell if a student wrote the paper is to make them answer questions about the topic and argument.

      And if they use an LLM anyway, and produce a convincing paper, which they then study thoroughly so that they might be able to pass the exam, I'd still say they've learned what they were supposed to.

      1. Anonymous Coward
        Anonymous Coward

        "... I'd still say they've learned what they were supposed"

        I'd say that is the truth, and considering that college is increasingly just about a piece of paper, the entire educational model of college has fallen down.

        To be fair, and like capitalism and other things, I don't think the "founders of college" realized a future where the abundance of knowledge one could achieve on their own becomes almost limitless... but here we are.

    2. Anonymous Coward
      Anonymous Coward

      re. Why not go back to oral exams for the Finals?

      Cost is probably the main factor, as usual. Learning / teaching is increasingly, if not by now completely 'just business', and therefore usual business rules and tools (aka processes) are applied :/

      1. Intractable Potsherd

        Re: re. Why not go back to oral exams for the Finals?

        So make it a random sample of papers submitted, with perhaps some facility for pulling in obviously questionable papers, as we do with moderation. It isn't an insurmountable problem.

        1. Duncan10101

          Re: re. Why not go back to oral exams for the Finals?

          Here's a suggestion: play language models (or students??) at their own game.

          Get a statistical analysis of the student's previous coursework. Grammar, spelling, sentence structure, word usage, you can probably get a metric-of-some-sort for style. Then compare that with the incoming-paper-to-be-marked.

          To get a ChatGPT assignment through that would mean they'd have to have cheated on everything they ever submitted. And I kinda think you could spot that.
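          The idea can be sketched in a few lines of Python. This is a toy illustration only: the feature set (mean sentence length plus a handful of function-word frequencies) and the use of cosine similarity are my own illustrative assumptions, not a validated stylometric method.

          ```python
          import math
          import re
          from collections import Counter

          # A handful of common function words; real stylometry uses hundreds.
          FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "for"]

          def style_vector(text):
              """Crude style fingerprint: mean sentence length plus
              relative frequencies of some common function words."""
              words = re.findall(r"[a-z']+", text.lower())
              sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
              counts = Counter(words)
              n = max(len(words), 1)
              vec = [len(words) / max(len(sentences), 1)]  # mean sentence length
              vec += [counts[w] / n for w in FUNCTION_WORDS]
              return vec

          def cosine_similarity(a, b):
              dot = sum(x * y for x, y in zip(a, b))
              na = math.sqrt(sum(x * x for x in a))
              nb = math.sqrt(sum(x * x for x in b))
              return dot / (na * nb) if na and nb else 0.0

          # Compare a new submission against the student's earlier coursework.
          previous = "I reckon the results show the method works. It was hard to test though."
          submission = "It is difficult to determine with certainty whether the method is effective."
          score = cosine_similarity(style_vector(previous), style_vector(submission))
          print(f"style similarity: {score:.3f}")  # a low score would flag a closer look
          ```

          In practice you'd want far richer features and a lot of prior writing per student, and, as the reply below this comment originally noted, any threshold used to accuse someone needs serious validation first.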

          1. doublelayer Silver badge

            Re: re. Why not go back to oral exams for the Finals?

            You have to be careful before using any kind of automated analysis to see if it's reliable enough to use to punish people. People hate it if some algorithm they can't control or audit is used to decide that they're guilty, and they have a reason to feel that way. Existing tools that check for plagiarism aren't affected by this, because even if it puts a high plagiarism score on a document, it can be made to tell the professor and the student where the words were copied from. They can check that to see if the program has screwed up by referring to the student's own work, by not recognizing the use of quotes, or by just being wrong.

            If a statistical model says "92.537% confidence that this writing does not match the historical corpus from this student", what can you do to test whether that's true? How high is that confidence when you compare a student's carefully-written essays against the one that they ended up doing all at once while lacking sleep? How accurate is it when comparing an essay written by multiple people against the essays made by individuals? How confident is it when comparing a student's essay from a literary analysis class to their readme for a computer science project, and what happens when it goes from essays with no references to a heavily-cited work performing a meta-analysis? Unless you have all those answers and they're all in the 0-1% range, it's likely that this will punish completely innocent students.
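            The scale of the problem is easy to work out. The cohort size, submission count, and false-positive rate below are purely illustrative numbers of my own, but the arithmetic is the point:

            ```python
            # Illustrative numbers: even a seemingly accurate detector accuses
            # many innocent students once you multiply by the volume of work.
            students = 500               # hypothetical cohort size
            essays_each = 10             # submissions per student over a course
            false_positive_rate = 0.05   # detector wrongly flags 5% of human essays

            honest_essays = students * essays_each
            expected_false_flags = honest_essays * false_positive_rate
            print(f"{expected_false_flags:.0f} innocent essays flagged")

            # Chance an honest student gets through every submission unflagged:
            p_clean = (1 - false_positive_rate) ** essays_each
            print(f"{p_clean:.1%} of honest students are never falsely accused")
            ```

            With those numbers, 250 honest essays get flagged and roughly four in ten honest students are accused at least once, which is exactly why the confidence figures have to be near zero before such a tool can be used punitively.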

      2. Blackjack Silver badge

        Re: re. Why not go back to oral exams for the Finals?

        Considering that colleges are getting fewer and fewer people enrolling and graduating, and how they keep charging more and more each year, oral exams and practical exams would at least give them a good excuse.

        We are heading towards a future where most people won't have a university degree anyway, unless they can get one cheaper in another country, and will have cheaper equivalent qualifications instead - if they have anything at all.

        Plus hiring non-graduates means hiring people who don't have student debt, so you can pay them less!

    3. LionelB Silver badge

      Much assessment at degree level in the UK is currently on the basis of submitted essays and assignments (although that does, of course, vary by discipline), making it way too easy to cheat. Controlled (i.e., no mobile phones!) in-person examinations and/or submitted essays/assignments + oral examination on the submissions has to be the way to go.

      This is already happening. As an anecdote, an acquaintance - a senior lecturer in law at a UK university - was recently convinced that one of her students' final-year submissions was pure ChatGPT; the giveaway was that half of the cited references did not actually exist! (And it was, apparently, way too grammatically correct and articulate compared to previous work by the student in question - let's give ChatGPT its due there.) An option open to her was to demand that the student sit a "mini viva" on the assignment - it hasn't taken place yet, but I'm looking forward to hearing how that went...

      1. Red Ted
        FAIL

        Citations/References and ChatGPT

        "...half of the cited references did not actually exist!"

        I suspect that in the other half the titles didn't match the reference numbers, so those were also made up.

        This corroborates my experience that ChatGPT knows what citations should look like, but not what they are. So it will put a statement in its text with a reference mark against it, and then generate a piece of text in the reference list that is formatted correctly but in no way ties up with a real article that made that statement.

        It's treating them as a style of academic paper writing and making them up, as it has with the rest of the paper.

        1. David Nash Silver badge

          Re: Citations/References and ChatGPT

          Exactly like the guy who was told by ChatGPT that he was dead, and provided with a realistic-looking but completely made-up URL from The Guardian for his obituary.

          1. Strahd Ivarius Silver badge
            Joke

            Re: Citations/References and ChatGPT

            a URL like this one?

        2. LionelB Silver badge

          Re: Citations/References and ChatGPT

          Apparently some of the references were genuine and even appropriate, and the body of the dissertation was not complete nonsense, yet also not cribbed verbatim from standard texts (universities already have tools to detect that). This chimes with my own experience of messing about with ChatGPT - it can indeed generate blatantly bogus nonsense, but it can also be more subtly bogus.

    4. Michael Wojcik Silver badge

      Oral examinations wouldn't have been terribly useful in the writing classes I taught. They're not great for classes which involve significant programming either. Or ones where students need to work through math problems. Or most where they need to do significant research.

      They also don't scale very well. For small class sizes they may be competitive with the time taken to grade written submissions properly, but with large sections it's just infeasible.

      1. LionelB Silver badge

        Broadly agreed. But in areas where oral examination is unsuitable, students could be orally examined on their written submissions, at the very least in situations where cheating is suspected. Of course this would increase the already considerable marking/assessment workload on faculty.

  3. Anonymous Coward
    Anonymous Coward

    "I'm not grading AI s***."

    The odd thing is that "AI s***" is probably a lot easier to read and mark than the corresponding "student s***" that they would have otherwise got.

    As a past teacher/lecturer I'd be inclined to shut up and mark it, and be glad that I am receiving something more intelligible.

  4. Flocke Kroes Silver badge

    Artificial Irony detector required

    Instructor accuses students of having a computer do their homework ... then proclaims the he used a computer do his grading work.

    It would be interesting to run his homework from years ago through ChatGPT to see if he used it and a time machine to get his qualifications.

    1. Filippo Silver badge

      Re: Artificial Irony detector required

      My GF is a teacher, and she's regularly outraged by how teachers in staff meetings don't listen to what's being discussed, chat between themselves while someone else is speaking, fiddle with their phone, speak out of turn, duck out of the meeting with some excuse, and generally do all of the things they spend a lot of their time telling students not to do.

      1. SundogUK Silver badge

        Re: Artificial Irony detector required

        "staff meetings" - bureaucratic waste of every ones time.

        "student lectures" - lead to qualifications that will have a major impact on your future earnings.

        These things are not the same.

        1. GreenJimll
          Headmaster

          Re: Artificial Irony detector required

          Back in my day as an undergraduate, the lectures were optional to attend but the tutorials weren't. Lectures were there to give people who didn't read around the subject the groundings that could get them a 2.2 or maybe a 2.1 if they were lucky and had a good memory. Reading around and doing all the tutorial exercises were what you needed for a decent 2.1 or 1st.

          At least one of my lecturers just read their lecture notes out in lectures, which it turned out were actually an old numerical analysis textbook written by someone else. They assumed no undergrads would spot this... only this undergrad had the textbook, because he'd inherited it from his brother who'd done a similar course at another uni some years before. Handy, as I was able to "read ahead" and be ready for the "trick questions". It also meant that the staff assumed I really understood numerical analysis, because I seemed to be able to predict what would happen next when we were discussing things in tutorials.

          1. AndrueC Silver badge
            Facepalm

            Re: Artificial Irony detector required

            One of my lecturers just stood at the front writing out what he was saying on an OHP and expecting us to write it down as well. He could have saved a lot of RSI by just handing out a copy of his notes at the start of the lecture. These days (though he's probably dead now) he could just put his notes on a website and go through them with us.

            Unsurprisingly that was part of my HND that I failed. I just don't learn well by rote and have an aversion to memorising stuff that will be readily available to me from other sources when I need it(*).

            (*)I agree that's a bit of an assumption but now that I approach retirement I cannot think of any time when I didn't have an 'external' source of information available. Learn the basics, learn how to diagnose and learn how to find information is my view :)

            1. martinusher Silver badge

              Re: Artificial Irony detector required

              The act of writing something down -- not just blind copying, though -- is one of the tools that help students remember material. Handing out preprinted notes or giving them a website to read from takes that away (and there's a special place in Hell reserved for those students who attack books with highlighters).

            2. DevOpsTimothyC

              Re: Artificial Irony detector required

              One of mine would stand at the front and read through the notes from the OHP. He had obviously written the notes up I don't know how many years before, and part way through he'd pass around copies of the very notes he was busy reading through.

            3. jgard

              Re: Artificial Irony detector required

              That reminds me of a course in my undergraduate days. We would all have to sit there each week, listening to him paraphrase the contents of a book in a very boring fashion. The best bit was that it was his book, an essential on the reading list, and I'm sure he got the library to hide all copies before each course started each semester. I bet he sold at least 200 copies a year with this tactic. He'd come up with new editions every 3-4 years, and always covered the new material, so students had to buy the new edition! With gifts like that he should have gone into enterprise IT sales.

    2. Flocke Kroes Silver badge

      Re: Artificial Irony detector required

      I would like to understand why I am getting downvotes. To clarify: I do not want students passing off ChatGPT output as their own work. Apparently some of them did exactly that for this assignment. What I object to is using ChatGPT alone to identify the cheaters because the results are going to be no better than random.

      One technique for optimising a generative network is to train another network to spot machine-generated output and use the two networks to optimise each other. I would expect ChatGPT to be able to tell a story - that is what it was trained to do. I would expect ChatGPT to do badly at tasks that require understanding what is happening; for example, ChatGPT 'plays chess' by picking moves that were popular in many games with similar previous moves, even when those moves are invalid in the context of the current game. Right at the top of the things I would expect ChatGPT to fail at is spotting ChatGPT output. It was specifically trained to create output that looks human-generated to machine learning models.

      1. LionelB Silver badge

        Re: Artificial Irony detector required

        > One technique for optimising a generative network is to train another generative network to spot machine generated output and use the two networks to optimise each other.

        Anecdotally, that seems to work up to a point... at some stage, though, both networks start to lose touch with "reality" and, metaphorically speaking, crawl up each others' arses.

        > Right at the top of the things I would expect ChaptGPT to fail at is spotting ChatGPT output. It was specifically trained to create output that looks human generated to machine learning models.

        That, right there.

        (And no, I'm not sure why you're getting downvoted either.)

      2. yetanotheraoc Silver badge

        Re: Artificial Irony detector required

        I can't speak for the ones who downvoted, but as I read it "I'm not grading AI s***." doesn't mean he's using AI to grade the papers, he's using AI to filter and decide which ones to grade.

        "then proclaims the (sic) he used a computer do his grading work" seems to be unfounded.

        Arguments about whether LLMs can reliably detect LLM output are more reasonable. Since the "detection" is also an output, this seems to be a badness squared problem. Before detection via LLM becomes close to reliable, the primary output will be so good that LLM will pass the Turing test. Game over.

      3. doublelayer Silver badge

        Re: Artificial Irony detector required

        I don't know. My only guess is that he didn't use GPT to do his grading work, he used it to judge students and didn't even bother grading them. I'm not sure that's a big enough technical inaccuracy to justify downvotes, but it's all I can think of.

        I wonder if any teacher has yet used GPT to actually grade an assignment. It will do it, of course, but I'm hoping that's at least below the threshold where even the less informed realize that's a terrible idea.

    3. matthewdjb

      Re: Artificial Irony detector required

      I did run a paper he wrote through it

      I asked ChatGPT "Please tell me if the following was generated by AI:", followed by the text of this paper https://www.sciencedirect.com/science/article/abs/pii/S016815912030054X where Mumm is lead author. The response was:

      It is difficult to determine with absolute certainty whether this text was generated by AI or written by a human. However, the technical nature and formatting of the text, as well as the lack of personal opinions or subjective language, suggest that it may have been generated by AI.

  5. Filippo Silver badge

    >'I think we need to learn to live with the fact that we may never be able to reliably say if a text is written by a human or an AI'

    That's the big point IMHO. I've no idea whether it's a good thing or a bad thing or a meh thing, but it's a thing. And it's not going away. It just isn't. No amount of regulation will stop it: the only thing that prevents most people from running an LLM at home is compute resources, and those just keep getting cheaper.

    We need to start adapting our society to the fact that you can't know if a bit of text was machine-generated. And, soon enough, the same will go for pictures and video. Journalism, teaching, politics, law-enforcement, more, all of that will just need to figure out how to deal with this. Pretending that you can keep doing the same job in the same way is not going to work.

    1. werdsmith Silver badge

      Kind of reminds me a bit of calculators in exams. They have to have limited functionality or a limited mode.

      It doesn’t make any sense at all to hand calculate logarithms or even look them up in a Frank Castle booklet when they are a button push away. But once there was a time when people thought electronic aids unacceptable.

      1. Peter2 Silver badge

        I grew up with that at school not even that long ago, and they did have a point. Being able to do at least basic mental math gives you a reasonable approximation of what the number ought to be, and if it's wildly out from your expectations then you know immediately to check it. People who haven't learned to do any math in their head simply can't do that and are oblivious to what I consider obvious issues.

        The purpose of writing is to learn to communicate concepts and ideas. If existing tests are unable to assess that, then the good old-fashioned method of locking somebody in a room for a couple of hours while supervised would appear to be perfectly effective at ensuring that no cheating takes place.

      2. Falmari Silver badge

        When I was at school, the use of pocket calculators was banned from both exams and class. When I left sixth form I had failed O level maths three times, as I never managed to answer all the questions - I always ran out of time.

        Roll forward 14 years and I am doing an OND in computer science, and one of the degree offers I had required O level maths or equivalent. So I took an evening class GCSE maths course.

        First class, the tutor comes over, spots the book of logs on my desk, commenting that he has not seen one of them in years. He tells me I don't need it, I can do it on my calculator, and asks for my calculator to show me how. Seeing no calculator, he asks where it is and do I have one. I tell him I have one that I use on my OND.

        He asks why I have not brought it to class. Isn't that cheating, I ask. He explains that I need a calculator: the exam design expects calculators to be used, and if I don't use one I am putting myself at a disadvantage.

        So use a calculator I did, and I finally passed maths, at the fourth attempt. It may have been the fourth attempt, but it was the first I ever completed - with an hour to spare, so I was able to leave the exam early. Acceptable or unacceptable, the calculator certainly helped me pass.

    2. Mike 137 Silver badge

      @Filippo

      "We need to start adapting our society to the fact that you can't know if a bit of text was machine-generated"

      Unless the intent of the text is essentially banal, you can usually make a good judgement if you pay enough attention and are informed about the subject. Machine-generated text cannot, by definition, express original ideas, because its source of training is an average of extant ideas. The ideal of education being to teach students to think independently (i.e. exercise originality), the distinction should be possible to make. Unfortunately, modern education is not about teaching students to think independently, but about coaching them in passing exams on existing thinking (right back to the Middle Ages, in fact), so the distinction will in fact be much harder to make, because banality is expected.

      1. ArrZarr Silver badge

        Re: @Filippo

        This is true for the current level of artificial intelligence, but not necessarily true for the future.

        It's also worth noting that it's unlikely any idea expressed in an essay is truly original, even at undergraduate level. Good luck having any sort of unprecedented opinion on Romeo and Juliet, for example.

        If the assignment's purpose is to check the student's understanding of a well-documented thing, then original thought is genuinely not required either. It's a rare student that will have an insight into the intricacies of the water cycle that experienced geographers haven't already had and published a paper on.

        For the record, I agree on the ideal of education that you state but education can't just be on independent thinking, there must also be knowledge transfer. All the independent thinking in the world won't be much help if you're starting from scratch on 2+2=4.

        1. Michael Wojcik Silver badge

          Re: @Filippo

          Benvolio masterminded the whole thing.

          Nah, someone's probably gone down that route already.

      2. Filippo Silver badge

        Re: @Filippo

        First of all, assuming that LLMs can't come up with original ideas is somewhat problematic. It depends on what you mean by "original idea", which is a nebulous concept at best. Depending on what exactly is meant, it could go anywhere from something LLMs can reliably produce, to something only a few geniuses can produce a handful of times in their entire career. I just had ChatGPT come up with a description for a character in a novel, and it came up with a freelance journalist who had to skip town after uncovering a local scandal, loves exotic coffee blends, and collects trinkets from her investigations. It's formulaic, but correct and sensible. It is, in fact, pretty much what you get if you pick up a published novel at random. Are we really setting the bar for passing a high-school test higher than that? And this is specifically a creative task.

        Secondly, it's true that school should promote independent thought, and it's probably true that it currently doesn't do that nearly enough, but it's not true that independent thought is the one and only metric of any importance, with everything else being irrelevant to the point that it's not even worth testing. A good professional in any field should (A) know their field, (B) create original ideas about it, and (C) be able to express them in a way that other people understand. Point (B) does not happen without (A), and is largely useless without (C).

        1. that one in the corner Silver badge

          Re: @Filippo

          > I just had ChatGPT come up with a description for a character in a novel, and it came up with...

          Why bother using ChatGPT for that? There is no shortage of random character generators out there, including ones for specific genres: just shake the dice or shuffle the decks, pick the cards out and ta-da! You'll have a far lower carbon footprint than running ChatGPT.

          You can even make up your own card decks or join a writers group and swap cards with each other.

          Of course, one difference between shuffling cards or rolling dice compared to asking ChatGPT is that the latter will be spitting out items that it has already seen are often used together, that are statistically linked. There is a well-worn name for that kind of correlation, isn't there?

          > It's formulaic

          Yes, correct, that is how we describe such correlations. Also, when it gets too closely correlated, we then call it a cliche (sorry, dunno where the e acute is on this Android touch keyboard thingy).

          > And this is specifically a creative task

          I *really* hope you aren't saying that you thought the ChatGPT results were creative!

          1. doublelayer Silver badge

            Re: @Filippo

            I think you have misunderstood their point. Your first question indicates this: "Why bother using ChatGPT for that?" They were using GPT for that to make a point, not because they actually needed the output. They weren't saying the output showed creativity, hence why they called it formulaic. They said that the concept of coming up with characterization is a creative task, and if humans aren't consistently doing better than a program, why do we say that those humans are creative and the program isn't. In short, they were making a point about what we call creativity and the problems with making an objective decision on what is or isn't creative, not saying that they were using GPT to help them write a book.

          2. Filippo Silver badge

            Re: @Filippo

            My post was in the context of answering Mike 137's post above. You seem to be reading it out of context.

            doublelayer's post, which should appear near this one, clarifies this.

            1. that one in the corner Silver badge

              Re: @Filippo

              Ah, ok; thanks to both for putting me straight.

      3. Michael Wojcik Silver badge

        Re: @Filippo

        > The machine generated text can not, by definition, express original ideas because its source of training is an average of extant ideas.

        Great. Now all we have to do is solve the small epistemological problem of inventing a reliable decision procedure for distinguishing "original ideas" from "combinations of existing ideas". Oh, and defining those two things in the first place.

        But who doesn't love appeals to indiscernible essences?

    3. Michael Wojcik Silver badge

      I've no idea whether it's a good thing or a bad thing or a meh thing, but it's a thing.

      In any case, it's not a new thing. Paper mills have been around for many years. Student organizations keeping files of essays for their members to plagiarize from is a longstanding practice. We've never been "able to reliably say" if a text was written by the human assigned to write it, unless we actually watched them do it (and even then there are probably students who have gone to the trouble of memorizing some source and plagiarizing it from memory).

      The plagiarism problem has been researched and discussed for centuries in the various academic disciplines, particularly those involved with writing (such as rhetoric, belles lettres, and composition, in the European-US tradition). The Internet made it cheaper and easier, and LLMs make it cheaper and easier yet; but these are quantitative differences, not qualitative ones.

  6. Anonymous Coward
    Anonymous Coward

    As these large language models improve over time to mimic humans, the best possible detector would achieve only an accuracy of nearly 50 percent.

    Not exactly. If 90% of answers are by AIs, it would be easy to beat 50% without using AI detection.

    1. Michael Wojcik Silver badge

      Yes. It would be better to say "no decision procedure will do significantly better than a random sampling from the distribution would".

  7. localzuk Silver badge

    More misunderstanding of what an LLM is

    So, now we've got university professors failing to understand what an LLM is, and how it works.

    ChatGPT simply does not have the capability to know if something was written by an AI - it has no context to know this. It doesn't remember previous answers or answers by others. LLMs are not designed to do analysis like this.

    What these models are good at is giving convincing-looking answers. That doesn't mean the answers can be assumed to be right.

    1. ArrZarr Silver badge

      Re: More misunderstanding of what an LLM is

      I'm sure that the lecturer with a PhD in classical Greek literature is incredibly tech savvy.

      University professors tend to be smart, but we all have our areas of expertise.

      You might consider the distinction between what LLMs are and aren't to be obvious, but the aforementioned lecturer would consider the distinction between Sappho's East Aeolic dialect and Homer's Ionic dialect obvious.

      We live in a big world. There's a lot of knowledge out there and none of us can know all of it.

      1. localzuk Silver badge

        Re: More misunderstanding of what an LLM is

        Sure. But, if you're going to use a new technology, you should probably know what it does first. Especially if you're going to go using it for something as critical as determining whether you accuse someone of cheating...

        1. ArrZarr Silver badge

          Re: More misunderstanding of what an LLM is

          We know that we need to be skeptical because we're jaded old misanthropes who don't trust new tech until we've got our hands on it and thrown it down a flight of stairs or ten to see what falls out.

          The strawman professor I've created isn't. I agree with you, but my issue was with you being surprised that a professor in an unrelated field misunderstood the concept.

          1. Doctor Syntax Silver badge

            Re: More misunderstanding of what an LLM is

            "The strawman professor I've created"

            Yes, but the point at issue here is a real instructor who tried to use an LLM inappropriately by not understanding what it did.

            1. Anonymous Coward
              Anonymous Coward

              Re: More misunderstanding of what an LLM is

              Even worse, they made an assertion without a citation to provide any kind of legitimate basis for the implied belief. That is a 1st year undergrad who doesn’t get the point of Harvard referencing kind of mistake!

      2. that one in the corner Silver badge

        Re: More misunderstanding of what an LLM is

        > University professers tend to be smart, but we all have our areas of expertise.

        And a good professor - a good academic at any level - will go and ask a colleague in the appropriate field when they come across something outside of their area of expertise.

        Not just apply magical thinking about what "an AI" can do. And certainly not when doing so impacts on other people's lives.

      3. Michael Wojcik Silver badge

        Re: More misunderstanding of what an LLM is

        I'm sure that the lecturer with a PhD in classical Greek literature is incredibly tech savvy.

        I don't have your comprehensive knowledge of all classical Greek literature professors, but I'd expect there are some who are. I've known literature professors working in areas such as TEI who were quite conversant on a number of topics in IT and computer science, for example.

    2. Orv Silver badge

      Re: More misunderstanding of what an LLM is

      I think a lot of people don't understand how AIs work, and assume they're like Star Trek computers, where you can ask "did you write this?" and it has to answer honestly.

    3. Filippo Silver badge

      Re: More misunderstanding of what an LLM is

      In fairness, we see specialists in the field of AI who apparently don't understand what an LLM is and how it works. E.g. the guy who claimed that ChatGPT is sentient.

      I suspect it will be a while before people adjust to these new tools.

      1. Michael Wojcik Silver badge

        Re: More misunderstanding of what an LLM is

        If you're thinking of Blake Lemoine, he was talking about Google's LaMDA, not ChatGPT. And while Lemoine has a Master's in CS and worked on LaMDA, there are specialists and then there are specialists, if you know what I mean. Lemoine is no Geoffrey Hinton.

        Lemoine is also a priest, apparently, and has said that his conclusion about LaMDA came from his "spiritual side", not his CS expertise. Make of that what you will.

        But maybe you're referring to someone else? Obviously there have been a lot of claims made about LLMs in recent months and I certainly can't keep track of them all.

        1. Filippo Silver badge

          Re: More misunderstanding of what an LLM is

          I'm probably thinking about him, and thanks for the correction. The point is: people who actually work on the stuff can apparently misunderstand it, so if someone not involved in the field misunderstands it, I can't blame him for that, not even if he's a university professor.

          I do blame him for basing important decisions on tools he doesn't understand, though.

    4. that one in the corner Silver badge

      Re: More misunderstanding of what an LLM is

      > giving convincing looking answers.

      That is, your basic answeroid.

  8. Anonymous Coward
    Anonymous Coward

    Education itself is partly to blame here.

    If your education system is motivated by making as much money as possible for doing as little as possible, then this is where you end up.

    It's entirely possible to eliminate any AI cheating today. This morning. Now, in fact.

    Stop using assignments and start using exams.

    Job done.

    But we need to remember that we got into using assignments so education could become an "industry" and make money.

    1. SundogUK Silver badge

      Re: Education itself is partly to blame here.

      'Education' is largely credentialism these days, so it will be no great loss if it collapses under the weight of LLMs. Switching to actual exams for those subjects where testing really is important is probably the only way to go.

    2. Doctor Syntax Silver badge

      Re: Education itself is partly to blame here.

      I'd like to think "we" got into assignments for other reasons. One is that some people usually underperform in exams. Some, like myself, might be slow and/or illegible writers (as soon as I got my first student grant I went to a shop down the road and bought a portable typewriter for £10). Others might not respond well to the stress - and, indeed, some will get more stressed than others. And even those who do generally perform well in exams can have the misfortune of not being well - or hitting the wrong phase of the menstrual cycle - on exam day.

      On the whole assignments are better.

      Back in the early days of the OU the course S2-3 was an environment course worth a sixth of a unit. It came at the end of the OU year and students would have got their necessary credits from the course they took alongside it. The assignment was voluntary. It was to write up an investigation of their own choosing. I had the good fortune to be a tutor marking those assignments. Most of them were a write-up of something the student had already put years of observation, care and thought into. Marking became a matter of wondering what to do when I'd just given a 10 and the next one was even better. I don't think many papers received less.

      1. ArrZarr Silver badge
        Unhappy

        Re: Education itself is partly to blame here.

        Just to be the voice of dissent, I've always been awful at coursework but really good at exams. A perfect world would have one coursework option and one exam option of identical difficulty for the same course.

        Not going to go into why coursework was so hard for me, but despite getting an average of 99% over the four science exams I took for GCSE, I didn't get an A* because coursework.

        1. Cybersaber

          Re: Education itself is partly to blame here.

          ...one of the voices of dissent.

          FTFY (in the sense of 'you're not the only one' not that you were somehow wrong and I corrected you.)

          I got an F in every single English class in school, and did credit-by-exam to get an A each year, and thus graduated with honors. I knew what I was about, and even got accused of cheating because my exams were so good (they had someone in there watching me, and they vouched that I didn't cheat).

          Seems like the answer is better education of educators themselves. Perhaps some metrology should be introduced into the requirements to become a teacher. Or maybe colleges should evolve a new position that divorces the teaching from the measuring, and have specialists for both. I.e. one that specializes in the cramming of ideas into heads, and one that specializes in developing effective and flexible ways to make sure the output (i.e. the students) accurately and effectively meets the educational standards of the institution. That's the way human progress has gone, so why are educators still stuck in the same ancient model of 'one head does it all?'

          1. that one in the corner Silver badge

            Re: Education itself is partly to blame here.

            When you got out into the real world, did you get given tasks to do that were like assignments or ones that were like exams?

            Learning how to do well at which would be better preparation for going into the workplace?

        2. Michael Wojcik Silver badge

          Re: Education itself is partly to blame here.

          Just to be the voice of dissent, I've always been awful at coursework but really good at exams

          I don't see how this is a dissent. The post you replied to said "some people".

          Since the earliest scientific pedagogy studies we've known that humans vary in how they learn and how they respond to various pedagogical approaches.

          1. ArrZarr Silver badge

            Re: Education itself is partly to blame here.

            It all comes down to "Different people are different".

            The core issue is the application of one size fits all methodology to schools and education in general. At 4 or 5 you get pushed into a pipeline that you won't get out of until 16 (or 18, or 21/22) and taught, for the most part, in the same way as everybody else despite different people being different.

            In my experience, exam boards do a decent job of accommodating the majority of people, but those of us at the extremes, like the good doctor and me, really struggle with the half that we don't get on with.

            And anyway, I needed a way to start the post.

    3. Orv Silver badge

      Re: Education itself is partly to blame here.

      You can still have cheating in exams unless you proctor it very closely. Students are extremely resourceful when it comes to avoiding the work you want them to do.

    4. doublelayer Silver badge

      Re: Education itself is partly to blame here.

      "But we need to remember that we got into using assignments so education could become an "industry" and make money."

      At least sometimes, we got into assignments so that students could learn better. There are a lot of things in life that don't fit as well in an exam as they do in a longer assignment. Since we're mostly IT people here, a good example is computer science. Of course I did computer science exams, and they're worth doing, but I didn't write big programs for those. The exams often had us writing code on paper and eventually stepped up to a text editor, but we weren't testing things and we weren't called on to innovate. When they assigned us projects, we were doing both of those. Which better represents the way that knowledge will be used once you have the credential? Similarly, both exams and assignments sometimes included adding or modifying an existing codebase. The exams had small ones so you could actually read and understand them in the three hours this question shared with all the others, whereas the assigned ones had much larger ones, in some cases up to ten thousand lines. When I started working, I often had codebases to learn and integrate my code in, and they were rarely three pages long.

      Exams are useful in some cases, but there are many activities that education should simulate which don't fit in the exam format.

  9. Boolian

    See me

    The article probably only had parts written by AI - if the lack of punctuation, the mispunctuation, and an apparently truncated or missing paragraph are anything to go by.

    6/10

  10. mrcook

    A simple solution

    The solution seems pretty obvious to me: students should regularly save drafts of their work, thus giving them the ability to prove the work is theirs, or at least written by a human.

    The inclusion of these draft copies can be made compulsory on submission (via email), or the lecturers can do random checks. No complex systems required, and only a little extra work for both parties.

    1. Doctor Syntax Silver badge

      Re: A simple solution

      I like the idea but no doubt the production of "drafts" would be automated PDQ.

      1. Michael Wojcik Silver badge

        Re: A simple solution

        Modern composition pedagogy focuses on revision in response to reader feedback; and feedback is generally a combination of instructor and peer-group response. A student certainly could automate that process with an LLM, but doing so successfully requires quite a lot of user interaction to tune the LLM output, so at least the return on using an LLM is reduced. Some students would probably end up expending more effort guiding the LLM than they would have doing their own writing.

        And, while this is controversial, some would argue that by coaxing an LLM through multiple drafts in response to audience feedback, the student is becoming proficient at a tool and skill that satisfies the original aim anyway. This is the Calculator Argument. If the goal of, say, general-education composition courses falls within the "competent citizen" remit that the modern US university system (to name just one system – of course there are others that have different histories and social roles) inherited from the early US religious colleges, then being able to produce competent prose with the assistance of various tools might be a satisfactory response. It's better than students learning how to buy a paper from a mill.

    2. RT Harrison

      Re: A simple solution

      Use a content versioning system. Submit the repository if your work is questioned so that the history of the document can be analysed.
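
      As a hypothetical sketch of what a marker would look at (the repository layout, file name, and commit messages are all illustrative, and this simply drives the `git` CLI from Python):

      ```python
      import os
      import subprocess
      import tempfile

      # Illustrative only: build a toy "submitted" repository with two draft
      # commits, then print the history a marker would analyse.
      def git(repo, *args):
          return subprocess.run(["git", "-C", repo, *args], check=True,
                                capture_output=True, text=True).stdout

      repo = tempfile.mkdtemp()
      subprocess.run(["git", "init", "-q", repo], check=True)
      git(repo, "config", "user.email", "student@example.com")
      git(repo, "config", "user.name", "A Student")

      for i, text in enumerate(["first draft", "second draft"], start=1):
          with open(os.path.join(repo, "essay.txt"), "w") as f:
              f.write(text + "\n")
          git(repo, "add", "essay.txt")
          git(repo, "commit", "-qm", f"draft {i}")

      # One line per revision, with timestamp: do the "drafts" cluster
      # suspiciously, e.g. all in the hour before the deadline?
      print(git(repo, "log", "--reverse", "--pretty=%ad  %s", "--date=iso"))
      ```

      Whether the timestamps would actually prove anything is another matter, of course.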

      1. that one in the corner Silver badge

        Re: A simple solution

        How long to knock up a Python script[1] that can:

        * send a text prompt into an LLM

        * commit the result

        * pause for a random time between 2 and 36 hours

        * preferably, tweak a random seed to be given to the LLM [2]

        * if not yet time to hand in assignment, loop back to the start

        [1] dunno myself, still "getting around to" learning Python, but I hear all the cool kids use it.

        [2] can't name an LLM off the top of my head that gives you access to a seed like that, but bet your bottom dollar the UI for that will appear as soon as your idea gains traction (cf how Stable Diffusion gives you access to all the controls compared to the dumbed-down UI of Craiyon)
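
        For what it's worth, the loop above is only a handful of lines of Python. Everything here is hypothetical: `fake_llm` stands in for whatever model API is being driven (real APIs may or may not expose a seed, per footnote [2]), and the file name and commit message are illustrative:

        ```python
        import random
        import subprocess
        import time

        # Stand-in for a real LLM call; an actual script would hit some model
        # API and (if footnote [2] pans out) pass the seed through to it.
        def fake_llm(prompt: str, seed: int) -> str:
            return f"Draft for '{prompt}' (seed {seed})\n"

        def commit_fake_drafts(prompt: str, deadline: float, repo: str = ".") -> None:
            # Loop until hand-in time, committing a fresh "draft" each round.
            while time.time() < deadline:
                seed = random.randrange(2**32)      # tweak the seed each time
                with open("essay.txt", "w") as f:
                    f.write(fake_llm(prompt, seed))
                subprocess.run(["git", "-C", repo, "add", "essay.txt"],
                               check=True)
                subprocess.run(["git", "-C", repo, "commit", "-m", "revise essay"],
                               check=True)
                # pause a random 2 to 36 hours between "revisions"
                time.sleep(random.uniform(2 * 3600, 36 * 3600))
        ```

        Not much of a hurdle, which is rather the point.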

  11. RT Harrison

    Use a content versioning system.

    Submit the repository if your work is questioned so that the history can be analysed.

    1. Cybersaber

      A pretty novel thought, but after thinking about it, I'm not sure it would be that effective.

      'Hey bot, write me a sentence about X'

      <save>

      'Hey bot, using this as a starting point, write me another paragraph about X'

      <save>

      ...

    2. Simon Harris

      I imagine that would consist of 27 revisions all timestamped throughout the night before the essay was due.

  12. WolfFan Silver badge

    LLMs: plagiarism devices

    So I got roped into teaching American History 1, Settlement to Civil War, at the college, again. I’m in Florida. Florida was, and is, rabidly rebellious. There’s a reason why Governor DeSatan was elected.

    So the Heap Big Paper is on the American Civil War. I spent a lot of time on the ACW. I ensure that it is clear that the Confederacy was doomed from the start; if they failed to knock the Federals out in a year to 18 months, they would lose. No ifs about it. Earlier on I had spoken about Winfield Scott, the man who won the Mexican war even though Mexico had more and better trained and equipped troops than the US. (They did. They also had Antonio Lopez de Santa Anna. Santa Anna thought that he was the Napoleon of the West. He was the McClellan of Mexico.) Winfield Scott created the plan dubbed ‘the Anaconda’ by his detractors at the start of the ACW. The name was adopted by those who made it work, the way that the Big Bang was first called that by the exceedingly atheist Fred Hoyle, to attack the Catholic priest who thought it up, and was then, to Hoyle’s dismay, adopted by its proponents. The Anaconda was simple: strangle the south. Cut off trade. Cut off supplies. Starve them out. The beauty of it was that the Army need merely not lose, and provide garrisons for seized seaports and such. The Navy would do the heavy lifting. The Army would have to seize enough of the major rivers, such as the Mississippi, to deny their use to the south, but did not have to, for example, take the southern capital. Just surround them and squeeze.

    Marse Bob _did_ have to take the Federal capital if he wanted to win; thus his expeditions to Maryland, Pennsylvania, and disaster. If he just sat and waited, he would lose, and he knew it. The great land battles in the East were Marse Bob trying to kill the Anaconda before it killed him. As long as the Federal forces held together, the Anaconda would strangle the south. When Grant took Vicksburg and the next day Longstreet failed to hammer Meade, it was game over. In Lincoln's words, "the Father of the Waters again flows unvexed to the sea"; the entire length of the Mississippi, plus the Ohio, the Tennessee, and the Missouri, were in Federal hands. The Confederacy was split in two. And Lee had to retreat from Gettysburg, having lost far too many men for no gain. Marse Bob would never go north again; he couldn't. Grant would come south to chase him. And the US Navy blockaded Confederate ports, and mounted assaults on them, and cut the south off from the world… except through Mexico. Except that Mexico had been invaded by France and there was a major war going on. And the Mexicans remembered who it was who had wanted all of Mexico, plus Central America down to Costa Rica, for slave plantations. The Anaconda strangled the South. Farragut took Mobile Bay; Sherman took Atlanta and marched to the sea; Grant vowed to fight on this line if it took all summer; Wilson rode from the Mississippi to the Atlantic; Sheridan burned down the Shenandoah. Marse Bob could do nothing.

    Needless to say, True Sons of the South really don’t like hearing that Marse Bob killed a lot of people for nothing. They really don’t.

    So I got a nice paper from a True Son of the South. A paper which I knew for certain he didn't agree with. A paper containing a great many very familiar sequences, in several cases word-for-word familiar. I dug an old copy of my thesis out of my files, and lo! So much was identical. Including the references. It seems that m'man had been quite specific when setting the parameters for his LLM search, and there were only so many possible sources in the LLM training data. If he had been less specific, odds are that the LLM would not have created something so close to my own paper. M'man got a zero, and failed the class. He appealed. I gave the dean a copy of his paper, and then a copy of my thesis. M'man got himself a road scholarship, as in hit the road, Jack, and don't come back no more.

    Go ahead, boyz’grrlz. Use that LLM. If I catch you, you will regret it.

    1. Cybersaber

      Re: LLMs: plagiarism devices

      "people got killed for nothing" "was doomed from the start"

      Ouch. I'm a Texian (nee Texan). I may or may not know as much as you do on the subject, but both the Republic of Texas AND the Republic of the Rio Grande were long shots re: prospects of success of their revolts.

      Texas exists. The other republic does not. Does that make Sam Houston a hero but Antonio Canales an idiot that "got people killed for nothing?" What about the people at the Alamo? Those people knew what they were signing up for. Are they heroes ONLY because Houston eventually won? Does their legacy really mean 'you're only heroic because your side eventually won?'

      I mean, sure, there's a lot of that in practice in history, but that's not the argument you say you bait your students with; you make a really dark argument that's just an advanced form of 'might makes right'. It's an awful position, one that leads to some rather unsavoury conclusions: for instance, that it would be morally right for me to come over and thump you on the head with a history book, as long as I had such overwhelming might as to make any resistance on your part unlikely to succeed. If you resisted, not only would you deserve the thumping, but you should be derided and held responsible for any additional bruising you received as part of an unsuccessful attempt to resist.

      I hope I'm misunderstanding you because a professor of history should be the first to recognize that 'history is written by the victors' but the last to engage in post-hoc moral analysis themselves.

      Look, you can disagree about whether the War Against Northern Aggression was right or wrong, or about whatever motives the politicians behind it had (hint: they were politicians, i.e. mostly scuzzballs in it only for themselves), but I hope to God that you at least come clean later and chide any students that DIDN'T argue against the ugly premises underlying your baited argument. Even though I mostly agree with you, I'd still argue that you got to the 'right' conclusion based on flawed premises.

      1. doublelayer Silver badge

        Re: LLMs: plagiarism devices

        It didn't sound like a moral premise to me. It sounded like a premise of pragmatism. Not "was it right for [insert side here] to do what they did during the war", but "did [insert side here] have a reasonable chance of victory against the tactics used by the other side". You could have a group incapable of victory using their tactics whether or not that group is also morally right. In short, you appear to be arguing about the causes of the war against someone talking about the practices during the war.

        1. Cybersaber

          Re: LLMs: plagiarism devices

          They were not incapable of victory; they just had bad odds. Just like the other revolutions I mentioned. Just like America did. Just like the British peoples against Rome.

          The argument *was* a moralizing one. It didn't just observe the low chance of success and then the outcome. The OP tacked on a moral judgement that they shouldn't have resisted, and insinuated that they were morally wrong to do so (else the phrase "wasted lives" has no meaning, since wasting lives wouldn't be a bad thing; I don't think they were taking the stance that pointless death was no big deal).

          Furthermore, the OP said as much, that it was a moral statement: they and a student disagreed on the morality of the resistance.

          I mean, it's your life, read it how you like. I'm just telling you what the OP's chosen words and construction mean according to the rules of semantics. I left the door open to the possibility that I misunderstood their intent, and offered the olive branch of letting them clarify and distance themselves from what I pointed out were the logical conclusions of their argument.

          1. doublelayer Silver badge

            Re: LLMs: plagiarism devices

            And we're still talking about different things. There is a moral argument about whether fighting is justified when your chance of victory is small enough, and there's a related one about forcing others to fight for you under those conditions. Neither falls under your "might makes right" case, which usually applies to an argument which declares that the victorious side's rationale for fighting the war is the moral one, not necessarily that their conduct during the war was moral. Similarly, you can choose to interpret "wasted lives" in a number of ways. It can be a moral judgement on whether the battles should have been fought. You can also read it as an amoral argument: if you waste some lives in a bad military tactic, then you don't have those lives for other battles which are more important or more likely to lead to victory. Or you can take it as a moral judgement for the opposite side: if you don't waste lives on a resistance that fails, you will have more people to resist the post-war situation, which may have a greater chance of success; that is a tactic that has been used several times in world history.

            The fact that you called it "The War Against Northern Aggression" suggests you may have an opinion on the causes of the war, but that doesn't make that the topic they were asking about.

      2. WolfFan Silver badge

        Re: LLMs: plagiarism devices

        They had no hope if they didn't get a quick victory. By 1864 even Jeff Davis knew it. He wasn't getting the major European powers to come in on the Confederacy's side; France was bogged down in Mexico (remember always, Cinco de Mayo isn't Mexican Independence Day, it's the day that they hammered the hell out of a French army) and while the upper classes in Britain leaned towards the Confederacy, the lower classes hated slavery with a sufficient passion that multiple textile mill workers refused to work with 'slave cotton', even if it meant that they themselves lost their jobs. Politically it was impossible for Britain to side with the Confederacy. And even if France could extract itself from Mexico, France declined to support the Confederacy by itself. Britain provided lots of help to the Confederacy, including arms and even ships, in some cases bending the law to do so. CSS Alabama was built in Britain, then armed elsewhere, and ran wild, taking and sinking Federal civilian shipping until caught off France by USS Kearsarge. Bob Lee went north twice, lost at Antietam, lost at Gettysburg, and lacked the strength to try again. Jeff Davis pinned his hopes on holding on, and causing enough casualties, to affect the election of 1864 and have Lincoln lose, as it was clear to him, and to Lee, that the Confederacy could not win on the battlefield. But then Farragut damned the torpedoes in Mobile Bay, Sheridan left the Shenandoah so devastated that, and I quote, "A crow flying over would have to carry its own provisions", and Sherman made Georgia howl. McClellan, the Dem candidate, had built his whole campaign around getting peace and ending this unwinnable war; with Federal victories everywhere, and with alleged Rebel victories over Grant ending with Marse Bob retreating and Grant pursuing, suddenly the war didn't look unwinnable. Lincoln won… and Grant kept up the pressure. And Marse Bob really wished he had some of the boys he threw away in Pennsylvania and Maryland.

        The real force behind the Federal victory was the Navy. It was the blockade which blocked supplies. It was the amphibious assaults on various ports which sealed the blockade. When CSS Virginia attempted to break the blockade on 8 March 1862, sinking multiple Federal wooden steam frigates before having to go home to replenish her ammunition and to fix her ram, damaged when sinking one Federal ship, it looked as though the blockade would be broken… but in events that storytellers could not have scripted, because no-one would have believed them, that night USS Monitor arrived from New York. And on the morning of 9 March, Monitor managed to not lose. The American republic was, for the second time in the same waters, saved by a force which did not just lie down. (The first being Admiral Comte de Grasse, fending off the Royal Navy and dooming Cornwallis.) For four hours, 49 men in a little ship with just two guns held off Virginia. At the end of the battle, Virginia had 97 dents but no penetrations in her armor, but Monitor had lost one gun and had her captain blinded by shell fragments. But it was Virginia who retreated, never to fight again. And that was the last time that the Confederacy had a chance to raise the blockade. When the news of Monitor's stand reached Britain, one major national daily newspaper opined that "Yesterday the Royal Navy had 146 ships of the first class. Today we have two, for only two are fit to stand in battle against the American ironclads." And now you know an additional reason why Britain declined to side with the Rebs.

        It was the blockade which caused the rampant inflation that destroyed the Confederate economy. It was the domination of the sea which allowed the Federal forces to roam at will. Bob Lee could do nothing about it. Every man killed after Monitor drove Virginia away died unnecessarily. The only way for Bob Lee to force a victory would have been to assault the Federal fortifications around Washington… and he lacked the manpower to do that, thanks to his losses at Antietam and Gettysburg. And he knew it. By the end, Jeff Davis was trying to get black slaves into uniform to thicken the Confederate lines, such was the desperation and lack of manpower. For some reason the slaves were reluctant to fight for slavery. Gee. I wonder why.

        1. ArrZarr Silver badge
          Joke

          Re: LLMs: plagiarism devices

          "Monitor managed to not lose"

          A story for the ages. Both ships spent the fight playing the world's angriest game of skee-ball off the other ship's functionally impenetrable armor.

    2. Ghostman

      Re: LLMs: plagiarism devices

      I'm taking a little effort here to give you some information you seem to not know. Well, hopefully you didn't know this and weren't just spreading the same ole crap around.

      There never was an American Civil War. No mention of it in government papers from either side; even the large collection of books printed by the War Department in 1880 referred to it as "The War of the Rebellion: A Compilation of the Official Records of the Union and Confederate Armies". How's that for a title? This was printed by the Government Printing Office by an act of Congress in 1880. I'm fairly sure that if it HAD been a civil war, it would have been mentioned in the title, or at least somewhere in that series of books.

      The Confederacy did not want to take over Washington. If so, they would have done so very easily after the First Manassas battle, when the Union troops fled the field, ran past the spectators/picnickers and hid out amongst the buildings of D.C. Actually, the war could have been over before that.

      Have you ever been to Arlington? Did you know that it was the home of Robert E. Lee? Have you been to the grave site of John F. Kennedy? Think a few cannons at that location, or even on the higher ground around the Lee homestead? Not very much of D.C. at that time would have been out of reach of naval cannons. A couple of hours of bombardment would have brought the Federal government to its knees. A capitulation by Lincoln would have closed the deal fairly quickly.

      The ONLY reason people refer to it as a civil war is that Lincoln, in his Gettysburg Address, said "Now we are engaged in a great civil war".

      So please refer to it as the War of Secession, War Between The States, War of Southern Independence, The Late Rebellion, The War of 61 to 65 (most accurate), and not use the improper name of civil war.

      If needed, I can send you digital copies of the over 300,000 pages. You may need those if you still believe the war was fought to free slaves.

      1. doublelayer Silver badge

        Re: LLMs: plagiarism devices

        So, in your mind, if I can find a single document talking about a war that doesn't call it by a certain name, that name is forbidden under all circumstances? A civil war is a war which occurs between groups in the same country. Before the American Civil War, they were one country. After the American Civil War, they were one country. It was a civil war, and it was the only one to occur in that country, thus The Civil War is a perfectly appropriate title for it. The alternate names you suggest are not great, especially for a global audience:

        "the War of Secession": You're going to have to tack on some more adjectives, as there have been a lot of secessions over the years. And, given that this secession completely failed, maybe that's not the best name.

        "War Between The States": This works a bit better, but it's more words for the same concept as "civil war".

        "War of Southern Independence": Without independence happening. That's kind of like talking about World War II as "The War of the Abolition of Poland" even though, at the end of it, there was still a Poland.

        "The Late Rebellion": This is just silly. That phrase worked a decade after the war since "late" meant "recent". Right now, that war is no longer recent and we tend not to use "late" for that purpose anymore.

        "The War of 61 to 65 (most accurate)": Come on, if you're posting here, you should already know that you can't just use two-digit years, and you should probably also know the benefits of unique identifiers. If we're going with that, I can name the first post-independence civil war in the Democratic Republic of the Congo (1961-1965), or, depending on how early you're willing to go, the Burmese-Siamese war which ended in 1665 and started, depending on your definitions, in 1661 or 1662.

  13. raving angry loony
    FAIL

    utter idiot

    That prof is an utter idiot. I do believe there's at least one case of an "AI detector" determining that the US constitution was written by an A.I.

    (I don't know what that says about that particular piece of writing...)

    Ah, found a link: https://twitter.com/williamlegate/status/1648389809818181637

    1. that one in the corner Silver badge

      Re: utter idiot

      > the US constitution was written by an A.I

      Behind the barn on a cloudless night, the townsfolk swore they heard a clap of thunder and strange lightning hit the ground. The next morning, a perfectly circular section of the barn wall was found to have been burnt away. Just beside it, one of the town's less-favoured sons, a ne'er do well tough, was found in just his undergarments on the ground, telling an unlikely story of how "a naked man approached him and demanded his boots, his trews and tricorne hat before stealing his horse". A stranger calling himself Nathaniel Gorham left by the East Road, ignored by all.

      Two days later, as the news reached the city, reported as "a marvellous tale from our country cousins", another stranger startles Edmund Randolph, grabbing his arm and pulling him aboard a wagon with the words "Come with me if you want our Constitution to live; please, call me John".

      The rest is history. We now can only wait to see if the future was won.

  14. Emmeran

    If the student has used any of the large cloud services to build their paper, then there is a pretty damned good chance one of the AIs has already absorbed it into its 'knowledge base'.

    That student may have just plagiarized themselves.

  15. The Dogs Meevonks Silver badge

    It would be very easy to tell if I wrote something or had it AI generated.

    I ahve a typing habit that means words like the 2nd one in this sentence are incorrectly spelled... It's always flipping 2 letters around and is caused usually by my right hand typing quicker than my left.

    So Have often becomes ahve, And will sometimes be na and so forth.

    Then there's the ' & ;

    I don;t know why... perhaps this keyboard is slightly smaller than my last one... but more often than not... I hit the ; key instead of the ' one... So any contraction ends up misspelled.

    I think it would be easy to spot my work... because spell checkers might pick up the first group... But they never pick up the second type of error.

    So to fool anyone... I'd just need to tell my document program to change all don't to don;t and select a few other words and misspell them...

    1. Brian 3

      Wait, so your right hand is quicker than your left... 'ahve' - you are using your right hand for the letter "a"?

  16. Craig 2

    `AI` generates human-looking text by having digested an insane amount of historical human-generated text. So an AI detector would have to digest large amounts of AI-generated text to have a reference point. I would assume it could eventually become just as proficient at detecting artificial content as both generation and detection models improve.

    A problem is that there is now an increasing amount of AI-generated text in the public domain, and future models could be training on this content. AI could eventually, as someone earlier put it, `crawl up its own arse`.
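    The "reference point" idea above can be sketched as a toy classifier: score a text against a corpus of human writing and a corpus of machine writing, and see which fits better. This is a minimal naive-Bayes-style sketch with made-up two-line corpora standing in for real training data; it illustrates the principle only, not how any actual commercial detector works.

```python
# Toy "AI detector": compare a text's word-likelihood under a (hypothetical)
# human reference corpus vs. a (hypothetical) AI reference corpus.
from collections import Counter
import math

# Stand-in corpora -- purely illustrative, not real training data.
human_corpus = ["the cat sat on the mat", "i really love fish and chips"]
ai_corpus = ["as a language model i can assist", "in conclusion it is important to note"]

def word_log_probs(corpus):
    """Per-word log-probabilities with Laplace smoothing, plus the
    log-probability assigned to words never seen in the corpus."""
    counts = Counter(w for doc in corpus for w in doc.split())
    total = sum(counts.values())
    vocab = len(counts)
    probs = {w: math.log((c + 1) / (total + vocab)) for w, c in counts.items()}
    unseen = math.log(1 / (total + vocab))
    return probs, unseen

def score(text, probs, unseen):
    # Sum of per-word log-likelihoods under one reference corpus.
    return sum(probs.get(w, unseen) for w in text.split())

h_probs, h_unseen = word_log_probs(human_corpus)
a_probs, a_unseen = word_log_probs(ai_corpus)

def looks_ai(text):
    # Whichever reference corpus gives the higher likelihood "wins".
    return score(text, a_probs, a_unseen) > score(text, h_probs, h_unseen)

print(looks_ai("it is important to note"))   # leans toward the AI corpus
print(looks_ai("the cat sat on the mat"))    # leans toward the human corpus
```

    The feedback-loop worry in the comment falls straight out of this sketch: once AI-generated text leaks into the "human" reference corpus, the two distributions converge and the comparison stops meaning anything.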

    1. that one in the corner Silver badge

      Yay, have one black-box "AI" deciding whether a piece of text was written by human or LLM. That will work well when it comes time to appeal the decisions:

      "Can you explain to us how the program decided the text was machine-generated?"

      "Nope"

      "Well, can you show us which parts of the text were key in reaching this determination?"

      "Nope"

      "Can you give us any reason at all to believe that the determination was correct, despite the protestations of the accused?"

      "Nope. Weeellll"

      "Yes?"

      "It is a computer, they always get it right! And we spent lots of money running it, we wouldn't waste our funds if it was wrong, would we?"

      "True enough. Appeal rejected, no degree for you laddie. Next!"

      "Okay."

      1. hayzoos

        The computer said so, therefore must be correct.

        This is exactly my problem with "AI" as it is used. A brain for the brainless. A decision maker for the wishy-washy. And it is being used for such important things. You aint seen nuthin yet.

  17. Mitoo Bobsworth

    What happened to hard work?

    As AI is used more & more to 'level the playing field', as it were, I can see a wave of mediocrity flooding humanity as brains are used less & less.

    1. 43300 Silver badge

      Re: What happened to hard work?

      Social media has already provided a very effective start in dumbing everything down to a level of bland predictability. This is just the next step!

    2. Michael Wojcik Silver badge

      Re: What happened to hard work?

      Nick Carr made this argument, initially about web search, 15 years back that-a-way.

      Carr was pretty much on the money about utility ("cloud") computing that year, too.

      Of course not everyone agrees with Carr, and not everyone who agrees with some of his work agrees with all of it. His previous prediction, about the diminishing return in competitive advantage from IT innovation, doesn't yet seem to be fulfilled; that may be due to a large amount of relatively inefficient IT still in use (providing a pool for continued efficiency gains, and rewarding organizations that claim them), or because there are still first-mover deals available, or because of the appeal of IT "improvements" to the market. I'm not familiar with his more-recent stuff.

      1. ArrZarr Silver badge

        Re: What happened to hard work?

        First sentiment feels wrong to me, but web search is a tool and the results you get are dependent upon how you use the tool.

        Second sentiment is pretty spot on.

        Third sentiment feels right but only when applied to absolute efficiency gains, which have diminishing returns in their very nature. In my experience, the big efficiency gains happen when The People Doing The Work push for a major IT upgrade and The People Doing The Work get to make the relevant decisions rather than The People Who Manage The Money.

        </opinion>

        1. 43300 Silver badge

          Re: What happened to hard work?

          "First sentiment feels wrong to me, but web search is a tool and the results you get are dependent upon how you use the tool."

          It's a while since I read the book, but as I recall one of the main points was that the results of web searches are to a large extent dependent on how the search engine works and how it ranks the results. This of course makes them highly susceptible to reflecting the ideology of the company which provides them.
