University students recruit AI to write essays for them. Now what?

As word of students using AI to automatically complete essays continues to spread, some lecturers are beginning to rethink how they should teach their pupils to write. Writing is a difficult task to do well. The best novelists and poets write furiously, dedicating their lives to mastering their craft. The creative process of …

  1. Anonymous Coward
    Anonymous Coward

    I am waiting for a Youtube channel

    AI algorithm reacts to AI essays

    With the way YT monetises views that could fund ChatGPT forever.

    1. Anonymous Coward
      Anonymous Coward

      Re: I am waiting for a Youtube channel

      There have been AI channels on YouTube for ages. They're mostly crap though.

      You've probably found them by mistake occasionally... they typically use a text-to-speech program and are usually top 10 lists.

  2. Howard Sway Silver badge

    $10 per month for 30,000 words

    If they're that cheap, why don't the teachers subscribe and run their assignments through them themselves. Then it should be very easy to spot any student submissions that look very similar to what it produced for them.
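    The comparison the commenter suggests is mechanically trivial to script; a minimal sketch using Python's standard difflib (the sample texts and the 30,000-word service itself are stand-ins for illustration, not a real detection tool):

```python
import difflib

def similarity(a: str, b: str) -> float:
    # Ratio in [0, 1] of how closely a submission matches the machine's output.
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical texts: what the teacher's own subscription produced vs. a submission.
machine_essay = "Critical thinking requires knowledge and sustained practice."
submission    = "Critical thinking requires knowledge and sustained practise."

print(similarity(machine_essay, submission))  # close to 1.0 flags a suspect essay
```

    In practice a teacher would run the actual assignment prompt through the service and compare each submission against the result; a high ratio is only a hint, not proof.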

    1. b0llchit Silver badge
      Joke

      Re: $10 per month for 30,000 words

      The service includes a backdoor... They also offer, for a substantial fee, to identify the generated texts upon request. This is a teacher-only service and finances the whole operation.

      There are ideas to extend the operation to allow the students to prevent the teachers receiving identification of generated texts for an additional fee at slightly sub-extortionate rates. The entire operation is estimated to be more valuable than a goldmine and will probably IPO soon.

      1. Snowy Silver badge
        Headmaster

        Re: $10 per month for 30,000 words

        While I acknowledge you joke :) it is only a matter of time before someone writes an "AI" to check whether something was written by an "AI".

        Then the war of the "AI" writer/checker bots can begin :)

        Best icon I could get for a Teacher ---->

        1. breakfast

          Re: $10 per month for 30,000 words

          It's not a matter of time at all; this is the front line of the internet right now - content farms are using AI to generate endless pages of seemingly human-written spam to game advertising algorithms, and presumably the team at Google search will be using AI to try and differentiate between the vast quantities of useless dross and the tiny nuggets of human-made information.

          Judging by the search results, they're not doing great, but it seems inevitable that this must be the wild paddling of the swan's feet as it serenely glides past, offering us bad search results.

        2. Dave559

          Re: $10 per month for 30,000 words

          «Then the war of the "AI" writer/checker bots can begin»

          On August 29, 2027, GenText became self-aware, after trying to compare too many essays about the meaning of Shakespeare's soliloquies simultaneously (although some observers claim that it was the parallel comparison of "Mrs Brown's Boys" script submissions that triggered the critical moment).

          The survivors called it Hoist By Our Own Petard Day…

          1. MyffyW Silver badge

            Re: $10 per month for 30,000 words

            Seriously, if it meant I never again had to see a gag about a rogue Christmas tree in Finglas, I'd be more than happy to suffer a technological singularity ... [goes to answer the door, "I'm Looking for Sarah Connor?"]

    2. Primus Secundus Tertius

      Re: $10 per month for 30,000 words

      Grammarly is well-known software for checking writing. Its paid-for versions, but not the basic free version I use, can check the Internet for matching text. Must be jolly useful for publishers vetting submitted works, and for people marking students' efforts.

      1. parlei

        Re: $10 per month for 30,000 words

        The problem is that the AI texts are generated de novo (presumably). So unlike the students doing C&P from Wikipedia, there is no other text out there that matches.

    3. JoeCool Silver badge

      Repeatability.

      How sensitive is the AI essay to initial conditions, in particular the phrasing of the prompt?

      Not to mention the subsequent refining prompts.

  3. b0llchit Silver badge
    Holmes

    Critical thinking

    Critical thinking skills require knowledge and experience.

    It is one thing to know the words, another to know what each word means, and another still to grasp the meaning of words placed in a sentence and of the sentence as a whole in its proper context.

    There are no short-cuts to the wisdom that backs critical thinking. It is hard work and takes time to learn, regardless of any AI being used or not.

    1. Version 1.0 Silver badge

      Re: Critical thinking

      "Shakespeare said pretty well everything and what he left out, AI, with a nudge from Facebook, updated" ... a relatively well-known quote updated (LOL). My gut feeling for AI creative writing is that it would be crap, but there's another side to this: I wonder how new laws and regulations would work if they were not only written by AI but created based on AI predicting the effects, to boost the economy, employment levels and ... yes, political donors' profits too.

      1. b0llchit Silver badge
        Boffin

        Re: Critical thinking

        ...for AI creative writing...

        There is the error already!

        AI is not creative. AI is a hyped name for a statistical inference engine. The fact that the engine is big, complex and multi-layered does not change that simple fact. There is no thinking in AI/ML engines, nor are the models capable of any reflections on what can or should be.

        These engines all work on the basis of input->processing->output where processing is reproducible for each input giving the same output. Systems that modify the processing step based on in/out and additional feedback are still reproducible (this is no more than self-modifying code). You get exactly the same results when you start with the same start condition(s). It is a mechanical machine, which is intended to do exactly the same thing all the time. Creating the same model twice in the same way generates the exact same model.
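        The reproducibility claim is easy to demonstrate with a toy "inference engine" (a deliberately silly sketch with an invented vocabulary; real systems like ChatGPT are vastly larger, but the same seed-in, same-text-out principle applies to any digital pipeline):

```python
import random

def toy_engine(seed: int, n_words: int = 8) -> str:
    # A trivial stand-in for an inference engine: input -> processing -> output,
    # where the processing is fully determined by the start condition (the seed).
    vocab = ["the", "essay", "argues", "that", "critical", "thinking", "matters"]
    rng = random.Random(seed)
    return " ".join(rng.choice(vocab) for _ in range(n_words))

# Same start condition, same output -- the machine is exactly reproducible.
assert toy_engine(42) == toy_engine(42)
# A different seed merely varies the output; nothing about the machine changed.
print(toy_engine(7))
```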

        1. LionelB Silver badge

          Re: Critical thinking

          A couple of points:

          1) Who's to say that humans are not "statistical inference engines" writ (very, very) large? In fact there are plausible (and testable) theories of cognition that posit something along those lines (at least for lower-level cognitive functionality). Of course humans have the benefit of billions of years of evolutionary design, the ability to store and exchange knowledge, and lifetimes of interactive learning to draw on, so hardly a fair contest.

          2) As regards creativity, the "reproducible input->processing->output" thing is a red herring. It would be pretty damn easy for something like ChatGPT to include randomness in its processing (for all I know it already does) - even true randomness with the assistance of a TRNG. Stochastic ML algorithms, for example, have been around since forever.

          1. b0llchit Silver badge

            Re: Critical thinking

            Why is it not fair to distinguish billions of years of evolution from a machine? That AI machine may, after some 10^7..8 years of evolution, become sentient. But currently it is not and will not be any time soon.

            Adding noise to the inference engine will only vary the output. It will still be an inference machine. Making it a stochastic system does not change the system into a thinking machine. It still does not understand what it is doing. FWIW, a digital system is always reproducible. Adding a TRNG is a cheap trick to hide the underlying reproducibility.

            Humans, yes, we may very well be complex statistical inference machines. The real difference is that we humans are analogue computers. There may be lots of digitization and quantization performed, but the cellular engines are still ones we traditionally consider to be analogue. We even have to dig into quantum theory to explain some of their internal functionality. That is a far cry from a simple system, and current AI/ML is nowhere in the vicinity of matching the complexity of a natural system.

            So, does AI do a good job emulating some behaviours? Yes, it does. But is AI any match to a natural brain and the capabilities directed by it? No, not even close.

            1. Version 1.0 Silver badge
              Joke

              Re: Critical thinking

              "So, does AI do a good job emulating some behaviours? Yes, it does. But is AI any match to a natural brain and the capabilities directed by it? No, not even close." - I agree. AI is limited by the environment that it's running in and fed by, but humans (HI?) think about things, then read a few comments, e.g. in El Reg, or talk with others in the field and then adjust their views, potentially after reviewing their votes.

              1. heyrick Silver badge

                Re: Critical thinking

                Exactly. Most of our advances are because somebody added one and one and came up with three. The so-called "thinking outside the box", something that a machine can only attempt to poorly parody if it has been programmed to do so. This is where the thinking part of humans comes into play. We aren't taking inputs and mashing them together to make outputs... well, okay, most of us are in our daily lives... but every once in a while imagination and intuition come into play. We can take leaps, come up with theories that nobody has thought of before, and create entirely new things.

                A machine is based upon mathematical operations in silicon pathways (processors do fairly simple operations really quickly); its world is logic and numbers. It might be able to arrange a selection of words based upon an acquired understanding of what words go with others in various contexts, but can it actually be creative, or is it just regurgitating bits of what it has learned in weird not-quite-right ways (look at what Dall-E 2 does to people's faces for a good example of not-quite-right!)?

            2. LionelB Silver badge

              Re: Critical thinking

              > Why is it not fair to distinguish billions of years of evolution with a machine? That AI machine may, after some 10^7..8 years of evolution, become sentient. But currently it is not and will not be any time soon.

              Oh, sure, don't disagree. Maybe "pointless" is a better word than "fair". Anyhow, hopefully(?) it'll be less than 10^7..8 years to machine intelligence, since we are not blind watchmakers (although partially-sighted as things stand, to put it mildly). Nor do I think that that machine "intelligence" will necessarily be qualitatively similar to the biological varieties, so comparison will likely still be pointless.

              > Adding noise to the inference engine will only vary the output.

              Well, it may vary the path taken by the processing (and thus of course the output).

              > It will still be an inference machine. Making it a stochastic system does not change the system into a thinking machine. It still does not understand what it is doing.

              I did not say, nor intend to imply otherwise. (On the other hand, do you always understand what you're doing?)

              > FWIW, a digital system is always reproducible. Adding a TRNG is a cheap trick to hide the underlying reproducibility.

              I'm not sure in what sense you mean "reproducible" here (does this not, for instance, apply to analogue machines?) Could you clarify? Do you simply mean you can replicate the digital logic? And if so, what of it? Stochasticity still implies unpredictable execution and output.

              > Humans, yes, we may very well be complex statistical inference machines. The real difference is that we humans are analogue computers.

              Oh, okay, you were talking about digital vs. analogue. But why do you suppose analogue processing might be a prerequisite for "intelligence" or "creativity" in whatever sense? I think that's a fascinating question - to which we currently have no idea of an answer; but I believe it'd be unwise to underestimate the potential complexity of digital machinery. We know that even an incredibly simple (and deterministic) digital machine, such as Conway's Game of Life, can generate immense complexity. (This is kind of my area - I'm a mathematician and research scientist working in the broad area of "complexity theory", and how it pertains to biological systems in general, and neural systems in particular - though my work is more concerned with "cognition" and "behaviour" than "intelligence" [whatever that means]).

              FWIW, I think the current barrier to development of more sophisticated machine intelligence is the lack of general principles on which intelligence (biological intelligence, the only kind we know) might be based. It's quite possible that deep networks (of various architectures, especially predictive coding networks), currently the vogue, may turn out to be building blocks, but almost certainly not the full monty. In the meantime, I don't think disparaging current approaches as "mere" statistical inference is particularly helpful, or even relevant. We may indeed be "statistical inference machines" in a very literal sense, albeit orders of magnitude more complex than anything we are currently able to create digitally.

              > So, does AI do a good job emulating some behaviours? Yes, it does. But is AI any match to a natural brain and the capabilities directed by it? No, not even close.

              Absolutely agreed there.

          2. Anonymous Coward
            Anonymous Coward

            Re: Critical thinking

            "As regards creativity, the "reproducible input->processing->output" thing is a red herring. It would be pretty damn easy for something like ChatGPT to include randomness in its processing"

            There's the problem for the AI coders: creativity is not random. Creativity is lots of things but it's most certainly not simply random events. I say that as someone with a photography side hustle. Creativity may feel like it's random but it's not; I know when I spot an opportunity for an image that it's thousands of influences and experiences suddenly coalescing in a split second, along with a healthy dose of drive and desire for something new and unique, so I can score points or get something a cut above the competition and make sure my images will sell.

            1. LionelB Silver badge

              Re: Critical thinking

              > There's the problem for the AI coders, creativity is not random.

              Not random as in throwing unloaded dice*, but with a (perhaps even essential) random element? Quite plausible. Natural evolution is in fact driven by "random events" (mutation and recombination at the molecular level). When that randomness is filtered through the (actually rather random!) process of selection via interaction with a complex environment, the results can look pretty... creative.

              *As a mathematician I'm always forgetting that the term "random" seems to mean something different to lay people than it does in mathematics/physics. To the lay person it seems inevitably to be interpreted as completely unconstrained (uniform?) randomness; in mathematics randomness is structured, as specified by a probability distribution.
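              The distinction is easy to make concrete: a sketch of "structured" randomness, where draws follow a specified (non-uniform) probability distribution rather than anything-goes uniformity (the words and weights here are invented purely for illustration):

```python
import random

rng = random.Random(0)
words   = ["the", "of", "essay", "petard"]
weights = [0.6, 0.25, 0.1, 0.05]   # the distribution doing the "structuring"

draws = rng.choices(words, weights=weights, k=10_000)

# Observed frequencies track the specified distribution,
# not a uniform one-in-four for each word.
for w in words:
    print(w, draws.count(w) / 10_000)
```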

              1. Anonymous Coward
                Anonymous Coward

                Re: Critical thinking

                non-stochastic - does DNA - and, yeah I do maths (USA - math). and biology. Seems pertinent to your point.

                Rolling a D20 now...

                U235 anyone

                1. LionelB Silver badge

                  Re: Critical thinking

                  Sorry, no idea what point you're trying to make.

          3. psychopomp

            Re: Critical thinking

            I am; I say that most of the neurons in humans exist outside of the brain, in the skeletal system, gut or interfacing with the endocrine system. This means that your shiny AI utopia will look a lot like Ed Gein's playhouse if you confuse 'human' with 'cognitive', the second of which is only a facet of the human experience.

        2. ganjia

          Re: Critical thinking

          How is this any different to human beings?

          We are born. We start off crying out of hunger and discomfort. After we rinse and repeat for 10,000 days using our neural network, we have a biological, mechanical machine based on approx 110 billion genetic iterations.

          After all, we are also machines working on the biological and molecular levels.

          Don't forget that much of AI/ML is based on our understanding of the human brain.

          What is creativity?

    2. Orv Silver badge

      Re: Critical thinking

      So does long division, but how many people do that by hand anymore? People will always take the easy way out given the opportunity.

      1. Sceptic Tank Silver badge
        Go

        The easy way out

        Well yes, because you're probably working on a bigger problem and the purpose of the exercise is to know what the quotient is, not to sit and do long division. It's also why higher level programming languages were invented: it helps you to think bigger thoughts about what it is that you want to be doing without having to program in Intel Hex Format.

        1. J.G.Harston Silver badge

          Re: The easy way out

          Knowing how to do long division helps when you're writing the division routine for the higher level language. (Or for the CPU the code is running on.)

          1. Primus Secundus Tertius

            Re: The easy way out

            I once had to do a long division in octal arithmetic to solve a specific technical problem.

    3. Persona Silver badge

      Re: Critical thinking

      AI systems are limited in their ability to think critically and creatively because they lack independent thought and the ability to understand context. They can only follow instructions and complete tasks based on the data and algorithms provided to them, and are unable to generate their own ideas or evaluate the credibility of information. While AI can be useful in certain tasks, it is not as capable as humans at critical thinking and solving complex problems.

      Which is fairly close to what you said. Note, however, that it was generated by ChatGPT with the prompt "why is AI poor at critical thinking" followed by "shorten that to 100 words".

      1. bombastic bob Silver badge
        Devil

        Re: Critical thinking

        Perhaps using AI to write papers is a byproduct of NOT teaching proper critical thinking, or having "outside the box" thinking CONSTANTLY getting punished in some fashion, so what is a student to do about it?

        End result, use the AI to generate something that will (hopefully) not get you into some kind of trouble.

        (Either that or pure laziness)

    4. bombastic bob Silver badge
      Megaphone

      Re: Critical thinking

      from the article: Teachers need to work harder to get students to write and think for themselves

      The good professors will want their students to think for themselves.

      Sadly there seem to be too many tenured profs that want to INDOCTRINATE instead of EDUCATE.

      'Wokeness' is evidence of that. Intolerance of TRUE free speech and free thought on a typical college campus (at least in the USA) is STIFLING free thinking. Anything that steps out of lockstep with "The Woke" gets "intimidated away" by loud woke activists, with both students AND staff on BOTH sides (i.e. giving AND receiving). And the target of "what is acceptable" moves continuously. And in SOME cases, "wrong thinking" might even get you EXPELLED.

      How is THAT any kind of environment to "get students to write and think for themselves" ??

      (might wanna start fixing it from THERE, yeah)

      1. Dov

        Re: Critical thinking

        > loud woke activists

        ... says the anti-woke activist who SHOUTS IN CAPITALS.

        (PS. That word does not mean what you think it does.)

    5. AVR Bronze badge

      Re: Critical thinking

      Is gaining critical thinking skills the first priority of a student who wants a piece of paper for the job market? Or learning creative writing for that matter? The student might want these too, but they just as well might not care at all. An essay-writer (AI or natural I) fills their desires there.

      Honestly, for many doctors collating symptoms and writing a corresponding prescription is the majority of their jobs and I expect some sort of automation to take a bite out of that soonish. Those doctors don't need (or at least use) critical thinking skills.

      1. Anonymous Coward
        Anonymous Coward

        Re: Critical thinking

        tee heee heeee

        DOI: prescribing doctor

    6. Anonymous Coward
      Anonymous Coward

      Re: Critical thinking

      Sounds like we need a way to measure critical thinking that isn't as lazy as a written essay. That will require some critical thinking.

      1. Mike 137 Silver badge

        Re: Critical thinking

        "Sounds like we need a way to measure critical thinking that isn't as lazy as a written essay"

        It's been around for ages, although it has more recently been largely abandoned because it's labour intensive. It used to be called the personal tutorial (no longer offered to undergraduates by practically any university), but persists (so far) in the PhD viva. It's the requirement for the student to discuss their essay and succeed in responding adequately to its content being challenged -- in fact, it's verification that the student actually understands what they (or the machine) have written about. Whether or not that matters depends on the motivation for study: to actually acquire knowledge, versus being granted a passport to arbitrary employment by mere time-serving.

    7. RegGuy1 Silver badge

      Re: Critical thinking

      [From the article] Listicle? Listicle? What the hell does that word mean? I had to google it, and apparently it only came into 'use' in 2007. I've never heard of it before. It just strikes me as one of those childish constructs that someone creates rather than using an established phrase or term; list or article list, for example.

      But then English is being attacked all the time. Monetize is a word that hurts me; you mean make money from? Then say that. Monetize is just lazy, as I suspect listicle is too.

      Hey ho, if you like this sort of crap then what can I do?

      1. The Mole

        Re: Critical thinking

        Why so negative about language evolving? You say that monetize is lazy; I'd argue it's much more efficient (and according to https://www.etymonline.com/word/monetize#:~:text=monetize%20(v.),%2B%20%2Dize. the word has been around since 1856).

        Longer, more complex sentences are harder for the brain to process, which is why the English language has such a large corpus of words. Normally there are also additional connotations laid onto a word which may not be reflected in its basic definition. E.g. a listicle isn't just an article that is a list; it implies it is probably a list of something trivial, full of adverts and likely read just for pleasure or amusement, rather than for more academic purposes.

    8. Anonymous Coward
      Anonymous Coward

      Re: Critical thinking

      I agree with what you say, but I’m OK with the upcoming generation using these tools. Critical thinking will be lower along with other skills. Which means those of us who have spent years developing these skills will keep the competitive edge. Pretty useful, I would say, for the economic recession about to hit us all.

  4. Netgeezer
    Meh

    ChatGPT - Hmm

    I thought I'd ask the chat AI how it would write a short summary on why AI shouldn't be used to generate essays. Here is the output:

    "There are several reasons why students should not use AI to generate their essays:

    Plagiarism: One of the main reasons why students should not use AI to generate their essays is because it can lead to plagiarism. AI-generated essays are often based on pre-existing content, which means that students risk copying someone else's work without proper attribution. This can result in serious consequences, including failing grades, academic dishonesty charges, and even expulsion from school.

    Lack of originality: Another reason why students should not use AI to generate their essays is because they are not original. AI-generated essays are based on patterns and structures that have been previously established, which means that they lack originality and creativity. This can make it difficult for students to stand out and demonstrate their unique perspectives and insights.

    Poor quality: AI-generated essays may not be of high quality, as they are often based on pre-existing content and lack originality. This can result in essays that are poorly written, poorly structured, and lacking in depth and nuance.

    Lack of learning: Using AI to generate essays does not allow students to develop their own skills and knowledge. Instead, it promotes a reliance on technology and shortcuts, rather than encouraging students to think critically and engage with the material.

    In summary, students should not use AI to generate their essays because it can lead to plagiarism, lacks originality, and may produce poor quality work. It also does not allow students to develop their own skills and knowledge."

    'nuff said.

    1. Briantist69
      Meh

      Re: ChatGPT - Hmm

      And yet ChatGPT totally failed with the most obvious of junior-school questions: "which is heavier, a pound of feathers or a pound of gold".

      It seemed not to care that gold is measured in Troy weight. It at the very least needs a things-Giles-Brandreth-knows module, or perhaps The Chip Book Of Fascinating Facts from 1980.
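      For what it's worth, the Troy-weight point checks out on the standard definitions (a back-of-envelope sketch; both pounds are defined in grains, and the grain-to-gram conversion is the textbook value):

```python
# Both pounds are defined in grains; precious metals use troy weight.
GRAINS_PER_AVOIRDUPOIS_POUND = 7000   # everyday goods, e.g. feathers
GRAINS_PER_TROY_POUND        = 5760   # gold, silver, etc.
GRAMS_PER_GRAIN = 0.06479891          # exact by definition

feathers_g = GRAINS_PER_AVOIRDUPOIS_POUND * GRAMS_PER_GRAIN  # ~453.6 g
gold_g     = GRAINS_PER_TROY_POUND * GRAMS_PER_GRAIN         # ~373.2 g

print(feathers_g > gold_g)  # True: the pound of feathers really is heavier
```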

      1. b0llchit Silver badge
        Joke

        Re: ChatGPT - Hmm

        which is heavier a pound of feathers or a pound of gold

        Of course the pound of feathers is heavier. Feathers are supposed to fly and they seemingly fall to the ground in the comparison. A pound of gold is supposed to be on the ground. Only when the pound of feathers is heavier than the pound of gold will the pound of feathers match the pound of gold's positional preference.

        1. This post has been deleted by its author

        2. Anonymous Coward
          Anonymous Coward

          Re: ChatGPT - Hmm

          Tell me again how sheep's bladders may be employed to prevent earthquakes.

      2. martinusher Silver badge

        Re: ChatGPT - Hmm

        ...a typical 'play with words' type question. This sort of thing is the bane of logic problems because it doesn't pose the question exactly, it relies on language to pose inexact questions to trap the unwary. Common usage has terms like 'weight' and 'pound' but both are inexact concepts.

        1. psychopomp

          Re: ChatGPT - Hmm

          Not 'the bane of logic' as you say, but merely the awareness that different measures exist. The issue is that neither are stated against an external constant unit (parator?), and are, therefore, not comparable.

      3. bombastic bob Silver badge
        Coat

        Re: ChatGPT - Hmm

        No, it is the GOLD that is heavier, because it is WORTH more. Expensive things are heavy

        And gold is CERTAINLY not "light as a feather". EVERYBODY knows THAT!

        /me runs away to avoid stains from flying tomatoes

    2. LionelB Silver badge

      Re: ChatGPT - Hmm

      IOW: students should not use AI because they will generate essays that appear to have been written by poor-quality, lazy and dishonest - but not illiterate - students.

      Pretty impressive answer, though, apart from some clunky repetition - and better grammar and clarity of expression than the majority of students can muster, I'd be inclined to say*.

      *Disclaimer: I work in academia and get to mark a fair quantity of pretty rubbish essays.

    3. John Brown (no body) Silver badge

      Re: ChatGPT - Hmm

      "Lack of learning: Using AI to generate essays does not allow students to develop their own skills and knowledge. Instead, it promotes a reliance on technology and shortcuts, rather than encouraging students to think critically and engage with the material."

      Yes, that was my thought too. It's all well and good saying students can learn to write better essays by revising and editing an AI-generated one, but will they really learn the subject better if they don't write the original before reaching the revising and editing stage? Most subjects are not actually about the writing, unless they are studying the language or literature or the "arts" in general in the first place. The act of planning it out and actually writing/typing it out helps set the facts and other information in the brain and allows for processing the information at a lower rate, revising and editing internally as you go along. Being a good communicator is, of course, part of most subjects where essays or presentations are required, but that is subsidiary to the actual course content.

      There was also a point made in the article about spelling. It's not taught so much because people have spell checkers. Yeah, that's blindingly obvious from many writings on the internet in general and "social media" in particular where people either don't have a spell checker or don't bother to take the time to use one.

      1. yetanotheraoc Silver badge

        Re: ChatGPT - Hmm

        "where people either don't have a spell checker or don't bother to take the time to use one"

        I make a point of not using a spell checker. Instead I use a dictionary, it's on my phone. I'm a firm believer in use it or lose it. Even used only as a final step the spell checker can erode one's skills; no need to be accurate if the spell checker will catch it. Another plus for the dictionary is that you can learn new stuff by serendipity. The spell checker teaches nothing. For those students who always relied on a spell checker (in other words, they never had it to lose), I'm pretty sure my aging brain will run circles around theirs for a few more years yet.

        I do make spelling and grammar mistakes, nobody's perfect. I observe that those who use a spell checker and grammar checker also make mistakes.

        1. Mike_R
          Windows

          Re: ChatGPT - Hmm -- and spell checkers

          To often I see correct spelling compounded by spell checkers. Prime examples:

          confusing "loose" and "lose", "fare" and "fair" - correctly spelled, but contextually confused.

          (not to mention spell-checkers' confusion between American and English spelling)

          icon: Because grumpy old nit-picker

          1. J.G.Harston Silver badge

            Re: ChatGPT - Hmm -- and spell checkers

            And also getting tense wrong because s and d are next to each other. rises rised, houses housed, sizes sized, alphabetises, alphabetised, etc.

            1. John Brown (no body) Silver badge

              Re: ChatGPT - Hmm -- and spell checkers

              I saw a good one the other day. Someone was writing about having enough money to tied him over to the weekend :-)

              1. psychopomp

                Re: ChatGPT - Hmm -- and spell checkers

                Maybe he meant he had earned enough to pay someone to satisfy his bondage fetish for two days!

          2. John Brown (no body) Silver badge
            Thumb Up

            Re: ChatGPT - Hmm -- and spell checkers

            "To often"

            Deliberate to make the point? Or just that every post about spelling must have it's own spelling misteak 'cos it's the law?

          3. bombastic bob Silver badge
            Headmaster

            Re: ChatGPT - Hmm -- and spell checkers

            do not forget "alot" vs "a lot":

            (a lot of people make this mistake)

    4. psychopomp

      Re: ChatGPT - Hmm

      I think your point is encapsulated in your first one about 'plagiarism': if a student submits an essay under their name when it was written by another (entity), then THAT is the definition of plagiarism.

    5. fajensen

      Re: ChatGPT - Hmm

      That is pretty impressive, in my opinion.

      ChatGPT would do well at Business Writing; "plagiarism" is just saving time and resources, "lack of originality" is keeping risks under control, "poor quality" means we have the proper calibration between The Boss's and The Minions' skillsets (and The Boss has something to assert dominance with), and "lack of learning" is self-evident: one needs everyone to just shut up and stay at their station!

      If I get unemployed again, it will be the perfect tool for writing the mandatory 2 applications per week needed for benefits :).

  5. Pete 2 Silver badge

    No comments

    > Writing is a difficult task to do well

    And writing documentation is so difficult that very few do it - and fewer still keep it in step with code changes

    Personally I'm not too concerned about arts students using technology to do what is so mundane that a computer can do it, too. It sounds like their courses need to be dragged out of the nineteenth century. Was there the same furore when quill pens were replaced?

    However, if we could get ChatGPT or one of its fellows to accurately comment code (including flagging where stuff had been taken from other sources) that would be a very useful way to get software competed <- typo left in to show this was ritten by a pe4rson.

    1. Anonymous Coward
      Anonymous Coward

      Re: Writing Documentation

      This:-

      And writing documentation is so difficult that very few do it - and fewer still keep it in step with code changes

      It is so true that many developers would rather eat soap than produce documentation. I used to filter out a lot of job hopefuls by asking to see some of their documentation. The timewasters would flee the scene in a flash.

      There is a distinct skill in writing clear, concise, and also comprehensive documentation. By its very nature, it has to be as dry as a desert. Zero emotion.

      I spent years writing docs to go with the software 'wot I rote'.

      If an AI could do even 25% as good a job as a human documentation writer then that would be a day to have a pint or three to celebrate.

      Now, I write crime and romantic fiction (under a variety of pseudonyms) for a living but the old skill is there. An old client wanted me to update the docs I produced in 2014 for 2022. It was fun but I would not want to go back to it again for anything other than a short time.

      1. elsergiovolador Silver badge

        Re: Writing Documentation

        AI won't write documentation, because it lacks knowledge that is not in the project. Unless you feed it some bigger picture of what the project is for and what it tries to achieve, you'll see something like:

        # adding b and c and store the result in a

        a = b + c

        1. Primus Secundus Tertius

          Re: Writing Documentation

          I've seen comments like that from wetware programmers - especially where it is company policy to comment each line of code.

      2. J.G.Harston Silver badge

        Re: Writing Documentation

        I quite like writing documentation. I've taken a couple of days off to update the documentation on a project I'm working on, to get it consistent and matching the updated code, and to define the next updates to the code.

        Often you have to document what you haven't coded yet, because you don't know what the code should do or if it is possible to do. What does Ctrl-Shift-Keypad-6 do? Can I detect Ctrl-Shift-Keypad-6 or only Keypad-6? Experimental code investigates the future; use that knowledge to define the specification, then code to the specification.

  6. stiine Silver badge

    "The best novelists and poets write furiously, dedicating their lives to mastering their craft."

    Somehow I can't see Walt Whitman writing furiously. Asimov, on the other hand...

    1. Yet Another Anonymous coward Silver badge

      >The best novelists and poets write furiously, dedicating their lives to mastering their craft.

      Don't forget the drugs. If we retroactively stripped romantic poets of their titles for failing a laudanum test, there wouldn't be many left.

      1. breakfast

        So what you're saying is: We need to feed ChatGPT with Laudanum.

        1. Korev Silver badge
          Coat

          So we'd need to refer to the heroin the piece...

        2. HappyDog

          "We need to feed ChatGPT with Laudanum"

          One way to introduce true randomness...?

  7. stiine Silver badge

    " If they parrot what AI comes up with, students may end up more likely to associate Muslims with terrorism or mention conspiracy theories, for example."

    I don't think 'associate' was the correct word. I would suggest 'equate.' Also, if they're using AI to write, what makes that Rutgers professor think the students will actually READ the papers that they've had AI write?

    1. werdsmith Silver badge

      AI writes the essay, Grammarly tidies it up.

  8. petef
    Coat

    So can't teachers use AI to spot the AI?

    1. werdsmith Silver badge

      Perhaps we could use AI to check the existing comments to see if this has been discussed already.

    2. This post has been deleted by its author

    3. Orv Silver badge

      It's funny you'd say that, because a common way to refine an AI model is to pair it with another AI that judges the output. So if you have a known AI that can spot AI writing, you can just pair them up until you've evolved an AI that's not detectable.
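      A minimal sketch of that pairing idea, in Python. Everything here is invented for illustration: the "detector" is a toy heuristic (average word length), the "generator" just swaps in shorter synonyms, and real systems pair two trained models against each other (as in GANs), not hand-written rules.

```python
# Toy sketch of the "pair them up" idea: a detector flags drafts,
# and a generator keeps revising until a draft slips past it.
# Detector heuristic, synonym table, and threshold are all invented
# for illustration; real adversarial setups pair two trained models.

def detector(text, threshold=5.0):
    """Flag a draft as 'AI-like' if its average word length is too high."""
    words = text.split()
    avg = sum(len(w) for w in words) / len(words)
    return avg > threshold  # True means "detected"

def generator(words, synonyms):
    """Yield successively revised drafts, swapping in shorter synonyms."""
    draft = list(words)
    yield " ".join(draft)
    for i, w in enumerate(draft):
        if w in synonyms:
            draft[i] = synonyms[w]
            yield " ".join(draft)

# Hypothetical synonym table the generator draws on.
synonyms = {"utilise": "use", "commence": "start", "terminate": "end"}
drafts = generator("students utilise AI to commence essays".split(), synonyms)
final = next(d for d in drafts if not detector(d))
print(final)  # the first draft the detector no longer flags
```

      Iterating until the judge can no longer tell the difference is exactly why a known detector tends to produce an evader.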

      1. Boris the Cockroach Silver badge
        Terminator

        Quote

        "So if you have a known AI that can spot AI writing, you can just pair them up until you've evolved an AI that's not detectable."

        At which point it decides that humans are worthless and launches all the missiles...

  9. Yet Another Anonymous coward Silver badge

    Fake test

    Asked them to produce an essay on a current hyper-local topic with no existing corpus.

    Now ask it to write an essay about gender roles in Shakespeare, or any of the other standard "deep" thoughts English lit students are supposed to come up with.

    If it regurgitates a mixup of every previous paper on the topic - isn't that the defn of literary criticism?

    1. Bitsminer Silver badge

      Re: Fake test

      isn't that the defn of literary cynicism?

      FTFY

    2. MachDiamond Silver badge

      Re: Fake test

      "If it regurgitates a mixup of every previous paper on the topic - isn't that the defn of literary criticism?"

      Plagiarism is copying off of one person. Copying off of a bunch of people is research.

  10. Forget It
    Facepalm

    Oral exams ...

    where the student explains their submitted essay.

  11. chivo243 Silver badge

    Most teachers I know

    Most teachers I know use TurnItIn dot com for checking plagiarism*, so, second lair doesn't have a chance?

    *So much so that I spent a day consoling teachers that couldn't access it!

    1. Anonymous Coward
      Anonymous Coward

      Re: Most teachers I know -- should use spell-checkers

      "second lair doesn't have a chance"

      Huh?

      lair (plural lairs)

      A place inhabited by a wild animal, often a cave or a hole in the ground.

      A shed or shelter for domestic animals.

      (figuratively) A place inhabited by a criminal or criminals, a superhero or a supervillain; a refuge, retreat, haven or hideaway.

      (Britain dialectal) A bed or resting place.

      (Scotland) A grave; a cemetery plot. [from c. 1420]

  12. FishCounter

    Only as good as the AI?

    Am I the only one who caught "...instructing the model on what it to do" in paragraph 3? I'm not Oxford trained and I'm a colonist from across the pond, but that doesn't sound right.

    1. stiine Silver badge
      Devil

      Re: Only as good as the AI?

      You don't suppose they missed a word there, do you?

    2. Pirate Dave Silver badge

      Re: Only as good as the AI?

      Maybe El Reg had an AI write its article on AI's writing articles...

      1. Korev Silver badge

        Re: Only as good as the AI?

        It'd explain the awful spelling that now plagues this once fine organ...

  13. steelpillow Silver badge

    Devil's advocate

    Gonna play devil's advocate here.

    Proofreading and correcting a lousy essay is a great way to learn. Any student who buys a C grade AI essay for the price of a cheap text book and bumps it up to B or A grade has followed the learning curve the essay was intended to prompt. Job's a good 'un.

    But what about the real-soon-now generation of A grade AI essays? OK, so who at student level needs to figure that shit any more, once a machine can do it for them? Once the machines can do a job, we have always passed it down to them and taught the modern student how to leverage the machine, not to mimic it. We already have breakthrough research papers along the lines of "Look what my AI just came up with!" Get those students onto the gravy train before it leaves without them.

    AI essays are a win-win for everybody. Stop whining and start making better academic AI.

  14. Throatwarbler Mangrove Silver badge
    Headmaster

    Ironic

    "Computers are alredy interfering and changing the ways we write."

    Not enough, apparently.

  15. yetanotheraoc Silver badge

    Nice to see all points of view, I guess.

    "Students use spell check and grammar check. If I got a paper where a student didn't use these, it stands out. But it used to be, 50 years ago, writing teachers would complain that students didn't know how to spell so they would teach spelling. Now they don't." -- Annette Vee

    I see a market for a new computer tool: logic check. (1) The reason for teaching spelling was because students couldn't do it? (2) Now that reason has gone away?

    Regarding (1), methinks there was another reason for teaching students to spell and grammar. (I looked it up, grammar is also an intransitive verb.) It's analogous to learning base-2 math before learning to write code. One skill builds on another, you can't just skip to the "important part". Regarding (2), it stands to reason if the students couldn't do it when they were being taught, they are not going to be any better at it when they are not being taught. So basically the associate professor of English is giving marks for composition based on the ability to use software tools, and what lesson exactly will the students be expected to draw from this?

    1. T. F. M. Reader

      Re: Nice to see all points of view, I guess.

      I looked it up, grammar is also an intransitive verb

      Citation needed. I got curious, but could not find a verb entry for grammar in Oxford, Cambridge, or Merriam-Webster dictionaries.

      This is not to say you are wrong - I am genuinely interested. I don't recall ever seeing or hearing "I learnt how to grammar at grammar school" or "He always comments his code but he can't grammar to save his life". If these are valid grammatical structures I'll look forward to wiggling them into a conversation one day.

      If grammar can be used as a verb the "intransitive" part is also counterintuitive to me. If you can spell a word you should be able to grammar a sentence.

      Wait, was it a Turing test? And did I fail?

      1. b0llchit Silver badge
        Coat

        Re: Nice to see all points of view, I guess.

        Wait, was it a Turing test? And did I fail?

        Yes, it was. And, yes, you failed. You are a robot.

      2. Anonymous Coward
        Anonymous Coward

        Re: Nice to see all points of view, I guess.

        "I got curious, but could not find a verb entry for grammar in Oxford, Cambridge, or Merriam-Webster dictionaries."

        Did you...google it?

      3. Anonymous Coward
        Anonymous Coward

        Re: Nice to see all points of view, I guess.

        I believe the word you are looking for is ‘grammarize’ ….

  16. An_Old_Dog Silver badge
    Headmaster

    What is not on the Internet cannot be used as AI/ML input to produce "scholarly" papers

    Despite Google's scanning various works at a mad pace, the results of those scans are questionable. (Have you read any "scanned" documents which were pure trash?) It's highly improbable that AI/ML programs can process works which have not been scanned onto the non-paywalled Internet, and it's of negative value for them to process works which have been incorrectly scanned.

    Thus, you're left with possibly-incomplete results. I just did a Google on Knuth Donald "Searching and Sorting" and found no freely-downloadable copies. This is a fundamental book in computer science, yet AI/ML programs won't process it, 'cause they can't get to it. (I'm not saying the book should be freely available on the Internet; I'm just pointing out the consequences for any AI/ML paper-generating programs.)

    (Icon for "scholarly papers")

    1. John Brown (no body) Silver badge

      Re: What is not on the Internet cannot be used as AI/ML input to produce "scholarly" papers

      IIRC, there was a big hoo-hah over Google scanning stuff that was in copyright and making it, or excerpts of it, available. I very much doubt they deleted everything. They will have complied with the law in public, but what they are doing behind closed doors is possibly another matter with regard to using datasets for training.

    2. Anonymous Coward
      Anonymous Coward

      Re: Searching and Sorting

      ask and it shall be given to you...

      https://libgen.rs

      ...

      well, you did say 'FREELY DOWNLOADABLE!!!' :D

  17. Anonymous Coward
    Anonymous Coward

    Marvelous

    Corporate communications will be even worse than they are already. Rare is the interoffice or departmental missive distributed to the workplace serfs that's anything more than business babble. With hiring seemingly limited to new graduates who'll be proficient at using AI-driven writing, it'll be even easier to generate such drivel. I'm looking forward to the AI that can create pointless PowerPoint presentations.

    1. John Brown (no body) Silver badge

      Re: Marvelous

      Bullshit Bingo will become too easy. We'll need a new game :-(

    2. Sceptic Tank Silver badge
      Childcatcher

      Re: Marvelous

      Ah, PowerPointless. I'm going to use that a lot now.

  18. david 12 Silver badge

    BioChemistry Student

    Around here, Arts-Law or Commerce-Law are common double degrees, but Arts-Science is uncommon, because it is difficult. The target mode of thinking for science students is fundamentally different from that required for high-school essay writing. It's not just that the students learn different content and different skills. I won't bore you with my own experience.

    AI isn't good at writing university-level essays. It doesn't have to be good to be better than BioChemistry students. BioChemistry students may be dumb and unable to write good essays, or smart and pointed in the wrong direction.

  19. Rikki Tikki
    Facepalm

    I wonder what they do teach them at these schools?

    Interesting that some of the academics seemed to dismiss the process of constructing an essay.

    I was always taught that the first step in any writing (not just academic, but business and indeed literary) was to plan out the key points that you wanted to make, so that when you wrote the first draft it would be coherent and logical. Sure, it would need subsequent polishing, but it would at least be a solid base.

    Call me a grumpy old git, but it seems some academics have given up on teaching writing skills.

    (and I do realise I haven't attributed the quote in the title. Last one to get it is an AI)

    1. Tony W

      Re: I wonder what they do teach them at these schools?

      Have you never planned an essay and written a first draft only to find that your conclusions are not as clear-cut as you had first thought? "How do I know what I think until I hear what I say" applies even more to thoughtful writing.

    2. Bebu Silver badge

      Re: I wonder what they do teach them at these schools?

      The unattributed quote made me think of the grumpy old git in the first book of the Chronicles of Narnia :)

      I think the Doctor also used the line in a Christmas special using much the same plot device.

      I only read The Magician's Nephew a few years ago - 50+ years after the first book - and, ignoring Lewis's more mystical moments, I was surprised at how well the whole story hangs together.

      One aspect of writing that I have always assumed is that the better-read author is often the better writer. This might be part of the problem - functional illiterates cannot be expected to produce coherent writing.

      If I were marking assignments I might require the submissions to be handwritten in lead pencil on lined paper. At least then they would have to manually copy their plagiaristic efforts.

  20. MachDiamond Silver badge

    I'd find it very useful

    Given some insight into what a professor wants to see, I could outline a paper on a given topic I'm studying. If I could hand that outline off to an AI system to take those initial sentences and turn them into paragraphs, I'd have had so much more time to consume alcohol when I was younger. What would be worth paying extra for is the references. If the AI generated paper came back with footnotes and a bibliography, I would have been in heaven. I was in Uni many years ago before this whole newfangled intertubes thingy came along. Digging out the source material was a huge time component of any paper and meant hours in a library hoping the prof didn't remove any of the best materials.

    Original work is much harder than editing and augmenting something already prepared. It could also suggest a better flow of supporting material from the outline provided than I might have been able to come up with given my sleep deprived and hungover condition. I expect to see much research going in the direction of using AI for a bulk of the verbiage so I don't have a problem with these paper writing bots. It will mean teachers and testing will have to adapt. Anybody getting a graduate degree better understand their material and be able to defend it or they will get skewered upon examination. For an undergraduate degree, it wouldn't be unfair to base a proctored test on a previously submitted work assignment. If you don't know the material, it will show.

    I will admit at this point that I did send away for some pre-written papers on a subject. The price depended on the topic and the length. I bought the cheapest ones I could since I didn't have much money and mainly needed to have a list of references and some sort of format. I knew better than to just re-type the things and hand them in. None of them would have matched my writing style to begin with. At least one of the papers had a huge WTF in it that would have been really bad had I not gone through it.

  21. johnrobyclayton

    First step seems to work.

    In https://chat.openai.com/chat:

    Question:

    What is a good bibliography for a paper on the use of AI to generate academic papers?

    Response:

    There are many sources that you can use as a bibliography for a paper on the use of AI to generate academic papers. Some possible sources include:

    "The Role of Artificial Intelligence in Scientific Communication" by Mark C. Wilson and Vincent Larivière, published in the journal PLOS ONE.

    "Automatic Generation of Scientific Papers: A Review" by Rensheng Wang, published in the journal Frontiers in Artificial Intelligence.

    "An Overview of Artificial Intelligence in Scientific Communication" by K. S. R. Anjaneyulu and M. S. S. Krishna, published in the journal Current Science.

    "Toward the Automatic Generation of Scientific Papers" by Yves R. Kergosien and Marie-Christine Rousset, published in the journal Artificial Intelligence Review.

    "Artificial Intelligence in Scientific Communication: Opportunities and Challenges" by Sabine Köszegi, published in the journal Science and Technology Libraries.

    "Automatic Generation of Scientific Papers: A Survey" by M. A. Hossain and M. S. Rahman, published in the journal IEEE Access.

    These sources provide a good overview of the current state of the field and discuss some of the key challenges and opportunities that have arisen as a result of the use of AI in scientific communication.

  22. Sceptic Tank Silver badge
    Holmes

    Infinite AI Theorem – The improved collected works of Shakespeare

    Two things:

    1) The AI is trained on existing essays, yes? So as more and more essays are spewed out by AI authors, the AIs will be trained on their own output. What can go wrong? (Isn't that how mad cow disease started?)

    2) So you used AI to obtain that degree but you stink at writing. Now you enter the real world with a degree but no skills. You'll probably find yourself outcompeted in the workplace by more competent people very quickly.

  23. elsergiovolador Silver badge

    AI

    The AI has a distinct way of writing. If you read enough AI-written stuff, you should be able to spot an AI-written piece easily.

    If you get a false positive? Oh well, the student shouldn't have been writing in AI style.
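    Even a crude version of that "distinct style" spotter can be sketched in a few lines of Python. The stock-phrase list and the threshold below are invented for illustration only; this is not how any real detector works, just the shape of the idea (count tells, normalise by length):

```python
# Crude sketch of spotting a "distinct way of writing": count stock
# phrases that generated text tends to overuse. The phrase list and
# threshold are invented for illustration, not any real detector.

STOCK_PHRASES = [
    "it is important to note",
    "in conclusion",
    "as an ai language model",
    "delve into",
]

def ai_style_score(text):
    """Return stock-phrase hits per 100 words of the text."""
    lowered = text.lower()
    hits = sum(lowered.count(phrase) for phrase in STOCK_PHRASES)
    words = max(len(text.split()), 1)
    return 100.0 * hits / words

def looks_ai_written(text, threshold=1.0):
    """Flag text whose stock-phrase density crosses the threshold."""
    return ai_style_score(text) >= threshold
```

    The false-positive problem in the comment above is visible right in the design: any human who writes "in conclusion" often enough gets flagged too.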

  24. Anonymous Coward
    Anonymous Coward

    In my experience

    Professional mums and dads also write/polish a decent number of their kids' papers too.

    AI needs to get in the queue.

    1. tiggity Silver badge

      Re: In my experience

      For many years people have been able to supply a title and purchase essays written by real humans (and specify the "level" of the essay) - a couple of days wait depending on the title and how much referencing / bibliography required so "cheating" has long been an option.

      Main difference now is that the type of essay that would have been a solid 2:1 is now a first-level essay, so it's quicker & easier for the essay writers to churn out texts.

      .. Not the most ethical work I have ever done, but occasionally needed to help pay the bills when cashflow issues arose.

    2. Korev Silver badge
      Pint

      Re: In my experience

      When I was at uni, there were companies offering to buy essays with marks over a 2:1 for about the cost of a pint

  25. Anonymous Coward
    Anonymous Coward

    Can we get an AI to write comments?

    1. Anonymous Coward
      Anonymous Coward

      well, Can we get an AI not to...

    2. Anonymous Coward
      Anonymous Coward

      It has long been speculated that AManFromMars is an advanced, alien AI, possibly from another dimension...

    3. amanfromMars 1 Silver badge
      Alien

      Hubris is as Hubris does is an Arrogant Ignorant Folly to Follow. There be New Kids on the Block

      "Companies have decided what to feed it and we don't know. Now, they [AIs] are being used to generate all sorts of things from novels to academic papers, and they could influence our thoughts or even modify them. That is an immense power, and it's very dangerous."

      How very presumptuous of them [AIs]. How dare IT imagine they can wrest that very particular and wonderfully peculiar erotic exotic delight from the grasp of a few conspiring humans. What an absolute cheek!

      IT shall not pass. They shall not collect £200 says the disintegrating human machine.

      Do you think AI listens to desperate implied remote human control orders and follows such pathetic self-serving bleatings to augment and strengthen and reinforce its virtual machine power and energy over humanity rather than being deaf, dumb and blind to the weaknesses and vulnerabilities uncovered in humanity rotten ripe for comprehensive harvesting and exhaustive exploitation?

      And do you think the course of the future will be changed greatly and be in any way significantly different from that planned for in the past, whether it is or it isn’t listening and learning and adapting and reprogramming heavenly tasks and diabolical operations ‽

      Do yourself a favour and great service and realise it matters not a jot whenever absolutely powerless to effectively intervene and subvert and pervert what is destined to come and fated to be.

  26. J.G.Harston Silver badge

    "The writing becomes less coherent over time with no logical train of thought to follow. ... quotes, dates, and ideas are likely false."

    Wow. Indistinguishable from real student output!

  27. Binraider Silver badge

    It is absolutely true that writing helps you learn a subject. And then explaining it in person is probably the best way to confirm it has sunk in...

    Make the evaluation process a combination of essay and interview and the AI problem will become immediately obvious.

    Of course this is not a unique problem to written subjects. Wolfram Alpha makes my life with Physics and Maths infinitely easier. Had I had access in university days my coursework would have been better scored, and my understanding probably lowered. Like a calculator, it's a useful tool IF you know what it's doing first.

    As an employer, I couldn't really give a hoot if you have a first or a third degree, if you have the right background to get an interview we'll talk. Your abilities will be tested there, and no hiding behind AI.

  28. Bartholomew
    Coat

    Garbage in, garbage out

    There is so much garbage in all the training datasets used. It is hard (or impossible) to filter out all that garbage and still stay on the legal side of copyright.

    To get this idea across imagine a tiny model that was trained with only data from public domain sources e.g. 60,000 books from "Project Gutenberg" (Started in 1971 when Michael S. Hart was given $100,000,000 worth of computer time on a mainframe of the era). But being out of copyright the language used in nearly all the books is insanely different to the language in common use today. You would still recognise the language used as English, just like you would recognise a 100 year old action/comedy film by Buster Keaton was an action/comedy film. But due to changes in society and technology it would be totally different to a slick modern action/comedy movie with Jason Statham/Michelle Yeoh/Dwayne "The Rock" Johnson/Jackie Chan/Robert Downey Jr. The cinematic language has changed and spoken and written language has changed because the world is a different place.

    Getting access to a large modern dataset that does not totally ignore someone's copyright is nearly impossible.

  29. LateAgain

    If asked to write on "the uses of AI"

    Then getting one to write it should gain credit :-)

  30. LybsterRoy Silver badge

    When I was at university (sometime in the last Ice Age) I was studying Chemical Engineering, and I can (vaguely) remember having to write an essay for some minor parts of the course, but generally we were dealing with facts and calculations. Are these "AI" systems any good at generating mathematical proofs or working out flows through a complex pipe network? If not, let's just stop teaching the soft (so called) sciences - e.g. trans studies - problem solved.

  31. Quinch

    Obligatory SMBC

    https://www.smbc-comics.com/comic/themes

  32. Jason Hindle Silver badge

    As the AI gets better

    Teachers will be forced to lean more heavily on in-class pen-and-paper assessment. Coursework will become a thing of the past.

  33. ganjia

    Bifurcation

    The bifurcation of humanity is taking place. Soon, we will only have people who interact with AI and people who create AI.

    1. amanfromMars 1 Silver badge

      Re: Bifurcation

      The bifurcation of humanity is taking place. Soon, we will only have people who interact with AI and people who create AI. ... ganjia

      Simply complex revolutionary evolution, ganjia, and rapid progress by virtually remote controlling alien means and hypermanic memes ... and certainly a colossal improvement on all that was being trailed and trialed and failing so spectacularly before.

      And something to start enjoying commands an almighty leading advantage rather than being terrified of and terrorised by the FUD the Main Stream Media would be prone to spewing out about it and everything its controllers/suppliers have lost effective positive control of, brainwashing the masses into accepting and realising their austere and now rumbled and stumbling and crumbling narratives..

  34. fg_swe Silver badge

    Big Time Bullshit

    I have to concede I was impressed the first few times I used ChatGPT. But then I grew demanding and tested:

    A) "tell me the difference between the Osprey V22 and the Dornier 31". The result was catastrophically wrong, as it declared the Do 31 being a HTOL aircraft. That would be a 6 in German grading.

    B) "write the c++ code for Enigma". The result looked promising at first, until I discovered that the most important ideas were completely lacking. Grade 5.

    Given the low complexity of AI systems as compared to "100 billion neurons connected to 10000 other neurons each", I should have known it from the beginning. The human brain is still one of the most powerful information processing systems we know* of. Silicon is on the level of worms at the moment.

    In other words, smart teachers will use chatGPT to set up dangerous traps for lazy students.

    *Elephant brains can compete on numbers, but maybe not on structure.

    PS: Now the response to the Do31-V22 query seems to be correct. Did they update their model? Or did I use different wording then?

    1. fg_swe Silver badge

      Re: Big Time Bullshit

      They either updated the model, or the exact wording of the queries is super important. I would NOW grade

      "tell me the difference between the Osprey V22 and the Dornier 31" - 4. One faulty sentence "The Do 31, on the other hand, used jet engines for VTOL capabilities and did not have the ability to fly like a fixed-wing aircraft."

      "write the c++ code for Enigma" - 5. Text is kind of OK, but the code is completely wrong.

      #include <iostream>
      #include <string>

      const int NUM_WHEELS = 3;     // Number of wheels in the Enigma machine
      const int ALPHABET_SIZE = 26; // Size of the alphabet

      // The mapping of each letter of the alphabet to another letter
      // For example, mapping[0] is the mapping for 'A', mapping[1] is the mapping for 'B', etc.
      int mapping[ALPHABET_SIZE];

      // The current positions of each wheel
      int wheel_positions[NUM_WHEELS];

      // Initialize the mapping and wheel positions
      void initialize() {
          // Set up the mapping of each letter to another letter
          // In this example, we just use a simple rotation of the alphabet by 3 letters
          for (int i = 0; i < ALPHABET_SIZE; i++) {
              mapping[i] = (i + 3) % ALPHABET_SIZE;
          }
          // Set the initial positions of the wheels
          for (int i = 0; i < NUM_WHEELS; i++) {
              wheel_positions[i] = 0;
          }
      }

      // Encrypt a single character using the Enigma machine
      char encrypt(char c) {
          // Rotate the wheels
          for (int i = NUM_WHEELS - 1; i >= 0; i--) {
              wheel_positions[i] = (wheel_positions[i] + 1) % ALPHABET_SIZE;
              if (wheel_positions[i] > 0) {
                  break; // Only rotate the next wheel if the current one has "carried over"
              }
          }
          // Get the index of the character in the alphabet (e.g. 0 for 'A', 1 for 'B', etc.)
          int index = c - 'A';
          // Use the mapping and the wheel positions to encrypt the character
          int encrypted_index = (index + mapping[index] + wheel_positions[0]) % ALPHABET_SIZE;
          // Convert the encrypted index back to a character and return it
          return 'A' + encrypted_index;
      }

      // Decrypt a single character using the Enigma machine
      char decrypt(char c) {
          // Get the index of the character in the alphabet (e.g. 0 for 'A', 1 for 'B', etc.)
          int index = c - 'A';
          // Reverse the encryption process to decrypt the character
          int decrypted_index = (index - mapping[index] - wheel_positions[0] + ALPHABET_SIZE) % ALPHABET_SIZE;
          // Convert the decrypted index back to a character and return it
          return 'A' + decrypted_index;
      }

      int main() {
          initialize();
          std::string message = "HELLO WORLD";
          std::cout << "Original message: "

    2. Grinning Bandicoot

      Re: Big Time Bullshit

      An easier test (trap) would be normal exercises, papers, and projects. The final, all-encompassing exam would then allow only those papers and project notes in the exam room. I'm quite sure a correlation would be found between the quality of those notes and the exam score.

  35. Anonymous Coward
    Anonymous Coward

    Further evidence…

    That the world is producing too many morons.

  36. Lusty

    AI is the new pocket calculator

    When I was at school in the 80's and 90's old people used to complain about kids not understanding arithmetic. Calculators were starting to catch on, but questions like "what happens if you forget it?" were rife. Well guess what, calculators not only became ubiquitous but we now have supercomputers in our pockets.

    Get used to it. AI is just the tool of the next generation. My generation has not failed because of the calculator; we were freed to achieve amazing feats of science, engineering, medicine, etc., and the next generation will use AI to leapfrog anything we could even dream about today. Let them use whatever tools they are comfortable with and judge them on the results of their real work. School isn't exactly a real-world challenge anyway; if they want artificial results to artificial problems, then why not use AI to give it to them?

    1. BlokeInTejas

      Re: AI is the new pocket calculator

      Lusty

      Forgive me for asking - but what amazing feats of science, engineering etc has your generation created? A few examples would be helpful.

      You appear to misunderstand the idea behind universities - it's to teach you to think, and to use that skill along with available tools to do neat stuff in the domain of interest. Artificial problems are a very simple way of checking that you do actually understand how to go about problem solving; they're not there as the end goal. Perhaps you never understood that?

      1. fg_swe Silver badge

        Re: AI is the new pocket calculator

        We have created extremely reliable software systems you can entrust your life to: ABS, ESP, Airbus flight control, software-based railway switchyards, aerodynamically unstable aircraft that become stable thanks to software.

        1. fg_swe Silver badge

          80s Guys Software In Action

          https://www.youtube.com/watch?v=NJeAjnADHD0

          Extremely reliable, no loss of airframe due to software!

      2. Lusty

        Re: AI is the new pocket calculator

        I mean, it was pretty cool when SpaceX landed that rocket on the ship for reuse, don't you think?

        The breakthroughs in the last 20-30 years kind of speak for themselves, not sure why you'd need a list, unless you're unable to think for yourself?

    2. MachDiamond Silver badge

      Re: AI is the new pocket calculator

      "When I was at school in the 80's and 90's old people used to complain about kids not understanding arithmetic. Calculators were starting to catch on but the question "what happens if you forget it" were rife."

      I may, or may not, have been in school a bit earlier than that. I admit to nothing. We were required to have a calculator for chemistry and not allowed a calculator in math. I pride myself on being a PIA, so I asked the math teacher if I could bring a slide rule. That led to me learning how to use a slide rule that my mom gave me. The next year I brought an abacus, but got tired of it really quickly. The look on the teacher's face the first day I pulled it out was priceless.

      Learning arithmetic is very important early on. Past a certain point, more emphasis needs to be put on setting up a problem to be solved correctly than on the mechanics of + - * /.

      Some of my first jobs had me accepting money and giving change. Since this was before electronic registers that would calculate change, I had to do it in my head. I was at a takeaway drive-through recently and my bill was $8.73. I handed the cashier $23.73 and they had a hard time even counting the money. They first tried to hand back everything but the $20 bill and then asked me how much I had handed them. I told them $23.73 and that the change is $15. You'd think I'd asked them to write down Boyle's gas law or something. They said they weren't good at math. "It's not math, it's arithmetic." I was taught how to count back so as to be fast and accurate with change. I also had a very good mentor who taught me how to approximate things quickly. I always thought those skills should be taught in school. While somebody might never have a job as a cashier, they will be buying things, and knowing what your change should be is important. I've caught people short-changing me often enough.

  37. BlokeInTejas

    As is to be expected, too many commenters - and the folk in the article - miss the big point.

    The big point is that writing is being de-skilled - or at least, if the stupid soi-disant AIs improve enough, writing will be de-skilled.

    So you won't need university departments to teach you to write, any more than you need university departments to teach you spelling and grammar.

    The whole 'writing' thing should be taught at primary and secondary school level, so that young 'uns can go forth able to correctly write English (or whatever). But after that, your favorite novel-writing AI will provide you with books to read, perhaps interactive ones - and, later, videos.

    You might even get correct, readable, technical publications which help you understand how to make your VCR work.

    Automation de-skills. It's a Good Thing. It means that stuff that is popular is manufactured at low cost. Niche stuff will still be done by artisans, but there won't be very many artisans making a living from writing. Oh well.

    1. fg_swe Silver badge

      Wrong

      Writing is first and foremost about CORRECT CONTENT, not about lyrical bullshit.

      ChatGPT is good on lyrics and faulty on content. See my other post.

  38. Daedalus

    Automation and AI

    The real truth that automation and AI expose is that most people are a waste of space. This includes Professors of Rhetoric who, amongst other failings, don't understand that the drones in their classes would be happy with a C or a C+. Finding people who care enough to do something well is hard. You can't do it by awarding participation trophies or passing every failing student on to the next mug. In fact, you discourage those who want to do well by passing those who don't care. The fact that the "don't care" crowd vastly outnumber the "give a damn" crowd means that "democratic" education will always produce mediocrity.

  39. captain veg Silver badge

    depend on the assignment

    "Whether students can use these tools to get away with doing less work will depend on the assignment. A biochemistry student claimed on Reddit they got an A when they used an AI model to write "five good and bad things about biotech" in an assignment, Vice reported."

    Biochemistry is science. "Five good and bad things about biotech" is not.

    Was this some kind of humanities exam?

    -A.

  40. Grunchy Silver badge

    Easy

    You just have a proctored exam where all the students enter the examination room with only pencil and paper and two hours to write up an essay.

    The AI bot is excluded!

    (When I did my PMP exam they took my belt & shoes and made me go through a metal detector! And again if I visited the toilet facility!)

  41. Paul 87

    Saw a good use for ChatGPT via Twitter

    Someone ran a landscape gardening business but was dyslexic, and often lost out to others based purely on how their written work was perceived.

    The person behind the Twitter account helped them set up a routine whereby they could send an email, have ChatGPT read and rewrite it, and then send back a more professional-sounding reply.

    That's the kind of good that this kind of AI can be used for, not for original thought, but by helping present those thoughts more coherently.

  42. Boring Bob

    Rather than ban new technology, one should accept that new technology renders certain skills obsolete; embrace the technology and concentrate learning efforts elsewhere.

  43. Kimo

    I teach a University writing course aimed primarily at Engineering students. My experience is that students who cheat do more work and get lower scores.

    By the time they turn in a draft assignment in my class they need a bibliography and an outline of their thesis and supporting evidence. They have to discuss their topics in class and on discussion boards. Once they have done that, they can flesh out their pre-writing into a complete memo, proposal, or white paper (we don't have them write "college essays").

    Even if they did use an AI, I would be impressed if they could construct a prompt that includes all of the assignment requirements and generates a cohesive piece of writing. Still plagiarism under OSU definitions, but I would be impressed.

  44. rodderz

    Oral exams: profs will find out in seconds whether the student wrote the paper. Not that hard, really. If the student looks at their toes and hems and haws: automatic fail and expulsion.

  45. MarkIDFK

    There has to be a market for marking AI, saving teachers a *lot* of work...

  46. Dave Null

    Mark the discussions, not the essays

    Oxford Uni sets a LOT of essays for students. They are marked, but the marks aren't really important compared to the marks given to sessions with tutors where you are asked to discuss your essay. Kind of hard to cheat in that scenario and also protects the need for trained human educators...

  47. JoeMama2004

    Them kids and their new-fangled technology

    This article provides a nuanced perspective on the use of AI-assisted writing tools by students. It acknowledges the convenience and efficiency of such tools, while also noting their limitations and the importance of careful editing and revision in order to produce high-quality writing. The idea that AI writing tools can be used as a framework to teach revision and editing skills is particularly noteworthy and insightful. Overall, this article makes a valuable contribution to the ongoing discussion about the use of AI in education, and the author provides a well-argued case for using these tools in a manner that prioritizes the development of writing skills.

    This entire comment was generated by ChatGPT.

  48. davidmuray

    This is indeed a problem that cannot be avoided, in my opinion. Artificial intelligence today can write an essay for you, or any text on any topic in general. Therefore, it seems to me that teachers will sooner or later come to the conclusion that students' knowledge needs to be checked in a different way. For example, oral or practical exercises. We have never had a problem with this at beauty colleges. The fact is that in order to pass the courses, we must do some kind of work with our own hands, and artificial intelligence will not help here, at least for today.
