Survey: Over half of undergrads in UK are using AI in university assignments

More than half of undergraduates in the UK are using AI to complete their assignments, according to a study conducted by the Higher Education Policy Institute. The study asked upwards of 1,000 university students whether they turned to tools like ChatGPT to help write essays or solve problems, and 53 percent admitted to using …

  1. Joe-Thunks

    An easy solution

    If caught using AI, fail the student. Students are supposed to do their own work. Copying and pasting from ChatGPT does not count. If it turns out later that a graduate used AI, cancel the degree.

    1. Version 1.0 Silver badge
      Megaphone

      Re: An easy solution

      I agree with the plan to ban students from using AI but wait ...

      Look at how much AI is used in every other environment that we all work in and use; if we're going to ban the students, then let's BAN Google and all AI applications.

      1. hedgie Bronze badge

        Re: An easy solution

        Academic standards don’t always reflect real-world situations to begin with. In most classes I’ve taken, reusing a paper or other assignment that one researched and wrote entirely on one's own, even if it fits the criteria, is still considered plagiarism. The only exceptions (my own experience only) were when, as a photography student, professors allowed students to “reuse” some of their own work, say, making a better print out of something one submitted for a class on composition. In the real world, reuse of work would only matter if there were rights issues.

        1. Anonymous Coward
          Anonymous Coward

          Re: An easy solution

          In the university I have just retired from - and in many others - "self plagiarism" isn't a thing, because plagiarism is defined as the unacknowledged use of the work of another for credit. Reusing material is generally against the rules (you can't submit the same PhD thesis at multiple institutions, for example) but that's a different matter and not plagiarism.

          1. hedgie Bronze badge

            Re: An easy solution

            Yes, I was bothered by them using that term as well. I suppose that since they used one of those online detection/checking services, they just didn't want to bother with the provenance of anything that got flagged.

      2. Anonymous Coward
        Anonymous Coward

        Re: An easy solution

        why do you want to punish me for using 'AI' sir, when your whole fucking 9K-a-year online course is written by 'AI'?

        1. Khaptain Silver badge

          Re: An easy solution

          Because you go to university to learn to think for yourself.

          1. SundogUK Silver badge

            Re: An easy solution

            Should. Mostly don't though.

          2. Bebu Silver badge
            Windows

            Re: An easy solution

            《Because you go to university to learn to think for yourself..》

            Even 50 years ago this wasn't the obvious motivation for enrolling at a university. Certainly not now.

            I valued my time at university as an interregnum between the tedium of school and the reality of the real, brain dead, workaday world during which I could further develop the analytical and problem solving skills I already possessed with the luxury of not being too distracted by coursework demands.

            The poor blighters today are overloaded with continuous assessment of weekly quizzes, essays, presentations, seminars and whatnot on the pretext of preparing them for the real world. Don't think "thinking" comes into it, which might explain a lot.

            It would be a great improvement if Terry Pratchett's Unseen University's Archchancellor's words were applicable here.

            《If they were clever already, they wouldn't need to go to university! No, we'll stick to an intake of 100 per cent young fools, thank you. Bring 'em in stupid, send them away clever, that's the UU way!》

          3. Michael Wojcik Silver badge

            Re: An easy solution

            Because you go to university to learn to think for yourself.

            Some people certainly may, but that is not, to a first approximation, what the institution was established for, or how it sees itself.

            The histories of the various types of higher education in the European and US traditions are complicated, to put it mildly, and reflect a variety of philosophical positions and sociopolitical programs. But prominent among their goals at various times and places were things like continuity of knowledge and creation of productive citizens. "Thinking for yourself" as a good-in-itself is a relatively recent concept — it is specifically modern, that is it reflects a mindset that the world is in flux and requires new ideas, as well as a commitment to individualism which is certainly not universal in European and European-derived cultures across even the Modern period.

            I'm less familiar with scholarship on the intellectual history of universities (or equivalents) in other cultures, but from my experience of and reading about, say, Japanese universities, I can't say "think for yourself" was a prominent motto there.

    2. heyrick Silver badge

      Re: An easy solution

      I'm okay with them using an AI so long as it isn't obvious. Then it's just a tool they have used to help them. I mean, we don't complain when it's written on a computer rather than by quill, right? Maybe using it can help them better organise their thoughts?

      However, when it is obviously the work of an AI (and having played with ChatGPT, there seems to be a repetitive nature to what it outputs even if it uses slightly different words each time) then, yes, hard fail the student if they're not capable of sitting in front of the professor and writing something coherent by hand.

      1. Flocke Kroes Silver badge

        Re: right and wrong

        If an AI gets homework right then that is fine by me - the student must have generated several versions and understood the subject well enough to reject the bad ones. Rejecting homework because one LLM says another LLM generated it is problematic. There is a high false-positive rate, which gets even higher when testing work that predates LLMs and has been used as training data.

        (The way to profit from a gold rush is to sell shovels.)

      2. Michael Wojcik Silver badge

        Re: An easy solution

        Then, it's just a tool they have used to help them.

        I'd call that a dangerous error.

        Krakauer distinguishes between "complementary" cognitive technologies and "competitive" ones. LLMs, even when used as support tools, are primarily or exclusively competitive. They absolve the user of thinking. When an LLM is used for research,1 the student often gets at best a shallow answer couched in undeservedly persuasive language. Meanwhile, the student misses out on the intellectual exercise of using research techniques to find sources; of comparing multiple sources to update (confirm/challenge/refute) evaluation of claims; of assessing the quality of sources; of serendipitous discovery of tangential but interesting or useful information; of considering original arguments.

        LLMs are dangerous. They inculcate intellectual laziness and learned helplessness. They provide a terribly narrow view of the world.

        And, of course, schoolwork is supposed to be work. It's paideia. The whole point is to exercise the mind. Often that's tiresome, and often students don't see the value of it.2 Students often won't see the value, because they're students. If they already knew everything about the subject of study, they wouldn't be students. And students often willfully ignore the value, because, well, exercise is often boring. Too fucking bad. It beats digging coal.

        The article quotes someone saying "My primary concern is the significant number of students who are unaware of the potential for 'hallucinations' and inaccuracies in AI". That is not at all my concern. I say the more hallucinations the better; let people learn that this is a bad tool, even if for the wrong reasons.

        1I'm not even considering the use of an LLM to generate actual text of a student's submission, which would very likely constitute plagiarism in the universities I've attended or worked at. I'd consider this true of Grammarly (a software product I loathe) as well, when used to "clean up" or "improve" a student's prose. (Grammarly is now very much on the "AI" bandwagon, so it's a faint distinction at best.)

        2Zvi, in one of his AI roundups, wrote something to the effect that he doesn't mind if students bypass work that they don't think has value. That's an incredibly blinkered view. (Of course, it's largely inflected by Zvi's own experience of school, as a talented and self-motivated learner bored by the inevitable leveling effects of being in a classroom of mixed ability. I'm not an opponent of tracking, either.)

    3. JoeCool Silver badge

      Re: An easy solution

      It's real world job training.

    4. Michael Wojcik Silver badge

      Re: An easy solution

      This is idiotic, to be frank. There's no reliable way of "catching" someone using an LLM, the alleged detectors from snake-oil firms like Turnitin have abysmal false-positive rates, the consequences of a false conviction are far too high, and many students would tie up considerable resources in appeals. And a combative relationship between students and faculty does not encourage learning.

    5. Dabooka

      Re: An easy solution

      It's about how it is implemented; copy and paste is already covered in the usual academic plagiarism regs of the institution.

      We need to be better at how it is implemented and utilised: focus on the research elements it can support, but don't rely on the output. We've had these discussions before - thirty years ago with the rise of the WWW, and even bloody word processing before that.

  2. Dr. G. Freeman

    53%? Bit low in my opinion. Would have thought at least 80% for undergrads.

    1. doublelayer Silver badge

      Some of them have to realize that the homework answers generated by an LMM have a decent chance of being wrong. If you're going to cheat, there are ways to cheat that aren't as much of a throw of the dice. Sure, they take longer and may be more difficult, but if you're bothering to cheat, presumably you want to get something out of it and LMM cheating isn't guaranteed to get you anything.

      1. HuBo
        Trollface

        As with Nutella and Reese's pieces, I do find that M&M cheating can be a bit of a crapshoot ... Smarties on the other hand ... works every time!

    2. Michael Wojcik Silver badge

      Zvi recently quoted a tweet by Ethan Mollick (haven't tried to confirm this source, since I refuse to use Twitter) stating that in an informal survey of ~250 undergraduates and grad students in his class, nearly all confirmed using "AI", and that "Many used it as a tutor. The vast majority used AI on assignments at least once".

      Nearly 100% strikes me as a far more plausible statistic. Of course, that's "over half".

      It's just too damned tempting for all but a relative handful of contrarians.

  3. elsergiovolador Silver badge

    Plus ça change, plus c'est la même chose

    When calculators were a novelty, I am sure that, had ElReg existed then, we would have read the same article, just saying "over half of undergrads in UK are using calculators".

    and before that "are using beads"

    and before that "are using fingers"

    Artificial Imbecilence is just a tool. It will normalise and then we will be shouting at the next thing.

    Embrace it. That's what progress looks like.

    1. theOtherJT Silver badge

      Re: Plus ça change, plus c'est la même chose

      No, it isn't.

      If I was a maths student and needed to demonstrate that I understood the material I was supposed to learn, then damn right I'd get failed for using a calculator.

      This is why they don't allow them into a selection of pure maths exams. You're supposed to demonstrate that you are capable of not only solving the problem posed, but - and far more importantly - that you know why the answer is what it is. As every school teacher ever will have said at some point "Did not show working. 0/10"

      1. Version 1.0 Silver badge
        Joke

        Re: Plus ça change, plus c'est la même chose

        When I got a calculator at school I showed my dad, and he gave me a calculation (what's a third of the diameter of a 13.25 inch circle?) to answer while he used his slide rule to get the result. He showed me the answer on his slide rule much faster than I could get an answer with the calculator. I laughed when my second calculation was accurate :-)

      2. elsergiovolador Silver badge

        Re: Plus ça change, plus c'est la même chose

        If using a calculator you got the wrong result, then most likely you didn't understand it.

        I am sure though there would be people just fat-fingering their calculator and writing whatever it spits out on their exam paper.

        Unless the test specifically says "divide these two numbers using the long division method. To compute intermediate results use the grid method for multiplication, partial sums for addition and decomposition for subtraction. You cannot use your fingers, beads or any other device apart from your brain to aid calculations".

        Don't see how it is helpful to force it more than once. A calculator just saves the time you spend on the hokey cokey.

        It's the same with LLM. Surely you have to read what it spits out, understand it and judge whether it is worth being incorporated in your work. You can then redirect your cognitive resources to parts of the work that would need it more. It's like having an assistant that you can delegate the busy work to. Surely we should be teaching efficiency, resourcefulness and productivity too.

        1. doublelayer Silver badge

          Re: Plus ça change, plus c'est la même chose

          The calculator does a specific task and it is easy to decide whether having that task delegated is acceptable. If it's a child doing arithmetic tests, it is not. If it's a university student doing calculus, a calculator that can automate the insertion of terms into a formula the student derived is fine, but a program that automatically derives it is not. In the workplace, that program is probably fine as well.

          An LLM is sufficiently capable that it could do a number of tasks, nearly all of which are not acceptable. The comparison to an assistant is valuable here: in school, you don't get to have an assistant. I did not get to write my code, then pay someone to write the documentation for me because I couldn't be bothered to do it myself and the graders didn't look too hard at it; I had to write that myself because that's what the assignment was.

        2. Bebu Silver badge
          Windows

          Re: Plus ça change, plus c'est la même chose

          《If using calculator you got the wrong result, then most likely you didn't understand it.》

          I don't know that that is entirely true. I once bought an AUD5.00 calculator from a supermarket which reliably gave the wrong answers to the basic +×÷- functions. I should have kept it, but I got my $5 back.

          Of course if I couldn't do the sums in my head or estimate (a dying art) I wouldn't have been any the wiser.

          Is anyone now taught to add up the units column (modulo 10) just to check your calculation hasn't missed an entry?
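          That units-column check can be sketched in a few lines (a generic illustration, not from this thread; `units_check` is an invented name):

          ```python
          def units_check(numbers, claimed_total):
              # The units digit of a sum equals the sum of the units digits, mod 10.
              # A mismatch proves the claimed total wrong; a match is only a quick
              # sanity check, not a proof of correctness (errors of 10 slip through).
              return claimed_total % 10 == sum(n % 10 for n in numbers) % 10

          items = [23, 45, 17, 89]          # true total: 174
          print(units_check(items, 174))    # True  - units digits 3+5+7+9 = 24 -> 4
          print(units_check(items, 157))    # False - missing the 17 shifts the units digit
          ```

          It catches exactly the "missed an entry" mistake mentioned above, since a dropped number usually changes the units digit.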

          1. Michael Wojcik Silver badge

            Re: Plus ça change, plus c'est la même chose

            Fun. My initial guess would be a bad electrical connection somewhere, or other power-related failure like a bad capacitor; but of course even with something as simple as a 4-function calculator ALU you can get the occasional bad chip in the yield.

            During some of my teen years I worked at an ice cream and sandwich place that still used a mechanical, total-only cash register at the take-out counter. (Fancier models were available; the manager just didn't see any reason to upgrade that one.) We had to sum, add tax, and count out change in our heads. It was a good exercise.

      3. heyrick Silver badge

        Re: Plus ça change, plus c'est la même chose

        My school allowed the use of calculators. My mother made me do all my homework without it, which was arduous because I suck at maths.

        The reason why was to give me a good enough basic understanding of maths that I can approximate the sort of answer I'm expecting, even if I then use the calculator to get the actual result. So when 3 times 9 comes out as 297, rather than just blindly writing it down as the answer, I can think "hang on, three tens are thirty, no way that's right" and save myself the embarrassment of getting it badly wrong by not spotting that I'd pressed the 9 key twice.

        I still suck at maths, but the ability to "sort of roughly know" what the answer should be has come in useful a number of times (including in a shop where an amount was entered into the card reader without the decimal point - erk!).

        1. Michael Wojcik Silver badge

          Re: Plus ça change, plus c'est la même chose

          Yes. Even if you don't do full-on Fermi estimation in your head (which really isn't hard), just counting up orders of magnitude is a great way to sanity-check basic arithmetic. "Wait, shouldn't that answer have four digits?"

          Works nicely for binary and hexadecimal too, once you have a bit of experience.
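          The digit-count version of that check can be sketched as follows (an illustrative snippet, not from the thread; `digit_count` is a made-up helper):

          ```python
          def digit_count(n, base=10):
              # Count the digits of a positive integer in the given base.
              count = 0
              while n > 0:
                  n //= base
                  count += 1
              return count

          # An a-digit number times a b-digit number has a+b-1 or a+b digits,
          # so a wildly different digit count flags a gross arithmetic error.
          print(digit_count(437) + digit_count(52))  # 5: and 437 * 52 = 22724, 5 digits
          print(digit_count(0xFF, 16))               # 2: FF is two hex digits
          ```

          Passing `base=2` or `base=16` gives the binary and hexadecimal variants mentioned above.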

      4. Ian Johnston Silver badge

        Re: Plus ça change, plus c'est la même chose

        I find it hard to imagine any sort of pure maths exam in which a calculator would be of the slightest use for anything except basic arithmetic.

        1. werdsmith Silver badge

          Re: Plus ça change, plus c'est la même chose

          So you’ve never used a CAS calculator.

      5. werdsmith Silver badge

        Re: Plus ça change, plus c'est la même chose

        No, it isn't.

        If I was a maths student and needed to demonstrate that I understood the material I was supposed to learn, then damn right I'd get failed for using a calculator

        If you are solving maths problems then you would still need to understand the material in order to use your calculator. It's a labour-saving device that, if anything, allows a student to go deeper and demonstrate even more understanding than if they had to waste time doing arithmetic longhand.

        It's not 9-year-olds doing basic numeracy we are talking about.

    2. Filippo Silver badge

      Re: Plus ça change, plus c'est la même chose

      But when you are learning how to do basic arithmetic, and your progress gets tested, the teacher doesn't let you use a calculator. Despite having computers, we still teach kids how to do sums with pen and paper. Students are allowed to use calculators only in contexts where basic arithmetic is not the thing being tested.

      Similarly, if the thing being tested is your ability to write an essay, then you should not be using an LLM to do it. You can use an LLM if and when your ability to generate quality text is not the thing being tested.

      This isn't old people screaming at the new thing; it's how the fundamental concept of "testing" works. The tools you're allowed to use aren't a question of what's new versus what's old; they're a question of what is being tested versus what is not being tested. You can't use a tool that automates the test's target objective.

      Also, note that delegating writing to LLMs is a lot more problematic than delegating calculation to computers. There are several good reasons for that, not the least of which is that while a calculator is extremely reliable in its outputs, LLMs are anything but. You need to learn how to write, even if you're going to use an LLM to do it for you, because you'll need to be good enough to verify the LLM's output. I may trust an engineer who can't do long division and just uses a calculator, depending on his/her other skills, but I can't trust a lawyer who can't write and just uses an LLM.

      1. elsergiovolador Silver badge

        Re: Plus ça change, plus c'est la même chose

        if the thing being tested is your ability to write an essay, then you should not be using an LLM to do it.

        Why? Pigeonholing aside, it's like telling your decorator mate he can't use that fancy brush he has: here, have my grandad's brush. Then come back an hour later. Oh, you poor sod, they didn't teach you how to paint, eh?

        I guess when we teach programming, the first few years students should be using punch cards?

        1. heyrick Silver badge

          Re: Plus ça change, plus c'est la même chose

          "I guess when we teach programming, the first few years students should be using punch cards?"

          Maybe software wouldn't be such a bloated pile of barely interacting bugs if they had to begin with punch cards?

          1. elsergiovolador Silver badge

            Re: Plus ça change, plus c'est la même chose

            ^^^ This

        2. Filippo Silver badge

          Re: Plus ça change, plus c'est la même chose

          I'm not clear on your point here. You seem to be flipping between describing scenarios where someone is learning, scenarios where someone is being tested, and scenarios where someone is working professionally.

          The applicability of automation is extremely different between these scenarios.

          If the decorator is coming to paint my house because I need my house painted, they should use whatever tool gets the job done most efficiently. I am not here to test them, I just want my house painted.

          If the decorator is learning how to use a brush, or being tested on their ability to use a brush, then they should use a brush.

          If the decorator is learning how to use the fancy tool, or being tested on their ability to use the fancy tool, then they should use the fancy tool.

          I don't feel that distinction to be a difficult one to make. A professional and a student are doing two very different things. Someone doing their job is a poor simile for a student learning how to do it.

          Re the punch card example, it seems like another bad simile. Punch cards are obsolete. Writing isn't. Unless you're trying to say that writing is obsolete? Because of LLMs? That seems frankly untenable.

          1. elsergiovolador Silver badge

            Re: Plus ça change, plus c'est la même chose

            I am not here to test them, I just want my house painted.

            You do a test. If they fail to paint your house well, they won't get paid (or get a mark, if you will).

            Writing isn't.

            It soon may be though. But let's teach people legacy stuff.

            1. Filippo Silver badge

              Re: Plus ça change, plus c'est la même chose

              >You do a test. If they fail to paint your house well, they won't get paid (or get a mark if you will)

              Sure, I can stretch the definition of "test" that way. However, it's still a bad simile, because then I'm not testing their ability to use a brush, but their ability to paint my house. Not the same thing. The first strictly requires them to use a brush, the second doesn't.

              >It soon may be though. But let's teach people legacy stuff.

              Right now, it isn't. "Soon may be" is not a strong enough base to decide to stop teaching a critically important skill.

              1. elsergiovolador Silver badge

                Re: Plus ça change, plus c'est la même chose

                The first strictly requires them to use a brush, the second doesn't.

                Kind of moving the goalposts here. We don't teach decorators how to use a brush, but how to paint a house.

                It's like saying a developer is poorly qualified, because they don't know how to use punch cards.

                Come on!

                Right now, it isn't.

                I am not sure if you are aware, but most people don't care if something is brain written or AI aided. It just needs to be good.

                1. doublelayer Silver badge

                  Re: Plus ça change, plus c'est la même chose

                  Job tasks and education tasks are not identical and shouldn't be. Let's stick with painting. There are some painting jobs that can be done with a big sprayer. A certain kind of paint, a certain level of acceptable quality, and the sprayer becomes an option. It's an easy and cheap option when it's acceptable. Yet if we're teaching someone to paint, we can't just let them do every job with the sprayer, because at some point they may be called on to do a job with something else. If you want a painting job that can't be done with the sprayer, you expect that your painter has learned to use other tools. That means that, if the painting teacher says that you have to paint this wall with a brush to demonstrate you know what you're doing, it would not be acceptable to use the sprayer and say "look, the wall got painted, why should I do it the way you said to". The test restricted the available tools for a reason, and the reason is directly applicable to the use of the skills later.

                  The same applied to my example of programming languages. They weren't asking me to write a program because they needed the program for something. They were asking me to write it so I would learn something. That means using a language I'm less familiar with, one where it's harder to write, one where it's more likely there are bugs in the result, but the choice that means I learn a skill because there are times when I will need to apply that skill. If people frequently had to use punch cards in modern industry, and they decided to take a course that taught how to do it, then yes they absolutely should be required to use punch cards and doing the punch card homework using a modern compiler would not be acceptable. We don't teach that because it is not considered useful, but if we did, the students who chose the course would have to do it.

              2. Michael Wojcik Silver badge

                Re: Plus ça change, plus c'est la même chose

                Right now, it isn't.

                Indeed. As someone with degrees in Computer Science, English, and Rhetoric, and who's worked with various ML and NLP algorithms and implementations, I heartily endorse this evaluation. I've yet to see an example of LLM-produced prose that rises above the pedestrian. (And as for verse — yikes. It burns.)

                And it really doesn't matter whether "AI" in some form will become capable of producing actually competent prose.1 The point of learning writing, at the gen-ed level,2 isn't to make students professional writers. Even making them competent college writers is a secondary goal, because frankly that's not as important, and the ones who want to be competent college writers can get there on their own. (It's not a high bar.) The point is to show them something about how written communication works and functions in society. It's to give them some capability in rhetorical critique. It's to help them become less of a mark for every demagogue and con artist that comes along.

                Using a computer to do their writing for them will not achieve that. Or, really, anything other than generating waste heat.

                1I've given that matter quite a bit of thought, going back to some years before I wrote my MA thesis on computational rhetoric, and I think it's perfectly achievable. I'm not convinced further scaling and refinement of deep transformer stacks is going to do it, though. I'd use heterogeneous models competing for "attention" doled out by an evaluation model as a first step, with the evaluation model being recurrent, and some of the contributors dealing with things like perceived chronology (which also requires recurrence, unless it has a huge amount of context; see various papers on emulating time series with transformers) and physical aspects of real-world interactions. Wolfram thinks we might get there through adding capabilities in computational language and semantic grammars.

                2I have taught gen-ed college writing ("First-Year Composition", in US academia-speak), and as preparation for that had to read a decent body of composition theory and research. I've spent a lot of time with writing teachers and in writing departments. This isn't just a pulled-from-my-ass opinion.

            2. doublelayer Silver badge

              Re: Plus ça change, plus c'est la même chose

              No matter how good LLMs get, you will still have to write things. If you need to describe something to someone that doesn't already exist on the internet, you have to actually write down the details. The LLM does not know any of the things that just happened, so at the very least, you need to accurately provide all that information to it for it to rewrite into something that looks nice enough. This is the same reason that calculators don't make mathematics obsolete. They're great at figuring out what the answer is, but they're completely incapable of determining what the question was, so you still have to do that part. I think you already know this.

              1. elsergiovolador Silver badge

                Re: Plus ça change, plus c'est la même chose

                One thing an LLM may be decent at is turning word salad into something coherent and then reshaping it into a desired format - for instance, an essay.

                1. doublelayer Silver badge

                  Re: Plus ça change, plus c'est la même chose

                  I think you may overestimate what you're getting. It may look nicer, but if it's inaccurate or lacking in detail, it's still not good. Judging from your responses, I'm worried that you might not care.

                2. Michael Wojcik Silver badge

                  Re: Plus ça change, plus c'est la même chose

                  It might achieve that goal, to a certain mediocre extent. That's still not a Good Thing.

        3. doublelayer Silver badge

          Re: Plus ça change, plus c'est la même chose

          If you're being tested on how to write an essay, you need to demonstrate that you can write it. If you're being taught to use a brush, you need to demonstrate that you can use a brush. That is different from later applications of the same. If you're being tested on painting something in general, you may get to choose a tool from a set of different ones to do the job, but if they're specifically testing your ability to use a basic brush, you may not get to use a different tool, even if you otherwise would want to.

          For example, there were a couple occasions in my schooling where I was permitted to select the language in which I'd write a project, but mostly I did not. If I had asked to do so, I'd have likely gotten a response like "Of course you can write this string manipulation program faster and easier in Python than in C, but this class is taught in C and we want to give you something easy so you learn how to use C". It doesn't matter that, if I had a similar task in the workplace, I would almost certainly not use C unless performance was critical, because the point was not to have the program written, but for me to learn something.

    3. Michael Wojcik Silver badge

      Re: Plus ça change, plus c'est la même chose

      Reductive, unhelpful generalizations. That's what people refusing to think critically looks like.

  4. Anonymous Cowherder

    Being done in job applications now too

    I've recently had 150+ applications for a couple of roles. I started shortlisting and spotted some really good applications, and was hopeful of recruiting people who seemed to really understand what I was looking for. Then I realised that approx. 30 applications were just rewrites of the job advert, and the candidates' job histories didn't tally with the skillset.

    I admire the chutzpah, and problem solving is part of the role, but as a lowly IT manager it is hard enough to recruit the right people, and AI is already making changes that we aren't prepared for. I've too many miles on the clock and have seen many changes to the landscape that haven't resulted in the armageddon scenarios predicted for them, but I am very concerned that the AI genie is already too far out of the bottle.

    There are massive benefits to the technology but there are downsides too and we'd better start to get a good handle on managing these downsides soon.

    1. Anonymous Coward
      Anonymous Coward

      Re: Being done in job applications now too

      It's a war, like with spamming. There's mass-flooding, there's phishing, and there's spear-phishing too. But hey, why not use 'AI' to automate even spear-phishing and increase your chances even more...

    2. elsergiovolador Silver badge

      Re: Being done in job applications now too

      The concept of a job application, apart from the CV or the candidate's promotional folder, is quite funny.

      I mean, both candidate and hirer know that the candidate wants to earn money and put food on the table, and the hirer wants the service.

      But you need to have this dance - don't say that you just want to earn a good honest wage; instead, come up with platitudes about how you are going to contribute to the company and why you think you will be fulfilled doing it, blah blah.

      Most hirers don't read it anyway, just skim over it. Some may read it while the candidate is late to the interview, or for laughs.

      1. werdsmith Silver badge

        Re: Being done in job applications now too

        So true. The ability to make a job application is not necessarily the same thing as the ability to do a job.

        I’ve been asked to jump through hoops when considering a role. Stupid presentations and such, nothing to do with the job. I politely declined to do any of their shit. The role got offered to me anyway.

    3. Michael Wojcik Silver badge

      Re: Being done in job applications now too

      There are massive benefits to the technology

      I keep seeing this claim. I haven't seen much offered in the way of evidence.

  5. J.G.Harston Silver badge

    If there's an "AI goldrush" we should be in the business of providing shovels, whisky and madams, not mining for AI.

    1. codejunky Silver badge

      @J.G.Harston

      That would probably look like fast internet, cheap reliable power, and a stable, not excessive regulatory system. Our internet is generally OK, power is a serious problem, and as for regulations, we will see.

    2. elsergiovolador Silver badge

      nVidia has most of the shovels.

      You may still get in on it by writing manuals on how to use a shovel. You could employ AI to do that for you. Publish your books on Amazon and retire early into the sunset...

  6. tiggity Silver badge

    Bin the lawyers

    We all know there are decent lawyers and others that are just scum of the earth; the latter need to lose their jobs.

    Those lawyers that use "AI" (when everyone knows it hallucinates "facts" that are nonexistent) and then cannot even do rudimentary fact checking (i.e. find the quoted case, check it is relevant to the case they are dealing with) need to be disbarred. (I bet they still charged exorbitant fees for their inept AI use, too.)

    1. Andre Carneiro

      Re: Bin the lawyers

      In this particular case I would imagine the chap who had his malpractice lawsuit dismissed might have a thing or two to say about his lawyer…

  7. Mike 137 Silver badge

    Professionalism

    [She] "said she relied on the software "to identify precedent that might support her arguments" without bothering to "read or otherwise confirm the validity of the decision she cited"

    She deserves to be disbarred: regardless of whether AI was involved, that statement shows a fundamental disregard for the protection of her client's interests. The problem is her attitude, not her use of a chatbot, which is merely symptomatic of that attitude.

    Incidentally, the same applies in principle to students using chatbots to write their essays. They want the degree but aren't willing to make the effort to learn the subject or use their own brains. That turns the diploma into a certificate of ignorance, which is a disservice to those who are prepared to learn but get awarded the same devalued certificate.

  8. Anonymous Coward
    Anonymous Coward

    ChatGPT is good for poems not statistics

    A student of mine in a class where I teach Data Analytics asked, "Why should I attend your class when ChatGPT is there?"

    I demonstrated in real time how ChatGPT hallucinates even for a simple calculation, and have published an article as an IEEE preprint:

    https://www.techrxiv.org/doi/full/10.36227/techrxiv.24440440.v1

    1. HuBo
      Headmaster

      Re: ChatGPT is good for poems not statistics

      Nice demo! Whencefrom one might infer that LLMs are currently wronger than normal (Gaussian wrong), on their way to being full-fledgedly flatly wrong, in a state of excedentary platykurtosis, as hallucinated in their normalized 4th moments (cooool!)!

      OTOH: won't get no scoliosis out of this machine learning posture (ingenskola?!)!

    2. I am the liquor

      Re: ChatGPT is good for poems not statistics

      ChatGPT doesn't understand maths, that's clear. What has surprised me, when asking it similar questions, is how good it is at producing an answer that, although wrong, looks plausible: if you didn't bother to check the maths, you could easily take its answer as valid. So it's not doing the calculation - if it was, it would produce the right answer - but it's not churning out random numbers either.
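
    The check being described here is easy to do yourself rather than taking the chatbot's word for it. A minimal sketch in plain Python (the sample figures are made up for illustration, and K is assumed to be excess kurtosis, the statistic joked about upthread):

    ```python
    from statistics import mean

    # Hypothetical sample data -- not the figures from the linked preprint.
    data = [2.1, 2.5, 2.3, 2.7, 2.2, 2.6, 2.4, 2.8, 2.0, 2.9]

    n = len(data)
    m = mean(data)
    # Central moments needed for the (excess) kurtosis.
    m2 = sum((x - m) ** 2 for x in data) / n
    m4 = sum((x - m) ** 4 for x in data) / n
    excess_kurtosis = m4 / m2 ** 2 - 3  # 0 for a normal distribution

    print(round(excess_kurtosis, 3))  # negative here: flatter than normal
    ```

    Ten lines of arithmetic you can verify by hand, which is rather the point: if the model's answer disagrees with this, the model is wrong.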

    3. Anonymous Coward
      Anonymous Coward

      Re: ChatGPT is good for poems not statistics

      Fascinating.

      So, in the last example you told ChatGPT that Excel calculates K as 0.2, and it agreed and said Excel was correct and the K value was indeed 0.2.

      If you lied to it and told ChatGPT that Excel calculated K as 0.5, would ChatGPT have still agreed with you and said that K was 0.5, or would it double down on the wrong answer it gave earlier?

      1. Anonymous Coward
        Anonymous Coward

        Re: ChatGPT is good for poems not statistics

        Interesting - yes, as you suggested, I told it K is 0.5 as per Excel. The answer I got back: "there might be a mistake in Excel's calculation", and it still insists its answer is correct. ROFL

  9. JoeCool Silver badge

    Wouldn't the call for transparency help with risks?

    Or do they mean transparency between the companies, and not with the public?

  10. Charles Ghose

    "The Ethical Dilemma: Lawyers, AI, and Legal Research"

    It is one thing to be unable to pay for the assistance of a lawyer and to use AI programs like ChatGPT to find case law. But for established lawyers with a law degree to rely on AI for searching case law is not only a disservice to their clients but blatant laziness.

    Lawyers have access to law libraries, law websites like LexisNexis, law periodicals, and more that someone representing themselves pro se does not readily have access to. Therefore, it is understandable for a person going through the pro se process to try AI programs like ChatGPT to search for cases on a legal topic or issue. Can AI be dependable? No, it can't. As mentioned in the article, there have been lawyers from New York on separate occasions who relied on cases searched by ChatGPT, which turned out not to exist.

    In my opinion, the published cases from websites like Justia and Casetext, along with published cases from circuit courts, the US Supreme Court, and courts of appeal, should be made available to AI programs like ChatGPT to make information on legal matters easier to search and sift through when trying to locate cases on a particular matter.

    By utilizing reliable sources and platforms like LexisNexis, lawyers can demonstrate their commitment to thorough research and provide the best representation for their clients. It's not only about fulfilling professional duties but also about upholding the integrity of the legal profession. Let's raise the bar and embrace technology responsibly to serve justice effectively.

    1. Nematode

      Re: "The Ethical Dilemma: Lawyers, AI, and Legal Research"

      Same problem with medicine. I have asked ChatGPT for citations for what it's just told me. On checking those with, say, PubMed, they mostly don't exist, or are themselves a melange of citations (a typical AI trick). It's not long before it starts apologizing. Just like a politician.

  11. Nematode

    It seems to me that educators, from secondary school level up to university level, are missing an obvious opportunity both to defang AI as a means of supplying "cheat" answers or content in assignments, and to educate about the strengths and weaknesses of AI.

    To me it's obvious to *require* students to use AI. Set the question/assignment. Require that students ask one or preferably more LLMs the question, or ask the LLMs to produce the assignment output text. Then allow, say, five follow-up questions/refinements. Then require the students to copy and paste the entire trail of input/output into their academic submission. Finally, require them to provide a critique of the exchanges they have created and come up with their own conclusion/final text, in their own words.

    This will sort the wheat from the chaff and indicate the students' true abilities. It is also exactly what everyone should be doing in real life to use and verify what AI says. I've always used this method when I use AI, and it's remarkable how soon the LLMs start apologising for giving wrong answers and nonexistent citations.

    1. Nematode

      It seems Australia is doing much the same as this: https://www.theguardian.com/australia-news/2024/jan/23/chatgpt-in-australian-schools-what-you-need-to-know-law-changes
